id,url,title,authors,date_published,source,filename,source_filetype
17,https://agentmodels.org/chapters/3d-reinforcement-learning.html,Modeling Agents with Probabilistic Programs,"['Owain Evans', 'Andreas Stuhlmüller', 'John Salvatier', 'Daniel Filan']",2018-06-24T18:01:20Z,agentmodels,3d-reinforcement-learning.md,
32,https://agentmodels.org/chapters/7-multi-agent.html,Modeling Agents with Probabilistic Programs,"['Owain Evans', 'Andreas Stuhlmüller', 'John Salvatier', 'Daniel Filan']",2016-12-04T11:26:34Z,agentmodels,7-multi-agent.md,
55,https://agentmodels.org/chapters/5b-time-inconsistency.html,Modeling Agents with Probabilistic Programs,"['Owain Evans', 'Andreas Stuhlmüller', 'John Salvatier', 'Daniel Filan']",2019-08-28T09:06:53Z,agentmodels,5b-time-inconsistency.md,
75,https://agentmodels.org/chapters/1-introduction.html,Modeling Agents with Probabilistic Programs,"['Owain Evans', 'Andreas Stuhlmüller', 'John Salvatier', 'Daniel Filan']",2017-04-16T22:22:12Z,agentmodels,1-introduction.md,
92,https://agentmodels.org/chapters/5a-time-inconsistency.html,Modeling Agents with Probabilistic Programs,"['Owain Evans', 'Andreas Stuhlmüller', 'John Salvatier', 'Daniel Filan']",2019-08-24T14:52:08Z,agentmodels,5a-time-inconsistency.md,
111,https://agentmodels.org/chapters/8-guide-library.html,Modeling Agents with Probabilistic Programs,"['Owain Evans', 'Andreas Stuhlmüller', 'John Salvatier', 'Daniel Filan']",2016-12-04T12:14:18Z,agentmodels,8-guide-library.md,
126,https://agentmodels.org/chapters/5-biases-intro.html,Modeling Agents with Probabilistic Programs,"['Owain Evans', 'Andreas Stuhlmüller', 'John Salvatier', 'Daniel Filan']",2017-03-19T18:46:48Z,agentmodels,5-biases-intro.md,
139,https://agentmodels.org/chapters/3c-pomdp.html,Modeling Agents with Probabilistic Programs,"['Owain Evans', 'Andreas Stuhlmüller', 'John Salvatier', 'Daniel Filan']",2017-03-19T19:27:27Z,agentmodels,3c-pomdp.md,
157,https://agentmodels.org/chapters/3b-mdp-gridworld.html,Modeling Agents with Probabilistic Programs,"['Owain Evans', 'Andreas Stuhlmüller', 'John Salvatier', 'Daniel Filan']",2016-12-13T14:21:09Z,agentmodels,3b-mdp-gridworld.md,
173,https://agentmodels.org/chapters/5c-myopic.html,Modeling Agents with Probabilistic Programs,"['Owain Evans', 'Andreas Stuhlmüller', 'John Salvatier', 'Daniel Filan']",2017-03-19T18:54:16Z,agentmodels,5c-myopic.md,
192,https://agentmodels.org/chapters/5e-joint-inference.html,Modeling Agents with Probabilistic Programs,"['Owain Evans', 'Andreas Stuhlmüller', 'John Salvatier', 'Daniel Filan']",2017-03-19T18:54:16Z,agentmodels,5e-joint-inference.md,
209,https://agentmodels.org/chapters/2-webppl.html,Modeling Agents with Probabilistic Programs,"['Owain Evans', 'Andreas Stuhlmüller', 'John Salvatier', 'Daniel Filan']",2017-03-19T18:54:16Z,agentmodels,2-webppl.md,
224,https://agentmodels.org/chapters/5d-joint-inference.html,Modeling Agents with Probabilistic Programs,"['Owain Evans', 'Andreas Stuhlmüller', 'John Salvatier', 'Daniel Filan']",2019-08-29T10:20:19Z,agentmodels,5d-joint-inference.md,
237,https://agentmodels.org/chapters/meetup-2017.html,Modeling Agents with Probabilistic Programs,"['Owain Evans', 'Andreas Stuhlmüller', 'John Salvatier', 'Daniel Filan']",2017-03-30T16:34:45Z,agentmodels,meetup-2017.md,
259,https://agentmodels.org/chapters/3a-mdp.html,Modeling Agents with Probabilistic Programs,"['Owain Evans', 'Andreas Stuhlmüller', 'John Salvatier', 'Daniel Filan']",2017-06-16T23:10:13Z,agentmodels,3a-mdp.md,
269,https://uploads-ssl.webflow.com/614b70a71b9f71c9c240c7a7/617938781d1308004d007e2d_Garfinkel_Tour_Of_Emerging_Cryptographic_Technologies.pdf,A Tour of Emerging Cryptographic Technologies,['Ben Garfinkel'],,agisf,,
301,https://bounded-regret.ghost.io/ml-systems-will-have-weird-failure-modes-2/,ML Systems Will Have Weird Failure Modes,['Jacob Steinhardt'],2023-05-13T10:00:00Z,agisf,,
317,https://bounded-regret.ghost.io/emergent-deception-optimization/,Emergent Deception and Emergent Optimisation,['Jacob Steinhardt'],,agisf,,
339,https://bounded-regret.ghost.io/more-is-different-for-ai/,More Is Different for AI,['Jacob Steinhardt'],2023-05-13T12:00:00Z,agisf,,
361,https://docs.google.com/document/d/1DF31DIkwS9GONzmy1W3nuI9HRAwSKy8JcIbzKYXg-ic/edit#heading=h.kimhqj72mew4,Transformative AI and Compute - Reading List,"['Lennart Heim', 'Konstantin Pilz']",,agisf,,
382,https://openai.com/blog/governance-of-superintelligence,Governance of superintelligence,['OpenAI'],,agisf,,
413,https://arxiv.org/abs/2209.00626,"The alignment problem from a deep learning perspective (Sections 2, 3 and 4)","['Richard Ngo', 'Lawrence Chan', 'Sören Mindermann']",2023-05-13T11:00:00Z,agisf,,
448,https://cset.georgetown.edu/publication/the-main-resource-is-the-human/,The Main Resource is the Human,"['Micah Musser', 'Rebecca Gelles', 'Ronnie Kinoshita', 'Catherine Aiken', 'Andrew Lohn']",,agisf,,
477,https://cset.georgetown.edu/publication/the-semiconductor-supply-chain/,The Semiconductor Supply Chain,['Saif M. Khan'],,agisf,,
501,https://arxiv.org/pdf/2307.03718.pdf,Frontier AI Regulation: Managing Emerging Risks to Public Safety,['Markus Anderljung'],2023-05-13T09:00:00Z,agisf,,
527,https://www.fhi.ox.ac.uk/wp-content/uploads/China-AI-Syllabus-1.pdf,Syllabus: Artificial Intelligence and China,['Jeffrey Ding'],,agisf,,
557,https://openai.com/blog/how-should-ai-systems-behave,"""How should AI systems behave, and who should decide?""",['OpenAI'],,agisf,,
594,https://arxiv.org/abs/2305.15324,Model evaluation for extreme risks,"['Toby Shevlane', 'Sebastian Farquhar', 'Ben Garfinkel', 'Mary Phuong', 'Jess Whittlestone', 'Jade Leung', 'Daniel Kokotajlo', 'Nahema Marchal', 'Markus Anderljung', 'Noam Kolt', 'Lewis Ho', 'Divya Siddarth', 'Shahar Avin', 'Will Hawkins', 'Been Kim', 'Iason Gabriel', 'Vijay Bolina', 'Jack Clark', 'Yoshua Bengio', 'Paul Christiano', 'Allan Dafoe']",2023-05-24T16:38:43Z,agisf,,
623,https://cset.georgetown.edu/wp-content/uploads/CSET-AI-Triad-Report.pdf,The AI Triad and What It Means for National Security Strategy,['Ben Buchanan'],,agisf,,
646,https://arxiv.org/abs/2206.13353,Is Power-Seeking AI an Existential Risk?,['Joseph Carlsmith'],2023-05-13T10:00:00Z,agisf,,
683,https://bounded-regret.ghost.io/future-ml-systems-will-be-qualitatively-different/,Future ML Systems Will Be Qualitatively Different,['Jacob Steinhardt'],2023-05-13T12:00:00Z,agisf,,
701,https://www.fhi.ox.ac.uk/wp-content/uploads/Artificial-Intelligence-and-International-Security-Syllabus-public-1.pdf,Syllabus: Artificial Intelligence and International Security,['Remco Zwetsloot'],,agisf,,
731,https://medium.com/machine-learning-for-humans/supervised-learning-740383a2feab,"Machine Learning for Humans, Part 2.1: Supervised Learning",['Vishal Maini'],2023-05-13T13:00:00Z,agisf,,
752,https://docs.google.com/document/d/1iFszDulgpu1aZcq_aYFG7Nmcr5zgOhaeSwavOMk1akw/edit#heading=h.4whc9v22p7tb,Careers in alignment,['Richard Ngo'],2023-05-13T06:00:00Z,agisf,,
767,https://drive.google.com/file/d/1KewDov1taegTzrqJ4uurmJ2CJ0Y72EU3/view,The easy goal inference problem is still hard,['Paul Christiano'],2023-05-13T11:00:00Z,agisf,,
787,https://www.governance.ai/research-paper/towards-best-practices-in-agi-safety-and-governance,Towards Best Practices in AGI Safety and Governance,"['Jonas Schuett', 'Noemi Dreksler', 'Markus Anderljung', 'David McCaffary', 'Lennart Heim', 'Emma Bluemke', 'Ben Garfinkel']",,agisf,,
818,https://arxiv.org/abs/2205.10625,Least-to-Most Prompting Enables Complex Reasoning in Large Language Models,"['Denny Zhou', 'Nathanael Schärli', 'Le Hou', 'Jason Wei', 'Nathan Scales', 'Xuezhi Wang']",2023-05-13T09:00:00Z,agisf,,
832,https://cset.georgetown.edu/publication/ai-chips-what-they-are-and-why-they-matter/,AI Chips: What They Are and Why They Matter,['Saif M. Khan'],,agisf,,
854,https://www.openphilanthropy.org/research/12-tentative-ideas-for-us-ai-policy/,12 Tentative Ideas for US AI Policy,['Luke Muehlhauser'],,agisf,,
897,https://arxiv.org/abs/2110.03605,Robust Feature-Level Adversaries are Interpretability Tools,['Stephen Casper'],2023-05-13T08:00:00Z,agisf,,
923,https://openai.com/charter/,OpenAI Charter,['OpenAI'],,agisf,,
945,https://openai.com/research/summarizing-books,Summarizing books with human feedback,['Wu et al.'],2023-05-13T09:00:00Z,agisf,,
962,https://docs.google.com/document/d/1CDj_sdTzZGP9Tpppy7PdaPs_4acueuNxTjMnAiCJJKs/edit,Transformative AI Governance: A Literature Review,['Matthijs Maas'],,agisf,,
978,https://bounded-regret.ghost.io/thought-experiments-provide-a-third-anchor/,Thought Experiments Provide a Third Anchor,['Jacob Steinhardt'],2023-05-13T10:00:00Z,agisf,,
1005,https://carnegieendowment.org/2023/02/14/lessons-from-world-s-two-experiments-in-ai-governance-pub-89035,Lessons From the World’s Two Experiments in AI Governance,"[""Matt O'Shaughnessy"", 'Matt Sheehan']",,agisf,,
1034,https://arxiv.org/abs/2307.03718,Frontier AI Regulation: Managing Emerging Risks to Public Safety,['Markus Anderljung'],2023-07-06T17:03:25Z,agisf,,
1065,https://arxiv.org/abs/2303.11341,What does it take to catch a Chinchilla? Verifying Rules on Large-Scale Neural Network Training via Compute Monitoring,['Yonadav Shavit'],2023-03-20T13:50:05Z,agisf,,
1092,https://arxiv.org/pdf/1610.01644.pdf,Understanding intermediate layers using linear classifier probes,"['Guillaume Alain', 'Yoshua Bengio']",2023-05-13T07:00:00Z,agisf,,
1113,https://drive.google.com/file/d/1JoNeUDtC_PWZUHXIeuYCU94ehLkSvWHO/view?usp=sharing,How technical safety standards could promote TAI safety,"[""Cullen O'Keefe"", 'Jade Leung', 'Markus Anderljung']",,agisf,,
1137,https://aisafety.info?state=8348,What is Dylan Hadfield-Menell's thesis on?,['Stampy aisafety.info'],2023-08-11T02:33:41Z,aisafety.info,,
1160,https://aisafety.info?state=6481,Would donating small amounts to AI safety organizations make any significant difference?,['Stampy aisafety.info'],2023-08-10T22:01:20Z,aisafety.info,,
1177,https://aisafety.info?state=1250,What is Obelisk's research strategy?,['Stampy aisafety.info'],2023-08-05T08:13:36Z,aisafety.info,,
1190,https://aisafety.info?state=8357,What does MIRI think about technical alignment?,['Stampy aisafety.info'],2023-06-06T23:31:44Z,aisafety.info,,
1206,https://aisafety.info?state=7626,Are there any AI alignment projects which governments could usefully put a very large amount of resources into?,['Stampy aisafety.info'],2023-08-10T23:12:57Z,aisafety.info,,
1222,https://aisafety.info?state=8342,What is David Krueger working on?,['Stampy aisafety.info'],2023-08-11T02:30:11Z,aisafety.info,,
1238,https://aisafety.info?state=95LE,Why would a misaligned superintelligence kill everyone in the world?,['Stampy aisafety.info'],2023-08-04T06:03:29Z,aisafety.info,,
1257,https://aisafety.info?state=6320,How can I update my emotional state regarding the urgency of AI safety?,['Stampy aisafety.info'],2023-08-04T08:10:12Z,aisafety.info,,
1268,https://aisafety.info?state=8EL6,What is deceptive alignment?,['Stampy aisafety.info'],2023-08-11T19:38:12Z,aisafety.info,,
1283,https://aisafety.info?state=6449,Would it improve the safety of quantilizers to cut off the top few percent of the distribution?,['Stampy aisafety.info'],2023-08-11T15:25:43Z,aisafety.info,,
1292,https://aisafety.info?state=8SIU,What is reward hacking?,['Stampy aisafety.info'],2023-11-01T13:03:33Z,aisafety.info,,
1304,https://aisafety.info?state=7727,"Can we get AGI by scaling up architectures similar to current ones, or are we missing key insights?",['Stampy aisafety.info'],2023-08-04T08:45:56Z,aisafety.info,,
1313,https://aisafety.info?state=6119,Can you give an AI a goal which involves “minimally impacting the world”?,['Stampy aisafety.info'],2023-08-23T17:53:17Z,aisafety.info,,
1322,https://aisafety.info?state=6982,Why might we expect a superintelligence to be hostile by default?,['Stampy aisafety.info'],2023-08-20T18:45:17Z,aisafety.info,,
1333,https://aisafety.info?state=85EC,What is Conjecture's research strategy?,['Stampy aisafety.info'],2023-08-18T20:23:59Z,aisafety.info,,
1361,https://aisafety.info?state=8QH5,Would a slowdown in AI capabilities development decrease existential risk?,['Stampy aisafety.info'],2023-07-13T21:11:06Z,aisafety.info,,
1394,https://aisafety.info?state=7778,"What is ""evidential decision theory""?",['Stampy aisafety.info'],2023-06-09T20:21:17Z,aisafety.info,,
1407,https://aisafety.info?state=8KGR,What are the different AI Alignment / Safety organizations and academics researching?,['Stampy aisafety.info'],2023-08-05T05:31:44Z,aisafety.info,,
1434,https://aisafety.info?state=7523,Why might a maximizing AI cause bad outcomes?,['Stampy aisafety.info'],2023-08-18T08:48:59Z,aisafety.info,,
1448,https://aisafety.info?state=5853,"Is it possible to code into an AI to avoid all the ways a given task could go wrong, and would it be dangerous to try that?",['Stampy aisafety.info'],2023-08-08T14:19:14Z,aisafety.info,,
1463,https://aisafety.info?state=8U2M,What should I do with my machine learning research idea for AI alignment?,['Stampy aisafety.info'],2023-08-04T10:35:39Z,aisafety.info,,
1487,https://aisafety.info?state=7820,What are the ethical challenges related to whole brain emulation?,['Stampy aisafety.info'],2023-08-08T14:56:45Z,aisafety.info,,
1517,https://aisafety.info?state=8424,What is neural network modularity?,['Stampy aisafety.info'],2023-08-05T04:05:42Z,aisafety.info,,
1528,https://aisafety.info?state=89ZQ,Are there any detailed example stories of what unaligned AGI would look like?,['Stampy aisafety.info'],2023-08-10T03:20:27Z,aisafety.info,,
1558,https://aisafety.info?state=5633,When do experts think human-level AI will be created?,['Stampy aisafety.info'],2023-08-19T18:49:02Z,aisafety.info,,
1577,https://aisafety.info?state=6220,Wouldn't a superintelligence be smart enough to know right from wrong?,['Stampy aisafety.info'],2023-08-08T19:12:18Z,aisafety.info,,
1591,https://aisafety.info?state=8326,What projects are CAIS working on?,['Stampy aisafety.info'],2023-08-05T08:10:13Z,aisafety.info,,
1610,https://aisafety.info?state=89ZS,What is reinforcement learning (RL)?,['Stampy aisafety.info'],2023-06-22T21:12:54Z,aisafety.info,,
1619,https://aisafety.info?state=6436,What is Stampy's AI Safety Info?,['Stampy aisafety.info'],2023-06-22T21:08:24Z,aisafety.info,,
1631,https://aisafety.info?state=90PK,What is a singleton?,['Stampy aisafety.info'],2023-05-20T22:50:59Z,aisafety.info,,
1641,https://aisafety.info?state=2402,How is AGI different from current AI?,['Stampy aisafety.info'],2023-08-16T14:27:38Z,aisafety.info,,
1654,https://aisafety.info?state=8H0O,Wouldn't a superintelligence be slowed down by the need to do experiments in the physical world?,['Stampy aisafety.info'],2023-07-14T07:35:05Z,aisafety.info,,
1671,https://aisafety.info?state=7060,"At a high level, what is the challenge of alignment that we must meet to secure a good future?",['Stampy aisafety.info'],2023-08-05T06:17:25Z,aisafety.info,,
1686,https://aisafety.info?state=8368,What is OpenAI's alignment strategy?,['Stampy aisafety.info'],2023-08-23T07:12:12Z,aisafety.info,,
1698,https://aisafety.info?state=7333,Would AI alignment be hard with deep learning?,['Stampy aisafety.info'],2023-08-04T00:51:45Z,aisafety.info,,
1718,https://aisafety.info?state=6628,What are OpenAI Codex and GitHub Copilot?,['Stampy aisafety.info'],2023-08-04T07:30:27Z,aisafety.info,,
1734,https://aisafety.info?state=967I,Aren't AI existential risk concerns just an example of Pascal's mugging?,['Stampy aisafety.info'],2023-06-14T22:56:10Z,aisafety.info,,
1750,https://aisafety.info?state=9USW,What is Vingean uncertainty?,['Stampy aisafety.info'],2023-08-10T15:53:42Z,aisafety.info,,
1761,https://aisafety.info?state=6272,Is there a danger in anthropomorphizing AIs and trying to understand them in human terms?,['Stampy aisafety.info'],2023-08-19T23:49:15Z,aisafety.info,,
1773,https://aisafety.info?state=7491,"What is a ""value handshake""?",['Stampy aisafety.info'],2023-08-04T14:32:41Z,aisafety.info,,
1786,https://aisafety.info?state=6182,What are the potential benefits of AI as it grows increasingly sophisticated?,['Stampy aisafety.info'],2023-08-08T13:13:45Z,aisafety.info,,
1802,https://aisafety.info?state=7777,What are the different versions of decision theory?,['Stampy aisafety.info'],2023-08-23T01:58:50Z,aisafety.info,,
1815,https://aisafety.info?state=9048,Why should someone who is religious worry about AI existential risk?,['Stampy aisafety.info'],2023-08-05T05:40:18Z,aisafety.info,,
1831,https://aisafety.info?state=8PYV,What is a shoggoth?,['Stampy aisafety.info'],2023-08-09T15:28:43Z,aisafety.info,,
1849,https://aisafety.info?state=8KGQ,What are polysemantic neurons?,['Stampy aisafety.info'],2023-08-05T05:29:13Z,aisafety.info,,
1866,https://aisafety.info?state=5950,"Are Google, OpenAI, etc. aware of the risk?",['Stampy aisafety.info'],2023-08-04T08:55:24Z,aisafety.info,,
1882,https://aisafety.info?state=MO5Y,But won't we just design AI to be helpful?,['Stampy aisafety.info'],2023-10-03T01:39:37Z,aisafety.info,,
1898,https://aisafety.info?state=8503,What are the main sources of AI existential risk?,['Stampy aisafety.info'],2023-08-15T12:03:52Z,aisafety.info,,
1925,https://aisafety.info?state=7810,"What is ""HCH""?",['Stampy aisafety.info'],2023-07-31T16:07:35Z,aisafety.info,,
1934,https://aisafety.info?state=8390,Do AIs suffer?,['Stampy aisafety.info'],2023-07-04T03:20:32Z,aisafety.info,,
1946,https://aisafety.info?state=6478,What evidence do experts usually base their timeline predictions on?,['Stampy aisafety.info'],2023-08-07T20:49:12Z,aisafety.info,,
1970,https://aisafety.info?state=6990,Can we test an AI to make sure that it’s not going to take over and do harmful things after it achieves superintelligence?,['Stampy aisafety.info'],2023-06-07T22:49:01Z,aisafety.info,,
1983,https://aisafety.info?state=8NYD,Isn't the real concern bias?,['Stampy aisafety.info'],2023-08-20T19:59:29Z,aisafety.info,,
2006,https://aisafety.info?state=8U2V,What should I do with my idea for helping with AI alignment?,['Stampy aisafety.info'],2023-06-07T16:28:55Z,aisafety.info,,
2027,https://aisafety.info?state=6984,Wouldn't a superintelligence be smart enough not to make silly mistakes in its comprehension of our instructions?,['Stampy aisafety.info'],2023-08-22T16:29:07Z,aisafety.info,,
2037,https://aisafety.info?state=7673,"What is ""Do what I mean""?",['Stampy aisafety.info'],2023-08-09T16:42:19Z,aisafety.info,,
2053,https://aisafety.info?state=8158,How might we get from artificial general intelligence to a superintelligent system?,['Stampy aisafety.info'],2023-08-16T14:51:01Z,aisafety.info,,
2065,https://aisafety.info?state=3486,Could AI alignment research be bad? How?,['Stampy aisafety.info'],2023-08-04T10:17:45Z,aisafety.info,,
2094,https://aisafety.info?state=88FN,What is reinforcement learning from human feedback (RLHF)?,['Stampy aisafety.info'],2023-07-25T00:31:20Z,aisafety.info,,
2118,https://aisafety.info?state=7594,"What are ""human values""?",['Stampy aisafety.info'],2023-08-05T02:14:22Z,aisafety.info,,
2127,https://aisafety.info?state=8U2Y,I’m interested in providing significant financial support to AI alignment. How should I go about this?,['Stampy aisafety.info'],2023-08-10T22:11:57Z,aisafety.info,,
2151,https://aisafety.info?state=7750,What are scaling laws?,['Stampy aisafety.info'],2023-08-15T13:00:19Z,aisafety.info,,
2164,https://aisafety.info?state=5844,Is it possible to block an AI from doing certain things on the Internet?,['Stampy aisafety.info'],2023-08-10T02:05:12Z,aisafety.info,,
2177,https://aisafety.info?state=7580,"What are ""pivotal acts""?",['Stampy aisafety.info'],2023-08-10T13:08:41Z,aisafety.info,,
2193,https://aisafety.info?state=7786,"What is ""hedonium""?",['Stampy aisafety.info'],2023-08-15T14:53:19Z,aisafety.info,,
2207,https://aisafety.info?state=7772,What are some of the leading AI capabilities organizations?,['Stampy aisafety.info'],2023-08-10T14:32:45Z,aisafety.info,,
2228,https://aisafety.info?state=7636,"If I only care about helping people alive today, does AI safety still matter?",['Stampy aisafety.info'],2023-08-22T15:22:19Z,aisafety.info,,
2237,https://aisafety.info?state=8U2S,How can I use a background in the social sciences to help with AI alignment?,['Stampy aisafety.info'],2023-07-10T16:45:41Z,aisafety.info,,
2255,https://aisafety.info?state=8333,What is the Center on Long-Term Risk (CLR) focused on?,['Stampy aisafety.info'],2023-08-20T11:14:19Z,aisafety.info,,
2270,https://aisafety.info?state=8364,What are Scott Garrabrant and Abram Demski working on?,['Stampy aisafety.info'],2023-08-05T05:55:11Z,aisafety.info,,
2282,https://aisafety.info?state=MNAK,But why would misaligned AI pose a threat that we can’t deal with?,['Stampy aisafety.info'],2023-10-03T09:59:05Z,aisafety.info,,
2300,https://aisafety.info?state=8E41,Will AI be able to think faster than humans?,['Stampy aisafety.info'],2023-08-21T17:06:49Z,aisafety.info,,
2309,https://aisafety.info?state=9XPJ,"What is the ""sharp left turn""?",['Stampy aisafety.info'],2023-10-02T17:29:32Z,aisafety.info,,
2324,https://aisafety.info?state=2400,Why would an AI do bad things?,['Stampy aisafety.info'],2023-09-01T19:05:48Z,aisafety.info,,
2340,https://aisafety.info?state=6347,"What is ""transformative AI""?",['Stampy aisafety.info'],2023-08-04T10:02:00Z,aisafety.info,,
2361,https://aisafety.info?state=6703,I want to work on AI alignment. How can I get funding?,['Stampy aisafety.info'],2023-08-15T18:22:43Z,aisafety.info,,
2379,https://aisafety.info?state=8XV7,What is outer alignment?,['Stampy aisafety.info'],2023-08-16T18:31:56Z,aisafety.info,,
2396,https://aisafety.info?state=6205,"What is the ""control problem""?",['Stampy aisafety.info'],2023-08-19T09:07:40Z,aisafety.info,,
2405,https://aisafety.info?state=A3MU,Wouldn't AIs need to have a power-seeking drive to pose a serious risk?,['Stampy aisafety.info'],2023-08-09T10:05:02Z,aisafety.info,,
2435,https://aisafety.info?state=6988,"Once we notice that a superintelligence is trying to take over the world, can’t we turn it off, or reprogram it?",['Stampy aisafety.info'],2023-08-23T02:25:36Z,aisafety.info,,
2448,https://aisafety.info?state=6966,Why does AI takeoff speed matter?,['Stampy aisafety.info'],2023-08-09T01:51:51Z,aisafety.info,,
2458,https://aisafety.info?state=MLJV,What is the EU AI Act?,['Stampy aisafety.info'],2023-08-17T10:46:53Z,aisafety.info,,
2485,https://aisafety.info?state=897I,What is instrumental convergence?,['Stampy aisafety.info'],2023-08-24T15:33:46Z,aisafety.info,,
2498,https://aisafety.info?state=85E0,What are some exercises and projects I can try?,['Stampy aisafety.info'],2023-08-05T04:55:33Z,aisafety.info,,
2516,https://aisafety.info?state=8AV6,How can an AGI be smarter than all of humanity?,['Stampy aisafety.info'],2023-08-21T17:11:44Z,aisafety.info,,
2537,https://aisafety.info?state=3119,Why can't we just turn the AI off if it starts to misbehave?,['Stampy aisafety.info'],2023-08-20T14:51:28Z,aisafety.info,,
2555,https://aisafety.info?state=8U2W,"How can I work on helping AI alignment researchers be more effective, e.g. as a coach?",['Stampy aisafety.info'],2023-06-07T19:39:35Z,aisafety.info,,
2566,https://aisafety.info?state=7729,Will there be a discontinuity in AI capabilities?,['Stampy aisafety.info'],2023-08-04T17:01:34Z,aisafety.info,,
2588,https://aisafety.info?state=7784,Could we tell the AI to do what's morally right?,['Stampy aisafety.info'],2023-08-07T07:36:37Z,aisafety.info,,
2600,https://aisafety.info?state=1001,What about other risks from AI?,['Stampy aisafety.info'],2023-08-23T18:03:37Z,aisafety.info,,
2609,https://aisafety.info?state=8185,What is Goodhart's law?,['Stampy aisafety.info'],2023-08-09T10:20:17Z,aisafety.info,,
2634,https://aisafety.info?state=8517,What might an international treaty on the development of AGI look like?,['Stampy aisafety.info'],2023-08-17T19:49:47Z,aisafety.info,,
2648,https://aisafety.info?state=87AG,What is corrigibility?,['Stampy aisafety.info'],2023-08-23T07:09:27Z,aisafety.info,,
2660,https://aisafety.info?state=8AF1,What is an alignment tax?,['Stampy aisafety.info'],2023-08-24T15:43:11Z,aisafety.info,,
2673,https://aisafety.info?state=6410,Isn't the real concern AI being misused by terrorists or other bad actors?,['Stampy aisafety.info'],2023-08-23T18:48:38Z,aisafety.info,,
2688,https://aisafety.info?state=9049,What is Eliciting Latent Knowledge (ELK)?,['Stampy aisafety.info'],2023-08-23T22:21:48Z,aisafety.info,,
2697,https://aisafety.info?state=8160,"What are ""mesa-optimizers""?",['Stampy aisafety.info'],2023-08-24T14:38:53Z,aisafety.info,,
2709,https://aisafety.info?state=8350,What does the scheme Externalized Reasoning Oversight involve?,['Stampy aisafety.info'],2023-08-10T16:42:55Z,aisafety.info,,
2728,https://aisafety.info?state=7715,How likely is extinction from superintelligent AI?,['Stampy aisafety.info'],2023-08-16T21:51:10Z,aisafety.info,,
2745,https://aisafety.info?state=9NRR,"What is a ""polytope"" in a neural network?",['Stampy aisafety.info'],2023-07-21T08:50:43Z,aisafety.info,,
2759,https://aisafety.info?state=7565,Will we ever build a superintelligence?,['Stampy aisafety.info'],2023-08-22T18:09:52Z,aisafety.info,,
2771,https://aisafety.info?state=7602,Is large-scale automated AI persuasion and propaganda a serious concern?,['Stampy aisafety.info'],2023-08-04T07:39:16Z,aisafety.info,,
2792,https://aisafety.info?state=8340,What was Refine?,['Stampy aisafety.info'],2023-08-09T01:21:12Z,aisafety.info,,
2801,https://aisafety.info?state=6274,Is AI alignment possible?,['Stampy aisafety.info'],2023-06-15T12:38:11Z,aisafety.info,,
2810,https://aisafety.info?state=5842,How likely is it that an AI would pretend to be a human to further its goals?,['Stampy aisafety.info'],2023-08-16T14:52:24Z,aisafety.info,,
2825,https://aisafety.info?state=MQSD,What are the key problems in AI governance?,['Stampy aisafety.info'],2023-10-03T02:03:45Z,aisafety.info,,
2855,https://aisafety.info?state=9OGZ,New to AI safety? Start here.,['Stampy aisafety.info'],2023-08-23T14:53:36Z,aisafety.info,,
2870,https://aisafety.info?state=6188,Isn’t AI just a tool like any other? Won’t it just do what we tell it to?,['Stampy aisafety.info'],2023-08-09T16:23:01Z,aisafety.info,,
2887,https://aisafety.info?state=8AF5,What are the differences between subagents and mesa-optimizers?,['Stampy aisafety.info'],2023-07-18T09:40:35Z,aisafety.info,,
2900,https://aisafety.info?state=6953,Do people seriously worry about existential risk from AI?,['Stampy aisafety.info'],2023-08-19T18:50:44Z,aisafety.info,,
2913,https://aisafety.info?state=6479,What are some AI alignment research agendas currently being pursued?,['Stampy aisafety.info'],2023-08-08T15:23:31Z,aisafety.info,,
2915,https://aisafety.info?state=86WT,Won't humans be able to beat an unaligned AI since we have a huge advantage in numbers?,['Stampy aisafety.info'],2023-08-05T05:01:30Z,aisafety.info,,
2936,https://aisafety.info?state=8PYW,What is inner alignment?,['Stampy aisafety.info'],2023-08-21T15:23:17Z,aisafety.info,,
2957,https://aisafety.info?state=6920,What can we expect the motivations of a superintelligent machine to be?,['Stampy aisafety.info'],2023-08-23T07:08:48Z,aisafety.info,,
2970,https://aisafety.info?state=9358,What is compute?,['Stampy aisafety.info'],2023-08-01T09:30:51Z,aisafety.info,,
2987,https://aisafety.info?state=7782,"What is ""agent foundations""?",['Stampy aisafety.info'],2023-08-15T02:41:42Z,aisafety.info,,
2997,https://aisafety.info?state=6178,What approaches are AI alignment organizations working on?,['Stampy aisafety.info'],2023-08-08T15:23:11Z,aisafety.info,,
3020,https://aisafety.info?state=7749,How does Redwood Research do adversarial training?,['Stampy aisafety.info'],2023-08-05T02:38:04Z,aisafety.info,,
3040,https://aisafety.info?state=89LL,What are existential risks (x-risks)?,['Stampy aisafety.info'],2023-08-23T02:10:18Z,aisafety.info,,
3051,https://aisafety.info?state=7590,What actions can I take in under five minutes to contribute to the cause of AI safety?,['Stampy aisafety.info'],2023-08-18T17:27:17Z,aisafety.info,,
3065,https://aisafety.info?state=5851,How quickly could an AI go from the first indications of problems to an unrecoverable disaster?,['Stampy aisafety.info'],2023-08-08T14:59:31Z,aisafety.info,,
3077,https://aisafety.info?state=8359,What does Evan Hubinger think of Deception + Inner Alignment?,['Stampy aisafety.info'],2023-08-15T01:02:14Z,aisafety.info,,
3093,https://aisafety.info?state=7071,"What is ""AI takeoff""?",['Stampy aisafety.info'],2023-08-16T13:40:37Z,aisafety.info,,
3102,https://aisafety.info?state=94D9,"What is the ""Bitter Lesson""?",['Stampy aisafety.info'],2023-08-10T15:49:04Z,aisafety.info,,
3121,https://aisafety.info?state=7640,How and why should I form my own views about AI safety?,['Stampy aisafety.info'],2023-08-05T02:40:08Z,aisafety.info,,
3133,https://aisafety.info?state=6592,What are brain-computer interfaces?,['Stampy aisafety.info'],2023-08-23T07:22:32Z,aisafety.info,,
3145,https://aisafety.info?state=8C7T,"What are the ""no free lunch"" theorems?",['Stampy aisafety.info'],2023-08-05T05:12:46Z,aisafety.info,,
3158,https://aisafety.info?state=7651,Where can I find mentorship and advice for becoming a researcher?,['Stampy aisafety.info'],2023-08-04T09:11:29Z,aisafety.info,,
3179,https://aisafety.info?state=6228,"We’re going to merge with the machines so this will never be a problem, right?",['Stampy aisafety.info'],2023-05-12T08:31:15Z,aisafety.info,,
3190,https://aisafety.info?state=8349,What is Encultured working on?,['Stampy aisafety.info'],2023-08-15T18:18:01Z,aisafety.info,,
3205,https://aisafety.info?state=6586,How likely is an intelligence explosion?,['Stampy aisafety.info'],2023-08-24T15:49:11Z,aisafety.info,,
3220,https://aisafety.info?state=3485,What are accident and misuse risks?,['Stampy aisafety.info'],2023-09-01T06:58:28Z,aisafety.info,,
3241,https://aisafety.info?state=6605,How could an intelligence explosion be useful?,['Stampy aisafety.info'],2023-08-23T07:32:07Z,aisafety.info,,
3250,https://aisafety.info?state=7605,What safety problems are associated with whole brain emulation?,['Stampy aisafety.info'],2023-08-07T02:32:57Z,aisafety.info,,
3283,https://aisafety.info?state=8U2R,How can I work on AGI safety outreach in academia and among experts?,['Stampy aisafety.info'],2023-07-18T04:35:13Z,aisafety.info,,
3298,https://aisafety.info?state=8FJZ,How is red teaming used in AI alignment?,['Stampy aisafety.info'],2023-08-07T14:22:37Z,aisafety.info,,
3319,https://aisafety.info?state=92JB,What are the power-seeking theorems?,['Stampy aisafety.info'],2023-08-23T14:08:55Z,aisafety.info,,
3331,https://aisafety.info?state=8E3Z,Can't we limit damage from AI systems in the same ways we limit damage from companies?,['Stampy aisafety.info'],2023-07-10T15:40:07Z,aisafety.info,,
3350,https://aisafety.info?state=7748,"What would a ""warning shot"" look like?",['Stampy aisafety.info'],2023-08-16T18:24:08Z,aisafety.info,,
3362,https://aisafety.info?state=87O6,What are some arguments why AI safety might be less important?,['Stampy aisafety.info'],2023-09-17T20:09:45Z,aisafety.info,,
3378,https://aisafety.info?state=8AEQ,What is behavioral cloning?,['Stampy aisafety.info'],2023-08-05T05:09:02Z,aisafety.info,,
3395,https://aisafety.info?state=6350,"What is ""whole brain emulation""?",['Stampy aisafety.info'],2023-08-23T07:21:07Z,aisafety.info,,
3413,https://aisafety.info?state=MEME,AI Safety Memes Wiki,['Stampy aisafety.info'],2023-07-22T19:27:28Z,aisafety.info,,
3425,https://aisafety.info?state=6590,"What is ""biological cognitive enhancement""?",['Stampy aisafety.info'],2023-08-23T07:33:05Z,aisafety.info,,
3435,https://aisafety.info?state=8486,What is AI safety?,['Stampy aisafety.info'],2023-08-23T16:34:49Z,aisafety.info,,
3454,https://aisafety.info?state=6601,"Might an ""intelligence explosion"" never occur?",['Stampy aisafety.info'],2023-07-11T22:36:25Z,aisafety.info,,
3480,https://aisafety.info?state=6968,Why might a superintelligent AI be dangerous?,['Stampy aisafety.info'],2023-08-23T07:19:08Z,aisafety.info,,
3494,https://aisafety.info?state=8U2K,Who should I talk to about my non-research AI alignment coding project idea?,['Stampy aisafety.info'],2023-08-05T05:34:59Z,aisafety.info,,
3510,https://aisafety.info?state=7616,What are the leading theories in moral philosophy and which of them might be technically the easiest to encode into an AI?,['Stampy aisafety.info'],2023-08-09T01:47:21Z,aisafety.info,,
3524,https://aisafety.info?state=897J,What is Iterated Distillation and Amplification (IDA)?,['Stampy aisafety.info'],2023-08-16T17:41:27Z,aisafety.info,,
3541,https://aisafety.info?state=8UMA,"How can I do conceptual, mathematical, or philosophical work on AI alignment?",['Stampy aisafety.info'],2023-08-04T06:46:39Z,aisafety.info,,
3560,https://aisafety.info?state=8365,What is Infra-Bayesianism?,['Stampy aisafety.info'],2023-08-19T16:08:37Z,aisafety.info,,
3582,https://aisafety.info?state=6174,"Why can't we just make a ""child AI"" and raise it?",['Stampy aisafety.info'],2023-08-04T03:24:21Z,aisafety.info,,
3595,https://aisafety.info?state=5611,Could we program an AI to automatically shut down if it starts doing things we don’t want it to?,['Stampy aisafety.info'],2023-08-18T13:24:29Z,aisafety.info,,
3605,https://aisafety.info?state=6176,Why can’t we just “put the AI in a box” so that it can’t influence the outside world?,['Stampy aisafety.info'],2023-08-10T02:05:46Z,aisafety.info,,
3622,https://aisafety.info?state=6713,I’d like to get deeper into the AI alignment literature. Where should I look?,['Stampy aisafety.info'],2023-08-04T07:33:33Z,aisafety.info,,
3634,https://aisafety.info?state=8378,What is John Wentworth's research agenda?,['Stampy aisafety.info'],2023-06-08T16:41:06Z,aisafety.info,,
3658,https://aisafety.info?state=8U2P,How can I do organizational or operations work around AI alignment?,['Stampy aisafety.info'],2023-08-04T10:38:01Z,aisafety.info,,
3671,https://aisafety.info?state=8QZF,What is tool AI?,['Stampy aisafety.info'],2023-07-06T23:19:18Z,aisafety.info,,
3694,https://aisafety.info?state=8201,What is AI Safety via Debate?,['Stampy aisafety.info'],2023-05-10T20:05:37Z,aisafety.info,,
3704,https://aisafety.info?state=8U2J,How can I work toward AI alignment as a software engineer?,['Stampy aisafety.info'],2023-06-07T19:24:48Z,aisafety.info,,
3724,https://aisafety.info?state=8QZH,What is a subagent?,['Stampy aisafety.info'],2023-06-08T10:10:48Z,aisafety.info,,
3734,https://aisafety.info?state=7754,What are some helpful AI policy resources?,['Stampy aisafety.info'],2023-08-17T13:14:33Z,aisafety.info,,
3759,https://aisafety.info?state=7629,"What could a superintelligent AI do, and what would be physically impossible even for it?",['Stampy aisafety.info'],2023-08-04T10:15:04Z,aisafety.info,,
3768,https://aisafety.info?state=6964,Is expecting large returns from AI self-improvement just following an exponential trend line off a cliff?,['Stampy aisafety.info'],2023-08-05T06:04:35Z,aisafety.info,,
3777,https://aisafety.info?state=6192,Why should we prepare for human-level AI technology now rather than decades down the line when it’s closer?,['Stampy aisafety.info'],2023-08-20T14:02:22Z,aisafety.info,,
3788,https://aisafety.info?state=935A,What is adversarial training?,['Stampy aisafety.info'],2023-08-05T13:15:48Z,aisafety.info,,
3803,https://aisafety.info?state=6194,Is AI safety about systems becoming malevolent or conscious and turning on us?,['Stampy aisafety.info'],2023-08-16T18:34:18Z,aisafety.info,,
3816,https://aisafety.info?state=7148,Why don't we just not build AGI if it's so dangerous?,['Stampy aisafety.info'],2023-08-12T05:54:52Z,aisafety.info,,
3831,https://aisafety.info?state=7783,What are astronomical suffering risks (s-risks)?,['Stampy aisafety.info'],2023-08-20T11:19:09Z,aisafety.info,,
3850,https://aisafety.info?state=7736,"How likely is it that governments will play a significant role? What role would be desirable, if any?",['Stampy aisafety.info'],2023-08-04T07:53:59Z,aisafety.info,,
3867,https://aisafety.info?state=8XBK,How does DeepMind do adversarial training?,['Stampy aisafety.info'],2023-06-06T20:05:57Z,aisafety.info,,
3886,https://aisafety.info?state=9B85,Isn't the real concern misuse?,['Stampy aisafety.info'],2023-08-23T07:06:21Z,aisafety.info,,
3902,https://aisafety.info?state=7648,"What is the ""windfall clause""?",['Stampy aisafety.info'],2023-08-16T18:01:19Z,aisafety.info,,
3912,https://aisafety.info?state=904J,"What is ""Constitutional AI""?",['Stampy aisafety.info'],2023-08-16T21:19:18Z,aisafety.info,,
3923,https://aisafety.info?state=8428,What is the difference between inner and outer alignment?,['Stampy aisafety.info'],2023-08-04T06:09:02Z,aisafety.info,,
3940,https://aisafety.info?state=6184,What is the general nature of the concern about AI alignment?,['Stampy aisafety.info'],2023-08-04T09:44:43Z,aisafety.info,,
3949,https://aisafety.info?state=8314,What is Aligned AI / Stuart Armstrong working on?,['Stampy aisafety.info'],2023-08-05T08:04:49Z,aisafety.info,,
3973,https://aisafety.info?state=6939,"What is ""coherent extrapolated volition (CEV)""?",['Stampy aisafety.info'],2023-08-04T06:24:52Z,aisafety.info,,
3987,https://aisafety.info?state=6571,"How might non-agentic GPT-style AI cause an ""intelligence explosion"" or otherwise contribute to existential risk?",['Stampy aisafety.info'],2023-08-15T18:42:45Z,aisafety.info,,
4003,https://aisafety.info?state=7187,"If we solve alignment, are we sure of a good future?",['Stampy aisafety.info'],2023-06-07T23:19:31Z,aisafety.info,,
4014,https://aisafety.info?state=7632,Is the UN concerned about existential risk from AI?,['Stampy aisafety.info'],2023-08-23T07:31:13Z,aisafety.info,,
4027,https://aisafety.info?state=9FQK,How can LLMs be understood as “simulators”?,['Stampy aisafety.info'],2023-07-26T18:06:22Z,aisafety.info,,
4044,https://aisafety.info?state=5632,What is an agent?,['Stampy aisafety.info'],2023-08-24T17:10:01Z,aisafety.info,,
4058,https://aisafety.info?state=85E4,What is Redwood Research's strategy?,['Stampy aisafety.info'],2023-08-08T15:04:38Z,aisafety.info,,
4082,https://aisafety.info?state=6275,How doomed is humanity?,['Stampy aisafety.info'],2023-07-09T19:34:04Z,aisafety.info,,
4093,https://aisafety.info?state=6569,Any AI will be a computer program. Why wouldn't it just do what it's programmed to do?,['Stampy aisafety.info'],2023-08-02T23:08:32Z,aisafety.info,,
4104,https://aisafety.info?state=8TJV,Want to help with AI safety? Get involved!,['Stampy aisafety.info'],2023-08-20T11:27:35Z,aisafety.info,,
4121,https://aisafety.info?state=6627,What is GPT-3?,['Stampy aisafety.info'],2023-07-14T08:05:06Z,aisafety.info,,
4131,https://aisafety.info?state=7763,What subjects should I study at university to prepare myself for alignment research?,['Stampy aisafety.info'],2023-08-19T07:29:19Z,aisafety.info,,
4149,https://aisafety.info?state=7757,"What is the ""long reflection""?",['Stampy aisafety.info'],2023-08-23T02:26:16Z,aisafety.info,,
4162,https://aisafety.info?state=5642,Could AI have emotions?,['Stampy aisafety.info'],2023-08-14T21:40:48Z,aisafety.info,,
4172,https://aisafety.info?state=6306,What is an intelligence explosion?,['Stampy aisafety.info'],2023-08-23T07:08:18Z,aisafety.info,,
4188,https://aisafety.info?state=6708,Where can I find people to talk to about AI alignment?,['Stampy aisafety.info'],2023-08-04T03:26:41Z,aisafety.info,,
4201,https://aisafety.info?state=6714,"What is the difference between AI safety, AI alignment, AI control, friendly AI, AI ethics, AI existential safety, and AGI safety?",['Stampy aisafety.info'],2023-08-19T21:31:21Z,aisafety.info,,
4212,https://aisafety.info?state=9QKC,What is least-to-most prompting?,['Stampy aisafety.info'],2023-09-09T00:56:31Z,aisafety.info,,
4223,https://aisafety.info?state=6474,I want to help out AI alignment without necessarily making major life changes. What are some simple things I can do to contribute?,['Stampy aisafety.info'],2023-08-16T22:59:13Z,aisafety.info,,
4248,https://aisafety.info?state=6607,"How might an ""intelligence explosion"" be dangerous?",['Stampy aisafety.info'],2023-08-23T07:31:54Z,aisafety.info,,
4266,https://aisafety.info?state=8G1G,What is shard theory?,['Stampy aisafety.info'],2023-08-05T05:22:47Z,aisafety.info,,
4283,https://aisafety.info?state=7638,Does the importance of AI risk depend on caring about transhumanist utopias?,['Stampy aisafety.info'],2023-08-08T14:55:41Z,aisafety.info,,
4292,https://aisafety.info?state=6218,Won’t AI be just like us?,['Stampy aisafety.info'],2023-02-22T23:05:11Z,aisafety.info,,
4307,https://aisafety.info?state=5849,Can you stop an advanced AI from upgrading itself?,['Stampy aisafety.info'],2023-08-10T22:49:13Z,aisafety.info,,
4321,https://aisafety.info?state=9IDQ,Want to understand the research? Dive deeper.,['Stampy aisafety.info'],2023-08-20T22:34:27Z,aisafety.info,,
4353,https://aisafety.info?state=89LM,What is prosaic alignment?,['Stampy aisafety.info'],2023-08-01T22:25:57Z,aisafety.info,,
4368,https://aisafety.info?state=8G1I,What is mutual information?,['Stampy aisafety.info'],2023-08-22T16:16:38Z,aisafety.info,,
4377,https://aisafety.info?state=8W8D,What master's thesis could I write about AI safety?,['Stampy aisafety.info'],2023-05-16T23:23:01Z,aisafety.info,,
4394,https://aisafety.info?state=8222,How could a superintelligent AI use the internet to take over the physical world?,['Stampy aisafety.info'],2023-08-23T07:07:31Z,aisafety.info,,
4424,https://aisafety.info?state=8U30,"I would like to focus on AI alignment, but it might be best to prioritize improving my life situation first. What should I do?",['Stampy aisafety.info'],2023-08-08T20:02:09Z,aisafety.info,,
4442,https://aisafety.info?state=6297,Why is safety important for smarter-than-human AI?,['Stampy aisafety.info'],2023-08-05T02:56:36Z,aisafety.info,,
4465,https://aisafety.info?state=6603,Why would intelligence lead to power?,['Stampy aisafety.info'],2023-08-10T12:04:03Z,aisafety.info,,
4487,https://aisafety.info?state=6568,What is the orthogonality thesis?,['Stampy aisafety.info'],2023-08-12T07:58:17Z,aisafety.info,,
4504,https://aisafety.info?state=8U2I,How can I do machine learning programming work to help with AI alignment?,['Stampy aisafety.info'],2023-06-07T19:34:59Z,aisafety.info,,
4522,https://aisafety.info?state=7766,What would a good future with AGI look like?,['Stampy aisafety.info'],2023-08-05T02:48:34Z,aisafety.info,,
4532,https://aisafety.info?state=8EL5,What is perverse instantiation?,['Stampy aisafety.info'],2023-08-05T05:16:31Z,aisafety.info,,
4545,https://aisafety.info?state=8264,What training programs and courses are available for AGI safety?,['Stampy aisafety.info'],2023-08-23T07:19:31Z,aisafety.info,,
4558,https://aisafety.info?state=8343,What is DeepMind's safety team working on?,['Stampy aisafety.info'],2023-08-08T15:27:28Z,aisafety.info,,
4579,https://aisafety.info?state=6822,Where can I learn about interpretability?,['Stampy aisafety.info'],2023-05-16T14:34:33Z,aisafety.info,,
4603,https://aisafety.info?state=8U2Z,How can I work on AI policy?,['Stampy aisafety.info'],2023-06-07T18:50:47Z,aisafety.info,,
4615,https://aisafety.info?state=7598,How much resources did the processes of biological evolution use to evolve intelligent creatures?,['Stampy aisafety.info'],2023-08-07T20:28:32Z,aisafety.info,,
4625,https://aisafety.info?state=5635,Where can I learn more about AI alignment?,['Stampy aisafety.info'],2023-08-04T03:25:40Z,aisafety.info,,
4646,https://aisafety.info?state=8C7S,Are corporations superintelligent?,['Stampy aisafety.info'],2023-08-22T14:01:49Z,aisafety.info,,
4658,https://aisafety.info?state=6957,What are the different possible AI takeoff speeds?,['Stampy aisafety.info'],2023-06-08T10:53:06Z,aisafety.info,,
4669,https://aisafety.info?state=7794,What are some objections to the importance of AI alignment?,['Stampy aisafety.info'],2023-08-20T11:06:20Z,aisafety.info,,
4684,https://aisafety.info?state=8241,What is interpretability and what approaches are there?,['Stampy aisafety.info'],2023-08-19T23:54:00Z,aisafety.info,,
4712,https://aisafety.info?state=8316,How is the Alignment Research Center (ARC) trying to solve Eliciting Latent Knowledge (ELK)?,['Stampy aisafety.info'],2023-08-23T22:19:32Z,aisafety.info,,
4726,https://aisafety.info?state=6224,Why can’t we just use Asimov’s Three Laws of Robotics?,['Stampy aisafety.info'],2023-08-16T14:47:01Z,aisafety.info,,
4740,https://aisafety.info?state=7612,"What is ""metaphilosophy"" and how does it relate to AI safety?",['Stampy aisafety.info'],2023-05-13T10:00:18Z,aisafety.info,,
4758,https://aisafety.info?state=8U2X,How can I work on assessing AI alignment projects and distributing grants?,['Stampy aisafety.info'],2023-06-07T15:33:59Z,aisafety.info,,
4775,https://aisafety.info?state=8EL7,"How does ""chain-of-thought"" prompting work?",['Stampy aisafety.info'],2023-08-05T05:17:56Z,aisafety.info,,
4805,https://aisafety.info?state=5952,Can an AI really be smarter than humans?,['Stampy aisafety.info'],2023-08-08T00:00:10Z,aisafety.info,,
4820,https://aisafety.info?state=7642,Might an aligned superintelligence force people to have better lives and change more quickly than they want?,['Stampy aisafety.info'],2023-08-23T07:35:09Z,aisafety.info,,
4831,https://aisafety.info?state=9AKZ,What is a “treacherous turn”?,['Stampy aisafety.info'],2023-07-18T05:50:33Z,aisafety.info,,
4849,https://aisafety.info?state=8JYX,"Briefly, what are the major AI safety organizations and academics working on?",['Stampy aisafety.info'],2023-08-23T07:33:54Z,aisafety.info,,
4909,https://aisafety.info?state=7755,How powerful will a mature superintelligence be?,['Stampy aisafety.info'],2023-08-22T19:37:30Z,aisafety.info,,
4924,https://aisafety.info?state=MSJJ,What models and predictions have been made about the future of advanced AI?,['Stampy aisafety.info'],2023-09-27T09:09:59Z,aisafety.info,,
4938,https://aisafety.info?state=8U2Q,How can I work on public AI safety outreach?,['Stampy aisafety.info'],2023-07-18T00:48:42Z,aisafety.info,,
4960,https://aisafety.info?state=6300,What technical problems are MIRI working on?,['Stampy aisafety.info'],2023-08-24T13:32:43Z,aisafety.info,,
4976,https://aisafety.info?state=5864,"What are the differences between AGI, transformative AI, and superintelligence?",['Stampy aisafety.info'],2023-08-19T07:47:41Z,aisafety.info,,
4991,https://aisafety.info?state=6380,"What is a ""quantilizer""?",['Stampy aisafety.info'],2023-08-09T16:29:10Z,aisafety.info,,
5000,https://aisafety.info?state=8163,Why is AI alignment a hard problem?,['Stampy aisafety.info'],2023-08-16T18:35:52Z,aisafety.info,,
5016,https://aisafety.info?state=8157,Why would we only get one chance to align a superintelligence?,['Stampy aisafety.info'],2023-08-23T04:43:04Z,aisafety.info,,
5025,https://aisafety.info?state=6412,Isn't the real concern technological unemployment?,['Stampy aisafety.info'],2023-07-14T16:34:29Z,aisafety.info,,
5039,https://aisafety.info?state=8AF4,What is AI governance?,['Stampy aisafety.info'],2023-08-23T07:09:55Z,aisafety.info,,
5050,https://aisafety.info?state=7747,How long will it take to go from human-level AI to superintelligence?,['Stampy aisafety.info'],2023-08-11T09:32:41Z,aisafety.info,,
5059,https://aisafety.info?state=8161,What are language models?,['Stampy aisafety.info'],2023-08-23T07:10:38Z,aisafety.info,,
5069,https://aisafety.info?state=8U32,I want to take big steps to contribute to AI alignment (e.g. making it my career). What should I do?,['Stampy aisafety.info'],2023-08-20T13:08:44Z,aisafety.info,,
5083,https://aisafety.info?state=8374,What is Ought's research strategy?,['Stampy aisafety.info'],2023-08-19T07:34:14Z,aisafety.info,,
5101,https://aisafety.info?state=85E9,What is FAR AI's research strategy?,['Stampy aisafety.info'],2023-09-02T06:41:22Z,aisafety.info,,
5140,https://aisafety.info?state=8HIA,What is feature visualization?,['Stampy aisafety.info'],2023-06-08T03:19:30Z,aisafety.info,,
5154,https://aisafety.info?state=8469,What is Sam Bowman researching?,['Stampy aisafety.info'],2023-08-05T07:56:34Z,aisafety.info,,
5170,https://aisafety.info?state=8E40,Isn't capitalism the real unaligned superintelligence?,['Stampy aisafety.info'],2023-07-14T07:59:45Z,aisafety.info,,
5184,https://aisafety.info?state=9YG8,What technical research would be helpful for governance?,['Stampy aisafety.info'],2023-09-22T17:54:58Z,aisafety.info,,
5220,https://aisafety.info?state=6974,How might AI socially manipulate humans?,['Stampy aisafety.info'],2023-08-19T22:01:17Z,aisafety.info,,
5237,https://aisafety.info?state=7774,How might things go wrong with AI even without an agentic superintelligence?,['Stampy aisafety.info'],2023-08-05T02:32:27Z,aisafety.info,,
5259,https://aisafety.info?state=6483,Why might people try to build AGI rather than stronger and stronger narrow AIs?,['Stampy aisafety.info'],2023-08-23T07:18:03Z,aisafety.info,,
5278,https://aisafety.info?state=87AH,What are “type signatures”?,['Stampy aisafety.info'],2023-08-05T05:02:54Z,aisafety.info,,
5287,https://aisafety.info?state=6992,Can we constrain a goal-directed AI using specified rules?,['Stampy aisafety.info'],2023-08-04T08:53:13Z,aisafety.info,,
5296,https://aisafety.info?state=89ZU,"What are ""true names"" in the context of AI alignment?",['Stampy aisafety.info'],2023-08-05T05:05:16Z,aisafety.info,,
5310,https://aisafety.info?state=8509,What links are especially valuable to share on social media or other contexts?,['Stampy aisafety.info'],2023-08-04T09:43:21Z,aisafety.info,,
5320,https://aisafety.info?state=6172,Aren't there easy solutions to AI alignment?,['Stampy aisafety.info'],2023-08-20T14:54:55Z,aisafety.info,,
5347,https://aisafety.info?state=97FU,"What is the difference between verifiability, interpretability, transparency, and explainability?",['Stampy aisafety.info'],2023-07-19T03:16:31Z,aisafety.info,,
5371,https://aisafety.info?state=9J1L,What are the main categories of technical alignment research?,['Stampy aisafety.info'],2023-08-19T07:52:17Z,aisafety.info,,
5399,https://aisafety.info?state=7608,Wouldn't it be a good thing for humanity to die out?,['Stampy aisafety.info'],2023-02-22T23:05:04Z,aisafety.info,,
5408,https://aisafety.info?state=6196,Aren't robots the real problem? How can AI cause harm if it has no ability to directly manipulate the physical world?,['Stampy aisafety.info'],2023-08-20T10:58:20Z,aisafety.info,,
5417,https://aisafety.info?state=8V5J,Are AIs conscious?,['Stampy aisafety.info'],2023-08-23T07:21:37Z,aisafety.info,,
5436,https://aisafety.info?state=6222,Isn’t it immoral to control and impose our values on AI?,['Stampy aisafety.info'],2023-08-04T02:47:44Z,aisafety.info,,
5452,https://aisafety.info?state=85EK,What is the Alignment Research Center (ARC)'s research strategy?,['Stampy aisafety.info'],2023-08-19T07:32:37Z,aisafety.info,,
5478,https://aisafety.info?state=7058,What would a good solution to AI alignment look like?,['Stampy aisafety.info'],2023-08-04T09:13:38Z,aisafety.info,,
5492,https://aisafety.info?state=MMF7,Is smarter-than-human AI a realistic prospect?,['Stampy aisafety.info'],2023-09-23T14:15:43Z,aisafety.info,,
5501,https://aisafety.info?state=8IHO,"What are the differences between a singularity, an intelligence explosion, and a hard takeoff?",['Stampy aisafety.info'],2023-08-16T14:55:31Z,aisafety.info,,
5517,https://aisafety.info?state=86J8,What are some introductions to AI safety?,['Stampy aisafety.info'],2023-08-23T07:56:36Z,aisafety.info,,
5544,https://aisafety.info?state=8327,What is the Center for Human Compatible AI (CHAI)?,['Stampy aisafety.info'],2023-08-15T00:58:15Z,aisafety.info,,
5559,https://www.alignmentforum.org/posts/pdJqEzbQrucTEF6DW/force-neural-nets-to-use-models-then-detect-these,"Force neural nets to use models, then detect these",['Stuart_Armstrong'],2021-10-05T11:31:08Z,alignmentforum,,
5571,https://www.alignmentforum.org/posts/89EvBkc4nkbEctzR3/chu-are-you,Chu are you?,['Adele Lopez'],2021-11-06T17:39:45Z,alignmentforum,,
5580,https://www.alignmentforum.org/posts/qnYZmtpNPZyqHpot9/conversation-with-paul-christiano,Conversation with Paul Christiano,['abergal'],2019-09-11T23:20:01Z,alignmentforum,,
5602,https://www.alignmentforum.org/posts/66FKFkWAugS8diydF/modelling-continuous-progress,Modelling Continuous Progress,['Sammy Martin'],2020-06-23T18:06:47Z,alignmentforum,,
5617,https://www.alignmentforum.org/posts/6ccG9i5cTncebmhsH/frequent-arguments-about-alignment,Frequent arguments about alignment,['John Schulman'],2021-06-23T00:46:39Z,alignmentforum,,
5640,https://www.alignmentforum.org/posts/5bd75cc58225bf0670375556/quantilal-control-for-finite-mdps,Quantilal control for finite MDPs,['Vanessa Kosoy'],2018-04-12T09:21:10Z,alignmentforum,,
5654,https://www.alignmentforum.org/posts/yijG7ptfqFBR8w885/talk-key-issues-in-near-term-ai-safety-research,Talk: Key Issues In Near-Term AI Safety Research,['Aryeh Englander'],2020-07-10T18:36:12Z,alignmentforum,,
5669,https://www.alignmentforum.org/posts/qm6bhbJmft2LJNzKH/a-technical-note-on-bilinear-layers-for-interpretability,A technical note on bilinear layers for interpretability,['Lee Sharkey'],2023-05-08T06:06:59Z,alignmentforum,,
5680,https://www.alignmentforum.org/posts/bGuMrzhJdENCo8BxX/nvidia-and-microsoft-releases-530b-parameter-transformer,"NVIDIA and Microsoft releases 530B parameter transformer model, Megatron-Turing NLG",['Ozyrus'],2021-10-11T15:28:48Z,alignmentforum,,
5694,https://www.alignmentforum.org/posts/qNZSBqLEh4qLRqgWW/intro-to-brain-like-agi-safety-6-big-picture-of-motivation,"[Intro to brain-like-AGI safety] 6. Big picture of motivation, decision-making, and RL",['Steven Byrnes'],2022-03-02T15:26:12Z,alignmentforum,,
5711,https://www.alignmentforum.org/posts/jHzb5SmviScXdtT2m/safe-scrambling,Safe Scrambling?,['Hoagy'],2020-08-29T14:31:27Z,alignmentforum,,
5720,https://www.alignmentforum.org/posts/8xRSjC76HasLnMGSf/agi-safety-from-first-principles-introduction,AGI safety from first principles: Introduction,['Richard_Ngo'],2020-09-28T19:53:23Z,alignmentforum,,
5730,https://www.alignmentforum.org/posts/PZYD5kBpeHWgE5jX4/extraction-of-human-preferences,Extraction of human preferences 👨→🤖,['arunraja-hub'],2021-08-24T16:34:14Z,alignmentforum,,
5749,https://www.alignmentforum.org/posts/j6s9H9SHrEhEfuJnq/causal-scrubbing-results-on-induction-heads,Causal scrubbing: results on induction heads,"['LawrenceC', 'Adrià Garriga-alonso', 'Nicholas Goldowsky-Dill', 'ryan_greenblatt', 'Tao Lin', 'jenny', 'Ansh Radhakrishnan', 'Buck', 'Nate Thomas']",2022-12-03T00:59:18Z,alignmentforum,,
5764,https://www.alignmentforum.org/posts/9rdpdDerangjYeQGW/meta-announces-llama-2-open-sources-it-for-commercial-use,"Meta announces Llama 2; ""open sources"" it for commercial use",['LawrenceC'],2023-07-18T19:28:29Z,alignmentforum,,
5793,https://www.alignmentforum.org/posts/JLyWP2Y9LAruR2gi9/can-we-efficiently-distinguish-different-mechanisms,Can we efficiently distinguish different mechanisms?,['paulfchristiano'],2022-12-27T00:20:02Z,alignmentforum,,
5817,https://www.alignmentforum.org/posts/HBxe6wdjxK239zajf/what-failure-looks-like,What failure looks like,['paulfchristiano'],2019-03-17T20:19:00Z,alignmentforum,,
5837,https://www.alignmentforum.org/posts/ChbRgvuGaG2dAtr6i/meta-open-sources-lms-competitive-with-chinchilla-palm-and,"Meta ""open sources"" LMs competitive with Chinchilla, PaLM, and code-davinci-002 (Paper)",['LawrenceC'],2023-02-24T19:57:24Z,alignmentforum,,
5852,https://www.alignmentforum.org/posts/mtZEjDortt8E8Apjs/extracting-and-evaluating-causal-direction-in-llms,Extracting and Evaluating Causal Direction in LLMs' Activations,"['Fabien Roger', 'simeon_c']",2022-12-14T14:33:06Z,alignmentforum,,
5868,https://www.alignmentforum.org/posts/ghyw76DfRyiiMxo3t/open-problem-how-can-we-quantify-player-alignment-in-2x2,Open problem: how can we quantify player alignment in 2x2 normal-form games?,['TurnTrout'],2021-06-16T02:09:42Z,alignmentforum,,
5879,https://www.alignmentforum.org/posts/X23q6T4CDifHykqi4/draft-papers-for-realab-and-decoupled-approval-on-tampering,Draft papers for REALab and Decoupled Approval on tampering,"['Jonathan Uesato', 'Ramana Kumar']",2020-10-28T16:01:13Z,alignmentforum,,
5894,https://www.alignmentforum.org/posts/7ygmXXGjXZaEktF6M/towards-a-better-circuit-prior-improving-on-elk-state-of-the,Towards a better circuit prior: Improving on ELK state-of-the-art,"['evhub', 'kcwoolverton']",2022-03-29T01:56:40Z,alignmentforum,,
5916,https://www.alignmentforum.org/posts/BGxTpdBGbwCWrGiCL/plausible-cases-for-hrad-work-and-locating-the-crux-in-the,"Plausible cases for HRAD work, and locating the crux in the ""realism about rationality"" debate",['riceissa'],2020-06-22T01:10:24Z,alignmentforum,,
5933,https://www.alignmentforum.org/posts/7f6DNZhracD7RvxMr/learning-preferences-by-looking-at-the-world,Learning preferences by looking at the world,['Rohin Shah'],2019-02-12T22:25:17Z,alignmentforum,,
5956,https://www.alignmentforum.org/posts/zwPsiWY7FJc8QkpDJ/paying-the-corrigibility-tax,Paying the corrigibility tax,['Max H'],2023-04-19T01:57:52Z,alignmentforum,,
5980,https://www.alignmentforum.org/posts/BSee6LXg4adtrndwy/what-does-it-mean-for-an-agi-to-be-safe,What does it mean for an AGI to be 'safe'?,['So8res'],2022-10-07T04:13:05Z,alignmentforum,, 5996,https://www.alignmentforum.org/posts/yXFKh2jGysQNfX2NM/a-comment-on-the-ida-alphagozero-metaphor-capabilities,A comment on the IDA-AlphaGoZero metaphor; capabilities versus alignment,['AlexMennen'],2018-07-11T01:03:03Z,alignmentforum,, 6005,https://www.alignmentforum.org/posts/arveXgFbJwascKtQC/forecasting-ml-benchmarks-in-2023,Forecasting ML Benchmarks in 2023,['jsteinhardt'],2022-07-18T02:50:17Z,alignmentforum,, 6028,https://www.alignmentforum.org/posts/5bd75cc58225bf0670375058/superrationality-in-arbitrary-games,Superrationality in arbitrary games,['Vanessa Kosoy'],2015-11-04T18:20:41Z,alignmentforum,, 6044,https://www.alignmentforum.org/posts/yPnAzeRAqdko3RNtR/3-premise-three-and-conclusion-ai-systems-can-affect-value,3. Premise three & Conclusion: AI systems can affect value change trajectories & the Value Change Problem,['Nora_Ammann'],2023-10-26T14:38:15Z,alignmentforum,, 6063,https://www.alignmentforum.org/posts/TWorNr22hhYegE4RT/models-don-t-get-reward,"Models Don't ""Get Reward""",['Sam Ringer'],2022-12-30T10:37:12Z,alignmentforum,, 6081,https://www.alignmentforum.org/posts/Gfbf7RsE2fvxGXKC5/some-criteria-for-sandwiching-projects,Some criteria for sandwiching projects,['dmz'],2021-08-12T03:40:38Z,alignmentforum,, 6096,https://www.alignmentforum.org/posts/5bd75cc58225bf0670375419/cooperative-oracles-introduction,Cooperative Oracles: Introduction,['Scott Garrabrant'],2017-06-03T00:36:17Z,alignmentforum,, 6110,https://www.alignmentforum.org/posts/5rsa37pBjo4Cf9fkE/a-newcomer-s-guide-to-the-technical-ai-safety-field,A newcomer’s guide to the technical AI safety field,['zeshen'],2022-11-04T14:29:47Z,alignmentforum,, 6137,https://www.alignmentforum.org/posts/EA4Txiuo5Ce2b7iBd/where-are-intentions-to-be-found,Where are intentions to be found?,['Alex Flint'],2021-04-21T00:51:51Z,alignmentforum,, 6147,https://www.alignmentforum.org/posts/8HWGXhnCfAPgJYa9D/pitfalls-of-the-agent-model,Pitfalls of the agent model,['Alex Flint'],2021-04-27T22:19:30Z,alignmentforum,, 6166,https://www.alignmentforum.org/posts/9AcEy2zThvceT9kve/an-154-what-economic-growth-theory-has-to-say-about,[AN #154]: What economic growth theory has to say about transformative AI,['Rohin Shah'],2021-06-30T17:20:03Z,alignmentforum,, 6192,https://www.alignmentforum.org/posts/EmxfgPGvaKqhttPM8/thoughts-on-the-alignment-implications-of-scaling-language,Thoughts on the Alignment Implications of Scaling Language Models,['leogao'],2021-06-02T21:32:09Z,alignmentforum,, 6221,https://www.alignmentforum.org/posts/GRAWAqfgZEgtuCvje/the-underlying-model-of-a-morphism,The underlying model of a morphism,['Stuart_Armstrong'],2021-06-04T22:29:50Z,alignmentforum,, 6233,https://www.alignmentforum.org/posts/2rQ9vv9HY6i2Z2vQ4/what-technologies-could-cause-world-gdp-doubling-times-to-be,What technologies could cause world GDP doubling times to be <8 years?,['Daniel Kokotajlo'],2020-12-10T15:34:14Z,alignmentforum,, 6242,https://www.alignmentforum.org/posts/PC6QavgNDQjHbutAq/an-170-analyzing-the-argument-for-risk-from-power-seeking-ai,[AN #170]: Analyzing the argument for risk from power-seeking AI,['Rohin Shah'],2021-12-08T18:10:04Z,alignmentforum,, 6263,https://www.alignmentforum.org/posts/PvA2gFMAaHCHfMXrw/agi-safety-from-first-principles-alignment,AGI safety from first principles: Alignment,['Richard_Ngo'],2020-10-01T03:13:46Z,alignmentforum,, 
6301,https://www.alignmentforum.org/posts/rxsg2sTyHGnMTYbeH/alex-turner-s-research-comprehensive-information-gathering,"Alex Turner's Research, Comprehensive Information Gathering",['adamShimi'],2021-06-23T09:44:34Z,alignmentforum,, 6320,https://www.alignmentforum.org/posts/LBNjeGaJZw7QdybMw/agents-over-cartesian-world-models,Agents Over Cartesian World Models,"['Mark Xu', 'evhub']",2021-04-27T02:06:57Z,alignmentforum,, 6360,https://www.alignmentforum.org/posts/dQ8wiAwnD37y6PkTa/alignment-workshop-talks,Alignment Workshop talks,['Richard_Ngo'],2023-09-28T18:26:30Z,alignmentforum,, 6383,https://www.alignmentforum.org/posts/5bd75cc58225bf06703751bf/cooperative-inverse-reinforcement-learning-vs-irrational-human-preferences,Cooperative Inverse Reinforcement Learning vs. Irrational Human Preferences,['orthonormal'],2016-06-18T00:55:10Z,alignmentforum,, 6407,https://www.alignmentforum.org/posts/Bb33LG2YC3oTpBoGj/visualizing-neural-networks-how-to-blame-the-bias,"Visualizing Neural networks, how to blame the bias",['Donald Hobson'],2022-07-09T15:52:55Z,alignmentforum,, 6427,https://www.alignmentforum.org/posts/DkcdXsP56g9kXyBdq/coherence-arguments-imply-a-force-for-goal-directed-behavior,Coherence arguments imply a force for goal-directed behavior,['KatjaGrace'],2021-03-26T16:10:05Z,alignmentforum,, 6441,https://www.alignmentforum.org/posts/jKBvMqs6t2RWAxWPc/an-71-avoiding-reward-tampering-through-current-rf,[AN #71]: Avoiding reward tampering through current-RF optimization,['Rohin Shah'],2019-10-30T17:10:02Z,alignmentforum,, 6479,https://www.alignmentforum.org/posts/BEyAWbCdtWpSGxmun/retrospective-on-the-2022-conjecture-ai-discussions,Retrospective on the 2022 Conjecture AI Discussions,['Andrea_Miotti'],2023-02-24T22:41:13Z,alignmentforum,, 6490,https://www.alignmentforum.org/posts/S8khsrXnHEwYbhd8X/an-133-building-machines-that-can-cooperate-with-humans,"[AN #133]: Building machines that can cooperate (with humans, institutions, or other machines)",['Rohin Shah'],2021-01-13T18:10:05Z,alignmentforum,, 6513,https://www.alignmentforum.org/posts/h9qQQA3g8dwq6RRTo/counterfactual-mugging-why-should-you-pay,Counterfactual Mugging: Why should you pay?,['Chris_Leong'],2019-12-17T22:16:38Z,alignmentforum,, 6523,https://www.alignmentforum.org/posts/c68SJsBpiAxkPwRHj/how-llms-are-and-are-not-myopic,How LLMs are and are not myopic,['janus'],2023-07-25T02:19:45Z,alignmentforum,, 6547,https://www.alignmentforum.org/posts/d96dDEYMfnN2St3Bj/infrafunctions-and-robust-optimization,Infrafunctions and Robust Optimization,['Diffractor'],2023-04-27T19:25:12Z,alignmentforum,, 6564,https://www.alignmentforum.org/posts/cDR8GkzCaxXoovPwh/demanding-and-designing-aligned-cognitive-architectures,Demanding and Designing Aligned Cognitive Architectures,['Koen.Holtman'],2021-12-21T17:32:57Z,alignmentforum,, 6582,https://www.alignmentforum.org/posts/6gL83HMF6tvPHKQxW/how-important-are-mdps-for-agi-safety,How important are MDPs for AGI (Safety)?,['michaelcohen'],2020-03-26T20:32:59Z,alignmentforum,, 6591,https://www.alignmentforum.org/posts/5bd75cc58225bf067037520e/c-irl-is-not-solely-a-learning-process,(C)IRL is not solely a learning process,['Stuart_Armstrong'],2016-09-15T08:35:13Z,alignmentforum,, 6608,https://www.alignmentforum.org/posts/A4djH6sc9vZq2AYBD/alignment-researchers-how-useful-is-extra-compute-for-you-1,"Alignment researchers, how useful is extra compute for you?",['Lauro Langosco'],2022-02-19T15:35:32Z,alignmentforum,, 
6622,https://www.alignmentforum.org/posts/tNYfsw5q873jjXfCP/why-do-we-care-about-agency-for-alignment,Why do we care about agency for alignment?,['Chris_Leong'],2023-04-23T18:10:24Z,alignmentforum,, 6635,https://www.alignmentforum.org/posts/FKE6cAzQxEK4QH9fC/qnr-prospects-are-important-for-ai-alignment-research,QNR prospects are important for AI alignment research,['Eric Drexler'],2022-02-03T15:20:54Z,alignmentforum,, 6657,https://www.alignmentforum.org/posts/GHNokcgERpLJwJnLW/some-comments-on-stuart-armstrong-s-research-agenda-v0-9,"Some Comments on Stuart Armstrong's ""Research Agenda v0.9""",['Charlie Steiner'],2019-07-08T19:03:37Z,alignmentforum,, 6678,https://www.alignmentforum.org/posts/yB89JQdazhsDJhktH/ground-truth-label-imbalance-impairs-contrast-consistent-1,Ground-Truth Label Imbalance Impairs Contrast-Consistent Search Performance,"['Tom Angsten', 'Ami Hays']",2023-08-05T17:55:47Z,alignmentforum,, 6688,https://www.alignmentforum.org/posts/b2YBddoCKSixivSAJ/counterfactability,Counterfactability,['Scott Garrabrant'],2022-11-07T05:39:06Z,alignmentforum,, 6707,https://www.alignmentforum.org/posts/cxkwQmys6mCB6bjDA/interlude-agents-as-automobiles,Interlude: Agents as Automobiles,['Daniel Kokotajlo'],2021-12-14T18:49:21Z,alignmentforum,, 6721,https://www.alignmentforum.org/posts/5bd75cc58225bf0670375274/transitive-negotiations-with-counterfactual-agents,Transitive negotiations with counterfactual agents,['Scott Garrabrant'],2016-10-20T23:27:04Z,alignmentforum,, 6731,https://www.alignmentforum.org/posts/poyshiMEhJsAuifKt/outer-vs-inner-misalignment-three-framings-1,Outer vs inner misalignment: three framings,['Richard_Ngo'],2022-07-06T19:46:51Z,alignmentforum,, 6757,https://www.alignmentforum.org/posts/BT7yLvSdyuerqzPpc/an-169-collaborating-with-humans-without-human-data,[AN #169]: Collaborating with humans without human data,['Rohin Shah'],2021-11-24T18:30:04Z,alignmentforum,, 6778,https://www.alignmentforum.org/posts/7Rvctxk73BrKqEaqh/call-for-research-on-evaluating-alignment-funding-advice,Call for research on evaluating alignment (funding + advice available),['Beth Barnes'],2021-08-31T23:28:49Z,alignmentforum,, 6799,https://www.alignmentforum.org/posts/4ap6WQx52txzsJx6r/the-alignment-newsletter-7-05-21-18,The Alignment Newsletter #7: 05/21/18,['Rohin Shah'],2018-05-21T16:00:45Z,alignmentforum,, 6821,https://www.alignmentforum.org/posts/kjRGMdRxXb9c5bWq5/mechanistic-interpretability-as-reverse-engineering-follow,"Mechanistic Interpretability as Reverse Engineering (follow-up to ""cars and elephants"")",['David Scott Krueger (formerly: capybaralet)'],2022-11-03T23:19:20Z,alignmentforum,, 6840,https://www.alignmentforum.org/posts/F4iogK5xdNd7jDNyw/comparing-anthropic-s-dictionary-learning-to-ours,Comparing Anthropic's Dictionary Learning to Ours,['Robert_AIZI'],2023-10-07T23:30:32Z,alignmentforum,, 6858,https://www.alignmentforum.org/posts/JqnkeqaPseTgxLgEL/conditioning-generative-models-for-alignment,Conditioning Generative Models for Alignment,['Jozdien'],2022-07-18T07:11:46Z,alignmentforum,,
6896,https://www.alignmentforum.org/posts/kFm9ZMreqeNYpg8m8/what-happens-to-variance-as-neural-network-training-is,"What happens to variance as neural network training is scaled? What does it imply about ""lottery tickets""?",['abramdemski'],2020-07-28T20:22:14Z,alignmentforum,, 6906,https://www.alignmentforum.org/posts/fgAyy4gdDrbHFHjge/formal-philosophy-and-alignment-possible-projects,Formal Philosophy and Alignment Possible Projects,['Whispermute'],2022-06-30T10:42:23Z,alignmentforum,, 6944,https://www.alignmentforum.org/posts/NJYmovr9ZZAyyTBwM/what-i-mean-by-alignment-is-in-large-part-about-making,"What I mean by ""alignment is in large part about making cognition aimable at all""",['So8res'],2023-01-30T15:22:09Z,alignmentforum,, 6957,https://www.alignmentforum.org/posts/oWN9fgYnFYJEWdAs9/comments-on-openphil-s-interpretability-rfp,Comments on OpenPhil's Interpretability RFP,['paulfchristiano'],2021-11-05T22:36:05Z,alignmentforum,, 6971,https://www.alignmentforum.org/posts/SJXujr5a2NcoFebr4/mesa-optimizers-vs-steered-optimizers,Mesa-Optimizers vs “Steered Optimizers”,['Steven Byrnes'],2020-07-10T16:49:27Z,alignmentforum,, 6992,https://www.alignmentforum.org/posts/8XAxbsdtLmMaf5zta/how-to-talk-about-reasons-why-agi-might-not-be-near,How to talk about reasons why AGI might not be near?,['Kaj_Sotala'],2023-09-17T08:18:31Z,alignmentforum,, 7012,https://www.alignmentforum.org/posts/BGD5J2KAoNmpPMzMQ/why-gpt-wants-to-mesa-optimize-and-how-we-might-change-this,Why GPT wants to mesa-optimize & how we might change this,['John_Maxwell'],2020-09-19T13:48:30Z,alignmentforum,, 7037,https://www.alignmentforum.org/posts/DjTKMEwRqpuKkJzTo/are-there-alternative-to-solving-value-transfer-and,Are there alternative to solving value transfer and extrapolation?,['Stuart_Armstrong'],2021-12-06T18:53:53Z,alignmentforum,, 7055,https://www.alignmentforum.org/posts/Kx7nv8dHtFig9ud7C/an-119-ai-safety-when-agents-are-shaped-by-environments-not,"[AN #119]: AI safety when agents are shaped by environments, not rewards",['Rohin Shah'],2020-09-30T17:10:04Z,alignmentforum,, 7093,https://www.alignmentforum.org/posts/JtuTQgp9Wnd6R6F5s/when-discussing-ai-risks-talk-about-capabilities-not,"When discussing AI risks, talk about capabilities, not intelligence",['Vika'],2023-08-11T13:38:49Z,alignmentforum,, 7110,https://www.alignmentforum.org/posts/RQoSCs9SePDMLJvfz/new-paper-when-is-truth-telling-favored-in-ai-debate,New paper: (When) is Truth-telling Favored in AI debate?,['VojtaKovarik'],2019-12-26T19:59:01Z,alignmentforum,, 7132,https://www.alignmentforum.org/posts/wFJqi75y9eW8mf8TR/does-the-lottery-ticket-hypothesis-suggest-the-scaling,Does the lottery ticket hypothesis suggest the scaling hypothesis?,['Daniel Kokotajlo'],2020-07-28T19:52:52Z,alignmentforum,, 7148,https://www.alignmentforum.org/posts/xxMYFKLqiBJZRNoPj/ai-governance-across-slow-fast-takeoff-and-easy-hard,AI Governance across Slow/Fast Takeoff and Easy/Hard Alignment spectra,['Davidmanheim'],2022-04-03T07:45:58Z,alignmentforum,, 7169,https://www.alignmentforum.org/posts/rEPnce975Fid9v5qv/brief-notes-on-transformers,Brief Notes on Transformers,['Adam Jermyn'],2022-09-26T14:46:24Z,alignmentforum,, 7185,https://www.alignmentforum.org/posts/R67qpBj5doGcjQzmc/universality-and-the-filter,Universality and the “Filter”,['maggiehayes'],2021-12-16T00:47:24Z,alignmentforum,,
7201,https://www.alignmentforum.org/posts/XwXmedJAo5m4r29eu/conditioning-predictive-models-large-language-models-as,Conditioning Predictive Models: Large language models as predictors,"['evhub', 'Adam Jermyn', 'Johannes Treutlein', 'Rubi J. Hudson', 'kcwoolverton']",2023-02-02T20:28:47Z,alignmentforum,, 7227,https://www.alignmentforum.org/posts/vrbidMiczaoHBhZGp/inframeasures-and-domain-theory,Inframeasures and Domain Theory,['Diffractor'],2021-03-28T09:19:00Z,alignmentforum,, 7250,https://www.alignmentforum.org/posts/Qprn8tMeZGLBfobu7/review-ai-alignment-posts-to-help-figure-out-how-to-make-a,Review AI Alignment posts to help figure out how to make a proper AI Alignment review,"['habryka', 'Raemon']",2023-01-10T00:19:24Z,alignmentforum,, 7259,https://www.alignmentforum.org/posts/EeTq9vbzMT4Zb4oWo/all-the-posts-i-will-never-write,All the posts I will never write,['Alexander Gietelink Oldenziel'],2022-08-14T18:29:07Z,alignmentforum,, 7289,https://www.alignmentforum.org/posts/67a8C6KsKn2NyW2Ry/counterfactual-control-incentives,Counterfactual control incentives,['Stuart_Armstrong'],2021-01-21T16:54:59Z,alignmentforum,, 7304,https://www.alignmentforum.org/posts/gWxMZisqE2j2kHCd2/ai-safety-as-featherless-bipeds-with-broad-flat-nails,AI safety as featherless bipeds *with broad flat nails*,['Stuart_Armstrong'],2020-08-19T10:22:15Z,alignmentforum,, 7313,https://www.alignmentforum.org/posts/obht9QqMDMNLwhPQS/asot-natural-abstractions-and-alphazero,[ASoT] Natural abstractions and AlphaZero,['Ulisse Mini'],2022-12-10T17:53:02Z,alignmentforum,, 7325,https://www.alignmentforum.org/posts/5bd75cc58225bf0670375041/a-first-look-at-the-hard-problem-of-corrigibility,A first look at the hard problem of corrigibility,['jessicata'],2015-10-15T20:16:46Z,alignmentforum,, 7341,https://www.alignmentforum.org/posts/5hApNw5f7uG8RXxGS/the-open-agency-model,The Open Agency Model,['Eric Drexler'],2023-02-22T10:35:12Z,alignmentforum,, 7357,https://www.alignmentforum.org/posts/4J4TA2ZF3wmSxhxuc/attainable-utility-preservation-empirical-results,Attainable Utility Preservation: Empirical Results,"['TurnTrout', 'nealeratzlaff']",2020-02-22T00:38:38Z,alignmentforum,, 7376,https://www.alignmentforum.org/posts/2JGu9yxiJkoGdQR4s/learning-normativity-a-research-agenda,Learning Normativity: A Research Agenda,['abramdemski'],2020-11-11T21:59:41Z,alignmentforum,, 7396,https://www.alignmentforum.org/posts/hchfRj4qa4hFZxhKM/allowing-a-formal-proof-system-to-self-improve-while,Allowing a formal proof system to self improve while avoiding Lobian obstacles.,['Donald Hobson'],2019-01-23T23:04:44Z,alignmentforum,, 7409,https://www.alignmentforum.org/posts/9o2mjdo7eb7677EJS/training-trace-priors-and-speed-priors,Training Trace Priors and Speed Priors,['Adam Jermyn'],2022-06-26T18:07:09Z,alignmentforum,, 7425,https://www.alignmentforum.org/posts/WFopenhCXyHX3ukw3/how-uniform-is-the-neocortex,How uniform is the neocortex?,['zhukeepa'],2020-05-04T02:16:51Z,alignmentforum,, 7442,https://www.alignmentforum.org/posts/H9knnv8BWGKj6dZim/usd1000-bounty-for-openai-to-show-whether-gpt3-was,"$1000 bounty for OpenAI to show whether GPT3 was ""deliberately"" pretending to be stupider than it is",['jacobjacob'],2020-07-21T18:42:45Z,alignmentforum,, 7460,https://www.alignmentforum.org/posts/Ryv3FviYuovtJbgQd/subsets-and-quotients-in-interpretability,Subsets and quotients in interpretability,['Erik Jenner'],2022-12-02T23:13:34Z,alignmentforum,, 7481,https://www.alignmentforum.org/posts/ZHXutm7KpoWEj9G2s/an-unaligned-benchmark,An unaligned benchmark,['paulfchristiano'],2018-11-17T15:51:03Z,alignmentforum,, 7503,https://www.alignmentforum.org/posts/YLRPhvgN4uZ6LCLxw/human-wanting,Human wanting,['TsviBT'],2023-10-24T01:05:39Z,alignmentforum,,
7518,https://www.alignmentforum.org/posts/nLhHY2c8MWFcuWRLx/good-ontologies-induce-commutative-diagrams,Good ontologies induce commutative diagrams,['Erik Jenner'],2022-10-09T00:06:20Z,alignmentforum,, 7534,https://www.alignmentforum.org/posts/GBStFZinQYsyP8qHY/do-yourself-a-favar-security-mindset,Do yourself a FAVAR: security mindset,['lukehmiles'],2022-06-18T02:08:47Z,alignmentforum,, 7546,https://www.alignmentforum.org/posts/JSjagTDGdz2y6nNE3/on-the-purposes-of-decision-theory-research,On the purposes of decision theory research,['Wei Dai'],2019-07-25T07:18:07Z,alignmentforum,, 7557,https://www.alignmentforum.org/posts/zFGGHGfhYsGNnh7Kp/how-to-throw-away-information-in-causal-dags,How to Throw Away Information in Causal DAGs,['johnswentworth'],2020-01-08T02:40:05Z,alignmentforum,, 7566,https://www.alignmentforum.org/posts/QvvFRDG6SG3xZ8ELz/challenge-construct-a-gradient-hacker,Challenge: construct a Gradient Hacker,"['Thomas Larsen', 'Thomas Kwa']",2023-03-09T02:38:33Z,alignmentforum,, 7579,https://www.alignmentforum.org/posts/YEioD8YLgxih3ydxP/why-simulator-ais-want-to-be-active-inference-ais,Why Simulator AIs want to be Active Inference AIs,"['Jan_Kulveit', 'rosehadshar']",2023-04-10T18:23:35Z,alignmentforum,, 7597,https://www.alignmentforum.org/posts/AHBejZBsaTR6dkRHs/ai-written-critiques-help-humans-notice-flaws,AI-Written Critiques Help Humans Notice Flaws,['paulfchristiano'],2022-06-25T17:22:57Z,alignmentforum,, 7617,https://www.alignmentforum.org/posts/u3fP8vjGsDCT7X54H/towards-deconfusing-gradient-hacking,Towards Deconfusing Gradient Hacking,['leogao'],2021-10-24T00:43:33Z,alignmentforum,, 7639,https://www.alignmentforum.org/posts/7qhtuQLCCvmwCPfXK/ama-paul-christiano-alignment-researcher,"AMA: Paul Christiano, alignment researcher",['paulfchristiano'],2021-04-28T18:55:40Z,alignmentforum,, 7648,https://www.alignmentforum.org/posts/rbJLrcmHtusGBudTY/anthropics-in-infinite-universes,Anthropics in infinite universes,['Stuart_Armstrong'],2021-07-08T06:56:06Z,alignmentforum,, 7659,https://www.alignmentforum.org/posts/y5fYPAyKjWePCsq3Y/project-proposal-considerations-for-trading-off-capabilities,Project Proposal: Considerations for trading off capabilities and safety impacts of AI research,['David Scott Krueger (formerly: capybaralet)'],2019-08-06T22:22:21Z,alignmentforum,, 7680,https://www.alignmentforum.org/posts/nH4c3Q9t9F3nJ7y8W/gpts-are-predictors-not-imitators,"GPTs are Predictors, not Imitators",['Eliezer Yudkowsky'],2023-04-08T19:59:14Z,alignmentforum,, 7696,https://www.alignmentforum.org/posts/BGcXEijZ6HLnASNit/when-does-rationality-as-search-have-nontrivial-implications,When does rationality-as-search have nontrivial implications?,['nostalgebraist'],2018-11-04T22:42:01Z,alignmentforum,, 7710,https://www.alignmentforum.org/posts/LARmKTbpAkEYeG43u/anthropics-different-probabilities-different-questions,"Anthropics: different probabilities, different questions",['Stuart_Armstrong'],2021-05-06T13:14:07Z,alignmentforum,, 7720,https://www.alignmentforum.org/posts/uAqs5Q3aGEen3nKeX/anthropics-is-pretty-normal,Anthropics is pretty normal,['Stuart_Armstrong'],2019-01-17T13:26:23Z,alignmentforum,, 7732,https://www.alignmentforum.org/posts/SqcPWvvJJwwgZb6aH/prize-for-probable-problems,Prize for probable problems,['paulfchristiano'],2018-03-08T16:58:12Z,alignmentforum,,
7746,https://www.alignmentforum.org/posts/5kurn5W62C5CpSWq6/avoiding-side-effects-in-complex-environments,Avoiding Side Effects in Complex Environments,"['TurnTrout', 'nealeratzlaff']",2020-12-12T00:34:54Z,alignmentforum,, 7756,https://www.alignmentforum.org/posts/5rNCGP8deEBjedCmH/linkpost-existential-risk-analysis-in-empirical-research,[Linkpost] Existential Risk Analysis in Empirical Research Papers,['Dan H'],2022-07-02T00:09:49Z,alignmentforum,, 7768,https://www.alignmentforum.org/posts/sTe78dNJDGywu9Dz6/solving-the-mechanistic-interpretability-challenges-eis-vii,Solving the Mechanistic Interpretability challenges: EIS VII Challenge 1,"['StefanHex', 'Marius Hobbhahn']",2023-05-09T19:41:11Z,alignmentforum,, 7782,https://www.alignmentforum.org/posts/qzXTHM7Gtxv24ew8Z/prizes-for-ml-safety-benchmark-ideas-1,Prizes for ML Safety Benchmark Ideas,['joshc'],2022-10-28T02:51:46Z,alignmentforum,, 7791,https://www.alignmentforum.org/posts/ftEZPtMfnTQtXSnKd/an-60-a-new-ai-challenge-minecraft-agents-that-assist-human,[AN #60] A new AI challenge: Minecraft agents that assist human players in creative mode,['Rohin Shah'],2019-07-22T17:00:02Z,alignmentforum,, 7830,https://www.alignmentforum.org/posts/yArZKCEheZt8GkK6p/self-fulfilling-prophecies-aren-t-always-about-self,Self-Fulfilling Prophecies Aren't Always About Self-Awareness,['John_Maxwell'],2019-11-18T23:11:09Z,alignmentforum,, 7848,https://www.alignmentforum.org/posts/CkFBMG6A9ytkiXBDM/sparse-autoencoders-future-work,Sparse Autoencoders: Future Work,"['Logan Riggs', 'Aidan Ewart']",2023-09-21T15:30:47Z,alignmentforum,, 7879,https://www.alignmentforum.org/posts/AHhCrJ2KpTjsCSwbt/inner-alignment-explain-like-i-m-12-edition,Inner Alignment: Explain like I'm 12 Edition,['Rafael Harth'],2020-08-01T15:24:34Z,alignmentforum,, 7900,https://www.alignmentforum.org/posts/sam4ehxHgnJEGCKed/lessons-from-convergent-evolution-for-ai-alignment,Lessons from Convergent Evolution for AI Alignment,"['Jan_Kulveit', 'rosehadshar']",2023-03-27T16:25:14Z,alignmentforum,, 7914,https://www.alignmentforum.org/posts/hfkjegoMZ8j8KGyiR/fli-podcast-connor-leahy-on-ai-progress-chimps-memes-and,"FLI Podcast: Connor Leahy on AI Progress, Chimps, Memes, and Markets (Part 1/3)","['remember', 'Andrea_Miotti']",2023-02-10T13:55:59Z,alignmentforum,, 7935,https://www.alignmentforum.org/posts/fytgZ26AgxmrAdyB4/mesa-optimizers-via-grokking,Mesa-Optimizers via Grokking,['orthonormal'],2022-12-06T20:05:20Z,alignmentforum,, 7956,https://www.alignmentforum.org/posts/jkxkMTGfZDzBEaaY8/why-not-tool-ai,Why not tool AI?,['smithee'],2019-01-19T22:18:11Z,alignmentforum,, 7965,https://www.alignmentforum.org/posts/2yLn8iTrvHoEgqXcJ/the-two-layer-model-of-human-values-and-problems-with,"The two-layer model of human values, and problems with synthesizing preferences",['Kaj_Sotala'],2020-01-24T15:17:34Z,alignmentforum,, 7981,https://www.alignmentforum.org/posts/vyxwgQnWPdhpWQ9ZN/vlm-rm-specifying-rewards-with-natural-language,VLM-RM: Specifying Rewards with Natural Language,"['ChengCheng', 'David Lindner', 'Ethan Perez']",2023-10-23T14:11:34Z,alignmentforum,,
8010,https://www.alignmentforum.org/posts/fhJkQo34cYw6KqpH3/thinking-about-filtered-evidence-is-very-hard,Thinking About Filtered Evidence Is (Very!) Hard,['abramdemski'],2020-03-19T23:20:06Z,alignmentforum,, 8032,https://www.alignmentforum.org/posts/5bd75cc58225bf0670375393/hch-as-a-measure-of-manipulation,HCH as a measure of manipulation,['orthonormal'],2017-03-11T03:02:53Z,alignmentforum,, 8043,https://www.alignmentforum.org/posts/JBFHzfPkXHB2XfDGj/evolution-of-modularity,Evolution of Modularity,['johnswentworth'],2019-11-14T06:49:04Z,alignmentforum,, 8056,https://www.alignmentforum.org/posts/DNKTmmNZr5M2uCZLz/beware-of-black-boxes-in-ai-alignment-research,Beware of black boxes in AI alignment research,['cousin_it'],2018-01-18T15:07:08Z,alignmentforum,, 8067,https://www.alignmentforum.org/posts/WwJdaymwKq6qyJqBX/operationalizing-compatibility-with-strategy-stealing,Operationalizing compatibility with strategy-stealing,['evhub'],2020-12-24T22:36:29Z,alignmentforum,, 8077,https://www.alignmentforum.org/posts/7BWmLhFtqzqEPs8d5/high-level-hopes-for-ai-alignment,High-level hopes for AI alignment,['HoldenKarnofsky'],2022-12-15T18:00:16Z,alignmentforum,, 8102,https://www.alignmentforum.org/posts/yDcMDJeSck7SuBs24/steganography-in-chain-of-thought-reasoning,Steganography in Chain of Thought Reasoning,['A Ray'],2022-08-08T03:47:01Z,alignmentforum,, 8114,https://www.alignmentforum.org/posts/AyfDnnAdjG7HHeD3d/miri-comments-on-cotra-s-case-for-aligning-narrowly,"MIRI comments on Cotra's ""Case for Aligning Narrowly Superhuman Models""",['Rob Bensinger'],2021-03-05T23:43:54Z,alignmentforum,, 8143,https://www.alignmentforum.org/posts/o22kP33tumooBtia3/can-corrigibility-be-learned-safely,Can corrigibility be learned safely?,['Wei Dai'],2018-04-01T23:07:47Z,alignmentforum,, 8158,https://www.alignmentforum.org/posts/3vDb6EzBpaHqDqQif/some-summaries-of-agent-foundations-work-1,Some Summaries of Agent Foundations Work,['mattmacdermott'],2023-05-15T16:09:56Z,alignmentforum,, 8191,https://www.alignmentforum.org/posts/X6ZjFShxNBNM5QCg4/towards-measures-of-optimisation-3,Towards Measures of Optimisation,"['mattmacdermott', 'Alexander Gietelink Oldenziel']",2023-05-12T15:29:33Z,alignmentforum,, 8208,https://www.alignmentforum.org/posts/9sjEFuGbb8WFZE9ER/sources-of-evidence-in-alignment,Sources of evidence in Alignment,['Martín Soto'],2023-07-02T20:38:34Z,alignmentforum,, 8236,https://www.alignmentforum.org/posts/XYDsYSbBjqgPAgcoQ/why-the-focus-on-expected-utility-maximisers,Why The Focus on Expected Utility Maximisers?,['DragonGod'],2022-12-27T15:49:37Z,alignmentforum,, 8257,https://www.alignmentforum.org/posts/CfpAXccrBvWpQw9xj/algorithmic-improvement-is-probably-faster-than-scaling-now,Algorithmic Improvement Is Probably Faster Than Scaling Now,['johnswentworth'],2023-06-06T02:57:34Z,alignmentforum,, 8267,https://www.alignmentforum.org/posts/YApiu7x3oTTzDgFFN/goal-directedness-and-behavior-redux,"Goal-Directedness and Behavior, Redux",['adamShimi'],2021-08-09T14:26:26Z,alignmentforum,, 8279,https://www.alignmentforum.org/posts/EpdXLNXyL4EYLFwF8/an-increasingly-manipulative-newsfeed,An Increasingly Manipulative Newsfeed,['Michaël Trazzi'],2019-07-01T15:26:43Z,alignmentforum,, 8307,https://www.alignmentforum.org/posts/PLqopCagHKo2EK5cE/train-first-vs-prune-first-in-neural-networks,Train first VS prune first in neural networks.,['Donald Hobson'],2022-07-09T15:53:33Z,alignmentforum,, 8318,https://www.alignmentforum.org/posts/yGuo5R9fgrrFLYWuv/when-do-utility-functions-constrain-1,When do utility functions constrain?,['Hoagy'],2019-08-23T17:19:06Z,alignmentforum,,
8327,https://www.alignmentforum.org/posts/qrn2dRSwNratuM3tq/reverse-engineering-using-interpretability,Reverse-engineering using interpretability,['Beth Barnes'],2021-12-29T23:21:14Z,alignmentforum,, 8344,https://www.alignmentforum.org/posts/8edorDRSbJa9TCipa/a-neural-network-undergoing-gradient-based-training-as-a,A Neural Network undergoing Gradient-based Training as a Complex System,['Spencer Becker-Kahn'],2023-02-19T22:08:56Z,alignmentforum,, 8361,https://www.alignmentforum.org/posts/xERh9dkBkHLHp7Lg6/making-it-harder-for-an-agi-to-trick-us-with-stvs,"Making it harder for an AGI to ""trick"" us, with STVs",['Tor Økland Barstad'],2022-07-09T14:42:30Z,alignmentforum,, 8399,https://www.alignmentforum.org/posts/TdwpN484eTbPSvZkm/rohin-shah-on-reasons-for-ai-optimism,Rohin Shah on reasons for AI optimism,['abergal'],2019-10-31T12:10:02Z,alignmentforum,, 8420,https://www.alignmentforum.org/posts/BRiMQELD5WYyvncTE/ai-unsafety-via-non-zero-sum-debate,AI Unsafety via Non-Zero-Sum Debate,['VojtaKovarik'],2020-07-03T22:03:16Z,alignmentforum,, 8443,https://www.alignmentforum.org/posts/q4j7qbEZRaTAA9Kxf/graphical-world-models-counterfactuals-and-machine-learning,"Graphical World Models, Counterfactuals, and Machine Learning Agents",['Koen.Holtman'],2021-02-17T11:07:47Z,alignmentforum,, 8470,https://www.alignmentforum.org/posts/69XPfonos795hD57o/an-87-what-might-happen-as-deep-learning-scales-even-further,[AN #87]: What might happen as deep learning scales even further?,['Rohin Shah'],2020-02-19T18:20:02Z,alignmentforum,, 8485,https://www.alignmentforum.org/posts/TRKF9g65nhPBQoxJu/distribution-shifts-and-the-importance-of-ai-safety,Distribution Shifts and The Importance of AI Safety,['Leon Lang'],2022-09-29T22:38:13Z,alignmentforum,, 8507,https://www.alignmentforum.org/posts/eowhY5NaCaqY6Pkj9/behavioural-statistics-for-a-maze-solving-agent,Behavioural statistics for a maze-solving agent,"['peligrietzer', 'TurnTrout']",2023-04-20T22:26:09Z,alignmentforum,, 8527,https://www.alignmentforum.org/posts/hjEaZgyQ2iprDhkg8/security-amplification,Security amplification,['paulfchristiano'],2019-02-06T17:28:20Z,alignmentforum,, 8542,https://www.alignmentforum.org/posts/HF2vpnmgqmHyLGRrA/the-simulation-hypothesis-undercuts-the-sia-great-filter,The Simulation Hypothesis Undercuts the SIA/Great Filter Doomsday Argument,"['Mark Xu', 'CarlShulman']",2021-10-01T22:23:23Z,alignmentforum,, 8551,https://www.alignmentforum.org/posts/KAmEZMYE3QKGmKBNd/alignment-newsletter-23,Alignment Newsletter #23,['Rohin Shah'],2018-09-10T17:10:01Z,alignmentforum,, 8578,https://www.alignmentforum.org/posts/5bd75cc58225bf0670374ed6/an-implementation-of-modal-udt,An implementation of modal UDT,['Benya_Fallenstein'],2015-02-11T06:02:22Z,alignmentforum,, 8590,https://www.alignmentforum.org/posts/bffA9WC9nEJhtagQi/introduction-to-pragmatic-ai-safety-pragmatic-ai-safety-1,Introduction to Pragmatic AI Safety [Pragmatic AI Safety #1],"['Dan H', 'ThomasW']",2022-05-09T17:06:00Z,alignmentforum,, 8607,https://www.alignmentforum.org/posts/NbGmfxbaABPsspib7/christiano-and-yudkowsky-on-ai-predictions-and-human,Christiano and Yudkowsky on AI predictions and human intelligence,['Eliezer Yudkowsky'],2022-02-23T21:34:55Z,alignmentforum,, 8628,https://www.alignmentforum.org/posts/gpk8dARHBi7Mkmzt9/what-ai-safety-materials-do-ml-researchers-find-compelling,What AI Safety Materials Do ML Researchers Find Compelling?,"['Vael Gates', 'Collin']",2022-12-28T02:03:32Z,alignmentforum,, 
8644,https://www.alignmentforum.org/posts/Wnqua6eQkewL3bqsF/matt-botvinick-on-the-spontaneous-emergence-of-learning,Matt Botvinick on the spontaneous emergence of learning algorithms,['Adam Scholl'],2020-08-12T07:47:14Z,alignmentforum,, 8660,https://www.alignmentforum.org/posts/Haawpd5rZrzkzvYRC/an-162-foundation-models-a-paradigm-shift-within-ai,[AN #162]: Foundation models: a paradigm shift within AI,['Rohin Shah'],2021-08-27T17:20:04Z,alignmentforum,, 8689,https://www.alignmentforum.org/posts/MtDmnSpPHDvLr7CdM/catastrophic-risks-from-ai-2-malicious-use,Catastrophic Risks from AI #2: Malicious Use,"['Dan H', 'Mantas Mazeika', 'ThomasW']",2023-06-22T17:10:08Z,alignmentforum,, 8721,https://www.alignmentforum.org/posts/o3smzgcH8MR9RcMgZ/safely-controlling-the-agi-agent-reward-function,Safely controlling the AGI agent reward function,['Koen.Holtman'],2021-02-17T14:47:00Z,alignmentforum,, 8734,https://www.alignmentforum.org/posts/GnPSQAi3QzHjK8ZQR/gratification-a-useful-concept-maybe-new,"Gratification: a useful concept, maybe new",['Stuart_Armstrong'],2019-08-25T18:58:16Z,alignmentforum,, 8744,https://www.alignmentforum.org/posts/f6ByNdGJYxR3Kwguy/asot-searching-for-consequentialist-structure,[ASoT] Searching for consequentialist structure,['leogao'],2022-03-27T19:09:13Z,alignmentforum,, 8758,https://www.alignmentforum.org/posts/tz3hoCs2efHjzNYm5/quantilizers-and-generative-models,Quantilizers and Generative Models,['Adam Jermyn'],2022-07-18T16:32:34Z,alignmentforum,, 8775,https://www.alignmentforum.org/posts/nmMorGE4MS4txzr8q/simulators-seminar-sequence-1-background-and-shared,[Simulators seminar sequence] #1 Background & shared assumptions,"['Jan', 'Charlie Steiner', 'Logan Riggs', 'janus', 'jacquesthibs', 'metasemi', 'Michael Oesterle', 'Lucas Teixeira', 'peligrietzer', 'remember']",2023-01-02T23:48:50Z,alignmentforum,, 8791,https://www.alignmentforum.org/posts/JcLhYQQADzTsAEaXd/ai-as-a-science-and-three-obstacles-to-alignment-strategies,"AI as a science, and three obstacles to alignment strategies",['So8res'],2023-10-25T21:00:16Z,alignmentforum,, 8817,https://www.alignmentforum.org/posts/CWD8FxA3yJPmZE9o3/automated-fact-checking-a-look-at-the-field,Automated Fact Checking: A Look at the Field,['Hoagy'],2021-10-06T23:52:54Z,alignmentforum,, 8837,https://www.alignmentforum.org/posts/SmDziGM9hBjW9DKmf/2019-ai-alignment-literature-review-and-charity-comparison,2019 AI Alignment Literature Review and Charity Comparison,['Larks'],2019-12-19T03:00:55Z,alignmentforum,, 8876,https://www.alignmentforum.org/posts/5bd75cc58225bf067037541a/cooperative-oracles-nonexploited-bargaining,Cooperative Oracles: Nonexploited Bargaining,['Scott Garrabrant'],2017-06-03T00:39:55Z,alignmentforum,,
8889,https://www.alignmentforum.org/posts/mKBfa8v4S9pNKSyKK/homogeneity-vs-heterogeneity-in-ai-takeoff-scenarios,Homogeneity vs. heterogeneity in AI takeoff scenarios,['evhub'],2020-12-16T01:37:21Z,alignmentforum,, 8904,https://www.alignmentforum.org/posts/4xpDnGaKz472qB4LY/buridan-s-ass-in-coordination-games,Buridan's ass in coordination games,['jessicata'],2018-07-16T02:51:31Z,alignmentforum,, 8918,https://www.alignmentforum.org/posts/EkSvsJkZE8GCeCj7u/basin-broadness-depends-on-the-size-and-number-of-orthogonal-1,Basin broadness depends on the size and number of orthogonal features,"['TheMcDouglas', 'Avery', 'Lucius Bushnaq']",2022-08-27T17:29:33Z,alignmentforum,, 8936,https://www.alignmentforum.org/posts/L6Ynch3CYMxXZkiq8/a-proof-of-loeb-s-theorem-using-computability-theory,A Proof of Löb's Theorem using Computability Theory,['jessicata'],2023-08-16T18:57:41Z,alignmentforum,, 8946,https://www.alignmentforum.org/posts/FoiiRDC3EhjHx7ayY/introducing-the-ai-alignment-forum-faq,Introducing the AI Alignment Forum (FAQ),"['habryka', 'Ben Pace', 'Raemon', 'jimrandomh']",2018-10-29T21:07:54Z,alignmentforum,, 8967,https://www.alignmentforum.org/posts/t3AJW5jP3sk36aGoC/capability-amplification-1,Capability amplification,['paulfchristiano'],2019-01-20T07:03:28Z,alignmentforum,, 8988,https://www.alignmentforum.org/posts/LdoKzGom7gPLqEZyQ/knowledge-neurons-in-pretrained-transformers,Knowledge Neurons in Pretrained Transformers,['evhub'],2021-05-17T22:54:50Z,alignmentforum,, 9003,https://www.alignmentforum.org/posts/ZiLLxaLB5CCofrzPp/reward-uncertainty,Reward uncertainty,['Rohin Shah'],2019-01-19T02:16:05Z,alignmentforum,, 9021,https://www.alignmentforum.org/posts/LgEvWDzWga7aagf7T/confusion-about-alignment-requirements,confusion about alignment requirements,['Tamsin Leake'],2022-10-06T10:32:50Z,alignmentforum,, 9041,https://www.alignmentforum.org/posts/FCffGHJnYfdE2DgRe/humans-do-acausal-coordination-all-the-time,Humans do acausal coordination all the time,['Adam Jermyn'],2022-11-02T14:40:40Z,alignmentforum,, 9054,https://www.alignmentforum.org/posts/gebzzEwn2TaA6rGkc/deep-learning-systems-are-not-less-interpretable-than-logic,Deep Learning Systems Are Not Less Interpretable Than Logic/Probability/Etc,['johnswentworth'],2022-06-04T05:41:57Z,alignmentforum,, 9064,https://www.alignmentforum.org/posts/fMJhfNZXFzCNpCL8v/when-is-intent-alignment-sufficient-or-necessary-to-reduce,When is intent alignment sufficient or necessary to reduce AGI conflict?,"['JesseClifton', 'Sammy Martin', 'antimonyanthony']",2022-09-14T19:39:12Z,alignmentforum,, 9091,https://www.alignmentforum.org/posts/DezghAd4bdxivEknM/a-small-update-to-the-sparse-coding-interim-research-report,A small update to the Sparse Coding interim research report,"['Lee Sharkey', 'Dan Braun', 'beren']",2023-04-30T19:54:38Z,alignmentforum,, 9100,https://www.alignmentforum.org/posts/HXDkCtk9tae5wFmjG/an-173-recent-language-model-results-from-deepmind,[AN #173] Recent language model results from DeepMind,['Rohin Shah'],2022-07-21T02:30:02Z,alignmentforum,,
9123,https://www.alignmentforum.org/posts/Y3bkJ59j4dciiLYyw/intro-to-brain-like-agi-safety-4-the-short-term-predictor,[Intro to brain-like-AGI safety] 4. The “short-term predictor”,['Steven Byrnes'],2022-02-16T13:12:14Z,alignmentforum,, 9144,https://www.alignmentforum.org/posts/CQMhLujqMpQ78Ru3R/infinite-width-mlps-as-an-ensemble-prior,"Infinite-width MLPs as an ""ensemble prior""",['Vivek Hebbar'],2023-05-12T11:45:52Z,alignmentforum,, 9158,https://www.alignmentforum.org/posts/b6jJddSvWMdZHJHh3/environmental-structure-can-cause-instrumental-convergence,Environmental Structure Can Cause Instrumental Convergence,['TurnTrout'],2021-06-22T22:26:03Z,alignmentforum,, 9172,https://www.alignmentforum.org/posts/SnKfFscgC8Nj5ddi3/classical-symbol-grounding-and-causal-graphs,Classical symbol grounding and causal graphs,['Stuart_Armstrong'],2021-10-14T18:04:32Z,alignmentforum,, 9187,https://www.alignmentforum.org/posts/8F4dXYriqbsom46x5/pretraining-language-models-with-human-preferences,Pretraining Language Models with Human Preferences,"['Tomek Korbak', 'Sam Bowman', 'Ethan Perez']",2023-02-21T17:57:10Z,alignmentforum,, 9206,https://www.alignmentforum.org/posts/s4FNjvrJG6zmYdBuG/axrp-episode-9-finite-factored-sets-with-scott-garrabrant,AXRP Episode 9 - Finite Factored Sets with Scott Garrabrant,['DanielFilan'],2021-06-24T22:10:13Z,alignmentforum,, 9222,https://www.alignmentforum.org/posts/GvRb4m6jAvsrtwJGH/alignment-newsletter-22,Alignment Newsletter #22,['Rohin Shah'],2018-09-03T16:10:01Z,alignmentforum,, 9255,https://www.alignmentforum.org/posts/8eX8DJctsACtR2sfX/an-118-risks-solutions-and-prioritization-in-a-world-with,"[AN #118]: Risks, solutions, and prioritization in a world with many AI systems",['Rohin Shah'],2020-09-23T18:20:05Z,alignmentforum,, 9287,https://www.alignmentforum.org/posts/wTKjRFeSjKLDSWyww/possible-takeaways-from-the-coronavirus-pandemic-for-slow-ai,Possible takeaways from the coronavirus pandemic for slow AI takeoff,['Vika'],2020-05-31T17:51:26Z,alignmentforum,, 9307,https://www.alignmentforum.org/posts/T5awG3XQKJtprABsy/an-108-why-we-should-scrutinize-arguments-for-ai-risk,[AN #108]: Why we should scrutinize arguments for AI risk,['Rohin Shah'],2020-07-16T06:47:38Z,alignmentforum,, 9338,https://www.alignmentforum.org/posts/Cfe2LMmQC4hHTDZ8r/more-examples-of-goal-misgeneralization,More examples of goal misgeneralization,"['Rohin Shah', 'Vikrant Varma']",2022-10-07T14:38:00Z,alignmentforum,, 9356,https://www.alignmentforum.org/posts/mCZSXdZoNoWn5SkvE/imitation-learning-from-language-feedback-1,Imitation Learning from Language Feedback,"['Jérémy Scheurer', 'Tomek Korbak', 'Ethan Perez']",2023-03-30T14:11:56Z,alignmentforum,, 9389,https://www.alignmentforum.org/posts/hA6z9s72KZDYpuFhq/finite-factored-sets-conditional-orthogonality,Finite Factored Sets: Conditional Orthogonality,['Scott Garrabrant'],2021-07-09T06:01:47Z,alignmentforum,, 9399,https://www.alignmentforum.org/posts/rzqACeBGycZtqCfaX/fun-with-12-ooms-of-compute,Fun with +12 OOMs of Compute,['Daniel Kokotajlo'],2021-03-01T13:30:14Z,alignmentforum,, 9419,https://www.alignmentforum.org/posts/e5duEqhAhurT8tCyr/a-model-of-decision-making-in-the-brain-the-short-version,A model of decision-making in the brain (the short version),['Steven Byrnes'],2021-07-18T14:39:35Z,alignmentforum,, 9433,https://www.alignmentforum.org/posts/ECPmgwwWBikTtdqXo/applications-for-deconfusing-goal-directedness,Applications for Deconfusing Goal-Directedness,['adamShimi'],2021-08-08T13:05:26Z,alignmentforum,,
9463,https://www.alignmentforum.org/posts/qtTW6BFrxWw4iHcjf/lying-is-cowardice-not-strategy,"Lying is Cowardice, not Strategy","['Connor Leahy', 'Gabriel Alfour']",2023-10-24T13:24:25Z,alignmentforum,, 9478,https://www.alignmentforum.org/posts/cYsGrWEzjb324Zpjx/comparing-utilities,Comparing Utilities,['abramdemski'],2020-09-14T20:56:15Z,alignmentforum,, 9498,https://www.alignmentforum.org/posts/ewkYgtZapQRtDPT2F/additive-operations-on-cartesian-frames,Additive Operations on Cartesian Frames,['Scott Garrabrant'],2020-10-26T15:12:15Z,alignmentforum,, 9514,https://www.alignmentforum.org/posts/CmmhFtCg7hsAy3brQ/beliefs-vs-notions,"""Beliefs"" vs. ""Notions""",['David Scott Krueger (formerly: capybaralet)'],2021-03-12T16:04:31Z,alignmentforum,, 9523,https://www.alignmentforum.org/posts/JvL3SC6tPjFyCiHad/become-a-pibbss-research-affiliate-1,Become a PIBBSS Research Affiliate,"['Nora_Ammann', 'DusanDNesic']",2023-10-10T07:41:02Z,alignmentforum,, 9534,https://www.alignmentforum.org/posts/BxersHYN2qcFoonwg/experimentally-evaluating-whether-honesty-generalizes,Experimentally evaluating whether honesty generalizes,['paulfchristiano'],2021-07-01T17:47:58Z,alignmentforum,, 9545,https://www.alignmentforum.org/posts/Zfik4xESDyahRALKk/yann-lecun-on-agi-and-ai-safety,Yann LeCun on AGI and AI Safety,['Chris_Leong'],2023-08-06T21:56:53Z,alignmentforum,, 9554,https://www.alignmentforum.org/posts/bER8yqrmHatrES9nR/an-145-our-three-year-anniversary,[AN #145]: Our three year anniversary!,['Rohin Shah'],2021-04-09T17:48:22Z,alignmentforum,, 9581,https://www.alignmentforum.org/posts/pDaxobbB9FG5Dvqyv/discussion-objective-robustness-and-inner-alignment,Discussion: Objective Robustness and Inner Alignment Terminology,"['jbkjr', 'Lauro Langosco']",2021-06-23T23:25:37Z,alignmentforum,, 9600,https://www.alignmentforum.org/posts/YjqwTepi53MyM4omT/why-is-the-impact-penalty-time-inconsistent,Why is the impact penalty time-inconsistent?,['Stuart_Armstrong'],2020-07-09T17:26:07Z,alignmentforum,, 9615,https://www.alignmentforum.org/posts/KptP3J2ThDTnriric/consistencies-as-meta-preferences,Consistencies as (meta-)preferences,['Stuart_Armstrong'],2021-05-03T15:10:51Z,alignmentforum,, 9630,https://www.alignmentforum.org/posts/QsZ3ycfRYs2ps5sNA/a-loebian-argument-pattern-for-implicit-reasoning-in-natural,A Löbian argument pattern for implicit reasoning in natural language: Löbian party invitations,['Andrew_Critch'],2023-01-01T17:40:00Z,alignmentforum,, 9641,https://www.alignmentforum.org/posts/ao7KLoBEvMdHFjrNZ/counterfactuals-are-an-answer-not-a-question,"Counterfactuals are an Answer, Not a Question",['Chris_Leong'],2019-09-03T15:36:40Z,alignmentforum,, 9656,https://www.alignmentforum.org/posts/BZKLf629NDNfEkZzJ/creating-agi-safety-interlocks,Creating AGI Safety Interlocks,['Koen.Holtman'],2021-02-05T12:01:46Z,alignmentforum,, 9678,https://www.alignmentforum.org/posts/4rmvMThJYNcCptAya/axrp-episode-22-shard-theory-with-quintin-pope,AXRP Episode 22 - Shard Theory with Quintin Pope,['DanielFilan'],2023-06-15T19:00:01Z,alignmentforum,, 9707,https://www.alignmentforum.org/posts/s4GqendqFsKzfhFzD/alignment-201-curriculum,Alignment 201 curriculum,['Richard_Ngo'],2022-10-12T18:03:03Z,alignmentforum,, 9740,https://www.alignmentforum.org/posts/NXqs4nYXaq8q6dTTx/humans-consulting-hch,Humans Consulting HCH,['paulfchristiano'],2018-11-25T23:18:55Z,alignmentforum,, 9755,https://www.alignmentforum.org/posts/6Hee7w2paEzHsD6mn/collection-of-gpt-3-results,Collection of GPT-3 results,['Kaj_Sotala'],2020-07-18T20:04:50Z,alignmentforum,, 9775,https://www.alignmentforum.org/posts/farherQcqFQXqRcvv/universality-unwrapped,Universality Unwrapped,['adamShimi'],2020-08-21T18:53:26Z,alignmentforum,,
9793,https://www.alignmentforum.org/posts/WRy6KNnxwQHc5Ktjc/thoughts-on-ai-safety-via-debate,"Thoughts on ""AI safety via debate""",['Gordon Seidoh Worley'],2018-05-10T00:44:09Z,alignmentforum,, 9816,https://www.alignmentforum.org/posts/7MCqRnZzvszsxgtJi/christiano-cotra-and-yudkowsky-on-ai-progress,"Christiano, Cotra, and Yudkowsky on AI progress","['Eliezer Yudkowsky', 'Ajeya Cotra']",2021-11-25T16:45:32Z,alignmentforum,, 9836,https://www.alignmentforum.org/posts/HrtqLy46Fx7xqRrMo/boundaries-part-3a-defining-boundaries-as-directed-markov,"«Boundaries», Part 3a: Defining boundaries as directed Markov blankets",['Andrew_Critch'],2022-10-30T06:31:00Z,alignmentforum,, 9859,https://www.alignmentforum.org/posts/GnMWifHzAknqJsLnv/request-for-distillation-coherence-of-distributed-decisions,[Request for Distillation] Coherence of Distributed Decisions With Different Inputs Implies Conditioning,['johnswentworth'],2022-04-25T17:01:09Z,alignmentforum,, 9873,https://www.alignmentforum.org/posts/gbeyjALdjdoCGayc6/reflections-on-the-pibbss-fellowship-2022,Reflections on the PIBBSS Fellowship 2022,"['Nora_Ammann', 'particlemania']",2022-12-11T21:53:20Z,alignmentforum,, 9902,https://www.alignmentforum.org/posts/ZSYo97kcfwtFdpcwe/input-swap-graphs-discovering-the-role-of-neural-network,Input Swap Graphs: Discovering the role of neural network components at scale,['Alexandre Variengien'],2023-05-12T09:41:09Z,alignmentforum,, 9927,https://www.alignmentforum.org/posts/uLMWMeBG3ruoBRhMW/a-comparison-of-causal-scrubbing-causal-abstractions-and,"A comparison of causal scrubbing, causal abstractions, and related methods","['Erik Jenner', 'Adrià Garriga-alonso', 'Egor Zverev']",2023-06-08T23:40:34Z,alignmentforum,, 9948,https://www.alignmentforum.org/posts/f8nd9F7dL9SxueLFA/eis-iv-a-spotlight-on-feature-attribution-saliency,EIS IV: A Spotlight on Feature Attribution/Saliency,['scasper'],2023-02-15T18:46:23Z,alignmentforum,, 9965,https://www.alignmentforum.org/posts/N6WM6hs7RQMKDhYjB/a-mechanistic-interpretability-analysis-of-grokking,A Mechanistic Interpretability Analysis of Grokking,"['Neel Nanda', 'Tom Lieberum']",2022-08-15T02:41:36Z,alignmentforum,, 9996,https://www.alignmentforum.org/posts/HNnRCPe2CejfupSow/fixed-points-in-mortal-population-games,Fixed points in mortal population games,['ViktoriaMalyasova'],2023-03-14T07:10:12Z,alignmentforum,, 10008,https://www.alignmentforum.org/posts/3cR2YH9dpr7SmKvCb/toy-models-and-tegum-products,Toy Models and Tegum Products,['Adam Jermyn'],2022-11-04T18:51:42Z,alignmentforum,, 10020,https://www.alignmentforum.org/posts/fRSj2W4Fjje8rQWm9/thoughts-on-sharing-information-about-language-model,Thoughts on sharing information about language model capabilities,['paulfchristiano'],2023-07-31T16:04:21Z,alignmentforum,, 10035,https://www.alignmentforum.org/posts/k8F8TBzuZtLheJt47/deconfusing-human-values-research-agenda-v1,Deconfusing Human Values Research Agenda v1,['Gordon Seidoh Worley'],2020-03-23T16:25:28Z,alignmentforum,, 10054,https://www.alignmentforum.org/posts/GqxuDtZvfgL2bEQ5v/arguments-against-myopic-training,Arguments against myopic training,['Richard_Ngo'],2020-07-09T16:07:28Z,alignmentforum,, 10080,https://www.alignmentforum.org/posts/QEfbg6vbjGgfFzJM4/countably-factored-spaces,Countably Factored Spaces,['Diffractor'],2021-09-09T04:24:58Z,alignmentforum,, 10097,https://www.alignmentforum.org/posts/ahZQbxiPPpsTutDy2/abstraction-evolution-and-gears,"Abstraction, Evolution and Gears",['johnswentworth'],2020-06-24T17:39:43Z,alignmentforum,, 
10120,https://www.alignmentforum.org/posts/jeiz7WfCnGQWoShkT/mapping-out-alignment,Mapping Out Alignment,"['Logan Riggs', 'adamShimi', 'Gurkenglas', 'AlexMennen', 'Gyrodiot']",2020-08-15T01:02:31Z,alignmentforum,, 10149,https://www.alignmentforum.org/posts/kCY9dYGLoThC3aG7w/best-reasons-for-pessimism-about-impact-of-impact-measures,Best reasons for pessimism about impact of impact measures?,['TurnTrout'],2019-04-10T17:22:13Z,alignmentforum,, 10169,https://www.alignmentforum.org/posts/snmwFzoDMQMhyTirN/value-extrapolation-vs-wireheading,Value extrapolation vs Wireheading,['Stuart_Armstrong'],2022-06-17T15:02:46Z,alignmentforum,, 10181,https://www.alignmentforum.org/posts/5bd75cc58225bf0670374f02/forum-digest-reflective-oracles,Forum Digest: Reflective Oracles,['jessicata'],2015-03-22T04:02:38Z,alignmentforum,, 10193,https://www.alignmentforum.org/posts/3ou8DayvDXxufkjHD/openai-api-base-models-are-not-sycophantic-at-any-size,"OpenAI API base models are not sycophantic, at any size",['nostalgebraist'],2023-08-29T00:58:29Z,alignmentforum,, 10212,https://www.alignmentforum.org/posts/5bd75cc58225bf067037525d/logical-inductor-limits-are-dense-under-pointwise-convergence,Logical inductor limits are dense under pointwise convergence,['SamEisenstat'],2016-10-06T08:07:53Z,alignmentforum,, 10226,https://www.alignmentforum.org/posts/73pTioGZKNcfQmvGF/the-measuring-stick-of-utility-problem,"The ""Measuring Stick of Utility"" Problem",['johnswentworth'],2022-05-25T16:17:23Z,alignmentforum,, 10238,https://www.alignmentforum.org/posts/L5Tf34FXA6weiGwEz/learning-russian-roulette,Learning Russian Roulette,['Bunthut'],2021-04-02T18:56:27Z,alignmentforum,, 10253,https://www.alignmentforum.org/posts/EF5M6CmKRd6qZk27Z/my-research-methodology,My research methodology,['paulfchristiano'],2021-03-22T21:20:07Z,alignmentforum,, 10282,https://www.alignmentforum.org/posts/aNAFrGbzXddQBMDqh/moore-s-law-ai-and-the-pace-of-progress,"Moore's Law, AI, and the pace of progress",['Veedrac'],2021-12-11T03:02:25Z,alignmentforum,, 10309,https://www.alignmentforum.org/posts/w6c47JGY3C4k4dWBc/reward-is-the-optimization-target-of-capabilities-1,Reward is the optimization target (of capabilities researchers),['Max H'],2023-05-15T03:22:49Z,alignmentforum,, 10324,https://www.alignmentforum.org/posts/9GyniEBaN3YYTqZXn/the-self-unalignment-problem,The self-unalignment problem,"['Jan_Kulveit', 'rosehadshar']",2023-04-14T12:10:12Z,alignmentforum,, 10341,https://www.alignmentforum.org/posts/kuL7YmsuQJ9v6xNhK/alignment-newsletter-40,Alignment Newsletter #40,['Rohin Shah'],2019-01-08T20:10:03Z,alignmentforum,, 10369,https://www.alignmentforum.org/posts/XusDPpXr6FYJqWkxh/an-156-the-scaling-hypothesis-a-plan-for-building-agi,[AN #156]: The scaling hypothesis: a plan for building AGI,['Rohin Shah'],2021-07-16T17:10:06Z,alignmentforum,, 10396,https://www.alignmentforum.org/posts/8fpzBHt7e6n7Qjoo9/ai-risk-for-epistemic-minimalists,AI Risk for Epistemic Minimalists,['Alex Flint'],2021-08-22T15:39:16Z,alignmentforum,, 10412,https://www.alignmentforum.org/posts/zimRZudzaewHJu4NA/animal-weapons-lessons-for-humans-in-the-age-of-x-risk-2,Animal Weapons: Lessons for Humans in the Age of X-Risk,['Damin Curtis'],2023-07-04T18:14:24Z,alignmentforum,, 10429,https://www.alignmentforum.org/posts/9CKBtxWtjvminNTmC/how-the-mtg-color-wheel-explains-ai-safety,How the MtG Color Wheel Explains AI Safety,['Scott Garrabrant'],2019-02-15T23:43:00Z,alignmentforum,, 
10468,https://www.alignmentforum.org/posts/EszCTbovFfpJd5C8N/axrp-episode-4-risks-from-learned-optimization-with-evan,AXRP Episode 4 - Risks from Learned Optimization with Evan Hubinger,['DanielFilan'],2021-02-18T00:03:18Z,alignmentforum,, 10501,https://www.alignmentforum.org/posts/dvaCebTNc2tfMDcxS/a-critical-agential-account-of-free-will-causation-and,"A critical agential account of free will, causation, and physics",['jessicata'],2020-03-05T07:57:38Z,alignmentforum,, 10519,https://www.alignmentforum.org/posts/Cd7Hw492RqooYgQAS/progress-on-causal-influence-diagrams,Progress on Causal Influence Diagrams,['tom4everitt'],2021-06-30T15:34:33Z,alignmentforum,, 10538,https://www.alignmentforum.org/posts/Ayt24gxcjfY3tDmwK/transforming-myopic-optimization-to-ordinary-optimization-do,Transforming myopic optimization to ordinary optimization - Do we want to seek convergence for myopic optimization problems?,['tailcalled'],2021-12-11T20:38:47Z,alignmentforum,, 10555,https://www.alignmentforum.org/posts/S54HKhxQyttNLATKu/deconfusing-direct-vs-amortised-optimization,Deconfusing Direct vs Amortised Optimization,['beren'],2022-12-02T11:30:47Z,alignmentforum,, 10576,https://www.alignmentforum.org/posts/sCJDstZrpCB8dQveA/using-uninterpretable-llms-to-generate-interpretable-ai-code,Using (Uninterpretable) LLMs to Generate Interpretable AI Code,['Joar Skalse'],2023-07-02T01:01:54Z,alignmentforum,, 10590,https://www.alignmentforum.org/posts/nNqXfnjiezYukiMJi/reply-to-eliezer-on-biological-anchors,Reply to Eliezer on Biological Anchors,['HoldenKarnofsky'],2021-12-23T16:15:44Z,alignmentforum,, 10608,https://www.alignmentforum.org/posts/v2SvxGNijBzRYk7Ep/behaviour-manifolds-and-the-hessian-of-the-total-loss-notes,Behaviour Manifolds and the Hessian of the Total Loss - Notes and Criticism,['Spencer Becker-Kahn'],2022-09-03T00:15:41Z,alignmentforum,, 10625,https://www.alignmentforum.org/posts/TjaeCWvLZtEDAS5Ex/towards-developmental-interpretability,Towards Developmental Interpretability,"['Jesse Hoogland', 'Alexander Gietelink Oldenziel', 'Daniel Murfet', 'Stan van Wingerden']",2023-07-12T19:33:45Z,alignmentforum,, 10642,https://www.alignmentforum.org/posts/jusq6kyZ6XSrtW3Bf/paper-summary-omnigrok-grokking-beyond-algorithmic-data,Paper+Summary: OMNIGROK: GROKKING BEYOND ALGORITHMIC DATA,['Marius Hobbhahn'],2022-10-04T07:22:15Z,alignmentforum,, 10658,https://www.alignmentforum.org/posts/WFx8iPDS4WaaHyCtL/alex-irpan-my-ai-timelines-have-sped-up,"Alex Irpan: ""My AI Timelines Have Sped Up""",['Vaniver'],2020-08-19T16:23:25Z,alignmentforum,, 10671,https://www.alignmentforum.org/posts/a5e9arCnbDac9Doig/it-looks-like-you-re-trying-to-take-over-the-world,It Looks Like You're Trying To Take Over The World,['gwern'],2022-03-09T16:35:35Z,alignmentforum,, 10680,https://www.alignmentforum.org/posts/FkgsxrGf3QxhfLWHG/risks-from-learned-optimization-introduction,Risks from Learned Optimization: Introduction,"['evhub', 'Chris van Merwijk', 'vlad_m', 'Joar Skalse', 'Scott Garrabrant']",2019-05-31T23:44:54Z,alignmentforum,, 10699,https://www.alignmentforum.org/posts/NqQxTn5MKEYhSnbuB/goodhart-s-curse-and-limitations-on-ai-alignment,Goodhart's Curse and Limitations on AI Alignment,['Gordon Seidoh Worley'],2019-08-19T07:57:01Z,alignmentforum,, 10719,https://www.alignmentforum.org/posts/uHE9b4HWQjuqnFYqP/alignment-newsletter-32,Alignment Newsletter #32,['Rohin Shah'],2018-11-12T17:20:04Z,alignmentforum,,
10752,https://www.alignmentforum.org/posts/psKxyNGH9HuxvpFPB/alignment-newsletter-45,Alignment Newsletter #45,['Rohin Shah'],2019-02-14T02:10:01Z,alignmentforum,, 10773,https://www.alignmentforum.org/posts/KER27SxZssfmsusxy/voi-is-only-nonnegative-when-information-is-uncorrelated,VOI is Only Nonnegative When Information is Uncorrelated With Future Action,['Diffractor'],2018-08-31T05:13:12Z,alignmentforum,, 10789,https://www.alignmentforum.org/posts/FgsoWSACQfyyaB5s7/shutdown-seeking-ai,Shutdown-Seeking AI,['Simon Goldstein'],2023-05-31T22:19:31Z,alignmentforum,, 10815,https://www.alignmentforum.org/posts/Expvyb6nndbjqigRL/examples-of-causal-abstraction,Examples of Causal Abstraction,['johnswentworth'],2019-12-12T22:54:44Z,alignmentforum,, 10836,https://www.alignmentforum.org/posts/zB4f7QqKhBHa5b37a/introduction-to-the-infra-bayesianism-sequence,Introduction To The Infra-Bayesianism Sequence,"['Diffractor', 'Vanessa Kosoy']",2020-08-26T20:31:30Z,alignmentforum,, 10859,https://www.alignmentforum.org/posts/QknPz9JQTQpGdaWDp/an-80-why-ai-risk-might-be-solved-without-additional,[AN #80]: Why AI risk might be solved without additional intervention from longtermists,['Rohin Shah'],2020-01-02T18:20:02Z,alignmentforum,, 10884,https://www.alignmentforum.org/posts/RKDQCB6smLWgs2Mhr/multi-component-learning-and-s-curves,Multi-Component Learning and S-Curves,"['Adam Jermyn', 'Buck']",2022-11-30T01:37:06Z,alignmentforum,, 10897,https://www.alignmentforum.org/posts/7b2RJJQ76hjZwarnj/specification-gaming-the-flip-side-of-ai-ingenuity,Specification gaming: the flip side of AI ingenuity,"['Vika', 'vlad_m', 'Matthew Rahtz', 'tom4everitt', 'Zac Kenton', 'janleike']",2020-05-06T23:51:58Z,alignmentforum,, 10929,https://www.alignmentforum.org/posts/W22Btd7NmGuucFejc/instrumental-convergence-for-realistic-agent-objectives,Instrumental Convergence For Realistic Agent Objectives,['TurnTrout'],2022-01-22T00:41:37Z,alignmentforum,, 10946,https://www.alignmentforum.org/posts/P6eWEMCrSjbuWwESk/an-110-learning-features-from-human-feedback-to-enable,[AN #110]: Learning features from human feedback to enable reward learning,['Rohin Shah'],2020-07-29T17:20:04Z,alignmentforum,, 10967,https://www.alignmentforum.org/posts/biP5XBmqvjopvky7P/a-eta-quick-note-on-terminology-ai-alignment-ai-x-safety,A (EtA: quick) note on terminology: AI Alignment != AI x-safety,['David Scott Krueger (formerly: capybaralet)'],2023-02-08T22:33:53Z,alignmentforum,, 10988,https://www.alignmentforum.org/posts/3BDqZMNSJDBg2oyvW/simulacra-are-things,Simulacra are Things,['janus'],2023-01-08T23:03:26Z,alignmentforum,, 11003,https://www.alignmentforum.org/posts/9vYg8MyLL4cMMaPQJ/updates-and-additions-to-embedded-agency,"Updates and additions to ""Embedded Agency""","['Rob Bensinger', 'abramdemski']",2020-08-29T04:22:26Z,alignmentforum,, 11024,https://www.alignmentforum.org/posts/zRA8B2FJLtTYRgie6/the-positional-embedding-matrix-and-previous-token-heads-how,The positional embedding matrix and previous-token heads: how do they actually work?,['AdamYedidia'],2023-08-10T01:58:59Z,alignmentforum,, 11035,https://www.alignmentforum.org/posts/5bd75cc58225bf0670374ed2/udt-in-the-land-of-probabilistic-oracles,UDT in the Land of Probabilistic Oracles,['jessicata'],2015-02-08T09:13:36Z,alignmentforum,, 11052,https://www.alignmentforum.org/posts/jz5QoizH8HkQwWZ9Q/nash-equilibriums-can-be-arbitrarily-bad,Nash equilibriums can be arbitrarily bad,['Stuart_Armstrong'],2019-05-01T14:58:22Z,alignmentforum,,
11071,https://www.alignmentforum.org/posts/ns95FHkkzpjXh4x5Q/what-does-gpt-3-understand-symbol-grounding-and-chinese,What does GPT-3 understand? Symbol grounding and Chinese rooms,['Stuart_Armstrong'],2021-08-03T13:14:42Z,alignmentforum,, 11087,https://www.alignmentforum.org/posts/o7sN7moJA8TrZKtKi/appendix-natural-abstractions-key-claims-theorems-and,"[Appendix] Natural Abstractions: Key Claims, Theorems, and Critiques","['LawrenceC', 'Erik Jenner', 'Leon Lang']",2023-03-16T16:38:34Z,alignmentforum,, 11099,https://www.alignmentforum.org/posts/5bd75cc58225bf06703754b3/stable-pointers-to-value-an-agent-embedded-in-its-own-utility-function,Stable Pointers to Value: An Agent Embedded in Its Own Utility Function,['abramdemski'],2017-08-17T00:22:02Z,alignmentforum,, 11122,https://www.alignmentforum.org/posts/ak8a6fbhXbdqH3FgD/an-126-avoiding-wireheading-by-decoupling-action-feedback,[AN #126]: Avoiding wireheading by decoupling action feedback from action effects,['Rohin Shah'],2020-11-26T23:20:05Z,alignmentforum,, 11147,https://www.alignmentforum.org/posts/BTM4SN53mWsHLkRJL/what-is-the-subjective-experience-of-free-will-for-agents,What is the subjective experience of free will for agents?,['Gordon Seidoh Worley'],2020-04-02T15:53:39Z,alignmentforum,, 11156,https://www.alignmentforum.org/posts/EGvtZMvSFELxoRqkZ/ai-benefits-post-1-introducing-ai-benefits,AI Benefits Post 1: Introducing “AI Benefits”,['Cullen'],2020-06-22T16:59:23Z,alignmentforum,, 11173,https://www.alignmentforum.org/posts/saGr6DapTPKFaMhhP/framing-ai-childhoods,Framing AI Childhoods,['David Udell'],2022-09-06T23:40:40Z,alignmentforum,, 11193,https://www.alignmentforum.org/posts/raoeNarFYCxxyKAop/modulating-sycophancy-in-an-rlhf-model-via-activation,Modulating sycophancy in an RLHF model via activation steering,['NinaR'],2023-08-09T07:06:51Z,alignmentforum,, 11214,https://www.alignmentforum.org/posts/zB3ukZJqt3pQDw9jz/ai-will-change-the-world-but-won-t-take-it-over-by-playing-3,"AI will change the world, but won’t take it over by playing “3-dimensional chess”.","['boazbarak', 'benedelman']",2022-11-22T18:57:30Z,alignmentforum,, 11243,https://www.alignmentforum.org/posts/eqzbXmqGqXiyjX3TP/elk-thought-dump-1,ELK Thought Dump,['abramdemski'],2022-02-28T18:46:09Z,alignmentforum,, 11259,https://www.alignmentforum.org/posts/Y9xD78kufNsF7wL6f/machine-learning-projects-on-ida,Machine Learning Projects on IDA,"['Owain_Evans', 'William_S', 'stuhlmueller']",2019-06-24T18:38:19Z,alignmentforum,, 11276,https://www.alignmentforum.org/posts/cKfryXvyJ522iFuNF/a-gym-gridworld-environment-for-the-treacherous-turn,A Gym Gridworld Environment for the Treacherous Turn,['Michaël Trazzi'],2018-07-28T21:27:34Z,alignmentforum,, 11292,https://www.alignmentforum.org/posts/Nwgdq6kHke5LY692J/alignment-by-default,Alignment By Default,['johnswentworth'],2020-08-12T18:54:01Z,alignmentforum,, 11321,https://www.alignmentforum.org/posts/zNYmbFwgrxiNtayMm/what-if-we-approach-ai-safety-like-a-technical-engineering,What if we approach AI safety like a technical engineering safety problem,['zeshen'],2022-08-20T10:29:39Z,alignmentforum,, 11345,https://www.alignmentforum.org/posts/ZjDh3BmbDrWJRckEb/quantilizer-optimizer-with-a-bounded-amount-of-output-1,Quantilizer ≡ Optimizer with a Bounded Amount of Output,['itaibn0'],2021-11-16T01:03:35Z,alignmentforum,, 11356,https://www.alignmentforum.org/posts/4xbsi4wbourPkb47x/technical-agi-safety-research-outside-ai,Technical AGI safety research outside AI,['Richard_Ngo'],2019-10-18T15:00:23Z,alignmentforum,,
11393,https://www.alignmentforum.org/posts/5bd75cc58225bf067037531d/corrigibility-thoughts-i-caring-about-multiple-things,Corrigibility thoughts I: caring about multiple things,['Stuart_Armstrong'],2017-06-02T16:27:30Z,alignmentforum,, 11408,https://www.alignmentforum.org/posts/5bd75cc58225bf0670375575/the-learning-theoretic-ai-alignment-research-agenda,The Learning-Theoretic AI Alignment Research Agenda,['Vanessa Kosoy'],2018-07-04T09:53:31Z,alignmentforum,, 11442,https://www.alignmentforum.org/posts/vbtvgNXkufFRSrx4j/three-ai-safety-related-ideas,Three AI Safety Related Ideas,['Wei Dai'],2018-12-13T21:32:25Z,alignmentforum,, 11466,https://www.alignmentforum.org/posts/KHLnzFgBtXxJQaDxj/the-alignment-newsletter-8-05-28-18,The Alignment Newsletter #8: 05/28/18,['Rohin Shah'],2018-05-28T16:00:47Z,alignmentforum,, 11490,https://www.alignmentforum.org/posts/gqqhYijxcKAtuAFjL/a-data-limited-future,A Data limited future,['Donald Hobson'],2022-08-06T14:56:36Z,alignmentforum,, 11513,https://www.alignmentforum.org/posts/RLcEQtoc5EqTxdan8/economic-ai-safety,Economic AI Safety,['jsteinhardt'],2021-09-16T20:50:50Z,alignmentforum,, 11535,https://www.alignmentforum.org/posts/dSAJdi99XmqftqXXq/eight-claims-about-multi-agent-agi-safety,Eight claims about multi-agent AGI safety,['Richard_Ngo'],2021-01-07T13:34:55Z,alignmentforum,, 11568,https://www.alignmentforum.org/posts/h2ipMwfx4D3oenzu2/an-130-a-new-ai-x-risk-podcast-and-reviews-of-the-field,"[AN #130]: A new AI x-risk podcast, and reviews of the field",['Rohin Shah'],2020-12-24T18:20:05Z,alignmentforum,, 11595,https://www.alignmentforum.org/posts/nmxzr2zsjNtjaHh7x/actually-othello-gpt-has-a-linear-emergent-world,"Actually, Othello-GPT Has A Linear Emergent World Representation",['Neel Nanda'],2023-03-29T22:13:15Z,alignmentforum,, 11615,https://www.alignmentforum.org/posts/vJFdjigzmcXMhNTsx/simulators,Simulators,['janus'],2022-09-02T12:45:34Z,alignmentforum,, 11639,https://www.alignmentforum.org/posts/FvcyMMaJKhYibtFDD/bayesian-probability-is-for-things-that-are-space-like,Bayesian Probability is for things that are Space-like Separated from You,['Scott Garrabrant'],2018-07-10T23:47:49Z,alignmentforum,, 11649,https://www.alignmentforum.org/posts/qhsELHzAHFebRJE59/a-greater-than-b-greater-than-a,(A -> B) -> A,['Scott Garrabrant'],2018-09-11T22:38:20Z,alignmentforum,, 11660,https://www.alignmentforum.org/posts/Rs6vZCrnQFWQ4p37P/when-to-use-quantilization,When to use quantilization,['RyanCarey'],2019-02-05T17:17:12Z,alignmentforum,, 11676,https://www.alignmentforum.org/posts/a5NxvzFGddj2e8uXQ/updating-drexler-s-cais-model,Updating Drexler's CAIS model,['Matthew Barnett'],2023-06-16T22:53:58Z,alignmentforum,, 11694,https://www.alignmentforum.org/posts/CPrJqN2Azz7Wqfiv4/axrp-episode-18-concept-extrapolation-with-stuart-armstrong,AXRP Episode 18 - Concept Extrapolation with Stuart Armstrong,['DanielFilan'],2022-09-03T23:12:01Z,alignmentforum,, 11717,https://www.alignmentforum.org/posts/cnn3kkC6kDqRkLe7W/shah-deepmind-and-leahy-conjecture-discuss-alignment-cruxes,Shah (DeepMind) and Leahy (Conjecture) Discuss Alignment Cruxes,"['Olivia Jimenez', 'Rohin Shah', 'Connor Leahy', 'Andrea_Miotti']",2023-05-01T16:47:42Z,alignmentforum,, 11738,https://www.alignmentforum.org/posts/qTiujsznctZcnuLF3/what-organizations-other-than-conjecture-have-esp-public,What organizations other than Conjecture have (esp. 
public) info-hazard policies?,['David Scott Krueger (formerly: capybaralet)'],2023-03-16T14:49:12Z,alignmentforum,, 11747,https://www.alignmentforum.org/posts/ZeE7EKHTFMBs8eMxn/clarifying-ai-alignment,"Clarifying ""AI Alignment""",['paulfchristiano'],2018-11-15T14:41:58Z,alignmentforum,, 11758,https://www.alignmentforum.org/posts/LAxAmooK4uDfWmbep/anomalous-tokens-reveal-the-original-identities-of-instruct,Anomalous tokens reveal the original identities of Instruct models,"['janus', 'jdp']",2023-02-09T01:30:57Z,alignmentforum,, 11775,https://www.alignmentforum.org/posts/TLvbXNHNvppNEkXYj/human-mimicry-mainly-works-when-we-re-already-close,Human Mimicry Mainly Works When We’re Already Close,['johnswentworth'],2022-08-17T18:41:18Z,alignmentforum,, 11795,https://www.alignmentforum.org/posts/4dbK5dPiqHCgNdKnq/link-training-compute-optimal-large-language-models,[Link] Training Compute-Optimal Large Language Models,['nostalgebraist'],2022-03-31T18:01:51Z,alignmentforum,, 11804,https://www.alignmentforum.org/posts/bdayaswyewjxxrQmB/understanding-gradient-hacking,Understanding Gradient Hacking,['peterbarnett'],2021-12-10T15:58:51Z,alignmentforum,, 11826,https://www.alignmentforum.org/posts/NR4rgfKu63TcqLxcH/an-165-when-large-models-are-more-likely-to-lie,[AN #165]: When large models are more likely to lie,['Rohin Shah'],2021-09-22T17:30:05Z,alignmentforum,, 11846,https://www.alignmentforum.org/posts/Lfk2FXBwrpoM6Jm7p/security-mindset-and-takeoff-speeds,Security Mindset and Takeoff Speeds,['DanielFilan'],2020-10-27T03:20:02Z,alignmentforum,, 11873,https://www.alignmentforum.org/posts/iA9S8fLCgbjFF6fw4/refine-blogpost-day-3-the-shortforms-i-did-write,Refine Blogpost Day #3: The shortforms I did write,['Alexander Gietelink Oldenziel'],2022-09-16T21:03:34Z,alignmentforum,, 11899,https://www.alignmentforum.org/posts/ZQJ9H9ZeRF8mjB2aF/an-55-regulatory-markets-and-international-standards-as-a,[AN #55] Regulatory markets and international standards as a means of ensuring beneficial AI,['Rohin Shah'],2019-05-05T02:20:01Z,alignmentforum,, 11927,https://www.alignmentforum.org/posts/wDvHP9KzGqPCtw3PS/different-views-of-alignment-have-different-consequences-for,Different views of alignment have different consequences for imperfect methods,['Stuart_Armstrong'],2023-09-28T16:31:20Z,alignmentforum,, 11946,https://www.alignmentforum.org/posts/a9SPcZ6GXAg9cNKdi/linkpost-some-high-level-thoughts-on-the-deepmind-alignment,[Linkpost] Some high-level thoughts on the DeepMind alignment team's strategy,"['Vika', 'Rohin Shah']",2023-03-07T11:55:01Z,alignmentforum,, 11977,https://www.alignmentforum.org/posts/rokpjK3jcy5aKKwiT/reply-to-jebari-and-lundborg-on-artificial-superintelligence-1,Reply to Jebari and Lundborg on Artificial Superintelligence,['Richard_Ngo'],2020-10-25T13:50:24Z,alignmentforum,, 11993,https://www.alignmentforum.org/posts/2DQHvGaH6C7dmwtdT/logical-representation-of-causal-models,Logical Representation of Causal Models,['johnswentworth'],2020-01-21T20:04:54Z,alignmentforum,, 12003,https://www.alignmentforum.org/posts/MnCMkh7hirX8YwT2t/hch-speculation-post-2a,HCH Speculation Post #2A,['Charlie Steiner'],2021-03-17T13:26:46Z,alignmentforum,, 12027,https://www.alignmentforum.org/posts/jfMExCKWipKeCdSuG/practical-anthropics-summary,Practical anthropics summary,['Stuart_Armstrong'],2021-07-08T15:10:45Z,alignmentforum,, 12041,https://www.alignmentforum.org/posts/fccbYpCvTrBFMGmfu/what-does-the-launch-of-x-ai-mean-for-ai-safety,What does the launch of x.ai mean for AI 
Safety?,['Chris_Leong'],2023-07-12T19:42:47Z,alignmentforum,, 12055,https://www.alignmentforum.org/posts/ynt9TD6PrYw6iT49m/malign-generalization-without-internal-search,Malign generalization without internal search,['Matthew Barnett'],2020-01-12T18:03:43Z,alignmentforum,, 12070,https://www.alignmentforum.org/posts/xEzudcydk7APZbnai/ai-alignment-podcast-on-lethal-autonomous-weapons-with-paul,AI Alignment Podcast: On Lethal Autonomous Weapons with Paul Scharre,['Palus Astra'],2020-03-16T23:00:18Z,alignmentforum,, 12100,https://www.alignmentforum.org/posts/BibDWWeo37pzuZCmL/sources-of-intuitions-and-data-on-agi,Sources of intuitions and data on AGI,['Scott Garrabrant'],2018-01-31T23:30:17Z,alignmentforum,, 12118,https://www.alignmentforum.org/posts/EPLk8QxETC5FEhoxK/arc-evals-new-report-evaluating-language-model-agents-on,ARC Evals new report: Evaluating Language-Model Agents on Realistic Autonomous Tasks,['Beth Barnes'],2023-08-01T18:30:57Z,alignmentforum,, 12142,https://www.alignmentforum.org/posts/pba68kdmmHrp9oHGG/phylactery-decision-theory,Phylactery Decision Theory,['Bunthut'],2021-04-02T20:55:46Z,alignmentforum,, 12155,https://www.alignmentforum.org/posts/ojwujybfRC9SwRhAP/powerplay-an-open-source-toolchain-to-study-ai-power-seeking,POWERplay: An open-source toolchain to study AI power-seeking,['Edouard Harris'],2022-10-24T20:03:58Z,alignmentforum,, 12170,https://www.alignmentforum.org/posts/6skeZgctugzBBEBw3/ai-alignment-podcast-an-overview-of-technical-ai-alignment,AI Alignment Podcast: An Overview of Technical AI Alignment in 2018 and 2019 with Buck Shlegeris and Rohin Shah,['Palus Astra'],2020-04-16T00:50:38Z,alignmentforum,, 12195,https://www.alignmentforum.org/posts/6iedrXht3GpKTQWRF/what-if-memes-are-common-in-highly-capable-minds,What if memes are common in highly capable minds?,['Daniel Kokotajlo'],2020-07-30T20:45:18Z,alignmentforum,, 12210,https://www.alignmentforum.org/posts/H79dxa7XXMBhwqZLm/an-148-analyzing-generalization-across-more-axes-than-just,[AN #148]: Analyzing generalization across more axes than just accuracy or loss,['Rohin Shah'],2021-04-28T18:30:03Z,alignmentforum,, 12227,https://www.alignmentforum.org/posts/cemhavELfHFHRaA7Q/misalignment-by-default-in-multi-agent-systems,Misalignment-by-default in multi-agent systems,"['Edouard Harris', 'simonsdsuo']",2022-10-13T15:38:59Z,alignmentforum,, 12244,https://www.alignmentforum.org/posts/TATWqHvxKEpL34yKz/intelligence-or-evolution,Intelligence or Evolution?,['Ramana Kumar'],2021-10-09T17:14:41Z,alignmentforum,, 12260,https://www.alignmentforum.org/posts/uPa63suC8idWhYGbg/mech-interp-challenge-november-deciphering-the-cumulative,Mech Interp Challenge: November - Deciphering the Cumulative Sum Model,['TheMcDouglas'],2023-11-02T17:10:07Z,alignmentforum,, 12282,https://www.alignmentforum.org/posts/3kzFPA5uuaGZWg4PS/an-81-universality-as-a-potential-solution-to-conceptual,[AN #81]: Universality as a potential solution to conceptual difficulties in intent alignment,['Rohin Shah'],2020-01-08T18:00:02Z,alignmentforum,, 12312,https://www.alignmentforum.org/posts/3S4nyoNEEuvNsbXt8/common-misconceptions-about-openai,Common misconceptions about OpenAI,['Jacob_Hilton'],2022-08-25T14:02:26Z,alignmentforum,, 12337,https://www.alignmentforum.org/posts/5bd75cc58225bf0670375401/finding-reflective-oracle-distributions-using-a-kakutani-map,Finding reflective oracle distributions using a Kakutani map,['jessicata'],2017-05-02T02:12:06Z,alignmentforum,, 
12347,https://www.alignmentforum.org/posts/LwJwDNFhjurAKFiJm/theories-of-change-for-ai-auditing,Theories of Change for AI Auditing,"['Lee Sharkey', 'beren', 'Marius Hobbhahn']",2023-11-13T19:33:44Z,alignmentforum,,
12374,https://www.alignmentforum.org/posts/tj8AC3vhTnBywdZoA/intro-to-brain-like-agi-safety-15-conclusion-open-problems-1,"[Intro to brain-like-AGI safety] 15. Conclusion: Open problems, how to help, AMA",['Steven Byrnes'],2022-05-17T15:11:12Z,alignmentforum,,
12416,https://www.alignmentforum.org/posts/pz7Mxyr7Ac43tWMaC/against-evolution-as-an-analogy-for-how-humans-will-create,Against evolution as an analogy for how humans will create AGI,['Steven Byrnes'],2021-03-23T12:29:57Z,alignmentforum,,
12436,https://www.alignmentforum.org/posts/3e6pmovj6EJ729M2i/general-alignment-plus-human-values-or-alignment-via-human,"General alignment plus human values, or alignment via human values?",['Stuart_Armstrong'],2021-10-22T10:11:39Z,alignmentforum,,
12451,https://www.alignmentforum.org/posts/468faQvTy7RfG4uxj/temporally-layered-architecture-for-adaptive-distributed-and,"Temporally Layered Architecture for Adaptive, Distributed and Continuous Control",['Roman Leventov'],2023-02-02T06:29:21Z,alignmentforum,,
12463,https://www.alignmentforum.org/posts/C5PZNi5fueH2RC6aF/repl-s-and-elk,REPL's and ELK,['scottviteri'],2022-02-17T01:14:08Z,alignmentforum,,
12472,https://www.alignmentforum.org/posts/29QmG4bQDFtAzSmpv/an-141-the-case-for-practicing-alignment-work-on-gpt-3-and,[AN #141]: The case for practicing alignment work on GPT-3 and other large models,['Rohin Shah'],2021-03-10T18:30:04Z,alignmentforum,,
12493,https://www.alignmentforum.org/posts/bDwQddhqaTiMhbpPF/what-are-some-exercises-for-building-generating-intuitions,What are some exercises for building/generating intuitions about key disagreements in AI alignment?,['riceissa'],2020-03-16T07:41:59Z,alignmentforum,,
12508,https://www.alignmentforum.org/posts/g7yj6afwKf5GurCyZ/alignment-newsletter-27,Alignment Newsletter #27,['Rohin Shah'],2018-10-09T01:10:02Z,alignmentforum,,
12537,https://www.alignmentforum.org/posts/SfPrNY45kQaBozwmu/an-extremely-opinionated-annotated-list-of-my-favourite,An Extremely Opinionated Annotated List of My Favourite Mechanistic Interpretability Papers,['Neel Nanda'],2022-10-18T21:08:33Z,alignmentforum,,
12565,https://www.alignmentforum.org/posts/A5jN7vqAxsHCDC4dy/meta-preferences-two-ways-generator-vs-patch,Meta-preferences two ways: generator vs. patch,['Charlie Steiner'],2020-04-01T00:51:49Z,alignmentforum,,
12579,https://www.alignmentforum.org/posts/iWj7Ti9GA98M5JaMy/alignment-newsletter-52,Alignment Newsletter #52,['Rohin Shah'],2019-04-06T01:20:02Z,alignmentforum,,
12601,https://www.alignmentforum.org/posts/74crqQnH8v9JtJcda/egan-s-theorem,Egan's Theorem?,['johnswentworth'],2020-09-13T17:47:02Z,alignmentforum,,
12610,https://www.alignmentforum.org/posts/kQKcEkzKmpPe6qxDb/broad-picture-of-human-values-1,Broad Picture of Human Values,['Thane Ruthenis'],2022-08-20T19:42:20Z,alignmentforum,,
12632,https://www.alignmentforum.org/posts/9mzRsm5GyMYwvGSZi/axrp-episode-21-interpretability-for-engineers-with-stephen,AXRP Episode 21 - Interpretability for Engineers with Stephen Casper,['DanielFilan'],2023-05-02T00:50:07Z,alignmentforum,,
12667,https://www.alignmentforum.org/posts/rTYGMbmEsFkxyyXuR/understanding-and-controlling-auto-induced-distributional,Understanding and controlling auto-induced distributional shift,['LRudL'],2021-12-13T14:59:41Z,alignmentforum,,
12687,https://www.alignmentforum.org/posts/cqdDGuTs2NamtEhBW/maxent-and-abstractions-current-best-arguments,Maxent and Abstractions: Current Best Arguments,['johnswentworth'],2022-05-18T19:54:44Z,alignmentforum,,
12702,https://www.alignmentforum.org/posts/pHHhyZX5zwvwNqDXm/finding-the-variables,Finding the variables,['Stuart_Armstrong'],2019-03-04T19:37:55Z,alignmentforum,,
12723,https://www.alignmentforum.org/posts/nFDXq7HTv9Xugcqaw/is-the-term-mesa-optimizer-too-narrow,Is the term mesa optimizer too narrow?,['Matthew Barnett'],2019-12-14T23:20:43Z,alignmentforum,,
12737,https://www.alignmentforum.org/posts/jrKftFZMZjvNdQLNR/box-inversion-revisited,Box inversion revisited,['Jan_Kulveit'],2023-11-07T11:09:37Z,alignmentforum,,
12766,https://www.alignmentforum.org/posts/RZNmNwc9SxdKayeQh/unifying-bargaining-notions-2-2,Unifying Bargaining Notions (2/2),['Diffractor'],2022-07-27T03:40:31Z,alignmentforum,,
12782,https://www.alignmentforum.org/posts/tmuFmHuyb4eWmPXz8/rant-on-problem-factorization-for-alignment,Rant on Problem Factorization for Alignment,['johnswentworth'],2022-08-05T19:23:24Z,alignmentforum,,
12798,https://www.alignmentforum.org/posts/caMoe6yNfXcaCG2u3/200-cop-in-mi-image-model-interpretability,200 COP in MI: Image Model Interpretability,['Neel Nanda'],2023-01-08T14:53:15Z,alignmentforum,,
12822,https://www.alignmentforum.org/posts/AhF8iXLu5PchsmyKf/language-model-tools-for-alignment-research,Language Model Tools for Alignment Research,['Logan Riggs'],2022-04-08T17:32:33Z,alignmentforum,,
12841,https://www.alignmentforum.org/posts/3aDeaJzxinoGNWNpC/confucianism-in-ai-alignment,Confucianism in AI Alignment,['johnswentworth'],2020-11-02T21:16:46Z,alignmentforum,,
12854,https://www.alignmentforum.org/posts/75dnjiD8kv2khe9eQ/measuring-hardware-overhang,Measuring hardware overhang,['hippke'],2020-08-05T19:59:00Z,alignmentforum,,
12870,https://www.alignmentforum.org/posts/HA3oArypzNANvXC38/alphago-zero-and-capability-amplification,AlphaGo Zero and capability amplification,['paulfchristiano'],2019-01-09T00:40:13Z,alignmentforum,,
12885,https://www.alignmentforum.org/posts/rYDas2DDGGDRc8gGB/unifying-bargaining-notions-1-2,Unifying Bargaining Notions (1/2),['Diffractor'],2022-07-25T00:28:28Z,alignmentforum,,
12902,https://www.alignmentforum.org/posts/QSBgGv8byWMjmaGE5/preparing-for-the-talk-with-ai-projects,"Preparing for ""The Talk"" with AI projects",['Daniel Kokotajlo'],2020-06-13T23:01:24Z,alignmentforum,,
12921,https://www.alignmentforum.org/posts/whq89vpQPp7mo5FG2/mlsn-8-mechanistic-interpretability-using-law-to-inform-ai,"[MLSN #8] Mechanistic interpretability, using law to inform AI alignment, scaling laws for proxy gaming","['Dan H', 'ThomasW']",2023-02-20T15:54:14Z,alignmentforum,,
12958,https://www.alignmentforum.org/posts/FX5JmftqL2j6K8dn4/shapley-value-attribution-in-chain-of-thought,Shapley Value Attribution in Chain of Thought,['leogao'],2023-04-14T05:56:18Z,alignmentforum,,
12971,https://www.alignmentforum.org/posts/4TuzWEKysvYdhRXLd/paradigm-building-introduction,Paradigm-building: Introduction,['Cameron Berg'],2022-02-08T00:06:29Z,alignmentforum,,
12980,https://www.alignmentforum.org/posts/cSzaxcmeYW6z7cgtc/contest-usd1-000-for-good-questions-to-ask-to-an-oracle-ai,"Contest: $1,000 for good questions to ask to an Oracle AI",['Stuart_Armstrong'],2019-07-31T18:48:59Z,alignmentforum,,
12999,https://www.alignmentforum.org/posts/axKWaxjc2CHH5gGyN/ai-will-not-want-to-self-improve,AI Will Not Want to Self-Improve,['petersalib'],2023-05-16T20:53:34Z,alignmentforum,,
13019,https://www.alignmentforum.org/posts/6xKMSfK8oTpTtWKZN/direction-of-fit-1,Direction of Fit,['NicholasKees'],2023-10-02T12:34:24Z,alignmentforum,,
13036,https://www.alignmentforum.org/posts/jvGqQGDrYzZM4MyaN/growth-and-form-in-a-toy-model-of-superposition,Growth and Form in a Toy Model of Superposition,"['Liam Carroll', 'Edmund Lau']",2023-11-08T11:08:04Z,alignmentforum,,
13056,https://www.alignmentforum.org/posts/Afdohjyt6gESu4ANf/most-people-start-with-the-same-few-bad-ideas,Most People Start With The Same Few Bad Ideas,['johnswentworth'],2022-09-09T00:29:13Z,alignmentforum,,
13072,https://www.alignmentforum.org/posts/XdnCyorFzYskS7EtP/analyzing-the-problem-gpt-3-is-trying-to-solve,Analyzing the Problem GPT-3 is Trying to Solve,['adamShimi'],2020-08-06T21:58:56Z,alignmentforum,,
13091,https://www.alignmentforum.org/posts/8NpwfjFuEPMjTdriJ/gricean-communication-and-meta-preferences,Gricean communication and meta-preferences,['Charlie Steiner'],2020-02-10T05:05:30Z,alignmentforum,,
13106,https://www.alignmentforum.org/posts/cfXwr6NC9AqZ9kr8g/literature-review-on-goal-directedness,Literature Review on Goal-Directedness,"['adamShimi', 'Michele Campolo', 'Joe_Collman']",2021-01-18T11:15:37Z,alignmentforum,,
13137,https://www.alignmentforum.org/posts/NG6FrXgmqPd5Wn3mh/trying-to-disambiguate-different-questions-about-whether,Trying to disambiguate different questions about whether RLHF is “good”,['Buck'],2022-12-14T04:03:27Z,alignmentforum,,
13163,https://www.alignmentforum.org/posts/y6Wuq9ihruEAdJRvZ/interlude-but-who-optimizes-the-optimizer,Interlude: But Who Optimizes The Optimizer?,['Paul Bricman'],2022-09-23T15:30:07Z,alignmentforum,,
13193,https://www.alignmentforum.org/posts/9f4zBjiFndqbR8y6e/vaniver-s-elk-submission,Vaniver's ELK Submission,['Vaniver'],2022-03-28T21:14:37Z,alignmentforum,,
13206,https://www.alignmentforum.org/posts/mdQEraEZQLg7jtozn/subagents-and-impact-measures-full-and-fully-illustrated,"Subagents and impact measures, full and fully illustrated",['Stuart_Armstrong'],2020-02-24T13:12:05Z,alignmentforum,,
13223,https://www.alignmentforum.org/posts/YgpDYjTx7DCEgziG5/apply-to-the-ml-for-alignment-bootcamp-mlab-in-berkeley-jan,Apply to the ML for Alignment Bootcamp (MLAB) in Berkeley [Jan 3 - Jan 22],"['habryka', 'Buck']",2021-11-03T18:22:59Z,alignmentforum,,
13232,https://www.alignmentforum.org/posts/LpM3EAakwYdS6aRKf/what-multipolar-failure-looks-like-and-robust-agent-agnostic,"What Multipolar Failure Looks Like, and Robust Agent-Agnostic Processes (RAAPs)",['Andrew_Critch'],2021-03-31T23:50:32Z,alignmentforum,,
13256,https://www.alignmentforum.org/posts/pip63HtEAxHGfSEGk/tall-tales-at-different-scales-evaluating-scaling-trends-for,Tall Tales at Different Scales: Evaluating Scaling Trends For Deception In Language Models,"['Felix Hofstätter', 'Francis Rhys Ward', 'HarrietW', 'LAThomson', 'Ollie J', 'patrik-bartak']",2023-11-08T11:37:44Z,alignmentforum,,
13278,https://www.alignmentforum.org/posts/cAC4AXiNC5ig6jQnc/understanding-and-controlling-a-maze-solving-policy-network,Understanding and controlling a maze-solving policy network,"['TurnTrout', 'peligrietzer', 'Ulisse Mini', 'Monte M', 'David Udell']",2023-03-11T18:59:56Z,alignmentforum,,
13291,https://www.alignmentforum.org/posts/hQysqfSEzciRazx8k/forecasting-thread-ai-timelines,Forecasting Thread: AI Timelines,"['Amandango', 'Daniel Kokotajlo', 'Ben Pace']",2020-08-22T02:33:09Z,alignmentforum,,
13306,https://www.alignmentforum.org/posts/DfcXaGH7XGYjW22C2/law-following-ai-3-lawless-ai-agents-undermine-stabilizing,Law-Following AI 3: Lawless AI Agents Undermine Stabilizing Agreements,['Cullen'],2022-04-27T17:30:26Z,alignmentforum,,
13320,https://www.alignmentforum.org/posts/qPoaA5ZSedivA4xJa/our-take-on-chai-s-research-agenda-in-under-1500-words,Our take on CHAI’s research agenda in under 1500 words,['Alex Flint'],2020-06-17T12:24:33Z,alignmentforum,,
13343,https://www.alignmentforum.org/posts/rRAfak9JRRxjsbsdk/paper-in-context-reinforcement-learning-with-algorithm,Paper: In-context Reinforcement Learning with Algorithm Distillation [Deepmind],['LawrenceC'],2022-10-26T18:45:03Z,alignmentforum,,
13356,https://www.alignmentforum.org/posts/NjYdGP59Krhie4WBp/updating-utility-functions,Updating Utility Functions,"['JustinShovelain', 'Joar Skalse']",2022-05-09T09:44:47Z,alignmentforum,,
13372,https://www.alignmentforum.org/posts/jusSrXEAsiqehBsmh/vignettes-workshop-ai-impacts,Vignettes Workshop (AI Impacts),['Daniel Kokotajlo'],2021-06-15T12:05:39Z,alignmentforum,,
13382,https://www.alignmentforum.org/posts/NxApPkbjt9hXraSts/for-elk-truth-is-mostly-a-distraction,For ELK truth is mostly a distraction,['c.trout'],2022-11-04T21:14:52Z,alignmentforum,,
13400,https://www.alignmentforum.org/posts/diruo47z32eprenTg/my-computational-framework-for-the-brain,My computational framework for the brain,['Steven Byrnes'],2020-09-14T14:19:22Z,alignmentforum,,
13411,https://www.alignmentforum.org/posts/YgNYA6pj2hPSDQiTE/distinguishing-definitions-of-takeoff,Distinguishing definitions of takeoff,['Matthew Barnett'],2020-02-14T00:16:34Z,alignmentforum,,
13434,https://www.alignmentforum.org/posts/GwihvYHE3u6s3LbeM/loebian-emotional-processing-of-emergent-cooperation-an,Löbian emotional processing of emergent cooperation: an example,['Andrew_Critch'],2023-01-17T05:59:04Z,alignmentforum,,
13447,https://www.alignmentforum.org/posts/RyH8LtgMbRAJ9Dv6R/basalt-a-benchmark-for-learning-from-human-feedback,BASALT: A Benchmark for Learning from Human Feedback,['Rohin Shah'],2021-07-08T17:40:35Z,alignmentforum,,
13460,https://www.alignmentforum.org/posts/2269iGRnWruLHsZ5r/transformer-circuits,Transformer Circuits,['evhub'],2021-12-22T21:09:23Z,alignmentforum,,
13481,https://www.alignmentforum.org/posts/3D2MGF2fZhWSNb7aw/prediction-can-be-outer-aligned-at-optimum,Prediction can be Outer Aligned at Optimum,['Lukas Finnveden'],2021-01-10T18:48:21Z,alignmentforum,,
13502,https://www.alignmentforum.org/posts/NrtbF3JHFnqBCztXC/law-following-ai-1-sequence-introduction-and-structure,Law-Following AI 1: Sequence Introduction and Structure,['Cullen'],2022-04-27T17:26:57Z,alignmentforum,,
13524,https://www.alignmentforum.org/posts/22xf8GmwqGzHbiuLg/seriously-what-goes-wrong-with-reward-the-agent-when-it,"Seriously, what goes wrong with ""reward the agent when it makes you smile""?",['TurnTrout'],2022-08-11T22:22:32Z,alignmentforum,,
13544,https://www.alignmentforum.org/posts/rtBpBNgXjwtsLJDbG/misc-questions-about-efficientzero-1,Misc. questions about EfficientZero,['Daniel Kokotajlo'],2021-12-04T19:45:13Z,alignmentforum,,
13565,https://www.alignmentforum.org/posts/HvNAmkXPTSoA4dvzv/comments-on-cais,Comments on CAIS,['Richard_Ngo'],2019-01-12T15:20:22Z,alignmentforum,,
13586,https://www.alignmentforum.org/posts/SZjHimszxqjNJzQWK/boundaries-vs-frames,Boundaries vs Frames,['Scott Garrabrant'],2022-10-31T15:14:37Z,alignmentforum,,
13597,https://www.alignmentforum.org/posts/maeTg6zXw4DBAXXrK/automating-consistency,Automating Consistency,['Hoagy'],2023-02-17T13:24:23Z,alignmentforum,,
13616,https://www.alignmentforum.org/posts/dpzLqQQSs7XRacEfK/understanding-the-lottery-ticket-hypothesis,Understanding the Lottery Ticket Hypothesis,['Alex Flint'],2021-05-14T00:25:21Z,alignmentforum,,
13634,https://www.alignmentforum.org/posts/yjC5LmjSRD2hR9Pfa/on-the-falsifiability-of-hypercomputation,On the falsifiability of hypercomputation,['jessicata'],2020-02-07T08:16:07Z,alignmentforum,,
13649,https://www.alignmentforum.org/posts/AXCX2S4NbcyEu7hbX/an-54-boxing-a-finite-horizon-ai-system-to-keep-it,[AN #54] Boxing a finite-horizon AI system to keep it unambitious,['Rohin Shah'],2019-04-28T05:20:01Z,alignmentforum,,
13676,https://www.alignmentforum.org/posts/vrJBQZJpvswXFFkcd/decision-theory-is-multifaceted,Decision Theory is multifaceted,['Michele Campolo'],2020-09-13T22:30:21Z,alignmentforum,,
13697,https://www.alignmentforum.org/posts/KtCJNw93KHg7MSSvw/adversarial-attacks-and-optimal-control,Adversarial attacks and optimal control,['Jan'],2022-05-22T18:22:50Z,alignmentforum,,
13710,https://www.alignmentforum.org/posts/ngEvKav9w57XrGQnb/cognitive-emulation-a-naive-ai-safety-proposal,Cognitive Emulation: A Naive AI Safety Proposal,"['Connor Leahy', 'Gabriel Alfour']",2023-02-25T19:35:02Z,alignmentforum,,
13726,https://www.alignmentforum.org/posts/ExRN5Bu3696cf9Ccm/the-engineer-s-interpretability-sequence-eis-i-intro,The Engineer’s Interpretability Sequence (EIS) I: Intro,['scasper'],2023-02-09T16:28:06Z,alignmentforum,,
13743,https://www.alignmentforum.org/posts/BEMvcaeixt3uEqyBk/what-does-optimization-mean-again-optimizing-and-goodhart,"What does Optimization Mean, Again? (Optimizing and Goodhart Effects - Clarifying Thoughts, Part 2)",['Davidmanheim'],2019-07-28T09:30:30Z,alignmentforum,,
13764,https://www.alignmentforum.org/posts/SbnE48y3f2Srdo4yk/chai-assistance-games-and-fully-updated-deference-scott,"CHAI, Assistance Games, And Fully-Updated Deference [Scott Alexander]",['berglund'],2022-10-04T17:04:10Z,alignmentforum,,
13783,https://www.alignmentforum.org/posts/Kz9NHBMeJxzSwb7R9/elicitation-for-modeling-transformative-ai-risks,Elicitation for Modeling Transformative AI Risks,['Davidmanheim'],2021-12-16T15:24:05Z,alignmentforum,,
13804,https://www.alignmentforum.org/posts/4hdHto3uHejhY2F3Q/partial-agency,Partial Agency,['abramdemski'],2019-09-27T22:04:47Z,alignmentforum,,
13820,https://www.alignmentforum.org/posts/vhfATmAoJcN8RqGg6/a-guide-to-iterated-amplification-and-debate,A guide to Iterated Amplification & Debate,['Rafael Harth'],2020-11-15T17:14:55Z,alignmentforum,,
13847,https://www.alignmentforum.org/posts/CD8gcugDu5z2Eeq7k/will-openai-s-work-unintentionally-increase-existential,Will OpenAI's work unintentionally increase existential risks related to AI?,['adamShimi'],2020-08-11T18:16:56Z,alignmentforum,,
13868,https://www.alignmentforum.org/posts/EYEkYX6vijL7zsKEt/reward-functions-and-updating-assumptions-can-hide-a,Reward functions and updating assumptions can hide a multitude of sins,['Stuart_Armstrong'],2020-05-18T15:18:08Z,alignmentforum,,
13886,https://www.alignmentforum.org/posts/vqpEC3MPioHX7bv4t/environments-as-a-bottleneck-in-agi-development,Environments as a bottleneck in AGI development,['Richard_Ngo'],2020-07-17T05:02:57Z,alignmentforum,,
13907,https://www.alignmentforum.org/posts/dNzhdiFE398KcGDc9/testing-the-natural-abstraction-hypothesis-project-update,Testing The Natural Abstraction Hypothesis: Project Update,['johnswentworth'],2021-09-20T03:44:43Z,alignmentforum,,
13923,https://www.alignmentforum.org/posts/NBffcjqm2P4dNbjrE/exploring-safe-exploration,Exploring safe exploration,['evhub'],2020-01-06T21:07:38Z,alignmentforum,,
13941,https://www.alignmentforum.org/posts/5Kv2qNfRyXXihNrx2/ai-safety-debate-and-its-applications,AI Safety Debate and Its Applications,['VojtaKovarik'],2019-07-23T22:31:58Z,alignmentforum,,
13965,https://www.alignmentforum.org/posts/jotZXEixzToEHnrfr/the-hubinger-lectures-on-agi-safety-an-introductory-lecture,The Hubinger lectures on AGI safety: an introductory lecture series,['evhub'],2023-06-22T00:59:28Z,alignmentforum,,
13988,https://www.alignmentforum.org/posts/9a33WfdPe9Cd26vL9/alignment-newsletter-18,Alignment Newsletter #18,['Rohin Shah'],2018-08-06T16:00:03Z,alignmentforum,,
14018,https://www.alignmentforum.org/posts/XtBJTFszs8oP3vXic/ai-x-risk-greater-than-35-mostly-based-on-a-recent-peer,AI X-risk >35% mostly based on a recent peer-reviewed argument,['michaelcohen'],2022-11-02T14:27:00Z,alignmentforum,,
14043,https://www.alignmentforum.org/posts/JZEpqrLh2xHx2xfAd/reward-splintering-as-reverse-of-interpretability,Reward splintering as reverse of interpretability,['Stuart_Armstrong'],2021-08-31T22:27:31Z,alignmentforum,,
14058,https://www.alignmentforum.org/posts/C9YMrPAyMXfB8cLPb/more-on-disambiguating-discontinuity,"More on disambiguating ""discontinuity""",['Aryeh Englander'],2020-06-09T15:16:34Z,alignmentforum,,
14083,https://www.alignmentforum.org/posts/BJ4Ek5BaJEKei3Czf/alignment-newsletter-48,Alignment Newsletter #48,['Rohin Shah'],2019-03-11T21:10:02Z,alignmentforum,,
14093,https://www.alignmentforum.org/posts/fkLYhTQteAu5SinAc/corrigibility,Corrigibility,['paulfchristiano'],2018-11-27T21:50:11Z,alignmentforum,,
14115,https://www.alignmentforum.org/posts/oNGCgNag2zSMqg2Z7/frontier-model-security,Frontier Model Security,['Vaniver'],2023-07-26T04:48:02Z,alignmentforum,,
14137,https://www.alignmentforum.org/posts/q3vAgFnbDja9hZm9E/openai-solves-some-formal-math-olympiad-problems,OpenAI Solves (Some) Formal Math Olympiad Problems,['Michaël Trazzi'],2022-02-02T21:49:37Z,alignmentforum,,
14148,https://www.alignmentforum.org/posts/Eav2BizSejDcztFC8/focus-on-the-hardest-part-first,Focus on the Hardest Part First,['Johannes C. Mayer'],2023-09-11T07:53:33Z,alignmentforum,,
14157,https://www.alignmentforum.org/posts/4Tx6ALN8erdgRojkk/quick-thoughts-on-scalable-oversight-super-human-feedback,"Quick thoughts on ""scalable oversight"" / ""super-human feedback"" research",['David Scott Krueger (formerly: capybaralet)'],2023-01-25T12:55:31Z,alignmentforum,,
14173,https://www.alignmentforum.org/posts/tAQRxccEDYZY5vxvy/japan-ai-alignment-conference,Japan AI Alignment Conference,"['Chris Scammell', 'Katrina Joslin']",2023-03-10T06:56:57Z,alignmentforum,,
14183,https://www.alignmentforum.org/posts/5eX8ko7GCxwR5N9mN/what-is-ambitious-value-learning,What is ambitious value learning?,['Rohin Shah'],2018-11-01T16:20:28Z,alignmentforum,,
14197,https://www.alignmentforum.org/posts/JpEPKbXiTvmyqYdTr/a-test-for-symbol-grounding-methods-true-zero-sum-games-1,A test for symbol grounding methods: true zero-sum games,['Stuart_Armstrong'],2019-11-26T14:15:15Z,alignmentforum,,
14207,https://www.alignmentforum.org/posts/YQALrtMkeqemAF5GX/another-list-of-theories-of-impact-for-interpretability,Another list of theories of impact for interpretability,['Beth Barnes'],2022-04-13T13:29:28Z,alignmentforum,,
14240,https://www.alignmentforum.org/posts/dKxX76SCfCvceJXHv/ai-alignment-2018-19-review,AI Alignment 2018-19 Review,['Rohin Shah'],2020-01-28T02:19:53Z,alignmentforum,,
14292,https://www.alignmentforum.org/posts/oSgac8x8fgNj22ky3/compositional-preference-models-for-aligning-lms,Compositional preference models for aligning LMs,['Tomek Korbak'],2023-10-25T12:17:29Z,alignmentforum,,
14311,https://www.alignmentforum.org/posts/6HmaGnXd4EJfpfait/udt-as-a-nash-equilibrium,UDT as a Nash Equilibrium,['cousin_it'],2018-02-06T14:08:30Z,alignmentforum,,
14320,https://www.alignmentforum.org/posts/Gs29k3beHiqWFZqnn/conjecture-internal-infohazard-policy,Conjecture: Internal Infohazard Policy,"['Connor Leahy', 'Sid Black', 'Chris Scammell', 'Andrea_Miotti']",2022-07-29T19:07:08Z,alignmentforum,,
14338,https://www.alignmentforum.org/posts/EjsA2M8p8ERyFHLLY/takeaways-from-the-mechanistic-interpretability-challenges,Takeaways from the Mechanistic Interpretability Challenges,['scasper'],2023-06-08T18:56:47Z,alignmentforum,,
14359,https://www.alignmentforum.org/posts/DvmhXysefEyEvXuXS/overcoming-clinginess-in-impact-measures,Overcoming Clinginess in Impact Measures,['TurnTrout'],2018-06-30T22:51:29Z,alignmentforum,,
14385,https://www.alignmentforum.org/posts/EL4HNa92Z95FKL9R2/a-semitechnical-introductory-dialogue-on-solomonoff-1,A Semitechnical Introductory Dialogue on Solomonoff Induction,['Eliezer Yudkowsky'],2021-03-04T17:27:36Z,alignmentforum,,
14404,https://www.alignmentforum.org/posts/pHaPds4SqfewLrEbW/more-money-with-less-risk-sell-services-instead-of-model,More money with less risk: sell services instead of model access,['lukehmiles'],2023-03-04T20:51:36Z,alignmentforum,,
14427,https://www.alignmentforum.org/posts/qnjDGitKxYaesbsem/a-comment-on-ajeya-cotra-s-draft-report-on-ai-timelines,A comment on Ajeya Cotra's draft report on AI timelines,['Matthew Barnett'],2022-02-24T00:41:48Z,alignmentforum,,
14443,https://www.alignmentforum.org/posts/CSEdLLEkap2pubjof/research-agenda-v0-9-synthesising-a-human-s-preferences-into,Research Agenda v0.9: Synthesising a human's preferences into a utility function,['Stuart_Armstrong'],2019-06-17T17:46:39Z,alignmentforum,,
14470,https://www.alignmentforum.org/posts/hPJMum5CNH5MKe27C/an-72-alignment-robustness-methodology-and-system-building,"[AN #72]: Alignment, robustness, methodology, and system building as research priorities for AI safety",['Rohin Shah'],2019-11-06T18:10:02Z,alignmentforum,,
14499,https://www.alignmentforum.org/posts/fqryrxnvpSr5w2dDJ/touch-reality-as-soon-as-possible-when-doing-machine,Touch reality as soon as possible (when doing machine learning research),['LawrenceC'],2023-01-03T19:11:59Z,alignmentforum,,
14519,https://www.alignmentforum.org/posts/kjudfaQazMmC74SbF/causal-scrubbing-results-on-a-paren-balance-checker,Causal scrubbing: results on a paren balance checker,"['LawrenceC', 'Adrià Garriga-alonso', 'Nicholas Goldowsky-Dill', 'ryan_greenblatt', 'Tao Lin', 'jenny', 'Ansh Radhakrishnan', 'Buck', 'Nate Thomas']",2022-12-03T00:59:08Z,alignmentforum,,
14538,https://www.alignmentforum.org/posts/sYjCeZTwA84pHkhBJ/appendix-how-a-subagent-could-get-powerful,Appendix: how a subagent could get powerful,['Stuart_Armstrong'],2020-01-28T15:28:56Z,alignmentforum,,
14548,https://www.alignmentforum.org/posts/3gAccKDW6nRKFumpP/why-not-just-outsource-alignment-research-to-an-ai,Why Not Just Outsource Alignment Research To An AI?,['johnswentworth'],2023-03-09T21:49:20Z,alignmentforum,,
14570,https://www.alignmentforum.org/posts/frApEhpyKQAcFvbXJ/reward-is-not-enough,Reward Is Not Enough,['Steven Byrnes'],2021-06-16T13:52:34Z,alignmentforum,,
14596,https://www.alignmentforum.org/posts/AomSXpFcqmgeDyWWo/misalignment-and-misuse-whose-values-are-manifest,Misalignment and misuse: whose values are manifest?,['KatjaGrace'],2020-11-13T10:10:01Z,alignmentforum,,
14605,https://www.alignmentforum.org/posts/3Mwm7bpWgyvqwrMBT/on-corrigibility-and-its-basin,On corrigibility and its basin,['Donald Hobson'],2022-06-20T16:33:06Z,alignmentforum,,
14624,https://www.alignmentforum.org/posts/5bd75cc58225bf06703752a9/counterfactuals-on-pomdp,Counterfactuals on POMDP,['Stuart_Armstrong'],2017-06-02T16:30:47Z,alignmentforum,,
14634,https://www.alignmentforum.org/posts/4Gt42jX7RiaNaxCwP/more-information-about-the-dangerous-capability-evaluations,More information about the dangerous capability evaluations we did with GPT-4 and Claude.,['Beth Barnes'],2023-03-19T00:25:40Z,alignmentforum,,
14650,https://www.alignmentforum.org/posts/BAzCGCys4BkzGDCWR/the-prototypical-catastrophic-ai-action-is-getting-root,The prototypical catastrophic AI action is getting root access to its datacenter,['Buck'],2022-06-02T23:46:31Z,alignmentforum,,
14666,https://www.alignmentforum.org/posts/gEw8ig38mCGjia7dj/answering-questions-honestly-instead-of-predicting-human,Answering questions honestly instead of predicting human answers: lots of problems and some solutions,['evhub'],2021-07-13T18:49:02Z,alignmentforum,,
14701,https://www.alignmentforum.org/posts/zjMKpSB2Xccn9qi5t/elk-prize-results,ELK prize results,"['paulfchristiano', 'Mark Xu']",2022-03-09T00:01:02Z,alignmentforum,,
14736,https://www.alignmentforum.org/posts/qfopgsFBJLs2u9iww/goal-program-bricks,goal-program bricks,['Tamsin Leake'],2022-08-13T10:08:42Z,alignmentforum,,
14766,https://www.alignmentforum.org/posts/u7o7HtChnZ5x8SqvA/axrp-episode-3-negotiable-reinforcement-learning-with-andrew,AXRP Episode 3 - Negotiable Reinforcement Learning with Andrew Critch,['DanielFilan'],2020-12-29T20:45:23Z,alignmentforum,,
14784,https://www.alignmentforum.org/posts/qZGoHkRgANQpGHWnu/evan-hubinger-on-inner-alignment-outer-alignment-and,"Evan Hubinger on Inner Alignment, Outer Alignment, and Proposals for Building Safe Advanced AI",['Palus Astra'],2020-07-01T17:30:48Z,alignmentforum,,
14811,https://www.alignmentforum.org/posts/Xv77XjuZEkjRsvkJp/what-would-we-do-if-alignment-were-futile,What would we do if alignment were futile?,['Grant Demaree'],2021-11-14T08:09:29Z,alignmentforum,,
14834,https://www.alignmentforum.org/posts/W95gbuognJu5WxkTW/the-value-definition-problem,The Value Definition Problem,['Sammy Martin'],2019-11-18T19:56:43Z,alignmentforum,,
14855,https://www.alignmentforum.org/posts/hBtjpY2wAASEpZXgN/a-walkthrough-of-a-mathematical-framework-for-transformer,A Walkthrough of A Mathematical Framework for Transformer Circuits,['Neel Nanda'],2022-10-25T20:24:55Z,alignmentforum,,
14865,https://www.alignmentforum.org/posts/xbABZRxoSTAnsf8os/axrp-episode-10-ai-s-future-and-impacts-with-katja-grace,AXRP Episode 10 - AI’s Future and Impacts with Katja Grace,['DanielFilan'],2021-07-23T22:10:15Z,alignmentforum,,
14886,https://www.alignmentforum.org/posts/hvGoYXi2kgnS3vxqb/some-ai-research-areas-and-their-relevance-to-existential-1,Some AI research areas and their relevance to existential safety,['Andrew_Critch'],2020-11-19T03:18:23Z,alignmentforum,,
14921,https://www.alignmentforum.org/posts/BKvJNzALpxS3LafEs/measuring-and-improving-the-faithfulness-of-model-generated,Measuring and Improving the Faithfulness of Model-Generated Reasoning,"['Ansh Radhakrishnan', 'tamera', 'Ethan Perez', 'Sam Bowman']",2023-07-18T16:36:34Z,alignmentforum,,
14939,https://www.alignmentforum.org/posts/zgQSsA2o6avsEsyMa/structural-stability-of-coupled-optimizers,(Structural) Stability of Coupled Optimizers,['Paul Bricman'],2022-09-30T11:28:36Z,alignmentforum,,
14963,https://www.alignmentforum.org/posts/Y76durQHrfqwgwM5o/lcdt-a-myopic-decision-theory,"LCDT, A Myopic Decision Theory","['adamShimi', 'evhub']",2021-08-03T22:41:45Z,alignmentforum,,
14981,https://www.alignmentforum.org/posts/DQ4y5tvotag5KPzcu/linkpost-interpretability-dreams,[Linkpost] Interpretability Dreams,['DanielFilan'],2023-05-24T21:08:17Z,alignmentforum,,
15002,https://www.alignmentforum.org/posts/hHaXzJQi6SKkeXzbg/200-cop-in-mi-analysing-training-dynamics,200 COP in MI: Analysing Training Dynamics,['Neel Nanda'],2023-01-04T16:08:58Z,alignmentforum,,
15026,https://www.alignmentforum.org/posts/jfG6vdJZCwTQmG7kb/re-examining-layernorm,Re-Examining LayerNorm,['Eric Winsor'],2022-12-01T22:20:24Z,alignmentforum,,
15045,https://www.alignmentforum.org/posts/MJc9AqyMWpG3BqfyK/generalizing-power-to-multi-agent-games,Generalizing POWER to multi-agent games,"['midco', 'TurnTrout']",2021-03-22T02:41:45Z,alignmentforum,,
15059,https://www.alignmentforum.org/posts/tmZRyXvH9dgopcnuE/life-and-expanding-steerable-consequences,Life and expanding steerable consequences,['Alex Flint'],2021-05-07T18:33:40Z,alignmentforum,,
15071,https://www.alignmentforum.org/posts/5uNfgjaDAwkhcLJca/deep-learning-curriculum-for-large-language-model-alignment,Deep learning curriculum for large language model alignment,['Jacob_Hilton'],2022-07-13T21:58:33Z,alignmentforum,,
15080,https://www.alignmentforum.org/posts/tsYcsZAkKsqLXC3Bu/analogies-between-software-reverse-engineering-and,Analogies between Software Reverse Engineering and Mechanistic Interpretability,"['Neel Nanda', 'Itay Yona']",2022-12-26T12:26:58Z,alignmentforum,,
15103,https://www.alignmentforum.org/posts/YnBFravZQ5qm6Nmyh/alignment-newsletter-41,Alignment Newsletter #41,['Rohin Shah'],2019-01-17T08:10:02Z,alignmentforum,,
15114,https://www.alignmentforum.org/posts/q2rCMHNXazALgQpGH/conditions-for-mesa-optimization,Conditions for Mesa-Optimization,"['evhub', 'Chris van Merwijk', 'vlad_m', 'Joar Skalse', 'Scott Garrabrant']",2019-06-01T20:52:19Z,alignmentforum,,
15148,https://www.alignmentforum.org/posts/BzYmJYECAc3xyCTt6/the-plan-2022-update,The Plan - 2022 Update,['johnswentworth'],2022-12-01T20:43:51Z,alignmentforum,,
15172,https://www.alignmentforum.org/posts/m5frrcYTSH6ENjsc9/challenge-know-everything-that-the-best-go-bot-knows-about,Challenge: know everything that the best go bot knows about go,['DanielFilan'],2021-05-11T05:10:01Z,alignmentforum,,
15182,https://www.alignmentforum.org/posts/sbGau4QBwToYWEg4k/llms-sometimes-generate-purely-negatively-reinforced-text,LLMs Sometimes Generate Purely Negatively-Reinforced Text,['Fabien Roger'],2023-06-16T16:31:33Z,alignmentforum,,
15198,https://www.alignmentforum.org/posts/pGvM95EfNXwBzjNCJ/instrumental-convergence-in-single-agent-systems,Instrumental convergence in single-agent systems,"['Edouard Harris', 'simonsdsuo']",2022-10-12T12:24:59Z,alignmentforum,,
15217,https://www.alignmentforum.org/posts/gbuwgyYG9WvtsErki/how-should-ais-update-a-prior-over-human-preferences,How should AIs update a prior over human preferences?,['Stuart_Armstrong'],2020-05-15T13:14:31Z,alignmentforum,,
15227,https://www.alignmentforum.org/posts/KwQYsF4XFtPqjgwvH/some-thoughts-on-automating-alignment-research-1,Some thoughts on automating alignment research,['Lukas Finnveden'],2023-05-26T01:50:20Z,alignmentforum,,
15261,https://www.alignmentforum.org/posts/eqxqgFxymP8hXDTt5/announcing-the-inverse-scaling-prize-usd250k-prize-pool,Announcing the Inverse Scaling Prize ($250k Prize Pool),"['Ethan Perez', 'Ian McKenzie', 'Sam Bowman']",2022-06-27T15:58:19Z,alignmentforum,,
15282,https://www.alignmentforum.org/posts/pgTioHEzaSddx5csN/counterfactuals-and-reflective-oracles,Counterfactuals and reflective oracles,['Nisan'],2018-09-05T08:54:06Z,alignmentforum,,
15308,https://www.alignmentforum.org/posts/TgPCet7m9DnkuxyKP/new-paper-the-incentives-that-shape-behaviour,New paper: The Incentives that Shape Behaviour,['RyanCarey'],2020-01-23T19:07:37Z,alignmentforum,,
15318,https://www.alignmentforum.org/posts/a72owS5hz3acBK5xc/2018-ai-alignment-literature-review-and-charity-comparison,2018 AI Alignment Literature Review and Charity Comparison,['Larks'],2018-12-18T04:46:55Z,alignmentforum,,
15357,https://www.alignmentforum.org/posts/KJPLW9XTaF3WxKxmq/alignment-newsletter-33,Alignment Newsletter #33,['Rohin Shah'],2018-11-19T17:20:03Z,alignmentforum,,
15374,https://www.alignmentforum.org/posts/AiaAq5XeECg7MpTL7/for-every-choice-of-agi-difficulty-conditioning-on-gradual,"For every choice of AGI difficulty, conditioning on gradual take-off implies shorter timelines.",['Francis Rhys Ward'],2022-04-21T07:44:54Z,alignmentforum,,
15390,https://www.alignmentforum.org/posts/iCzGrppxQAJhRXhmD/an-152-how-we-ve-overestimated-few-shot-learning,[AN #152]: How we’ve overestimated few-shot learning capabilities,['Rohin Shah'],2021-06-16T17:20:04Z,alignmentforum,,
15416,https://www.alignmentforum.org/posts/LigbvLH9yKR5Zhd6y/what-s-wrong-with-these-analogies-for-understanding-informed,What's wrong with these analogies for understanding Informed Oversight and IDA?,['Wei Dai'],2019-03-20T09:11:34Z,alignmentforum,,
15426,https://www.alignmentforum.org/posts/MfCDfuBHXL5ijJFco/steelmining-via-analogy,Steelmining via Analogy,['Paul Bricman'],2022-08-13T09:59:29Z,alignmentforum,,
15441,https://www.alignmentforum.org/posts/r6p5cqT6aWYGCYHJx/review-of-but-exactly-how-complex-and-fragile,Review of 'But exactly how complex and fragile?',['TurnTrout'],2021-01-06T18:39:04Z,alignmentforum,,
15459,https://www.alignmentforum.org/posts/BXzhtfXeP8WwdQTXy/unpacking-shard-theory-as-hunch-question-theory-and-insight,"Unpacking ""Shard Theory"" as Hunch, Question, Theory, and Insight",['Jacy Reese Anthis'],2022-11-16T13:54:16Z,alignmentforum,,
15472,https://www.alignmentforum.org/posts/SEfjw57Qw8mCzy36n/timeline-of-ai-safety,Timeline of AI safety,['riceissa'],2021-02-07T22:29:01Z,alignmentforum,,
15494,https://www.alignmentforum.org/posts/rZTjsKy4Jvu6krWJt/pre-training-fine-tuning-favors-deception,Pre-Training + Fine-Tuning Favors Deception,['Mark Xu'],2021-05-08T18:36:06Z,alignmentforum,,
15505,https://www.alignmentforum.org/posts/ipCAL4tx7jcJsFasY/abstraction-causality-and-embedded-maps-here-be-monsters,"Abstraction, Causality, and Embedded Maps: Here Be Monsters",['johnswentworth'],2019-12-18T20:25:05Z,alignmentforum,,
15523,https://www.alignmentforum.org/posts/XPRAY34Sutc2wWYZf/when-hindsight-isn-t-20-20-incentive-design-with-imperfect,When Hindsight Isn't 20/20: Incentive Design With Imperfect Credit Allocation,['johnswentworth'],2020-11-08T19:16:03Z,alignmentforum,,
15537,https://www.alignmentforum.org/posts/5XbBm6gkuSdMJy9DT/conditions-for-mathematical-equivalence-of-stochastic,Conditions for mathematical equivalence of Stochastic Gradient Descent and Natural Selection,['Oliver Sourbut'],2022-05-09T21:39:00Z,alignmentforum,,
15558,https://www.alignmentforum.org/posts/hSdgugekxgdyacXTu/announcing-the-vitalik-buterin-fellowships-in-ai-existential,Announcing the Vitalik Buterin Fellowships in AI Existential Safety!,['DanielFilan'],2021-09-21T00:33:08Z,alignmentforum,,
15568,https://www.alignmentforum.org/posts/PtaN3oMFPfAAuBNtw/on-the-falsifiability-of-hypercomputation-part-2-finite,"On the falsifiability of hypercomputation, part 2: finite input streams",['jessicata'],2020-02-17T03:51:57Z,alignmentforum,,
15584,https://www.alignmentforum.org/posts/H5gXpFtg93qDMZ6Xn/aligning-a-toy-model-of-optimization,Aligning a toy model of optimization,['paulfchristiano'],2019-06-28T20:23:51Z,alignmentforum,,
15594,https://www.alignmentforum.org/posts/4ufbirCCLsFiscWuY/a-proposed-method-for-forecasting-transformative-ai,A proposed method for forecasting transformative AI,['Matthew Barnett'],2023-02-10T19:34:01Z,alignmentforum,,
15611,https://www.alignmentforum.org/posts/vnocLyeWXcAxtdDnP/a-comprehensive-mechanistic-interpretability-explainer-and,A Comprehensive Mechanistic Interpretability Explainer & Glossary,['Neel Nanda'],2022-12-21T12:35:09Z,alignmentforum,,
15625,https://www.alignmentforum.org/posts/FZL4ftXvcuKmmobmj/causal-confusion-as-an-argument-against-the-scaling,Causal confusion as an argument against the scaling hypothesis,"['RobertKirk', 'David Scott Krueger (formerly: capybaralet)']",2022-06-20T10:54:06Z,alignmentforum,,
15654,https://www.alignmentforum.org/posts/mbGjzyy6eJXT4gFpm/update-to-mysteries-of-mode-collapse-text-davinci-002-not,Update to Mysteries of mode collapse: text-davinci-002 not RLHF,['janus'],2022-11-19T23:51:28Z,alignmentforum,,
15670,https://www.alignmentforum.org/posts/KfX7Ld7BeCMQn5gbz/obstacles-to-gradient-hacking,Obstacles to gradient hacking,['leogao'],2021-09-05T22:42:23Z,alignmentforum,,
15686,https://www.alignmentforum.org/posts/wujPGixayiZSMYfm6/stable-pointers-to-value-ii-environmental-goals,Stable Pointers to Value II: Environmental Goals,['abramdemski'],2018-02-09T06:03:00Z,alignmentforum,,
15712,https://www.alignmentforum.org/posts/iCDBQtby4L2fZ7yns/payor-s-lemma-in-natural-language,Payor's Lemma in Natural Language,['Andrew_Critch'],2023-03-02T12:22:13Z,alignmentforum,,
15722,https://www.alignmentforum.org/posts/79qCdyfGxWNKbH8zk/an-158-should-we-be-optimistic-about-generalization,[AN #158]: Should we be optimistic about generalization?,['Rohin Shah'],2021-07-29T17:20:03Z,alignmentforum,,
15746,https://www.alignmentforum.org/posts/CQAMdzA4MZEhNRtTp/human-values-and-biases-are-inaccessible-to-the-genome,Human values & biases are inaccessible to the genome,['TurnTrout'],2022-07-07T17:29:56Z,alignmentforum,,
15759,https://www.alignmentforum.org/posts/psyhmuDhazzFJKjXf/oracle-predictions-don-t-apply-to-non-existent-worlds,Oracle predictions don't apply to non-existent worlds,['Chris_Leong'],2021-09-15T09:44:32Z,alignmentforum,,
15769,https://www.alignmentforum.org/posts/qZFGPJi3u8xuvnWHQ/4-risks-from-causing-illegitimate-value-change-performative,4. Risks from causing illegitimate value change (performative predictors),['Nora_Ammann'],2023-10-26T14:38:26Z,alignmentforum,,
15789,https://www.alignmentforum.org/posts/6bpW2kyeKaBtuJuEk/why-i-hate-the-accident-vs-misuse-ai-x-risk-dichotomy-quick,"Why I hate the ""accident vs. misuse"" AI x-risk dichotomy (quick thoughts on ""structural risk"")",['David Scott Krueger (formerly: capybaralet)'],2023-01-30T18:50:18Z,alignmentforum,,
15807,https://www.alignmentforum.org/posts/YTq4X6inEudiHkHDF/prosaic-ai-alignment,Prosaic AI alignment,['paulfchristiano'],2018-11-20T13:56:40Z,alignmentforum,,
15833,https://www.alignmentforum.org/posts/i5dLfi6m6FCexReK9/a-brief-review-of-the-reasons-multi-objective-rl-could-be,A brief review of the reasons multi-objective RL could be important in AI Safety Research,['Ben Smith'],2021-09-29T17:09:57Z,alignmentforum,,
15844,https://www.alignmentforum.org/posts/5bd75cc58225bf06703754b7/logical-induction-with-incomputable-sequences,Logical Induction with incomputable sequences,['AlexMennen'],2017-08-17T00:39:01Z,alignmentforum,,
15854,https://www.alignmentforum.org/posts/qHCDysDnvhteW7kRd/arc-s-first-technical-report-eliciting-latent-knowledge,ARC's first technical report: Eliciting Latent Knowledge,"['paulfchristiano', 'Mark Xu', 'Ajeya Cotra']",2021-12-14T20:09:50Z,alignmentforum,,
15867,https://www.alignmentforum.org/posts/5bd75cc58225bf0670375158/the-many-counterfactuals-of-counterfactual-mugging,The many counterfactuals of counterfactual mugging,['Scott Garrabrant'],2016-04-12T20:04:38Z,alignmentforum,,
15877,https://www.alignmentforum.org/posts/oTn2PPZLY7a2xJmqh/the-insulated-goal-program-idea,the Insulated Goal-Program idea,['Tamsin Leake'],2022-08-13T09:57:47Z,alignmentforum,,
15890,https://www.alignmentforum.org/posts/YAa4qcMyoucRS2Ykr/basic-inframeasure-theory,Basic Inframeasure Theory,['Diffractor'],2020-08-27T08:02:06Z,alignmentforum,,
15912,https://www.alignmentforum.org/posts/XzvR3QKkt9EPbAYyT/applying-the-counterfactual-prisoner-s-dilemma-to-logical,Applying the Counterfactual Prisoner's Dilemma to Logical Uncertainty,['Chris_Leong'],2020-09-16T10:34:45Z,alignmentforum,,
15922,https://www.alignmentforum.org/posts/krHDNc7cDvfEL8z9a/niceness-is-unnatural,Niceness is unnatural,['So8res'],2022-10-13T01:30:02Z,alignmentforum,,
15941,https://www.alignmentforum.org/posts/Q3fesop6HKnemJ5Jc/disagreement-with-bio-anchors-that-lead-to-shorter-timelines,Disagreement with bio anchors that lead to shorter timelines,['Marius Hobbhahn'],2022-11-16T14:40:17Z,alignmentforum,,
15962,https://www.alignmentforum.org/posts/J8ifgynkfhpmrGrL8/why-almost-every-rl-agent-does-learned-optimization,Why almost every RL agent does learned optimization,['Lee Sharkey'],2023-02-12T04:58:35Z,alignmentforum,,
15986,https://www.alignmentforum.org/posts/mvqmY9MQ3qf88xRuM/fixed-point-discussion,Fixed Point Discussion,['Scott Garrabrant'],2018-11-24T20:53:40Z,alignmentforum,,
16010,https://www.alignmentforum.org/posts/i9p5KWNWcthccsxqm/updating-the-lottery-ticket-hypothesis,Updating the Lottery Ticket Hypothesis,['johnswentworth'],2021-04-18T21:45:06Z,alignmentforum,,
16026,https://www.alignmentforum.org/posts/jr5kyRhNriCX2Ayyg/finite-factored-sets-polynomials-and-probability,Finite Factored Sets: Polynomials and Probability,['Scott Garrabrant'],2021-08-17T21:53:03Z,alignmentforum,,
16035,https://www.alignmentforum.org/posts/rAhJrdxjsXcngn3ip/an-observation-about-hubinger-et-al-s-framework-for-learned,An observation about Hubinger et al.'s framework for learned optimization,['Spencer Becker-Kahn'],2022-05-13T16:20:34Z,alignmentforum,,
16052,https://www.alignmentforum.org/posts/zrunBA8B5bmm2XZ59/reversible-changes-consider-a-bucket-of-water,Reversible changes: consider a bucket of water,['Stuart_Armstrong'],2019-08-26T22:55:24Z,alignmentforum,,
16062,https://www.alignmentforum.org/posts/XKwKJCXgSKhSr9bZY/project-intro-selection-theorems-for-modularity,Project Intro: Selection Theorems for Modularity,"['TheMcDouglas', 'Avery', 'Lucius Bushnaq']",2022-04-04T12:59:19Z,alignmentforum,,
16090,https://www.alignmentforum.org/posts/hZGoeGdJsnzJbQJMp/mech-interp-puzzle-2-word2vec-style-embeddings,Mech Interp Puzzle 2: Word2Vec Style Embeddings,['Neel Nanda'],2023-07-28T00:50:00Z,alignmentforum,,
16103,https://www.alignmentforum.org/posts/wi3upQibefMcFs5to/levels-of-pluralism,Levels of Pluralism,['adamShimi'],2022-07-27T09:35:32Z,alignmentforum,,
16119,https://www.alignmentforum.org/posts/QNQuWB3hS5FrGp5yZ/programmatic-backdoors-dnns-can-use-sgd-to-run-arbitrary,Programmatic backdoors: DNNs can use SGD to run arbitrary stateful computation,"['Fabien Roger', 'Buck']",2023-10-23T16:37:46Z,alignmentforum,,
16147,https://www.alignmentforum.org/posts/ej2r2JADoWiEtxkCd/sgd-s-bias,SGD's Bias,['johnswentworth'],2021-05-18T23:19:51Z,alignmentforum,,
16156,https://www.alignmentforum.org/posts/fftQP7zrnYkDqgwfj/elk-sub-note-taking-in-internal-rollouts,ELK Sub - Note-taking in internal rollouts,['Hoagy'],2022-03-09T17:23:19Z,alignmentforum,,
16180,https://www.alignmentforum.org/posts/t7f6gF2kpafCMw6rv/modeling-the-impact-of-safety-agendas,Modeling the impact of safety agendas,['Ben Cottier'],2021-11-05T19:46:05Z,alignmentforum,,
16200,https://www.alignmentforum.org/posts/23cMcXb2zFbfJKz3n/some-ideas-for-epistles-to-the-ai-ethicists,Some ideas for epistles to the AI ethicists,['Charlie Steiner'],2022-09-14T09:07:15Z,alignmentforum,,
16221,https://www.alignmentforum.org/posts/bwyKCQD7PFWKhELMr/by-default-gpts-think-in-plain-sight,"By Default, GPTs Think In Plain Sight",['Fabien Roger'],2022-11-19T19:15:30Z,alignmentforum,,
16238,https://www.alignmentforum.org/posts/2JJtxitp6nqu6ffak/basic-facts-about-language-models-during-training-1,Basic facts about language models during training,['beren'],2023-02-21T11:46:12Z,alignmentforum,,
16261,https://www.alignmentforum.org/posts/PhKZgz5Gxw9soHtng/optimization-speculations-on-the-x-and-only-x-problem,"Optimization, speculations on the X and only X problem.",['Donald Hobson'],2021-03-30T21:38:02Z,alignmentforum,,
16281,https://www.alignmentforum.org/posts/3D3DsX5rMbk3jEZ5h/needed-ai-infohazard-policy,Needed: AI infohazard policy,['Vanessa Kosoy'],2020-09-21T15:26:05Z,alignmentforum,,
16302,https://www.alignmentforum.org/posts/j9vCEjRFDwmH8FTKH/different-perspectives-on-concept-extrapolation,Different perspectives on concept extrapolation,['Stuart_Armstrong'],2022-04-08T10:42:30Z,alignmentforum,,
16323,https://www.alignmentforum.org/posts/j84JhErNezMxyK4dH/llm-modularity-the-separability-of-capabilities-in-large,LLM Modularity: The Separability of Capabilities in Large Language Models,['NickyP'],2023-03-26T21:57:03Z,alignmentforum,,
16339,https://www.alignmentforum.org/posts/HtxLbGvD7htCybLmZ/singularities-against-the-singularity-announcing-workshop-on,Singularities against the Singularity: Announcing Workshop on Singular Learning Theory and Alignment,"['Jesse Hoogland', 'Alexander Gietelink Oldenziel', 'Daniel Murfet']",2023-04-01T09:58:23Z,alignmentforum,,
16349,https://www.alignmentforum.org/posts/aLhLGns2BSun3EzXB/paper-constitutional-ai-harmlessness-from-ai-feedback,Paper: Constitutional AI: Harmlessness from AI Feedback (Anthropic),['LawrenceC'],2022-12-16T22:12:54Z,alignmentforum,,
16367,https://www.alignmentforum.org/posts/AeHtdxHheMjHredaq/what-you-see-isn-t-always-what-you-want,What You See Isn't Always What You Want,['TurnTrout'],2019-09-13T04:17:38Z,alignmentforum,,
16381,https://www.alignmentforum.org/posts/Couhhp4pPHbbhJ2Mg/will-we-run-out-of-ml-data-evidence-from-projecting-dataset,Will we run out of ML data? Evidence from projecting dataset size trends,['Pablo Villalobos'],2022-11-14T16:42:27Z,alignmentforum,,
16401,https://www.alignmentforum.org/posts/ZRTr6rEcpYtfMTDBs/less-realistic-tales-of-doom,Less Realistic Tales of Doom,['Mark Xu'],2021-05-06T23:02:00Z,alignmentforum,,
16422,https://www.alignmentforum.org/posts/MCWGCyz2mjtRoWiyP/endgame-safety-for-agi,“Endgame safety” for AGI,['Steven Byrnes'],2023-01-24T14:15:33Z,alignmentforum,,
16443,https://www.alignmentforum.org/posts/zd2DrbHApWypJD2Rz/udt2-and-against-ud-assa,"""UDT2"" and ""against UD+ASSA""",['Wei Dai'],2019-05-12T04:18:37Z,alignmentforum,,
16461,https://www.alignmentforum.org/posts/WnPEe99YuyRxktMD3/the-goodhart-game,The Goodhart Game,['John_Maxwell'],2019-11-18T23:22:13Z,alignmentforum,,
16473,https://www.alignmentforum.org/posts/zvWqPmQasssaAWkrj/an-159-building-agents-that-know-how-to-experiment-by,"[AN #159]: Building agents that know how to experiment, by training on procedurally generated games",['Rohin Shah'],2021-08-04T17:10:04Z,alignmentforum,,
16495,https://www.alignmentforum.org/posts/PZtsoaoSLpKjjbMqM/the-case-for-aligning-narrowly-superhuman-models,The case for aligning narrowly superhuman models,['Ajeya Cotra'],2021-03-05T22:29:42Z,alignmentforum,,
16516,https://www.alignmentforum.org/posts/XTGceK7xE8rxrRLcr/memetic-hazards-of-agi-architecture-posts-1,Memetic hazards of AGI architecture posts,['Ozyrus'],2021-10-16T16:10:08Z,alignmentforum,,
16526,https://www.alignmentforum.org/posts/Pkr97mB9Y4rkx5DdZ/utility-uncertainty-vs-expected-information-gain,Utility uncertainty vs. expected information gain,['michaelcohen'],2023-03-09T13:32:21Z,alignmentforum,,
16536,https://www.alignmentforum.org/posts/df4Jjg9cmJ7R2bkzR/reward-is-not-necessary-how-to-create-a-compositional-self-1,Reward is not Necessary: How to Create a Compositional Self-Preserving Agent for Life-Long Learning,['Roman Leventov'],2023-01-12T16:43:42Z,alignmentforum,,
16549,https://www.alignmentforum.org/posts/9iHwqnH4ZeqkGDbrb/transparency-for-generalizing-alignment-from-toy-models-1,Transparency for Generalizing Alignment from Toy Models,['Johannes C. Mayer'],2023-04-02T10:47:04Z,alignmentforum,,
16571,https://www.alignmentforum.org/posts/SzrmsbkqydpZyPuEh/my-take-on-vanessa-kosoy-s-take-on-agi-safety,My take on Vanessa Kosoy's take on AGI safety,['Steven Byrnes'],2021-09-30T12:23:58Z,alignmentforum,,
16604,https://www.alignmentforum.org/posts/HLCcTypehEJtstNnD/a-non-logarithmic-argument-for-kelly,A non-logarithmic argument for Kelly,['Bunthut'],2021-03-04T16:21:13Z,alignmentforum,,
16616,https://www.alignmentforum.org/posts/LY7rovMiJ4FhHxmH5/thoughts-on-hardware-compute-requirements-for-agi,Thoughts on hardware / compute requirements for AGI,['Steven Byrnes'],2023-01-24T14:03:39Z,alignmentforum,,
16635,https://www.alignmentforum.org/posts/P8caCHGJdm2GcniAp/what-is-wrong-with-this-utility-switch-button-problem,"What is wrong with this ""utility switch button problem"" approach?",['Donald Hobson'],2023-09-25T21:36:47Z,alignmentforum,,
16644,https://www.alignmentforum.org/posts/aW5CPtqtvs2EYKMMK/evidence-sets-towards-inductive-biases-based-analysis-of,Evidence Sets: Towards Inductive-Biases based Analysis of Prosaic AGI,['bayesian_kitten'],2021-12-16T22:41:42Z,alignmentforum,,
16679,https://www.alignmentforum.org/posts/G25RBnBk5BNpv3KyF/a-greater-than-b-greater-than-a-in-causal-dags,(A -> B) -> A in Causal DAGs,['johnswentworth'],2020-01-22T18:22:29Z,alignmentforum,,
16689,https://www.alignmentforum.org/posts/rgPxEKFBLpLqJpMBM/response-to-blake-richards-agi-generality-alignment-and-loss,"Response to Blake Richards: AGI, generality, alignment, & loss functions",['Steven Byrnes'],2022-07-12T13:56:01Z,alignmentforum,,
16701,https://www.alignmentforum.org/posts/kphJvksj5TndGapuh/directions-and-desiderata-for-ai-alignment,Directions and desiderata for AI alignment,['paulfchristiano'],2019-01-13T07:47:14Z,alignmentforum,,
16722,https://www.alignmentforum.org/posts/QrhCsuaEmSLzc8NQ4/elk-contest-submission-route-understanding-through-the-human,ELK contest submission: route understanding through the human ontology,"['Vika', 'Ramana Kumar', 'Vikrant Varma']",2022-03-14T21:42:27Z,alignmentforum,,
16740,https://www.alignmentforum.org/posts/85HgXZvNdTdfRJhar/an-137-quantifying-the-benefits-of-pretraining-on-downstream,[AN #137]: Quantifying the benefits of pretraining on downstream task performance,['Rohin Shah'],2021-02-10T18:10:03Z,alignmentforum,,
16768,https://www.alignmentforum.org/posts/QL7J9wmS6W2fWpofd/but-is-it-really-in-rome-an-investigation-of-the-rome-model,But is it really in Rome?
An investigation of the ROME model editing technique,['jacquesthibs'],2022-12-30T02:40:37Z,alignmentforum,, 16794,https://www.alignmentforum.org/posts/A9eAPjpFjPwNW2rku/gradient-hacking-via-schelling-goals,Gradient Hacking via Schelling Goals,['Adam Scherlis'],2021-12-28T20:38:12Z,alignmentforum,, 16812,https://www.alignmentforum.org/posts/qALeGJ9nPcs9eC9Af/learning-with-catastrophes,Learning with catastrophes,['paulfchristiano'],2019-01-23T03:01:26Z,alignmentforum,, 16827,https://www.alignmentforum.org/posts/PmhTzHHFEem5hX79R/grouped-loss-may-disfavor-discontinuous-capabilities,Grouped Loss may disfavor discontinuous capabilities,['Adam Jermyn'],2022-07-09T17:22:24Z,alignmentforum,, 16843,https://www.alignmentforum.org/posts/zt6hRsDE84HeBKh7E/reducing-sycophancy-and-improving-honesty-via-activation,Reducing sycophancy and improving honesty via activation steering,['NinaR'],2023-07-28T02:46:23Z,alignmentforum,, 16856,https://www.alignmentforum.org/posts/h3mX4esebgXnMyZSM/tracking-compute-stocks-and-flows-case-studies,Tracking Compute Stocks and Flows: Case Studies?,['Cullen'],2022-10-05T17:57:55Z,alignmentforum,, 16866,https://www.alignmentforum.org/posts/rxsdSgnZrgYWc2XAp/an-172-sorry-for-the-long-hiatus,[AN #172] Sorry for the long hiatus!,['Rohin Shah'],2022-07-05T06:20:04Z,alignmentforum,, 16882,https://www.alignmentforum.org/posts/pL56xPoniLvtMDQ4J/the-inner-alignment-problem,The Inner Alignment Problem,"['evhub', 'Chris van Merwijk', 'vlad_m', 'Joar Skalse', 'Scott Garrabrant']",2019-06-04T01:20:36Z,alignmentforum,, 16905,https://www.alignmentforum.org/posts/E9EevrzBcDMap6dbs/the-thingness-of-things,The Thingness of Things,['TsviBT'],2023-01-01T22:19:08Z,alignmentforum,, 16920,https://www.alignmentforum.org/posts/cCrpbZ4qTCEYXbzje/ngo-and-yudkowsky-on-scientific-reasoning-and-pivotal-acts,Ngo and Yudkowsky on scientific reasoning and pivotal acts,"['Eliezer Yudkowsky', 'Richard_Ngo']",2022-02-21T20:54:54Z,alignmentforum,, 16937,https://www.alignmentforum.org/posts/Qo2EkG3dEMv8GnX8d/ai-strategy-nearcasting,AI strategy nearcasting,['HoldenKarnofsky'],2022-08-25T17:26:28Z,alignmentforum,, 16957,https://www.alignmentforum.org/posts/efWfvrWLgJmbBAs3m/embedded-world-models,Embedded World-Models,"['abramdemski', 'Scott Garrabrant']",2018-11-02T16:07:21Z,alignmentforum,, 16982,https://www.alignmentforum.org/posts/kF74mHH6SujRoEEFA/abram-demski-s-elk-thoughts-and-proposal-distillation,Abram Demski's ELK thoughts and proposal - distillation,['Rubi J. Hudson'],2022-07-19T06:57:35Z,alignmentforum,, 17010,https://www.alignmentforum.org/posts/57fTWCpsAyjeAimTp/interpretability-in-ml-a-broad-overview-2,Interpretability in ML: A Broad Overview,['anonymous'],2020-08-04T19:03:21Z,alignmentforum,, 17035,https://www.alignmentforum.org/posts/tTLhEqaWJGwjLaop3/descriptive-vs-specifiable-values,Descriptive vs. 
specifiable values,['TsviBT'],2023-03-26T09:10:56Z,alignmentforum,, 17054,https://www.alignmentforum.org/posts/92J4zJHkqmXTduxzY/and-the-ai-would-have-got-away-with-it-too-if,"And the AI would have got away with it too, if...",['Stuart_Armstrong'],2019-05-22T21:35:36Z,alignmentforum,, 17066,https://www.alignmentforum.org/posts/hPPGuiXf3zhKqgCMb/david-wolpert-on-knowledge,David Wolpert on Knowledge,['Alex Flint'],2021-09-21T01:54:58Z,alignmentforum,, 17078,https://www.alignmentforum.org/posts/3L46WGauGpr7nYubu/the-plan,The Plan,['johnswentworth'],2021-12-10T23:41:39Z,alignmentforum,, 17105,https://www.alignmentforum.org/posts/GSBCw94DsxLgDat6r/interpreting-yudkowsky-on-deep-vs-shallow-knowledge,Interpreting Yudkowsky on Deep vs Shallow Knowledge,['adamShimi'],2021-12-05T17:32:27Z,alignmentforum,, 17122,https://www.alignmentforum.org/posts/h9DesGT3WT9u2k7Hr/the-easy-goal-inference-problem-is-still-hard,The easy goal inference problem is still hard,['paulfchristiano'],2018-11-03T14:41:55Z,alignmentforum,, 17133,https://www.alignmentforum.org/posts/2AvX8cX47CdwjbkjY/we-may-be-able-to-see-sharp-left-turns-coming,We may be able to see sharp left turns coming,"['Ethan Perez', 'Neel Nanda']",2022-09-03T02:55:45Z,alignmentforum,, 17142,https://www.alignmentforum.org/posts/qRKyZGcoio9JhdmvX/book-report-theory-of-games-and-economic-behavior-von,Book report: Theory of Games and Economic Behavior (von Neumann & Morgenstern),['Nisan'],2020-05-11T09:47:01Z,alignmentforum,, 17162,https://www.alignmentforum.org/posts/uR3znuBnaevssYDZY/rfc-meta-ethical-uncertainty-in-agi-alignment,RFC: Meta-ethical uncertainty in AGI alignment,['Gordon Seidoh Worley'],2018-06-08T20:56:27Z,alignmentforum,, 17172,https://www.alignmentforum.org/posts/RzGJB7GoosQmfALuE/how-much-do-personal-biases-in-risk-assessment-affect,How much do personal biases in risk assessment affect assessment of AI risks?,['Gordon Seidoh Worley'],2023-05-03T06:12:57Z,alignmentforum,, 17183,https://www.alignmentforum.org/posts/BRuWm4GxcTNPn4XDX/deconfusing-logical-counterfactuals,Deconfusing Logical Counterfactuals,['Chris_Leong'],2019-01-30T15:13:41Z,alignmentforum,, 17198,https://www.alignmentforum.org/posts/GPoPKk2wr2r4uPwxe/some-real-examples-of-gradient-hacking,Some real examples of gradient hacking,['Oliver Sourbut'],2021-11-22T00:11:35Z,alignmentforum,, 17212,https://www.alignmentforum.org/posts/HkWB5KCJQ2aLsMzjt/locality-of-goals,Locality of goals,['adamShimi'],2020-06-22T21:56:01Z,alignmentforum,, 17231,https://www.alignmentforum.org/posts/upP8PYgHfXgvgh3FF/training-human-models-is-an-unsolved-problem,Training human models is an unsolved problem,['Charlie Steiner'],2019-05-10T07:17:27Z,alignmentforum,, 17248,https://www.alignmentforum.org/posts/nN7bHuHZYaWv9RDJL/announcing-timaeus,Announcing Timaeus,"['Jesse Hoogland', 'Daniel Murfet', 'Alexander Gietelink Oldenziel', 'Stan van Wingerden']",2023-10-22T11:59:04Z,alignmentforum,, 17272,https://www.alignmentforum.org/posts/FSQ4RCJobu9pussjY/ideological-inference-engines-making-deontology,Ideological Inference Engines: Making Deontology Differentiable*,['Paul Bricman'],2022-09-12T12:00:44Z,alignmentforum,, 17295,https://www.alignmentforum.org/posts/89YgRc3NevJEDEen5/mesa-optimizers-and-over-optimization-failure-optimizing-and,"Mesa-Optimizers and Over-optimization Failure (Optimizing and Goodhart Effects, Clarifying Thoughts - Part 4)",['Davidmanheim'],2019-08-12T08:07:02Z,alignmentforum,, 17314,https://www.alignmentforum.org/posts/4JuKoFguzuMrNn6Qr/hch-is-not-just-mechanical-turk,HCH is not just 
Mechanical Turk,['William_S'],2019-02-09T00:46:26Z,alignmentforum,, 17332,https://www.alignmentforum.org/posts/gvAFSv7Gtcwcbst32/alignment-newsletter-31,Alignment Newsletter #31,['Rohin Shah'],2018-11-05T23:50:02Z,alignmentforum,, 17356,https://www.alignmentforum.org/posts/iy2o4nQj9DnQD7Yhj/discussion-with-nate-soares-on-a-key-alignment-difficulty,Discussion with Nate Soares on a key alignment difficulty,['HoldenKarnofsky'],2023-03-13T21:20:03Z,alignmentforum,, 17369,https://www.alignmentforum.org/posts/DJnvFsZ2maKxPi7v7/what-s-up-with-confusingly-pervasive-goal-directedness,What's Up With Confusingly Pervasive Goal Directedness?,['Raemon'],2022-01-20T19:22:38Z,alignmentforum,, 17389,https://www.alignmentforum.org/posts/TTNS3tk5McHqrJCbR/abstraction-information-at-a-distance,Abstraction = Information at a Distance,['johnswentworth'],2020-03-19T00:19:49Z,alignmentforum,, 17401,https://www.alignmentforum.org/posts/6g8cAftfQufLmFDYT/you-re-measuring-model-complexity-wrong,You’re Measuring Model Complexity Wrong,"['Jesse Hoogland', 'Stan van Wingerden']",2023-10-11T11:46:12Z,alignmentforum,, 17419,https://www.alignmentforum.org/posts/nisaAr7wMDiMLc2so/instrumental-convergence-scale-and-physical-interactions,Instrumental convergence: scale and physical interactions,"['Edouard Harris', 'simonsdsuo']",2022-10-14T15:50:29Z,alignmentforum,, 17447,https://www.alignmentforum.org/posts/EFrJdhKPZXa4MA3Gr/vanessa-kosoy-s-predca-distilled,"Vanessa Kosoy's PreDCA, distilled",['Martín Soto'],2022-11-12T11:38:13Z,alignmentforum,, 17476,https://www.alignmentforum.org/posts/rEMpTapcAzjTiSckf/on-developing-a-mathematical-theory-of-interpretability,On Developing a Mathematical Theory of Interpretability,['Spencer Becker-Kahn'],2023-02-09T01:45:02Z,alignmentforum,, 17491,https://www.alignmentforum.org/posts/cJWipZCfAbg9tFX2P/announcing-the-2023-pibbss-summer-research-fellowship,Announcing the 2023 PIBBSS Summer Research Fellowship,"['Nora_Ammann', 'DusanDNesic']",2023-01-12T21:31:53Z,alignmentforum,, 17500,https://www.alignmentforum.org/posts/fNTCveSa4HvqvZR2F/problems-with-ai-debate,Problems with AI debate,['Stuart_Armstrong'],2019-08-26T19:21:40Z,alignmentforum,, 17517,https://www.alignmentforum.org/posts/bnY3L48TtDrKTzGRb/ai-safety-success-stories,"AI Safety ""Success Stories""",['Wei Dai'],2019-09-07T02:54:15Z,alignmentforum,, 17526,https://www.alignmentforum.org/posts/jFvFreCeejRKaZv4v/understanding-and-avoiding-value-drift,Understanding and avoiding value drift,['TurnTrout'],2022-09-09T04:16:48Z,alignmentforum,, 17545,https://www.alignmentforum.org/posts/zjhZpZi76kEBRnjiw/relevant-pre-agi-possibilities,Relevant pre-AGI possibilities,['Daniel Kokotajlo'],2020-06-20T10:52:00Z,alignmentforum,, 17582,https://www.alignmentforum.org/posts/6fMvGoyy3kgnonRNM/reliability-amplification,Reliability amplification,['paulfchristiano'],2019-01-31T21:12:19Z,alignmentforum,, 17603,https://www.alignmentforum.org/posts/BXMCgpktdiawT3K5v/multi-agent-safety,Multi-agent safety,['Richard_Ngo'],2020-05-16T01:59:05Z,alignmentforum,, 17616,https://www.alignmentforum.org/posts/DCL3MmMiPsuMxP45a/even-superhuman-go-ais-have-surprising-failure-modes,Even Superhuman Go AIs Have Surprising Failure Modes,"['AdamGleave', 'EuanMcLean', 'tw', 'Kellin Pelrine', 'Tom Tseng', 'Yawen Duan', 'Joseph Miller', 'MichaelDennis']",2023-07-20T17:31:36Z,alignmentforum,, 17638,https://www.alignmentforum.org/posts/oQHoy2tnLsKuEDYtJ/motivating-abstraction-first-decision-theory,Motivating Abstraction-First Decision 
Theory,['johnswentworth'],2020-04-29T17:47:32Z,alignmentforum,, 17657,https://www.alignmentforum.org/posts/97tCruegbz8GXRFzi/generalized-kelly-betting,Generalized Kelly betting,['Linda Linsefors'],2018-07-19T01:38:21Z,alignmentforum,, 17666,https://www.alignmentforum.org/posts/Tb2gAaHLmigfFGksq/how-biosafety-could-inform-ai-standards,How biosafety could inform AI standards,['Olivia Jimenez'],2023-06-09T14:41:20Z,alignmentforum,, 17702,https://www.alignmentforum.org/posts/fuSaKr6t6Zuh6GKaQ/when-is-goodhart-catastrophic,When is Goodhart catastrophic?,"['Drake Thomas', 'Thomas Kwa']",2023-05-09T03:59:16Z,alignmentforum,, 17720,https://www.alignmentforum.org/posts/uwfstudGoNFSEtMAT/what-is-the-interpretation-of-the-do-operator,What is the interpretation of the do() operator?,['Bunthut'],2020-08-26T21:54:08Z,alignmentforum,, 17730,https://www.alignmentforum.org/posts/5bd75cc58225bf0670374f4e/optimal-and-causal-counterfactual-worlds,Optimal and Causal Counterfactual Worlds,['Scott Garrabrant'],2015-05-12T03:16:30Z,alignmentforum,, 17740,https://www.alignmentforum.org/posts/7fkaJLzRiEr2hmSDi/re-define-intent-alignment,Re-Define Intent Alignment?,['abramdemski'],2021-07-22T19:00:32Z,alignmentforum,, 17754,https://www.alignmentforum.org/posts/g3pbJPQpNJyFfbHKd/the-alignment-stability-problem,The alignment stability problem,['Seth Herd'],2023-03-26T02:10:13Z,alignmentforum,, 17772,https://www.alignmentforum.org/posts/H9KekSfzHnPLTz4DE/boomerang-protocol-to-dissolve-some-commitment-races-2,Boomerang - protocol to dissolve some commitment races,['Filip Sondej'],2023-05-30T16:21:14Z,alignmentforum,, 17786,https://www.alignmentforum.org/posts/Pkthep47ukcrK3MNm/in-a-multipolar-scenario-how-do-people-expect-systems-to-be,"In a multipolar scenario, how do people expect systems to be trained to interact with systems developed by other labs?",['JesseClifton'],2020-12-01T20:04:18Z,alignmentforum,, 17798,https://www.alignmentforum.org/posts/7q6jQ7y9xQNAkbTtt/morally-underdefined-situations-can-be-deadly,Morally underdefined situations can be deadly,['Stuart_Armstrong'],2021-11-22T14:48:11Z,alignmentforum,, 17816,https://www.alignmentforum.org/posts/GDnFsyfKedevKHAuJ/stuart-russell-and-melanie-mitchell-on-munk-debates,Stuart Russell and Melanie Mitchell on Munk Debates,['Alex Flint'],2021-10-29T19:13:58Z,alignmentforum,, 17830,https://www.alignmentforum.org/posts/KfDh7FqwmNGExTryT/impact-stories-for-model-internals-an-exercise-for,Impact stories for model internals: an exercise for interpretability researchers,['jenny'],2023-09-25T23:15:29Z,alignmentforum,, 17863,https://www.alignmentforum.org/posts/Rhg27MqkxJsnZwoYg/the-codex-skeptic-faq,The Codex Skeptic FAQ,['Michaël Trazzi'],2021-08-24T16:01:19Z,alignmentforum,, 17880,https://www.alignmentforum.org/posts/KrQvZM8uFjSTJ7hq3/recent-progress-in-the-theory-of-neural-networks-1,Recent Progress in the Theory of Neural Networks,['interstice'],2019-12-04T23:11:32Z,alignmentforum,, 17906,https://www.alignmentforum.org/posts/Km9sHjHTsBdbgwKyi/monitoring-for-deceptive-alignment,Monitoring for deceptive alignment,['evhub'],2022-09-08T23:07:03Z,alignmentforum,, 17927,https://www.alignmentforum.org/posts/2j7mtf58Zr9XehjxP/how-do-scaling-laws-work-for-fine-tuning,How do scaling laws work for fine-tuning?,['Daniel Kokotajlo'],2021-04-04T12:18:35Z,alignmentforum,, 17936,https://www.alignmentforum.org/posts/DypLJKRcQKt9hcpBP/some-alternative-ai-safety-research-projects,Some alternative AI safety research projects,['Michele Campolo'],2022-06-28T14:09:28Z,alignmentforum,, 
17960,https://www.alignmentforum.org/posts/A9vvxguZMytsN3ze9/reply-to-paul-christiano-on-inaccessible-information,Reply to Paul Christiano on Inaccessible Information,['Alex Flint'],2020-06-05T09:10:08Z,alignmentforum,,
17974,https://www.alignmentforum.org/posts/dmp9PZjpSSX5NeXHM/goodhart-endgame,Goodhart: Endgame,['Charlie Steiner'],2021-11-19T01:26:30Z,alignmentforum,,
17997,https://www.alignmentforum.org/posts/5bd75cc58225bf067037528e/updatelessness-and-son-of-x,Updatelessness and Son of X,['Scott Garrabrant'],2016-11-04T22:58:23Z,alignmentforum,,
18006,https://www.alignmentforum.org/posts/qAhT2qvKXboXqLk4e/early-2022-paper-round-up,Early 2022 Paper Round-up,['jsteinhardt'],2022-04-14T20:50:29Z,alignmentforum,,
18023,https://www.alignmentforum.org/posts/XjDcwtgkHGWYA7stn/is-elk-enough-diamond-matrix-and-child-ai,"Is ELK enough? Diamond, Matrix and Child AI",['adamShimi'],2022-02-15T02:29:33Z,alignmentforum,,
18038,https://www.alignmentforum.org/posts/X2i9dQQK3gETCyqh2/chris-olah-s-views-on-agi-safety,Chris Olah’s views on AGI safety,['evhub'],2019-11-01T20:13:35Z,alignmentforum,,
18064,https://www.alignmentforum.org/posts/CL8RFFCdsBwWWfKYS/a-small-negative-result-on-debate,A Small Negative Result on Debate,['Sam Bowman'],2022-04-12T18:19:26Z,alignmentforum,,
18074,https://www.alignmentforum.org/posts/AtfQFj8umeyBBkkxa/a-bird-s-eye-view-of-the-ml-field-pragmatic-ai-safety-2,A Bird's Eye View of the ML Field [Pragmatic AI Safety #2],"['Dan H', 'ThomasW']",2022-05-09T17:18:54Z,alignmentforum,,
18093,https://www.alignmentforum.org/posts/SkcM4hwgH3AP6iqjs/can-you-get-agi-from-a-transformer,Can you get AGI from a Transformer?,['Steven Byrnes'],2020-07-23T15:27:52Z,alignmentforum,,
18120,https://www.alignmentforum.org/posts/FfTxEf3uFPsZf9EMP/avoiding-perpetual-risk-from-tai,Avoiding perpetual risk from TAI,['scasper'],2022-12-26T22:34:49Z,alignmentforum,,
18143,https://www.alignmentforum.org/posts/LKAogXdruuZXdx6ZH/publish-or-perish-a-quick-note-on-why-you-should-try-to-make,"""Publish or Perish"" (a quick note on why you should try to make your work legible to existing academic communities)",['David Scott Krueger (formerly: capybaralet)'],2023-03-18T19:01:54Z,alignmentforum,,
18157,https://www.alignmentforum.org/posts/eQ4eLQAmPvp9anJcB/agents-vs-predictors-concrete-differentiating-factors,Agents vs. Predictors: Concrete differentiating factors,['evhub'],2023-02-24T23:50:40Z,alignmentforum,,
18175,https://www.alignmentforum.org/posts/QjA6kipHYqwACkPNw/2-premise-two-some-cases-of-value-change-are-il-legitimate,2. Premise two: Some cases of value change are (il)legitimate,['Nora_Ammann'],2023-10-26T14:36:54Z,alignmentforum,,
18193,https://www.alignmentforum.org/posts/dyg4KSyMJC8cNDMK6/take-8-queer-the-inner-outer-alignment-dichotomy,Take 8: Queer the inner/outer alignment dichotomy.,['Charlie Steiner'],2022-12-09T17:46:26Z,alignmentforum,,
18203,https://www.alignmentforum.org/posts/pZhDWxDmwzuSwLjou/asymptotically-unambitious-agi,Asymptotically Unambitious AGI,['michaelcohen'],2020-04-10T12:31:54Z,alignmentforum,,
18214,https://www.alignmentforum.org/posts/FQqcejhNWGG8vHDch/on-solving-problems-before-they-appear-the-weird,On Solving Problems Before They Appear: The Weird Epistemologies of Alignment,['adamShimi'],2021-10-11T08:20:37Z,alignmentforum,,
18238,https://www.alignmentforum.org/posts/dJQo7xPn4TyGnKgeC/hiring-engineers-and-researchers-to-help-align-gpt-3,Hiring engineers and researchers to help align GPT-3,['paulfchristiano'],2020-10-01T18:54:24Z,alignmentforum,,
18260,https://www.alignmentforum.org/posts/TMHWfRE7zZkzgFDSo/a-review-of-the-bio-anchors-report,A review of the Bio-Anchors report,['jylin04'],2022-10-03T10:27:58Z,alignmentforum,,
18273,https://www.alignmentforum.org/posts/ouEQLAca3bbiimAFQ/the-effect-of-horizon-length-on-scaling-laws,The effect of horizon length on scaling laws,['Jacob_Hilton'],2023-02-01T03:59:26Z,alignmentforum,,
18283,https://www.alignmentforum.org/posts/KZh9eKCkBRbevkGnP/repeated-and-improved-sleeping-beauty-problem,Repeated (and improved) Sleeping Beauty problem,['Linda Linsefors'],2018-07-10T22:32:56Z,alignmentforum,,
18295,https://www.alignmentforum.org/posts/YyKKMeCCxnzdohuxj/an-96-buck-and-i-discuss-argue-about-ai-alignment,[AN #96]: Buck and I discuss/argue about AI Alignment,['Rohin Shah'],2020-04-22T17:20:03Z,alignmentforum,,
18318,https://www.alignmentforum.org/posts/4XPa3xa44jAWiCkmy/risks-from-learned-optimization-conclusion-and-related-work,Risks from Learned Optimization: Conclusion and Related Work,"['evhub', 'Chris van Merwijk', 'vlad_m', 'Joar Skalse', 'Scott Garrabrant']",2019-06-07T19:53:52Z,alignmentforum,,
18351,https://www.alignmentforum.org/posts/a47XLgmX5ecWmxruY/quantifying-general-intelligence-2,Quantifying General Intelligence,['JasonBrown'],2022-06-17T21:57:14Z,alignmentforum,,
18367,https://www.alignmentforum.org/posts/tKYGvA9dKHa3GWBBk/theories-of-impact-for-science-of-deep-learning,Theories of impact for Science of Deep Learning,['Marius Hobbhahn'],2022-12-01T14:39:46Z,alignmentforum,,
18387,https://www.alignmentforum.org/posts/hubbRt4DPegiA5gRR/a-shift-in-arguments-for-ai-risk,A shift in arguments for AI risk,['Richard_Ngo'],2019-05-28T13:47:36Z,alignmentforum,,
18420,https://www.alignmentforum.org/posts/CTh74TaWgvRiXnkS6/toy-models-of-superposition,Toy Models of Superposition,['evhub'],2022-09-21T23:48:03Z,alignmentforum,,
18437,https://www.alignmentforum.org/posts/sYHrW4wwfoMBxNDcA/real-time-research-recording-can-a-transformer-re-derive,Real-Time Research Recording: Can a Transformer Re-Derive Positional Info?,['Neel Nanda'],2022-11-01T23:56:06Z,alignmentforum,,
18447,https://www.alignmentforum.org/posts/Qz6w4GYZpgeDp6ATB/beyond-astronomical-waste,Beyond Astronomical Waste,['Wei Dai'],2018-06-07T21:04:45Z,alignmentforum,,
18468,https://www.alignmentforum.org/posts/AwxBGFy59DYDk4ooe/an-146-plausible-stories-of-how-we-might-fail-to-avert-an,[AN #146]: Plausible stories of how we might fail to avert an existential catastrophe,['Rohin Shah'],2021-04-14T17:30:04Z,alignmentforum,,
18499,https://www.alignmentforum.org/posts/GYmDaFgePMchYj6P7/an-100-what-might-go-wrong-if-you-learn-a-reward-function,[AN #100]: What might go wrong if you learn a reward function while acting,['Rohin Shah'],2020-05-20T17:30:03Z,alignmentforum,,
18513,https://www.alignmentforum.org/posts/pWruFSY7494vnucCE/biextensional-equivalence,Biextensional Equivalence,['Scott Garrabrant'],2020-10-28T14:07:38Z,alignmentforum,,
18526,https://www.alignmentforum.org/posts/A7QgKwWvAkuXonAy5/how-do-selection-theorems-relate-to-interpretability,How Do Selection Theorems Relate To Interpretability?,['johnswentworth'],2022-06-09T19:40:00Z,alignmentforum,,
18539,https://www.alignmentforum.org/posts/5bd75cc58225bf0670375250/the-set-of-logical-inductors-is-not-convex,The set of Logical Inductors is not Convex,['Scott Garrabrant'],2016-09-27T09:05:00Z,alignmentforum,,
18550,https://www.alignmentforum.org/posts/4GuKi9wKYnthr8QP9/sections-5-and-6-contemporary-architectures-humans-in-the,"Sections 5 & 6: Contemporary Architectures, Humans in the Loop",['JesseClifton'],2019-12-20T03:52:44Z,alignmentforum,,
18585,https://www.alignmentforum.org/posts/wuJpYLcMEBz4kcgAn/what-is-abstraction-1,What is Abstraction?,['johnswentworth'],2019-12-06T20:30:04Z,alignmentforum,,
18594,https://www.alignmentforum.org/posts/mSF4KTxAGRG3EHmhb/ai-x-risk-approximately-ordered-by-embarrassment,"AI x-risk, approximately ordered by embarrassment",['Alex Lawsen'],2023-04-12T23:01:01Z,alignmentforum,,
18631,https://www.alignmentforum.org/posts/Lj9QXcqkcuR4iHJ7Q/multi-dimensional-rewards-for-agi-interpretability-and,Multi-dimensional rewards for AGI interpretability and control,['Steven Byrnes'],2021-01-04T03:08:42Z,alignmentforum,,
18646,https://www.alignmentforum.org/posts/M8WdeNWacMrmorNdd/towards-formalizing-universality,Towards formalizing universality,['paulfchristiano'],2019-01-13T20:39:22Z,alignmentforum,,
18661,https://www.alignmentforum.org/posts/BSpdshJWGAW6TuNzZ/introduction-to-cartesian-frames,Introduction to Cartesian Frames,['Scott Garrabrant'],2020-10-22T13:00:00Z,alignmentforum,,
18678,https://www.alignmentforum.org/posts/SDr45pcgJJyvTqmZa/for-the-past-in-some-ways-only-we-are-moral-degenerates,"For the past, in some ways only, we are moral degenerates",['Stuart_Armstrong'],2019-06-07T15:57:11Z,alignmentforum,,
18690,https://www.alignmentforum.org/posts/Fji2nHBaB6SjdSscr/safer-sandboxing-via-collective-separation,Safer sandboxing via collective separation,['Richard_Ngo'],2020-09-09T19:49:14Z,alignmentforum,,
18708,https://www.alignmentforum.org/posts/HHunb8FPnhWaDAQci/the-alignment-problem-in-different-capability-regimes,The alignment problem in different capability regimes,['Buck'],2021-09-09T19:46:17Z,alignmentforum,,
18733,https://www.alignmentforum.org/posts/fM5ZWGDbnjb7ThNKJ/are-minimal-circuits-deceptive,Are minimal circuits deceptive?,['evhub'],2019-09-07T18:11:30Z,alignmentforum,,
18743,https://www.alignmentforum.org/posts/uEwECj53prjKLcBC5/vc-theory-overview,VC Theory Overview,['Joar Skalse'],2023-07-02T22:46:00Z,alignmentforum,,
18759,https://www.alignmentforum.org/posts/TQwXPHfyyQwr22NMh/box-inversion-hypothesis,Box inversion hypothesis,['Jan Kulveit'],2020-10-20T16:20:51Z,alignmentforum,,
18772,https://www.alignmentforum.org/posts/8BL7w55PS4rWYmrmv/prize-and-fast-track-to-alignment-research-at-alter,Prize and fast track to alignment research at ALTER,['Vanessa Kosoy'],2022-09-17T16:58:25Z,alignmentforum,,
18800,https://www.alignmentforum.org/posts/MxLK2fvEuijAYgsc2/smoothmin-and-personal-identity,Smoothmin and personal identity,['Stuart_Armstrong'],2019-03-08T15:16:29Z,alignmentforum,,
18811,https://www.alignmentforum.org/posts/vbfAwZqKs84agyGWC/paper-teaching-gpt3-to-express-uncertainty-in-words,Paper: Teaching GPT3 to express uncertainty in words,['Owain_Evans'],2022-05-31T13:27:17Z,alignmentforum,,
18830,https://www.alignmentforum.org/posts/TeYro2ntqHNyQFx8r/policy-alignment,Policy Alignment,['abramdemski'],2018-06-30T00:24:25Z,alignmentforum,,
18852,https://www.alignmentforum.org/posts/iNvWCTpyEd4zTqjjv/examples-of-prompts-that-make-gpt-4-output-falsehoods-1,Examples of Prompts that Make GPT-4 Output Falsehoods,"['scasper', 'Luke Bailey']",2023-07-22T20:21:40Z,alignmentforum,,
18875,https://www.alignmentforum.org/posts/WkPf6XCzfJLCm2pbK/cdt-edt-udt,CDT=EDT=UDT,['abramdemski'],2019-01-13T23:46:11Z,alignmentforum,,
18895,https://www.alignmentforum.org/posts/XWwvwytieLtEWaFJX/deep-deceptiveness,Deep Deceptiveness,['So8res'],2023-03-21T02:51:53Z,alignmentforum,,
18915,https://www.alignmentforum.org/posts/KEZzAge6mgyo5GDi9/example-markov-chain,Example: Markov Chain,['johnswentworth'],2020-01-10T20:19:31Z,alignmentforum,,
18926,https://www.alignmentforum.org/posts/dYHiMeSdLrrX3cy4a/embedding-safety-in-ml-development,Embedding safety in ML development,['zeshen'],2022-10-31T12:27:13Z,alignmentforum,,
18979,https://www.alignmentforum.org/posts/pnAxcABq9GBDG5BNW/open-problems-in-negative-side-effect-minimization,Open Problems in Negative Side Effect Minimization,"['Fabian Schimpf', 'Lukas Fluri']",2022-05-06T09:37:59Z,alignmentforum,,
19001,https://www.alignmentforum.org/posts/XFEJg9gxak5agyxJo/alignment-newsletter-36,Alignment Newsletter #36,['Rohin Shah'],2018-12-12T01:10:01Z,alignmentforum,,
19022,https://www.alignmentforum.org/posts/CYKeDjD7FEvAnzBBF/introduction-to-inaccessible-information,Introduction to inaccessible information,['Ryan Kidd'],2021-12-09T01:28:48Z,alignmentforum,,
19049,https://www.alignmentforum.org/posts/Yp2vYb4zHXEeoTkJc/welcome-and-faq,Welcome & FAQ!,"['Ruby', 'habryka']",2021-08-24T20:14:21Z,alignmentforum,,
19068,https://www.alignmentforum.org/posts/eqi83c2nNSX7TFSfW/no-surjection-onto-function-space-for-manifold-x,No surjection onto function space for manifold X,['Stuart_Armstrong'],2019-01-09T18:07:26Z,alignmentforum,,
19082,https://www.alignmentforum.org/posts/gbNqWpDwmrWmzopQW/is-deontological-ai-safe-feedback-draft,Is Deontological AI Safe? [Feedback Draft],"['Dan H', ""William D'Alessandro""]",2023-05-27T16:39:26Z,alignmentforum,,
19101,https://www.alignmentforum.org/posts/7GQZyooNi5nqgoyyJ/mlsn-2-adversarial-training,[MLSN #2]: Adversarial Training,['Dan H'],2021-12-09T17:16:50Z,alignmentforum,,
19132,https://www.alignmentforum.org/posts/w8QBmgQwb83vDMXoz/dynamic-inconsistency-of-the-inaction-and-initial-state,Dynamic inconsistency of the inaction and initial state baseline,['Stuart_Armstrong'],2020-07-07T12:02:29Z,alignmentforum,,
19142,https://www.alignmentforum.org/posts/HkghiK6Rt35nbgwKA/hard-coding-neural-computation,Hard-Coding Neural Computation,['MadHatter'],2021-12-13T04:35:52Z,alignmentforum,,
19163,https://www.alignmentforum.org/posts/6zbRy3aADCsRmFcgv/hiding-complexity,Hiding Complexity,['Rafael Harth'],2020-11-20T16:35:25Z,alignmentforum,,
19175,https://www.alignmentforum.org/posts/4basF9w9jaPZpoC8R/intro-to-brain-like-agi-safety-1-what-s-the-problem-and-why,[Intro to brain-like-AGI safety] 1. What's the problem & Why work on it now?,['Steven Byrnes'],2022-01-26T15:23:22Z,alignmentforum,,
19194,https://www.alignmentforum.org/posts/xCpuSfT5Lt6kkR3po/my-take-on-agent-foundations-formalizing-metaphilosophical,My take on agent foundations: formalizing metaphilosophical competence,['zhukeepa'],2018-04-01T06:33:10Z,alignmentforum,,
19205,https://www.alignmentforum.org/posts/8LEPDY36jBYpijrSw/what-counts-as-defection,What counts as defection?,['TurnTrout'],2020-07-12T22:03:39Z,alignmentforum,,
19220,https://www.alignmentforum.org/posts/3zZjF3YKJ257x79mu/what-i-learned-running-refine,What I Learned Running Refine,['adamShimi'],2022-11-24T14:49:59Z,alignmentforum,,
19243,https://www.alignmentforum.org/posts/PGK3AJtNG4rPHuZxy/cirl-corrigibility-is-fragile,CIRL Corrigibility is Fragile,"['rachelAF', 'AdamGleave']",2022-12-21T01:40:50Z,alignmentforum,,
19264,https://www.alignmentforum.org/posts/jkRFZNAZmWskTdCSt/behavioral-sufficient-statistics-for-goal-directedness,Behavioral Sufficient Statistics for Goal-Directedness,['adamShimi'],2021-03-11T15:01:22Z,alignmentforum,,
19274,https://www.alignmentforum.org/posts/CHSRhSKcrSmQWnD6A/towards-an-intentional-research-agenda,Towards an Intentional Research Agenda,['romeostevensit'],2019-08-23T05:27:54Z,alignmentforum,,
19298,https://www.alignmentforum.org/posts/BYy62ib5tAkn9rsKn/sia-is-basically-just-bayesian-updating-on-existence,SIA is basically just Bayesian updating on existence,['Stuart_Armstrong'],2021-06-04T13:17:21Z,alignmentforum,,
19317,https://www.alignmentforum.org/posts/wAczufCpMdaamF9fy/my-objections-to-we-re-all-gonna-die-with-eliezer-yudkowsky,"My Objections to ""We’re All Gonna Die with Eliezer Yudkowsky""",['Quintin Pope'],2023-03-21T00:06:08Z,alignmentforum,,
19342,https://www.alignmentforum.org/posts/fzGbKHbSytXH5SKTN/penalize-model-complexity-via-self-distillation,Penalize Model Complexity Via Self-Distillation,['research_prime_space'],2023-04-04T18:52:41Z,alignmentforum,,
19351,https://www.alignmentforum.org/posts/RrirwtP7cNmHtJRxE/shapes-of-mind-and-pluralism-in-alignment,Shapes of Mind and Pluralism in Alignment,['adamShimi'],2022-08-13T10:01:42Z,alignmentforum,,
19369,https://www.alignmentforum.org/posts/5bd75cc58225bf067037550d/reflective-oracles-as-a-solution-to-the-converse-lawvere,Reflective oracles as a solution to the converse Lawvere problem,['SamEisenstat'],2018-11-29T03:23:51Z,alignmentforum,,
19387,https://www.alignmentforum.org/posts/c3RsLTcxrvH4rXpBL/how-honest-is-gpt-3,"How ""honest"" is GPT-3?",['abramdemski'],2020-07-08T19:38:02Z,alignmentforum,,
19403,https://www.alignmentforum.org/posts/5bd75cc58225bf0670375373/generalizing-foundations-of-decision-theory,Generalizing Foundations of Decision Theory,['abramdemski'],2017-03-04T16:46:11Z,alignmentforum,,
19426,https://www.alignmentforum.org/posts/5bd75cc58225bf0670374f85/an-oracle-standard-trick,An Oracle standard trick,['Stuart_Armstrong'],2015-06-03T14:25:08Z,alignmentforum,,
19438,https://www.alignmentforum.org/posts/uXH4r6MmKPedk8rMA/gradient-hacking,Gradient hacking,['evhub'],2019-10-16T00:53:01Z,alignmentforum,,
19452,https://www.alignmentforum.org/posts/dEqjwwvYtg9NEmZoq/an-106-evaluating-generalization-ability-of-learned-reward,[AN #106]: Evaluating generalization ability of learned reward models,['Rohin Shah'],2020-07-01T17:20:03Z,alignmentforum,,
19467,https://www.alignmentforum.org/posts/itTLCFj5NCHhFbK2Q/are-limited-horizon-agents-a-good-heuristic-for-the-off,Are limited-horizon agents a good heuristic for the off-switch problem?,['anonymous'],2021-12-05T19:28:00Z,alignmentforum,,
19477,https://www.alignmentforum.org/posts/8Q5h6hyBXTEgC6EZf/do-what-we-mean-vs-do-what-we-say,Do what we mean vs. do what we say,['Rohin Shah'],2018-08-30T22:03:28Z,alignmentforum,,
19497,https://www.alignmentforum.org/posts/oNvifJbFTRebDcoHc/planning-capacity-and-daemons,Planning capacity and daemons,['lukehmiles'],2022-09-26T00:15:42Z,alignmentforum,,
19525,https://www.alignmentforum.org/posts/6Xgy6CAf2jqHhynHL/what-2026-looks-like,What 2026 looks like,['Daniel Kokotajlo'],2021-08-06T16:14:50Z,alignmentforum,,
19555,https://www.alignmentforum.org/posts/ZJY3eotLdfBPCLP3z/theoretical-neuroscience-for-alignment-theory,Theoretical Neuroscience For Alignment Theory,['Cameron Berg'],2021-12-07T21:50:10Z,alignmentforum,,
19583,https://www.alignmentforum.org/posts/Hna4aoMwr6Qx9rHBs/linkpost-introducing-superalignment,[Linkpost] Introducing Superalignment,['beren'],2023-07-05T18:23:18Z,alignmentforum,,
19610,https://www.alignmentforum.org/posts/xCxeBSHqMEaP3jDvY/reframing-impact,Reframing Impact,['TurnTrout'],2019-09-20T19:03:28Z,alignmentforum,,
19631,https://www.alignmentforum.org/posts/nJEJAcS6Bs4BJbkZb/catastrophic-risks-from-ai-5-rogue-ais,Catastrophic Risks from AI #5: Rogue AIs,"['Dan H', 'Mantas Mazeika', 'ThomasW']",2023-06-27T22:06:11Z,alignmentforum,,
19658,https://www.alignmentforum.org/posts/XAeWHqQTWjJmzB4k6/reference-post-trivial-decision-theory-problem,Reference Post: Trivial Decision Theory Problem,['Chris_Leong'],2020-02-15T17:13:26Z,alignmentforum,,
19667,https://www.alignmentforum.org/posts/bWxNPMy5MhPnQTzKz/what-discovering-latent-knowledge-did-and-did-not-find-4,What Discovering Latent Knowledge Did and Did Not Find,['Fabien Roger'],2023-03-13T19:29:46Z,alignmentforum,,
19699,https://www.alignmentforum.org/posts/wPudaEemohdYPmsye/information-loss-greater-than-basin-flatness,Information Loss --> Basin flatness,['Vivek Hebbar'],2022-05-21T12:58:20Z,alignmentforum,,
19715,https://www.alignmentforum.org/posts/SYeuMzspmwoQABWdw/infra-miscellanea,Infra-Miscellanea,['Diffractor'],2022-04-22T02:09:07Z,alignmentforum,,
19739,https://www.alignmentforum.org/posts/khr3KvExuZxdnkDtD/the-inter-agent-facet-of-ai-alignment,The Inter-Agent Facet of AI Alignment,['Michael Oesterle'],2022-09-18T20:39:32Z,alignmentforum,,
19754,https://www.alignmentforum.org/posts/fnrpxdnodQmanibmB/preface-to-the-sequence-on-factored-cognition,Preface to the Sequence on Factored Cognition,['Rafael Harth'],2020-11-30T18:49:26Z,alignmentforum,,
19769,https://www.alignmentforum.org/posts/9xfRjaKDTb57BaGWv/the-0-2-ooms-year-target,The 0.2 OOMs/year target,['Cleo Nardo'],2023-03-30T18:15:41Z,alignmentforum,,
19782,https://www.alignmentforum.org/posts/WmBukJkEFM72Xr397/mesa-search-vs-mesa-control,Mesa-Search vs Mesa-Control,['abramdemski'],2020-08-18T18:52:00Z,alignmentforum,,
19806,https://www.alignmentforum.org/posts/JKgGvJCzNoBQss2bq/beliefs-and-disagreements-about-automating-alignment,Beliefs and Disagreements about Automating Alignment Research,['Ian McKenzie'],2022-08-24T18:37:00Z,alignmentforum,,
19831,https://www.alignmentforum.org/posts/c2tEfqEMi6jcJ4kdg/brain-like-agi-project-aintelope,"Brain-like AGI project ""aintelope""",['Gunnar_Zarncke'],2022-08-14T16:33:40Z,alignmentforum,,
19847,https://www.alignmentforum.org/posts/LvznjZuygoeoTpSE6/engineering-monosemanticity-in-toy-models,Engineering Monosemanticity in Toy Models,"['Adam Jermyn', 'evhub', 'Nicholas Schiefer']",2022-11-18T01:43:39Z,alignmentforum,,
19878,https://www.alignmentforum.org/posts/2xrBxhRhde7Xddt38/redwood-s-technique-focused-epistemic-strategy,Redwood's Technique-Focused Epistemic Strategy,['adamShimi'],2021-12-12T16:36:23Z,alignmentforum,,
19895,https://www.alignmentforum.org/posts/nbvd4o9uDPe5whFxa/dangerous-optimisation-includes-variance-minimisation,Dangerous optimisation includes variance minimisation,['Stuart_Armstrong'],2021-06-08T11:34:05Z,alignmentforum,,
19907,https://www.alignmentforum.org/posts/suxvE2ddnYMPJN9HD/realism-about-rationality,Realism about rationality,['Richard_Ngo'],2018-09-16T10:46:29Z,alignmentforum,,
19919,https://www.alignmentforum.org/posts/8F8dagB4q4BzR5JNz/gary-marcus-vs-cortical-uniformity,Gary Marcus vs Cortical Uniformity,['Steven Byrnes'],2020-06-28T18:18:55Z,alignmentforum,,
19937,https://www.alignmentforum.org/posts/xKT5oTXDCJNramc7y/eric-michaud-on-the-quantization-model-of-neural-scaling,"Eric Michaud on the Quantization Model of Neural Scaling, Interpretability and Grokking",['Michaël Trazzi'],2023-07-12T22:45:46Z,alignmentforum,,
19954,https://www.alignmentforum.org/posts/gor57NZtxG4bq5eej/an-94-ai-alignment-as-translation-between-humans-and,[AN #94]: AI alignment as translation between humans and machines,['Rohin Shah'],2020-04-08T17:10:03Z,alignmentforum,,
19987,https://www.alignmentforum.org/posts/ZPEGLoWMN242Dob6g/review-of-debate-on-instrumental-convergence-between-lecun,"Review of 'Debate on Instrumental Convergence between LeCun, Russell, Bengio, Zador, and More'",['TurnTrout'],2021-01-12T03:57:07Z,alignmentforum,,
19996,https://www.alignmentforum.org/posts/nzRh8yQHi3bx9bLsD/normative-vs-descriptive-models-of-agency-2,Normative vs Descriptive Models of Agency,['mattmacdermott'],2023-02-02T20:28:29Z,alignmentforum,,
20016,https://www.alignmentforum.org/posts/Cu4v9MHGuhLnDQTuF/counterfactual-induction-lemma-4,Counterfactual Induction (Lemma 4),['Diffractor'],2019-12-17T05:05:16Z,alignmentforum,,
20028,https://www.alignmentforum.org/posts/kczouh3rvEoxJWFh5/embedded-self-justification-or-something-like-that,"“embedded self-justification,” or something like that",['nostalgebraist'],2019-11-03T03:20:02Z,alignmentforum,,
20042,https://www.alignmentforum.org/posts/6DuJxY8X45Sco4bS2/seeking-power-is-often-convergently-instrumental-in-mdps,Seeking Power is Often Convergently Instrumental in MDPs,"['TurnTrout', 'Logan Riggs']",2019-12-05T02:33:34Z,alignmentforum,,
20059,https://www.alignmentforum.org/posts/vnfPeiY3bwhaEMoXR/link-chatgpt-discussion,[LINK] - ChatGPT discussion,['JanBrauner'],2022-12-01T15:04:45Z,alignmentforum,,
20075,https://www.alignmentforum.org/posts/3ouxBRRzjxarTukMW/apply-to-the-second-iteration-of-the-ml-for-alignment,Apply to the second iteration of the ML for Alignment Bootcamp (MLAB 2) in Berkeley [Aug 15 - Fri Sept 2],['Buck'],2022-05-06T04:23:45Z,alignmentforum,,
20090,https://www.alignmentforum.org/posts/ccNggNeBgMZFy3FRr/open-phil-releases-rfps-on-llm-benchmarks-and-forecasting,Open Phil releases RFPs on LLM Benchmarks and Forecasting,['LawrenceC'],2023-11-11T03:01:10Z,alignmentforum,,
20120,https://www.alignmentforum.org/posts/i3v7WeCXyWiYfhihF/stop-gradients-lead-to-fixed-point-predictions,Stop-gradients lead to fixed point predictions,"['Johannes Treutlein', 'Caspar Oesterheld', 'Rubi J. Hudson', 'Emery Cooper']",2023-01-28T22:47:35Z,alignmentforum,,
20142,https://www.alignmentforum.org/posts/7DhwRLoKm4nMrFFsH/measuring-and-forecasting-risks,Measuring and forecasting risks,"['abergal', 'Nick_Beckstead', 'jsteinhardt']",2021-10-29T07:27:33Z,alignmentforum,,
20169,https://www.alignmentforum.org/posts/Nq58w4SiZMjHdAPaX/what-exactly-is-gpt-3-s-base-objective,What exactly is GPT-3's base objective?,['Daniel Kokotajlo'],2021-11-10T00:57:35Z,alignmentforum,,
20178,https://www.alignmentforum.org/posts/Z5ZBPEgufmDsm7LAv/what-can-the-principal-agent-literature-tell-us-about-ai,What can the principal-agent literature tell us about AI risk?,['Alexis Carlier'],2020-02-08T21:28:10Z,alignmentforum,,
20208,https://www.alignmentforum.org/posts/pRD5u2omuDoMTuH39/multi-agent-inverse-reinforcement-learning-suboptimal,Multi-Agent Inverse Reinforcement Learning: Suboptimal Demonstrations and Alternative Solution Concepts,['sage_bergerson'],2021-09-07T16:11:59Z,alignmentforum,,
20232,https://www.alignmentforum.org/posts/vakhhNHduW9gmENTW/announcing-new-round-of-key-phenomena-in-ai-risk-reading,"Announcing new round of ""Key Phenomena in AI Risk"" Reading Group","['DusanDNesic', 'Nora_Ammann']",2023-10-20T07:11:09Z,alignmentforum,,
20241,https://www.alignmentforum.org/posts/j5foHZhZ7RBhwRL7Z/do-mesa-optimizer-risk-arguments-rely-on-the-train-test,Do mesa-optimizer risk arguments rely on the train-test paradigm?,['Ben Cottier'],2020-09-10T15:36:38Z,alignmentforum,,
20255,https://www.alignmentforum.org/posts/3xotPYdAs7GfT9a9r/pointing-to-a-flower,Pointing to a Flower,['johnswentworth'],2020-05-18T18:54:54Z,alignmentforum,,
20266,https://www.alignmentforum.org/posts/gvm8WvLmazj2NdTPk/alignment-with-argument-networks-and-assessment-predictions,Alignment with argument-networks and assessment-predictions,['Tor Økland Barstad'],2022-12-13T02:17:01Z,alignmentforum,,
20292,https://www.alignmentforum.org/posts/mZy6AMgCw9CPjNCoK/computational-model-causal-diagrams-with-symmetry,Computational Model: Causal Diagrams with Symmetry,['johnswentworth'],2019-08-22T17:54:11Z,alignmentforum,,
20308,https://www.alignmentforum.org/posts/2WuSZo7esdobiW2mr/the-lightcone-theorem-a-better-foundation-for-natural,The Lightcone Theorem: A Better Foundation For Natural Abstraction?,['johnswentworth'],2023-05-15T02:24:02Z,alignmentforum,,
20320,https://www.alignmentforum.org/posts/dfXwJh4X5aAcS8gF5/refining-the-sharp-left-turn-threat-model-part-2-applying,"Refining the Sharp Left Turn threat model, part 2: applying alignment techniques","['Vika', 'Vikrant Varma', 'Ramana Kumar', 'Rohin Shah']",2022-11-25T14:36:09Z,alignmentforum,,
20345,https://www.alignmentforum.org/posts/Dm2mXk94PATehdr9J/alignment-newsletter-43,Alignment Newsletter #43,['Rohin Shah'],2019-01-29T21:10:02Z,alignmentforum,,
20383,https://www.alignmentforum.org/posts/PvbzCuj293D5Hxvu3/fisherian-runaway-as-a-decision-theoretic-problem,Fisherian Runaway as a decision-theoretic problem,['Bunthut'],2021-03-20T16:34:43Z,alignmentforum,,
20398,https://www.alignmentforum.org/posts/bEa4FuLS4r7hExoty/stable-pointers-to-value-iii-recursive-quantilization,Stable Pointers to Value III: Recursive Quantilization,['abramdemski'],2018-07-21T08:06:32Z,alignmentforum,,
20420,https://www.alignmentforum.org/posts/22GrdspteQc8EonMn/a-very-crude-deception-eval-is-already-passed,A very crude deception eval is already passed,['Beth Barnes'],2021-10-29T17:57:29Z,alignmentforum,,
20434,https://www.alignmentforum.org/posts/WmNeCipNwg9CmGy3T/markets-are-universal-for-logical-induction,Markets are Universal for Logical Induction,['johnswentworth'],2019-08-22T06:44:57Z,alignmentforum,,
20450,https://www.alignmentforum.org/posts/iDNEjbdHhjzvLLAmm/should-we-publish-mechanistic-interpretability-research,Should we publish mechanistic interpretability research?,"['Marius Hobbhahn', 'LawrenceC']",2023-04-21T16:19:41Z,alignmentforum,,
20472,https://www.alignmentforum.org/posts/KyM9p6q5SELM3Ncdu/to-what-extent-are-the-scaling-properties-of-transformer-1,To what extent are the scaling properties of Transformer networks exceptional?,['abramdemski'],2020-07-28T20:06:24Z,alignmentforum,,
20481,https://www.alignmentforum.org/posts/BgoKdAzogxmgkuuAt/behavior-cloning-is-miscalibrated,Behavior Cloning is Miscalibrated,['leogao'],2021-12-05T01:36:02Z,alignmentforum,,
20501,https://www.alignmentforum.org/posts/7EnZgaepSBwaZXA5y/counterfactual-planning-in-agi-systems,Counterfactual Planning in AGI Systems,['Koen.Holtman'],2021-02-03T13:54:09Z,alignmentforum,,
20526,https://www.alignmentforum.org/posts/mC2omdN4ekcsNkCmp/asot-simulators-show-us-behavioural-properties-by-default-1,[ASoT] Simulators show us behavioural properties by default,['Jozdien'],2023-01-13T18:42:34Z,alignmentforum,,
20548,https://www.alignmentforum.org/posts/xpGFA7bdoiNDTut8C/take-3-no-indescribable-heavenworlds,Take 3: No indescribable heavenworlds.,['Charlie Steiner'],2022-12-04T02:48:17Z,alignmentforum,,
20559,https://www.alignmentforum.org/posts/nwrkwTd6uKBesYYfx/subagents-of-cartesian-frames,Subagents of Cartesian Frames,['Scott Garrabrant'],2020-11-02T22:02:53Z,alignmentforum,,
20569,https://www.alignmentforum.org/posts/WZXqNYbJhtidjRXSi/what-will-gpt-2030-look-like,What will GPT-2030 look like?,['jsteinhardt'],2023-06-07T23:40:03Z,alignmentforum,,
20592,https://www.alignmentforum.org/posts/LpjjWDBXr88gzcYK2/learning-and-manipulating-learning,Learning and manipulating learning,['Stuart_Armstrong'],2020-05-19T13:02:42Z,alignmentforum,,
20608,https://www.alignmentforum.org/posts/brQYmeX4HFrPbs4XP/an-agent-is-a-worldline-in-tegmark-v,An Agent is a Worldline in Tegmark V,['komponisto'],2018-07-12T05:12:21Z,alignmentforum,,
20617,https://www.alignmentforum.org/posts/FDjTgDcGPc7B98AES/searching-for-search-4,Searching for Search,"['NicholasKees', 'janus']",2022-11-28T15:31:50Z,alignmentforum,,
20637,https://www.alignmentforum.org/posts/QPqztHpToij2nx7ET/hessian-and-basin-volume,Hessian and Basin volume,['Vivek Hebbar'],2022-07-10T06:59:34Z,alignmentforum,,
20651,https://www.alignmentforum.org/posts/wucncPjud27mLWZzQ/intro-to-brain-like-agi-safety-10-the-alignment-problem,[Intro to brain-like-AGI safety] 10. The alignment problem,['Steven Byrnes'],2022-03-30T13:24:33Z,alignmentforum,,
20677,https://www.alignmentforum.org/posts/h7BA7TQTo3dxvYrek/representational-tethers-tying-ai-latents-to-human-ones,Representational Tethers: Tying AI Latents To Human Ones,['Paul Bricman'],2022-09-16T14:45:39Z,alignmentforum,,
20697,https://www.alignmentforum.org/posts/5bd75cc58225bf0670374f9d/vingean-reflection-open-problems,Vingean Reflection: Open Problems,['abramdemski'],2015-07-03T18:44:30Z,alignmentforum,,
20732,https://www.alignmentforum.org/posts/qGjCt4Xq83MBaygPx/a-simple-example-of-conditional-orthogonality-in-finite,A simple example of conditional orthogonality in finite factored sets,['DanielFilan'],2021-07-06T00:36:40Z,alignmentforum,,
20743,https://www.alignmentforum.org/posts/dkjwSLfvKwpaQSuWo/misgeneralization-as-a-misnomer,Misgeneralization as a misnomer,['So8res'],2023-04-06T20:43:33Z,alignmentforum,,
20764,https://www.alignmentforum.org/posts/DLjCSHjwbxzEEa6Hu/the-inescapability-of-knowledge,The inescapability of knowledge,['Alex Flint'],2021-07-11T22:59:15Z,alignmentforum,,
20776,https://www.alignmentforum.org/posts/5bd75cc58225bf06703754d5/delegative-reinforcement-learning-with-a-merely-sane-advisor,Delegative Reinforcement Learning with a Merely Sane Advisor,['Vanessa Kosoy'],2017-10-05T14:15:45Z,alignmentforum,,
20794,https://www.alignmentforum.org/posts/yXiu6DBxKKWXC8Ygx/finding-neurons-in-a-haystack-case-studies-with-sparse,Finding Neurons in a Haystack: Case Studies with Sparse Probing,"['wesg', 'Neel Nanda']",2023-05-03T13:30:31Z,alignmentforum,,
20808,https://www.alignmentforum.org/posts/Eg9FE2iYp3ngySsMD/don-t-influence-the-influencers,Don't Influence the Influencers!,['lhc'],2021-12-19T09:02:19Z,alignmentforum,,
20827,https://www.alignmentforum.org/posts/gBLs3GefMdtWe6iSk/pros-and-cons-of-working-on-near-term-technical-ai-safety,Pros and cons of working on near-term technical AI safety and assurance,['Aryeh Englander'],2021-06-17T20:17:13Z,alignmentforum,,
20851,https://www.alignmentforum.org/posts/Gyggp2DJRMRLSnhid/a-brief-note-on-simplicity-bias-1,A brief note on Simplicity Bias,['Spencer Becker-Kahn'],2022-08-14T02:05:01Z,alignmentforum,,
20863,https://www.alignmentforum.org/posts/CDSXoC54CjbXQNLGr/epistemology-of-hch,Epistemology of HCH,['adamShimi'],2021-02-09T11:46:29Z,alignmentforum,,
20885,https://www.alignmentforum.org/posts/SajYfrsoTHxiXPNtf/boundaries-part-3b-alignment-problems-in-terms-of-boundaries,"«Boundaries», Part 3b: Alignment problems in terms of boundaries",['Andrew_Critch'],2022-12-14T22:34:41Z,alignmentforum,,
20915,https://www.alignmentforum.org/posts/zzXawbXDwCZobwF9D/my-agi-threat-model-misaligned-model-based-rl-agent,My AGI Threat Model: Misaligned Model-Based RL Agent,['Steven Byrnes'],2021-03-25T13:45:11Z,alignmentforum,,
20945,https://www.alignmentforum.org/posts/KSWSkxXJqWGd5jYLB/the-speed-simplicity-prior-is-probably-anti-deceptive,The Speed + Simplicity Prior is probably anti-deceptive,['anonymous'],2022-04-27T19:30:20Z,alignmentforum,,
20960,https://www.alignmentforum.org/posts/zRn6aQyD8uhAN7qCc/sam-altman-planning-for-agi-and-beyond,"Sam Altman: ""Planning for AGI and beyond""",['LawrenceC'],2023-02-24T20:28:00Z,alignmentforum,,
20987,https://www.alignmentforum.org/posts/osxNg6yBCJ4ur9hpi/does-agent-like-behavior-imply-agent-like-architecture,Does Agent-like Behavior Imply Agent-like Architecture?,['Scott Garrabrant'],2019-08-23T02:01:10Z,alignmentforum,,
20996,https://www.alignmentforum.org/posts/usKXS5jGDzjwqv3FJ/refining-the-sharp-left-turn-threat-model-part-1-claims-and,"Refining the Sharp Left Turn threat model, part 1: claims and mechanisms","['Vika', 'Vikrant Varma', 'Ramana Kumar', 'Mary Phuong']",2022-08-12T15:17:38Z,alignmentforum,,
21017,https://www.alignmentforum.org/posts/WxW6Gc6f2z3mzmqKs/debate-on-instrumental-convergence-between-lecun-russell,"Debate on Instrumental Convergence between LeCun, Russell, Bengio, Zador, and More",['Ben Pace'],2019-10-04T04:08:50Z,alignmentforum,,
21048,https://www.alignmentforum.org/posts/h3Fp3Erddwnm5uthZ/disagreement-with-paul-alignment-induction,Disagreement with Paul: alignment induction,['Stuart_Armstrong'],2018-09-10T13:54:10Z,alignmentforum,,
21059,https://www.alignmentforum.org/posts/zswuToWK6zpYSwmCn/some-background-for-reasoning-about-dual-use-alignment,Some background for reasoning about dual-use alignment research,['Charlie Steiner'],2023-05-18T14:50:54Z,alignmentforum,,
21078,https://www.alignmentforum.org/posts/pr3bLc2LtjARfK7nx/world-state-is-the-wrong-abstraction-for-impact,World State is the Wrong Abstraction for Impact,['TurnTrout'],2019-10-01T21:03:40Z,alignmentforum,,
21090,https://www.alignmentforum.org/posts/PrYbdKcj89f8swCkr/infra-topology,Infra-Topology,['Diffractor'],2022-04-22T02:10:40Z,alignmentforum,,
21113,https://www.alignmentforum.org/posts/firtXAWGdvzXYAh9B/paper-transformers-learn-in-context-by-gradient-descent,Paper: Transformers learn in-context by gradient descent,['LawrenceC'],2022-12-16T11:10:17Z,alignmentforum,,
21124,https://www.alignmentforum.org/posts/bCQbSFrnnAk7CJNpM/still-no-lie-detector-for-llms,Still no Lie Detector for LLMs,"['Whispermute', 'ben_levinstein']",2023-07-18T19:56:58Z,alignmentforum,,
21145,https://www.alignmentforum.org/posts/atBQ3NHyqnBadrsGP/latent-adversarial-training,Latent Adversarial Training,['Adam Jermyn'],2022-06-29T20:04:00Z,alignmentforum,,
21154,https://www.alignmentforum.org/posts/rv65vAPqpZGFLcnnD/different-way-classifiers-can-be-diverse,Different way classifiers can be diverse,['Stuart_Armstrong'],2022-01-17T16:30:05Z,alignmentforum,,
21167,https://www.alignmentforum.org/posts/CZHwwDd7t9aYra5HN/dslt-2-why-neural-networks-obey-occam-s-razor,DSLT 2. Why Neural Networks obey Occam's Razor,['Liam Carroll'],2023-06-18T00:23:15Z,alignmentforum,,
21189,https://www.alignmentforum.org/posts/PRwQ6eMaEkTX2uks3/infra-exercises-part-1,"Infra-Exercises, Part 1","['Diffractor', 'Jack Parker', 'Connall Garrod']",2022-09-01T05:06:59Z,alignmentforum,,
21198,https://www.alignmentforum.org/posts/iMM6dvHzco6jBMFMX/value-loading-in-the-human-brain-a-worked-example,Value loading in the human brain: a worked example,['Steven Byrnes'],2021-08-04T17:20:06Z,alignmentforum,,
21214,https://www.alignmentforum.org/posts/pEB3LrNxvMKFLGBSG/traps-of-formalization-in-deconfusion,Traps of Formalization in Deconfusion,['adamShimi'],2021-08-05T22:40:50Z,alignmentforum,,
21236,https://www.alignmentforum.org/posts/AXj9KSvda6XwNwLrS/an-168-four-technical-topics-for-which-open-phil-is,[AN #168]: Four technical topics for which Open Phil is soliciting grant proposals,['Rohin Shah'],2021-10-28T17:20:03Z,alignmentforum,,
21270,https://www.alignmentforum.org/posts/3oNZA9wTrFJRH6Sau/my-thoughts-on-openai-s-alignment-plan,My thoughts on OpenAI's Alignment plan,['Donald Hobson'],2022-12-10T10:35:27Z,alignmentforum,,
21299,https://www.alignmentforum.org/posts/WAqG5BQMzAs34mpc2/towards-deconfusing-values,Towards deconfusing values,['Gordon Seidoh Worley'],2020-01-29T19:28:08Z,alignmentforum,,
21316,https://www.alignmentforum.org/posts/xmzNAoWcYQfMv3j6J/conditions-under-which-misaligned-subagents-can-not-arise-in,Conditions under which misaligned subagents can (not) arise in classifiers,['anon1'],2018-07-11T01:52:58Z,alignmentforum,,
21329,https://www.alignmentforum.org/posts/7EupfLrZ63pbdyb9J/superrational-agents-kelly-bet-influence,Superrational Agents Kelly Bet Influence!,['abramdemski'],2021-04-16T22:08:18Z,alignmentforum,,
21341,https://www.alignmentforum.org/posts/cxQtz3RP4qsqTkEwL/an-121-forecasting-transformative-ai-timelines-using,[AN #121]: Forecasting transformative AI timelines using biological anchors,['Rohin Shah'],2020-10-14T17:20:05Z,alignmentforum,,
21356,https://www.alignmentforum.org/posts/kdwk5aHNjM53PZFKL/faq-advice-for-ai-alignment-researchers,FAQ: Advice for AI Alignment Researchers,['Rohin Shah'],2021-04-26T18:59:53Z,alignmentforum,,
21365,https://www.alignmentforum.org/posts/c27yRmcBxC6txibWW/concepts-of-agency-in-biology-okasha-2023-brief-paper,"""Concepts of Agency in Biology"" (Okasha, 2023) - Brief Paper Summary",['Nora_Ammann'],2023-07-08T18:22:43Z,alignmentforum,,
21386,https://www.alignmentforum.org/posts/C9vj5ZX3KsgFfwXAN/axrp-episode-7-side-effects-with-victoria-krakovna,AXRP Episode 7 - Side Effects with Victoria Krakovna,['DanielFilan'],2021-05-14T03:50:12Z,alignmentforum,,
21411,https://www.alignmentforum.org/posts/JcpwEKbmNHdwhpq5n/problem-relaxation-as-a-tactic,Problem relaxation as a tactic,['TurnTrout'],2020-04-22T23:44:42Z,alignmentforum,,
21429,https://www.alignmentforum.org/posts/87EzRDAHkQJptLthE/but-why-would-the-ai-kill-us,But why would the AI kill us?,['So8res'],2023-04-17T18:42:40Z,alignmentforum,,
21446,https://www.alignmentforum.org/posts/beLgLr6edbZw4koh2/an-143-how-to-make-embedded-agents-that-reason,[AN #143]: How to make embedded agents that reason probabilistically about their environments,['Rohin Shah'],2021-03-24T17:20:05Z,alignmentforum,,
21470,https://www.alignmentforum.org/posts/3JfrwRNgSqH9fqsQT/alignment-newsletter-28,Alignment Newsletter #28,['Rohin Shah'],2018-10-15T21:20:12Z,alignmentforum,,
21485,https://www.alignmentforum.org/posts/GdkixRevWpEanYgou/catastrophic-regressional-goodhart-appendix,Catastrophic Regressional Goodhart: Appendix,"['Thomas Kwa', 'Drake Thomas']",2023-05-15T00:10:31Z,alignmentforum,,
21502,https://www.alignmentforum.org/posts/nRu92PXLrdwqdtQmn/more-recent-progress-in-the-theory-of-neural-networks-1,More Recent Progress in the Theory of Neural Networks,['jylin04'],2022-10-06T16:57:10Z,alignmentforum,,
21520,https://www.alignmentforum.org/posts/Gg9a4y8reWKtLe3Tn/the-rocket-alignment-problem,The Rocket Alignment Problem,['Eliezer Yudkowsky'],2018-10-04T00:38:59Z,alignmentforum,,
21538,https://www.alignmentforum.org/posts/NTwA3J99RPkgmp6jh/an-62-are-adversarial-examples-caused-by-real-but,[AN #62] Are adversarial examples caused by real but imperceptible features?,['Rohin Shah'],2019-08-22T17:10:02Z,alignmentforum,,
21571,https://www.alignmentforum.org/posts/rPWQzRRQbjtgYn7rE/concept-extrapolation-key-posts,Concept extrapolation: key posts,['Stuart_Armstrong'],2022-04-19T10:01:25Z,alignmentforum,,
21581,https://www.alignmentforum.org/posts/FNyqL7mxSkgLpck4w/traversing-a-cognition-space,Traversing a Cognition Space,['Rafael Harth'],2020-12-07T18:32:21Z,alignmentforum,,
21597,https://www.alignmentforum.org/posts/bG8u3HDZ5AQDJhtTk/contra-common-knowledge,Contra Common Knowledge,['abramdemski'],2023-01-04T22:50:38Z,alignmentforum,,
21617,https://www.alignmentforum.org/posts/brXr7PJ2W4Na2EW2q/the-commitment-races-problem,The Commitment Races problem,['Daniel Kokotajlo'],2019-08-23T01:58:20Z,alignmentforum,,
21638,https://www.alignmentforum.org/posts/hFaXe4Mi64xkE6Kqp/attributing-to-interactions-with-gcpd-and-gwpd,Attributing to interactions with GCPD and GWPD,['jenny'],2023-10-11T15:06:17Z,alignmentforum,,
21654,https://www.alignmentforum.org/posts/QBgFnBMJpmGkE5ioc/we-have-achieved-noob-gains-in-ai,We have achieved Noob Gains in AI,['phdead'],2022-05-18T20:56:49Z,alignmentforum,,
21673,https://www.alignmentforum.org/posts/6jkGf5WEKMpMFXZp2/what-failure-looks-like-distilling-the-discussion,What Failure Looks Like: Distilling the Discussion,['Ben Pace'],2020-07-29T21:49:17Z,alignmentforum,,
21692,https://www.alignmentforum.org/posts/A7RgYuYH4HywNeYWD/mode-collapse-in-rl-may-be-fueled-by-the-update-equation,Mode collapse in RL may be fueled by the update equation,"['TurnTrout', 'MichaelEinhorn']",2023-06-19T21:51:04Z,alignmentforum,,
21706,https://www.alignmentforum.org/posts/xJyY5QkQvNJpZLJRo/radical-probabilism-1,Radical Probabilism,['abramdemski'],2020-08-18T21:14:20Z,alignmentforum,,
21740,https://www.alignmentforum.org/posts/ey7jACdF4j6GrQLrG/thoughts-on-safety-in-predictive-learning,Thoughts on safety in predictive learning,['Steven Byrnes'],2021-06-30T19:17:40Z,alignmentforum,,
21762,https://www.alignmentforum.org/posts/htrZrxduciZ5QaCjw/language-models-seem-to-be-much-better-than-humans-at-next,Language models seem to be much better than humans at next-token prediction,"['Buck', 'Fabien Roger', 'LawrenceC']",2022-08-11T17:45:41Z,alignmentforum,,
21771,https://www.alignmentforum.org/posts/dfRtxWcFDupfWpLQo/perform-tractable-research-while-avoiding-capabilities,Perform Tractable Research While Avoiding Capabilities Externalities [Pragmatic AI Safety #4],"['Dan H', 'ThomasW']",2022-05-30T20:25:05Z,alignmentforum,,
21807,https://www.alignmentforum.org/posts/DfcywmqRSkBaCB6Ma/intuitions-about-goal-directed-behavior,Intuitions about goal-directed behavior,['Rohin Shah'],2018-12-01T04:25:47Z,alignmentforum,,
21828,https://www.alignmentforum.org/posts/xN8MTRN7GchFJB8WJ/constraints-from-naturalized-ethics,Constraints from naturalized ethics.,['Charlie Steiner'],2020-07-25T14:54:52Z,alignmentforum,,
21843,https://www.alignmentforum.org/posts/ZqSESJYcA8m2Hh7Qm/immobile-ai-makes-a-move-anti-wireheading-ontology-change,"Immobile AI makes a move: anti-wireheading, ontology change, and model splintering",['Stuart_Armstrong'],2021-09-17T15:24:02Z,alignmentforum,,
21864,https://www.alignmentforum.org/posts/BinkknLBYxskMXuME/if-interpretability-research-goes-well-it-may-get-dangerous,"If interpretability research goes well, it may get dangerous",['So8res'],2023-04-03T21:48:19Z,alignmentforum,,
21877,https://www.alignmentforum.org/posts/BtffzD5yNB4CzSTJe/genetic-fitness-is-a-measure-of-selection-strength-not-the,"Genetic fitness is a measure of selection strength, not the selection target",['Kaj_Sotala'],2023-11-04T19:02:14Z,alignmentforum,,
21898,https://www.alignmentforum.org/posts/QvtHSsZLFCAHmzes7/a-naive-alignment-strategy-and-optimism-about-generalization,A naive alignment strategy and optimism about generalization,['paulfchristiano'],2021-06-10T00:10:02Z,alignmentforum,,
21913,https://www.alignmentforum.org/posts/2zEeb36XL6HLnjDkj/procedurally-evaluating-factual-accuracy-a-request-for,Procedurally evaluating factual accuracy: a request for research,['Jacob_Hilton'],2022-03-30T16:37:38Z,alignmentforum,,
21937,https://www.alignmentforum.org/posts/Ee29dFnPhaeRmYdMy/example-population-ethics-ordered-discounted-utility,Example population ethics: ordered discounted utility,['Stuart_Armstrong'],2019-03-11T16:10:43Z,alignmentforum,,
21946,https://www.alignmentforum.org/posts/CnruhwFGQBThvgJiX/formal-solution-to-the-inner-alignment-problem,Formal Solution to the Inner Alignment Problem,['michaelcohen'],2021-02-18T14:51:14Z,alignmentforum,,
21961,https://www.alignmentforum.org/posts/wPLeBqsLgJyFyuTr7/a-distillation-of-evan-hubinger-s-training-stories-for-seri,A distillation of Evan Hubinger's training stories (for SERI MATS),['Daphne_W'],2022-07-18T03:38:39Z,alignmentforum,,
21994,https://www.alignmentforum.org/posts/5bd75cc58225bf06703750cf/stabilizing-logical-counterfactuals-by-pseudorandomization,Stabilizing logical counterfactuals by pseudorandomization,['Vanessa Kosoy'],2016-05-25T12:05:07Z,alignmentforum,,
22012,https://www.alignmentforum.org/posts/9mscdgJ7ao3vbbrjs/an-70-agents-that-help-humans-who-are-still-learning-about,[AN #70]: Agents that help humans who are still learning about their own preferences,['Rohin Shah'],2019-10-23T17:10:02Z,alignmentforum,,
22063,https://www.alignmentforum.org/posts/c2oM7qytRByv6ZFtz/impact-measure-desiderata,Impact Measure Desiderata,['TurnTrout'],2018-09-02T22:21:19Z,alignmentforum,,
22081,https://www.alignmentforum.org/posts/tnEQMnpyBFK5QBRz3/full-time-agi-safety,Full-time AGI Safety!,['Steven Byrnes'],2021-03-01T12:42:15Z,alignmentforum,,
22090,https://www.alignmentforum.org/posts/EAqHkKtbefvyRs4nw/counterfactual-induction,Counterfactual Induction,['Diffractor'],2019-12-17T05:03:32Z,alignmentforum,,
22103,https://www.alignmentforum.org/posts/rmfjo4Wmtgq8qa2B7/think-carefully-before-calling-rl-policies-agents,"Think carefully before calling RL policies ""agents""",['TurnTrout'],2023-06-02T03:46:07Z,alignmentforum,,
22118,https://www.alignmentforum.org/posts/ehLX2RdbD5ZkeJyuJ/optimization-regularization-through-time-penalty,Optimization Regularization through Time Penalty,['Linda Linsefors'],2019-01-01T13:05:33Z,alignmentforum,,
22136,https://www.alignmentforum.org/posts/eE4QrWsdYQxNynbTM/an-104-the-perils-of-inaccessible-information-and-what-we,"[AN #104]: The perils of inaccessible information, and what 
we can learn about AI alignment from COVID",['Rohin Shah'],2020-06-18T17:10:03Z,alignmentforum,, 22163,https://www.alignmentforum.org/posts/5bd75cc58225bf0670374eca/non-manipulative-oracles,Non-manipulative oracles,['Stuart_Armstrong'],2015-02-06T17:05:42Z,alignmentforum,, 22174,https://www.alignmentforum.org/posts/ZBYE2F5DBiZtj6m95/is-causality-in-the-map-or-the-territory,Is Causality in the Map or the Territory?,['johnswentworth'],2019-12-17T23:19:24Z,alignmentforum,, 22184,https://www.alignmentforum.org/posts/XMGWdfTC7XjgTz3X7/a-correspondence-theorem-in-the-maximum-entropy-framework,A Correspondence Theorem in the Maximum Entropy Framework,['johnswentworth'],2020-11-11T22:46:39Z,alignmentforum,, 22199,https://www.alignmentforum.org/posts/i3BTagvt3HbPMx6PN/embedded-agency-full-text-version,Embedded Agency (full-text version),"['Scott Garrabrant', 'abramdemski']",2018-11-15T19:49:29Z,alignmentforum,, 22234,https://www.alignmentforum.org/posts/eoHbneGvqDu25Hasc/rl-with-kl-penalties-is-better-seen-as-bayesian-inference,RL with KL penalties is better seen as Bayesian inference,"['Tomek Korbak', 'Ethan Perez']",2022-05-25T09:23:33Z,alignmentforum,, 22253,https://www.alignmentforum.org/posts/gLyRQCg6kp5cqTQTm/collective-identity,Collective Identity,"['NicholasKees', 'ukc10014', 'Garrett Baker']",2023-05-18T09:00:24Z,alignmentforum,, 22269,https://www.alignmentforum.org/posts/5bd75cc58225bf0670375410/cirl-wireheading,CIRL Wireheading,['tom4everitt'],2017-08-08T06:33:57Z,alignmentforum,, 22281,https://www.alignmentforum.org/posts/JFibrXBewkSDmixuo/hypothesis-gradient-descent-prefers-general-circuits,Hypothesis: gradient descent prefers general circuits,['Quintin Pope'],2022-02-08T21:12:38Z,alignmentforum,, 22299,https://www.alignmentforum.org/posts/Dr3owdPqEAFK4pq8S/considerations-on-interaction-between-ai-and-expected-value,Considerations on interaction between AI and expected value of the future,['Beth Barnes'],2021-12-07T02:46:19Z,alignmentforum,, 22333,https://www.alignmentforum.org/posts/kMJxwCZ4mc9w4ezbs/how-an-alien-theory-of-mind-might-be-unlearnable,How an alien theory of mind might be unlearnable,['Stuart_Armstrong'],2022-01-03T11:16:21Z,alignmentforum,, 22347,https://www.alignmentforum.org/posts/XArPqdkwCtEekgYxv/problems-involving-abstraction,Problems Involving Abstraction?,['johnswentworth'],2020-10-20T16:49:40Z,alignmentforum,, 22368,https://www.alignmentforum.org/posts/cy3BhHrGinZCp3LXE/testing-the-natural-abstraction-hypothesis-project-intro,Testing The Natural Abstraction Hypothesis: Project Intro,['johnswentworth'],2021-04-06T21:24:43Z,alignmentforum,, 22379,https://www.alignmentforum.org/posts/Gzw6FwPD9FeL4GTWC/usd1000-usd-prize-circular-dependency-of-counterfactuals,$1000 USD prize - Circular Dependency of Counterfactuals,['Chris_Leong'],2022-01-01T09:43:26Z,alignmentforum,, 22394,https://www.alignmentforum.org/posts/d52aS7jNcmi6miGbw/take-1-we-re-not-going-to-reverse-engineer-the-ai,Take 1: We're not going to reverse-engineer the AI.,['Charlie Steiner'],2022-12-01T22:41:33Z,alignmentforum,, 22415,https://www.alignmentforum.org/posts/PP2Lrpvhd3bBvR8Aj/smoke-without-fire-is-scary,Smoke without fire is scary,['Adam Jermyn'],2022-10-04T21:08:33Z,alignmentforum,, 22442,https://www.alignmentforum.org/posts/gJ76SLJAaKZrFCRTj/definitions-of-causal-abstraction-reviewing-beckers-and,Definitions of Causal Abstraction: Reviewing Beckers & Halpern,['johnswentworth'],2020-01-07T00:03:43Z,alignmentforum,, 
22457,https://www.alignmentforum.org/posts/XXmeWeY8ZzaLgmvAK/alignment-newsletter-20,Alignment Newsletter #20,['Rohin Shah'],2018-08-20T16:00:05Z,alignmentforum,, 22489,https://www.alignmentforum.org/posts/jrewt3rLFiKWrKuyZ/big-picture-of-phasic-dopamine,Big picture of phasic dopamine,['Steven Byrnes'],2021-06-08T13:07:43Z,alignmentforum,, 22516,https://www.alignmentforum.org/posts/3fkBWpE4f9nYbdf7E/multi-agent-predictive-minds-and-ai-alignment,Multi-agent predictive minds and AI alignment,['Jan_Kulveit'],2018-12-12T23:48:03Z,alignmentforum,, 22544,https://www.alignmentforum.org/posts/u4BaLRK6mJJcvycEk/deliberation-reactions-and-control-tentative-definitions-and,"Deliberation, Reactions, and Control: Tentative Definitions and a Restatement of Instrumental Convergence",['Oliver Sourbut'],2022-06-27T17:25:46Z,alignmentforum,, 22566,https://www.alignmentforum.org/posts/rnkiczuRGHdgfyth3/brain-over-body-biases-and-the-embodied-value-problem-in-ai,"Brain-over-body biases, and the embodied value problem in AI alignment",['geoffreymiller'],2022-09-24T22:24:40Z,alignmentforum,, 22592,https://www.alignmentforum.org/posts/uEo4Xhp7ziTKhR6jq/reflections-on-larks-2020-ai-alignment-literature-review,Reflections on Larks’ 2020 AI alignment literature review,['Alex Flint'],2021-01-01T22:53:36Z,alignmentforum,, 22615,https://www.alignmentforum.org/posts/QLosiQsPJepZWtXG4/knowledge-is-not-just-mutual-information,Knowledge is not just mutual information,['Alex Flint'],2021-06-10T01:01:32Z,alignmentforum,, 22627,https://www.alignmentforum.org/posts/tAkbnojHdjqeixBiR/brute-forcing-the-universe-a-non-standard-shot-at-diamond-1,Brute-forcing the universe: a non-standard shot at diamond alignment,['Martín Soto'],2022-11-22T22:36:37Z,alignmentforum,, 22650,https://www.alignmentforum.org/posts/EnN7cm3KaRrEAuWfa/comment-on-coherence-arguments-do-not-imply-goal-directed,Comment on Coherence arguments do not imply goal directed behavior,['Ronny Fernandez'],2019-12-06T09:30:26Z,alignmentforum,, 22667,https://www.alignmentforum.org/posts/2sTTEkzvscWCPBQAk/gradient-filtering,Gradient Filtering,"['Jozdien', 'janus']",2023-01-18T20:09:21Z,alignmentforum,, 22683,https://www.alignmentforum.org/posts/EjgfreeibTXRx9Ham/ten-levels-of-ai-alignment-difficulty,Ten Levels of AI Alignment Difficulty,['Sammy Martin'],2023-07-03T20:20:21Z,alignmentforum,, 22710,https://www.alignmentforum.org/posts/NcA3dMJoWWEN4BQet/logical-counterfactuals-and-the-cooperation-game,Logical Counterfactuals & the Cooperation Game,['Chris_Leong'],2018-08-14T14:00:34Z,alignmentforum,, 22719,https://www.alignmentforum.org/posts/zuHezdoGr2KtM2n43/new-year-new-research-agenda-post,"New year, new research agenda post",['Charlie Steiner'],2022-01-12T17:58:16Z,alignmentforum,, 22742,https://www.alignmentforum.org/posts/Yc5QSSZCQ9qdyxZF6/the-more-power-at-stake-the-stronger-instrumental,"The More Power At Stake, The Stronger Instrumental Convergence Gets For Optimal Policies",['TurnTrout'],2021-07-11T17:36:24Z,alignmentforum,, 22755,https://www.alignmentforum.org/posts/nvLNjY7aoh2i7JxbB/extended-picture-theory-or-models-inside-models-inside,Extended Picture Theory or Models inside Models inside Models,['Chris_Leong'],2021-03-10T13:24:41Z,alignmentforum,, 22767,https://www.alignmentforum.org/posts/ZYDkHWjShKazTywbg/book-review-the-alignment-problem-by-brian-christian,"[Book Review] ""The Alignment Problem"" by Brian Christian",['lsusr'],2021-09-20T06:36:23Z,alignmentforum,, 
22801,https://www.alignmentforum.org/posts/AXpXG9oTiucidnqPK/take-13-rlhf-bad-conditioning-good,"Take 13: RLHF bad, conditioning good.",['Charlie Steiner'],2022-12-22T10:44:06Z,alignmentforum,, 22817,https://www.alignmentforum.org/posts/xKbtjfQ2y4anW3fZQ/alignment-newsletter-17,Alignment Newsletter #17,['Rohin Shah'],2018-07-30T16:10:02Z,alignmentforum,, 22847,https://www.alignmentforum.org/posts/hvz9qjWyv8cLX9JJR/evolution-provides-no-evidence-for-the-sharp-left-turn,Evolution provides no evidence for the sharp left turn,['Quintin Pope'],2023-04-11T18:43:08Z,alignmentforum,, 22871,https://www.alignmentforum.org/posts/AKaf8zN2neXQEvLit/role-architectures-applying-llms-to-consequential-tasks,Role Architectures: Applying LLMs to consequential tasks,['Eric Drexler'],2023-03-30T15:00:29Z,alignmentforum,, 22892,https://www.alignmentforum.org/posts/q3xFWK3qcR7JGTxsv/ai-benefits-post-3-direct-and-indirect-approaches-to-ai,AI Benefits Post 3: Direct and Indirect Approaches to AI Benefits,['Cullen'],2020-07-06T18:48:02Z,alignmentforum,, 22908,https://www.alignmentforum.org/posts/QGaioedKBJE39YJeD/continuous-adversarial-quality-assurance-extending-rlhf-and,Continuous Adversarial Quality Assurance: Extending RLHF and Constitutional AI,['Benaya Koren'],2023-07-08T17:32:53Z,alignmentforum,, 22940,https://www.alignmentforum.org/posts/dh8WsHfzmQJ5L7bd8/preferences-and-biases-the-information-argument,"Preferences and biases, the information argument",['Stuart_Armstrong'],2021-03-23T12:44:47Z,alignmentforum,, 22949,https://www.alignmentforum.org/posts/n2urKnXbevj2ryvGY/agency-as-a-natural-abstraction,Agency As a Natural Abstraction,['Thane Ruthenis'],2022-05-13T18:02:50Z,alignmentforum,, 22970,https://www.alignmentforum.org/posts/o7MXZgx3SGpqSxHYZ/no-i-won-t-go-there-it-feels-like-you-re-trying-to-pascal,"No, I won't go there, it feels like you're trying to Pascal-mug me",['Rupert'],2018-07-11T01:37:13Z,alignmentforum,, 22983,https://www.alignmentforum.org/posts/HKZqH4QtoDcGCfcby/corrigibility-via-thought-process-deference-1,Corrigibility Via Thought-Process Deference,['Thane Ruthenis'],2022-11-24T17:06:39Z,alignmentforum,, 23008,https://www.alignmentforum.org/posts/dktT3BiinsBZLw96h/linkpost-a-general-language-assistant-as-a-laboratory-for,[Linkpost] A General Language Assistant as a Laboratory for Alignment,['Quintin Pope'],2021-12-03T19:42:39Z,alignmentforum,, 23030,https://www.alignmentforum.org/posts/gf9hhmSvpZfyfS34B/ngo-s-view-on-alignment-difficulty,Ngo's view on alignment difficulty,"['Richard_Ngo', 'Eliezer Yudkowsky']",2021-12-14T21:34:51Z,alignmentforum,, 23054,https://www.alignmentforum.org/posts/fJBTRa7m7KnCDdzG5/a-stylized-dialogue-on-john-wentworth-s-claims-about-markets,A stylized dialogue on John Wentworth's claims about markets and optimization,['So8res'],2023-03-25T22:32:53Z,alignmentforum,, 23069,https://www.alignmentforum.org/posts/Lz2nCYnBeaZyS68Xb/probability-as-minimal-map,Probability as Minimal Map,['johnswentworth'],2019-09-01T19:19:57Z,alignmentforum,, 23085,https://www.alignmentforum.org/posts/9et86yPRk6RinJNt3/an-95-a-framework-for-thinking-about-how-to-make-ai-go-well,[AN #95]: A framework for thinking about how to make AI go well,['Rohin Shah'],2020-04-15T17:10:03Z,alignmentforum,, 23107,https://www.alignmentforum.org/posts/MBrhMSZno6qbfGQdZ/comparing-reward-learning-reward-tampering-formalisms,Comparing reward learning/reward tampering formalisms,['Stuart_Armstrong'],2020-05-21T12:03:55Z,alignmentforum,, 
23124,https://www.alignmentforum.org/posts/4Qd2pDWeFPgYZfkSg/values-form-a-shifting-landscape-and-why-you-might-care,Values Form a Shifting Landscape (and why you might care),['VojtaKovarik'],2020-12-05T23:56:58Z,alignmentforum,, 23139,https://www.alignmentforum.org/posts/yGaw4NqRha8hgx5ny/the-case-for-becoming-a-black-box-investigator-of-language,The case for becoming a black-box investigator of language models,['Buck'],2022-05-06T14:35:25Z,alignmentforum,, 23157,https://www.alignmentforum.org/posts/buaGz3aiqCotzjKie/game-theoretic-alignment-in-terms-of-attainable-utility,Game-theoretic Alignment in terms of Attainable Utility,"['midco', 'TurnTrout']",2021-06-08T12:36:07Z,alignmentforum,, 23172,https://www.alignmentforum.org/posts/BJAcnMBHGua3tFKu5/axrp-episode-2-learning-human-biases-with-rohin-shah,AXRP Episode 2 - Learning Human Biases with Rohin Shah,['DanielFilan'],2020-12-29T20:43:28Z,alignmentforum,, 23187,https://www.alignmentforum.org/posts/cfvBm2kBtFTgxBB7s/predictive-coding-rl-sl-bayes-mpc,Predictive coding = RL + SL + Bayes + MPC,['Steven Byrnes'],2019-12-10T11:45:56Z,alignmentforum,, 23204,https://www.alignmentforum.org/posts/rmBS5nTJh6pxERWEu/short-summary-of-mairy-s-room,Short summary of mAIry's room,['Stuart_Armstrong'],2021-01-18T18:11:36Z,alignmentforum,, 23220,https://www.alignmentforum.org/posts/bxkWd6WdkPqGmdHEk/path-dependence-in-ml-inductive-biases,Path dependence in ML inductive biases,"['Vivek Hebbar', 'evhub']",2022-09-10T01:38:23Z,alignmentforum,, 23252,https://www.alignmentforum.org/posts/BS7Syu2buhLYRjkwY/algorithmic-similarity,Algorithmic Similarity,['LukasM'],2019-08-23T16:39:48Z,alignmentforum,, 23267,https://www.alignmentforum.org/posts/JZrN4ckaCfd6J37cG/how-i-formed-my-own-views-about-ai-safety,How I Formed My Own Views About AI Safety,['Neel Nanda'],2022-02-27T18:50:17Z,alignmentforum,, 23289,https://www.alignmentforum.org/posts/5bd75cc58225bf0670375417/acausal-trade-full-decision-algorithms,Acausal trade: full decision algorithms,['Stuart_Armstrong'],2017-05-15T10:12:44Z,alignmentforum,, 23312,https://www.alignmentforum.org/posts/fx8Mdorwmt696Ramm/databases-of-human-behaviour-and-preferences,Databases of human behaviour and preferences?,['Stuart_Armstrong'],2020-04-21T18:06:52Z,alignmentforum,, 23321,https://www.alignmentforum.org/posts/tepqESMuRmyhtmDS7/forecasting-progress-in-language-models,Forecasting progress in language models,"['Matthew Barnett', 'Metaculus']",2021-10-28T20:41:00Z,alignmentforum,, 23336,https://www.alignmentforum.org/posts/iJDmL7HJtN5CYKReM/empirical-observations-of-objective-robustness-failures,Empirical Observations of Objective Robustness Failures,"['jbkjr', 'Lauro Langosco']",2021-06-23T23:23:28Z,alignmentforum,, 23357,https://www.alignmentforum.org/posts/3gAKoaziTXmvHusRv/some-hacky-elk-ideas,Some Hacky ELK Ideas,['johnswentworth'],2022-02-15T02:27:39Z,alignmentforum,, 23378,https://www.alignmentforum.org/posts/5bd75cc58225bf0670375533/an-untrollable-mathematician,An Untrollable Mathematician,['abramdemski'],2018-01-23T18:46:17Z,alignmentforum,, 23389,https://www.alignmentforum.org/posts/t9svvNPNmFf5Qa3TA/mysteries-of-mode-collapse,Mysteries of mode collapse,['janus'],2022-11-08T10:37:58Z,alignmentforum,, 23409,https://www.alignmentforum.org/posts/ebdf8GZxt3L9grwwN/deliberation-as-a-method-to-find-the-actual-preferences-of,"Deliberation as a method to find the ""actual preferences"" of humans",['riceissa'],2019-10-22T09:23:31Z,alignmentforum,, 
23428,https://www.alignmentforum.org/posts/Fq8ybxtcFvKEsWmF8/ai-takeoff-story-a-continuation-of-progress-by-other-means,AI takeoff story: a continuation of progress by other means,['Edouard Harris'],2021-09-27T15:55:44Z,alignmentforum,, 23452,https://www.alignmentforum.org/posts/pfmFe5fgEn2weJuer/go-west-young-man-preferences-in-imperfect-maps,"""Go west, young man!"" - Preferences in (imperfect) maps",['Stuart_Armstrong'],2020-07-31T07:51:00Z,alignmentforum,, 23464,https://www.alignmentforum.org/posts/adKSWktLbxfihDANM/against-the-backward-approach-to-goal-directedness,Against the Backward Approach to Goal-Directedness,['adamShimi'],2021-01-19T18:46:20Z,alignmentforum,, 23474,https://www.alignmentforum.org/posts/5bd75cc58225bf06703750b1/another-view-of-quantilizers-avoiding-goodhart-s-law,Another view of quantilizers: avoiding Goodhart's Law,['jessicata'],2016-01-09T04:02:26Z,alignmentforum,, 23484,https://www.alignmentforum.org/posts/2vxoTfuScspraSJeC/encultured-ai-pre-planning-part-2-providing-a-service,"Encultured AI Pre-planning, Part 2: Providing a Service","['Andrew_Critch', 'Nick Hay']",2022-08-11T20:11:25Z,alignmentforum,, 23496,https://www.alignmentforum.org/posts/zAvhvGa6ToieNGuy2/communication-prior-as-alignment-strategy,Communication Prior as Alignment Strategy,['johnswentworth'],2020-11-12T22:06:15Z,alignmentforum,, 23514,https://www.alignmentforum.org/posts/BnDF5kejzQLqd5cjH/alignment-as-a-bottleneck-to-usefulness-of-gpt-3,Alignment As A Bottleneck To Usefulness Of GPT-3,['johnswentworth'],2020-07-21T20:02:36Z,alignmentforum,, 23539,https://www.alignmentforum.org/posts/yYkrbS5iAwdEQyynW/how-do-new-models-from-openai-deepmind-and-anthropic-perform,"How do new models from OpenAI, DeepMind and Anthropic perform on TruthfulQA?",['Owain_Evans'],2022-02-26T12:46:04Z,alignmentforum,, 23560,https://www.alignmentforum.org/posts/mojJ6Hpri8rfzY78b/fixed-point-exercises,Fixed Point Exercises,['Scott Garrabrant'],2018-11-17T01:39:50Z,alignmentforum,,
23570,https://www.alignmentforum.org/posts/3ydumADYt9xkaKRTF/conditioning-predictive-models-interactions-with-other,Conditioning Predictive Models: Interactions with other approaches,"['evhub', 'Adam Jermyn', 'Johannes Treutlein', 'Rubi J. Hudson', 'kcwoolverton']",2023-02-08T18:19:23Z,alignmentforum,, 23601,https://www.alignmentforum.org/posts/8hf5hNksjn78CouKR/language-agents-reduce-the-risk-of-existential-catastrophe,Language Agents Reduce the Risk of Existential Catastrophe,"['cdkg', 'Simon Goldstein']",2023-05-28T19:10:18Z,alignmentforum,, 23626,https://www.alignmentforum.org/posts/opE6L8jBTTNAyaDbB/a-multi-disciplinary-view-on-ai-safety-research,A multi-disciplinary view on AI safety research,['Roman Leventov'],2023-02-08T16:50:32Z,alignmentforum,, 23657,https://www.alignmentforum.org/posts/7bNXqdDPYpnfCNQhA/an-89-a-unifying-formalism-for-preference-learning,[AN #89]: A unifying formalism for preference learning algorithms,['Rohin Shah'],2020-03-04T18:20:01Z,alignmentforum,, 23679,https://www.alignmentforum.org/posts/5bd75cc58225bf0670374e92/stable-self-improvement-as-a-research-problem,Stable self-improvement as a research problem,['paulfchristiano'],2014-11-17T17:51:02Z,alignmentforum,, 23707,https://www.alignmentforum.org/posts/ZYGjDpGQaHvg8HLfw/ai-alignment-writing-day-roundup-1,AI Alignment Writing Day Roundup #1,['Ben Pace'],2019-08-30T01:26:05Z,alignmentforum,, 23734,https://www.alignmentforum.org/posts/5bd75cc58225bf067037550c/policy-selection-solves-most-problems,Policy Selection Solves Most Problems,['abramdemski'],2017-12-01T00:35:47Z,alignmentforum,, 23750,https://www.alignmentforum.org/posts/qpZTWb2wvgSt5WQ4H/defining-myopia,Defining Myopia,['abramdemski'],2019-10-19T21:32:49Z,alignmentforum,, 23771,https://www.alignmentforum.org/posts/z3S2xnoDYfohrQQoe/controllables-and-observables-revisited,"Controllables and Observables, Revisited",['Scott Garrabrant'],2020-10-29T16:38:15Z,alignmentforum,, 23787,https://www.alignmentforum.org/posts/JNqXyEuKM4wbFZzpL/actionable-guidance-and-roadmap-recommendations-for-the-nist-1,Actionable-guidance and roadmap recommendations for the NIST AI Risk Management Framework,"['Dan H', 'Tony Barrett']",2022-05-17T15:26:23Z,alignmentforum,, 23808,https://www.alignmentforum.org/posts/xhKr5KtvdJRssMeJ3/anthropic-s-core-views-on-ai-safety,Anthropic's Core Views on AI Safety,['Zac Hatfield-Dodds'],2023-03-09T16:55:15Z,alignmentforum,, 23832,https://www.alignmentforum.org/posts/3eP8D5Sxih3NhPE6F/usd20k-in-prizes-ai-safety-arguments-competition,[$20K in Prizes] AI Safety Arguments Competition,"['Dan H', 'Kevin Liu', 'ozhang', 'ThomasW', 'Sidney Hough']",2022-04-26T16:13:16Z,alignmentforum,, 23843,https://www.alignmentforum.org/posts/oheKfWA7SsvpK7SGp/probability-is-real-and-value-is-complex,"Probability is Real, and Value is Complex",['abramdemski'],2018-07-20T05:24:50Z,alignmentforum,, 23859,https://www.alignmentforum.org/posts/H7v5yyXAmmgu9DJmi/eliciting-latent-knowledge-via-hypothetical-sensors,Eliciting Latent Knowledge Via Hypothetical Sensors,['John_Maxwell'],2021-12-30T15:53:30Z,alignmentforum,, 23884,https://www.alignmentforum.org/posts/3SG4WbNPoP8fsuZgs/agency-in-conway-s-game-of-life,Agency in Conway’s Game of Life,['Alex Flint'],2021-05-13T01:07:19Z,alignmentforum,, 23893,https://www.alignmentforum.org/posts/k4AQqboXz8iE5TNXK/a-shot-at-the-diamond-alignment-problem,A shot at the diamond-alignment problem,['TurnTrout'],2022-10-06T18:29:11Z,alignmentforum,, 23923,https://www.alignmentforum.org/posts/GLrnyH4ChFhMqsy4v/powerful-mesa-optimisation-is-already-here,Powerful mesa-optimisation is already here,['Roman Leventov'],2023-02-17T05:00:00Z,alignmentforum,,
23945,https://www.alignmentforum.org/posts/cZqPGDxbJcbShGwDn/an-86-improving-debate-and-factored-cognition-through-human,[AN #86]: Improving debate and factored cognition through human experiments,['Rohin Shah'],2020-02-12T18:10:02Z,alignmentforum,, 23980,https://www.alignmentforum.org/posts/sDKi2pQ3fnTSbR7H8/trying-to-isolate-objectives-approaches-toward-high-level,Trying to isolate objectives: approaches toward high-level interpretability,['Jozdien'],2023-01-09T18:33:19Z,alignmentforum,, 23998,https://www.alignmentforum.org/posts/bevquxoYwkMx3NK6L/an-115-ai-safety-research-problems-in-the-ai-ga-framework,[AN #115]: AI safety research problems in the AI-GA framework,['Rohin Shah'],2020-09-02T17:10:04Z,alignmentforum,, 24025,https://www.alignmentforum.org/posts/Z5XXdQDxhpgiXASQW/ai-benefits-post-2-how-ai-benefits-differs-from-ai-alignment,AI Benefits Post 2: How AI Benefits Differs from AI Alignment & AI for Good,['Cullen'],2020-06-29T17:00:42Z,alignmentforum,, 24042,https://www.alignmentforum.org/posts/k3J3sYgmjMmpkzbbc/why-unriggable-almost-implies-uninfluenceable,Why unriggable *almost* implies uninfluenceable,['Stuart_Armstrong'],2021-04-09T17:07:07Z,alignmentforum,, 24055,https://www.alignmentforum.org/posts/bBicgqvwjPbaQrJJA/dirty-concepts-in-ai-alignment-discourses-and-some-guesses,"“Dirty concepts” in AI alignment discourses, and some guesses for how to deal with them","['Nora_Ammann', 'peckzy']",2023-08-20T09:13:34Z,alignmentforum,, 24064,https://www.alignmentforum.org/posts/9Fdd9N7Escg3tcymb/preventing-language-models-from-hiding-their-reasoning,Preventing Language Models from hiding their reasoning,"['Fabien Roger', 'ryan_greenblatt']",2023-10-31T14:34:05Z,alignmentforum,, 24080,https://www.alignmentforum.org/posts/iZS3am4acMh8g4Ycb/book-review-architects-of-intelligence-by-martin-ford-2018,Book review: Architects of Intelligence by Martin Ford (2018),['Ofer'],2020-08-11T17:30:21Z,alignmentforum,, 24089,https://www.alignmentforum.org/posts/hGE3Pcc7qmK75bjhc/introducing-the-fund-for-alignment-research-we-re-hiring,Introducing the Fund for Alignment Research (We're Hiring!),"['AdamGleave', 'Scott Emmons', 'Ethan Perez', 'Claudia Shi']",2022-07-06T02:07:48Z,alignmentforum,, 24124,https://www.alignmentforum.org/posts/P7P2iG4zvBNANvQFK/comments-on-the-singularity-is-nowhere-near,"Comments on ""The Singularity is Nowhere Near""",['Steven Byrnes'],2021-03-16T23:59:31Z,alignmentforum,, 24141,https://www.alignmentforum.org/posts/GQat3Nrd9CStHyGaq/response-to-katja-grace-s-ai-x-risk-counterarguments,Response to Katja Grace's AI x-risk counterarguments,"['Erik Jenner', 'Johannes Treutlein']",2022-10-19T01:17:55Z,alignmentforum,, 24170,https://www.alignmentforum.org/posts/75o8oja43LXGAqbAR/palm-2-and-gpt-4-in-extrapolating-gpt-n-performance,"PaLM-2 & GPT-4 in ""Extrapolating GPT-N performance""",['Lukas Finnveden'],2023-05-30T18:33:41Z,alignmentforum,, 24183,https://www.alignmentforum.org/posts/QQMzxSJDgWkAhupi5/take-12-rlhf-s-use-is-evidence-that-orgs-will-jam-rl-at-real,Take 12: RLHF's use is evidence that orgs will jam RL at real-world problems.,['Charlie Steiner'],2022-12-20T05:01:51Z,alignmentforum,, 24204,https://www.alignmentforum.org/posts/5bd75cc58225bf06703751b2/in-memoryless-cartesian-environments-every-udt-policy-is-a,"In memoryless Cartesian environments, every UDT policy is a CDT+SIA policy",['jessicata'],2016-06-11T04:05:47Z,alignmentforum,,
24215,https://www.alignmentforum.org/posts/mrZp6qC7DDXKQZeeC/failures-of-udt-aixi-part-1-improper-randomizing,"Failures of UDT-AIXI, Part 1: Improper Randomizing",['Diffractor'],2019-01-06T03:53:04Z,alignmentforum,, 24237,https://www.alignmentforum.org/posts/hjrqXjEpaw9ogScPh/training-trace-priors,Training Trace Priors,['Adam Jermyn'],2022-06-13T14:22:20Z,alignmentforum,, 24250,https://www.alignmentforum.org/posts/mL8KdftNGBScmBcBg/optimization-concepts-in-the-game-of-life,Optimization Concepts in the Game of Life,"['Vika', 'Ramana Kumar']",2021-10-16T20:51:36Z,alignmentforum,, 24274,https://www.alignmentforum.org/posts/ZmZBataeY58anJRBb/getting-from-an-unaligned-agi-to-an-aligned-agi,Getting from an unaligned AGI to an aligned AGI?,['Tor Økland Barstad'],2022-06-21T12:36:14Z,alignmentforum,, 24297,https://www.alignmentforum.org/posts/JvZhhzycHu2Yd57RN/causal-scrubbing-a-method-for-rigorously-testing,Causal Scrubbing: a method for rigorously testing interpretability hypotheses [Redwood Research],"['LawrenceC', 'Adrià Garriga-alonso', 'Nicholas Goldowsky-Dill', 'ryan_greenblatt', 'jenny', 'Ansh Radhakrishnan', 'Buck', 'Nate Thomas']",2022-12-03T00:58:37Z,alignmentforum,, 24318,https://www.alignmentforum.org/posts/Jute9YcbYvm4ZWdXk/what-are-biases-anyway-multiple-type-signatures,"What are biases, anyway? Multiple type signatures",['Stuart_Armstrong'],2021-08-31T21:17:00Z,alignmentforum,, 24332,https://www.alignmentforum.org/posts/H7KB44oKoSjSCkpzL/worrying-about-the-vase-whitelisting,Worrying about the Vase: Whitelisting,['TurnTrout'],2018-06-16T02:17:09Z,alignmentforum,, 24355,https://www.alignmentforum.org/posts/rASeoR7iZ9Fokzh7L/problems-in-ai-alignment-that-philosophers-could-potentially,Problems in AI Alignment that philosophers could potentially contribute to,['Wei Dai'],2019-08-17T17:38:32Z,alignmentforum,, 24379,https://www.alignmentforum.org/posts/DJRe5obJd7kqCkvRr/don-t-leave-your-fingerprints-on-the-future,Don't leave your fingerprints on the future,['So8res'],2022-10-08T00:35:35Z,alignmentforum,, 24399,https://www.alignmentforum.org/posts/5bd75cc58225bf06703751d1/three-oracle-designs,Three Oracle designs,['Stuart_Armstrong'],2016-07-20T15:16:12Z,alignmentforum,, 24422,https://www.alignmentforum.org/posts/JMebqicMD6azB8MwK/open-problems-in-activation-engineering,Open problems in activation engineering,"['TurnTrout', 'woog', 'lisathiergart', 'Ulisse Mini']",2023-07-24T19:46:09Z,alignmentforum,, 24441,https://www.alignmentforum.org/posts/A7GeRNLzuFnhvGGgb/principles-for-alignment-agency-projects,Principles for Alignment/Agency Projects,['johnswentworth'],2022-07-07T02:07:36Z,alignmentforum,, 24464,https://www.alignmentforum.org/posts/aBixCPqSnTsPsTJBQ/truthful-ai-developing-and-governing-ai-that-does-not-lie,Truthful AI: Developing and governing AI that does not lie,"['Owain_Evans', 'owencb', 'Lukas Finnveden']",2021-10-18T18:37:38Z,alignmentforum,, 24487,https://www.alignmentforum.org/posts/JKj5Krff5oKMb8TjT/imitative-generalisation-aka-learning-the-prior-1,Imitative Generalisation (AKA 'Learning the Prior'),['Beth Barnes'],2021-01-10T00:30:36Z,alignmentforum,, 24510,https://www.alignmentforum.org/posts/K5Qp7ioupgb7r73Ca/logical-updatelessness-as-a-robust-delegation-problem,Logical Updatelessness as a Robust Delegation Problem,['Scott Garrabrant'],2017-10-27T21:16:18Z,alignmentforum,, 24522,https://www.alignmentforum.org/posts/Aipqop4XpqPeGpWNi/alignment-newsletter-47,Alignment Newsletter #47,['Rohin Shah'],2019-03-04T04:30:12Z,alignmentforum,,
24561,https://www.alignmentforum.org/posts/TTTHwLpcewGjQHWzh/what-is-the-true-name-of-modularity,What Is The True Name of Modularity?,"['TheMcDouglas', 'Lucius Bushnaq', 'Avery']",2022-07-01T14:55:12Z,alignmentforum,, 24578,https://www.alignmentforum.org/posts/jYvm4mmjvGHcPXtGL/a-concrete-proposal-for-adversarial-ida,A Concrete Proposal for Adversarial IDA,['evhub'],2019-03-26T19:50:35Z,alignmentforum,, 24600,https://www.alignmentforum.org/posts/2dt8miopNAvhKZPNf/alignment-newsletter-42,Alignment Newsletter #42,['Rohin Shah'],2019-01-22T02:00:02Z,alignmentforum,, 24626,https://www.alignmentforum.org/posts/bPa6AzRgGZGmxbq6n/remaking-efficientzero-as-best-i-can,Remaking EfficientZero (as best I can),['Hoagy'],2022-07-04T11:03:53Z,alignmentforum,, 24663,https://www.alignmentforum.org/posts/KMocAf9jnAKc2jXri/sections-1-and-2-introduction-strategy-and-governance,"Sections 1 & 2: Introduction, Strategy and Governance",['JesseClifton'],2019-12-17T21:27:30Z,alignmentforum,, 24691,https://www.alignmentforum.org/posts/6eKL9wDqeiELbKPDj/unfaithful-explanations-in-chain-of-thought-prompting,Unfaithful Explanations in Chain-of-Thought Prompting,['miles'],2023-06-03T00:22:15Z,alignmentforum,, 24718,https://www.alignmentforum.org/posts/5bd75cc58225bf0670374fd3/an-idea-for-corrigible-recursively-improving-math-oracles,"An Idea For Corrigible, Recursively Improving Math Oracles",['jimrandomh'],2015-07-20T03:35:11Z,alignmentforum,, 24736,https://www.alignmentforum.org/posts/ZmvJ6kJ4ADcHcypYJ/an-57-why-we-should-focus-on-robustness-in-ai-safety-and-the,"[AN #57] Why we should focus on robustness in AI safety, and the analogous problems in programming",['Rohin Shah'],2019-06-05T23:20:01Z,alignmentforum,, 24757,https://www.alignmentforum.org/posts/3us74zNBGgFJeAXTo/looking-for-an-alignment-tutor,Looking for an alignment tutor,['JanBrauner'],2022-12-17T19:08:11Z,alignmentforum,, 24776,https://www.alignmentforum.org/posts/Qryk6FqjtZk9FHHJR/sparse-autoencoders-find-highly-interpretable-directions-in,Sparse Autoencoders Find Highly Interpretable Directions in Language Models,"['Logan Riggs', 'Hoagy', 'Aidan Ewart', 'Robert_AIZI']",2023-09-21T15:30:24Z,alignmentforum,, 24788,https://www.alignmentforum.org/posts/fHnwCDDbDHWqbJ8Nd/eis-x-continual-learning-modularity-compression-and,"EIS X: Continual Learning, Modularity, Compression, and Biological Brains",['scasper'],2023-02-21T16:59:42Z,alignmentforum,, 24818,https://www.alignmentforum.org/posts/idb5Ppp9zghcichJ5/a-general-model-of-safety-oriented-ai-development,A general model of safety-oriented AI development,['Wei Dai'],2018-06-11T21:00:03Z,alignmentforum,, 24829,https://www.alignmentforum.org/posts/DkDy2hvkwbQ54GM9u/introducing-effisciences-ai-safety-unit-1,Introducing EffiSciences’ AI Safety Unit,"['WCargo', 'Charbel-Raphaël', 'Florent_Berthet']",2023-06-30T07:44:57Z,alignmentforum,, 24850,https://www.alignmentforum.org/posts/2Ps9easGbqdMP6win/weekly-event-alignment-researcher-coffee-time-in-walled,[Weekly Event] Alignment Researcher Coffee Time (in Walled Garden),['adamShimi'],2021-05-02T12:59:21Z,alignmentforum,, 24859,https://www.alignmentforum.org/posts/Qi77Tu3ehdacAbBBe/agency-from-a-causal-perspective,Agency from a causal perspective,"['tom4everitt', 'mattmacdermott', 'James Fox', 'Francis Rhys Ward', 'Jonathan Richens']",2023-06-30T17:37:58Z,alignmentforum,, 24880,https://www.alignmentforum.org/posts/midXmMb2Xg37F2Kgn/new-scaling-laws-for-large-language-models,New Scaling Laws for Large Language Models,['1a3orn'],2022-04-01T20:41:18Z,alignmentforum,, 24894,https://www.alignmentforum.org/posts/QzkTfj4HGpLEdNjXX/an-artificially-structured-argument-for-expecting-agi-ruin,An artificially structured argument for expecting AGI ruin,['Rob Bensinger'],2023-05-07T21:52:54Z,alignmentforum,,
24925,https://www.alignmentforum.org/posts/EBKJq2gkhvdMg5nTQ/instrumentality-makes-agents-agenty,Instrumentality makes agents agenty,['porby'],2023-02-21T04:28:57Z,alignmentforum,, 24944,https://www.alignmentforum.org/posts/Gfw7JMdKirxeSPiAk/solving-the-whole-agi-control-problem-version-0-0001,"Solving the whole AGI control problem, version 0.0001",['Steven Byrnes'],2021-04-08T15:14:08Z,alignmentforum,, 24987,https://www.alignmentforum.org/posts/QHKfYy9LLAsjC5rTK/a-mystery-about-high-dimensional-concept-encoding,A Mystery About High Dimensional Concept Encoding,['Fabien Roger'],2022-11-03T17:05:56Z,alignmentforum,, 25001,https://www.alignmentforum.org/posts/vphFJzK3mWA4PJKAg/coherent-behaviour-in-the-real-world-is-an-incoherent,Coherent behaviour in the real world is an incoherent concept,['Richard_Ngo'],2019-02-11T17:00:26Z,alignmentforum,, 25016,https://www.alignmentforum.org/posts/aDDjCJAGqcpmA5apw/eis-viii-an-engineer-s-understanding-of-deceptive-alignment,EIS VIII: An Engineer’s Understanding of Deceptive Alignment,['scasper'],2023-02-19T15:25:46Z,alignmentforum,, 25039,https://www.alignmentforum.org/posts/T9oFjteStcE2ijCJi/modeling-risks-from-learned-optimization,Modeling Risks From Learned Optimization,['Ben Cottier'],2021-10-12T20:54:19Z,alignmentforum,, 25068,https://www.alignmentforum.org/posts/EFZ64igiNNwiLHaYk/bayesianism-versus-conservatism-versus-goodhart,Bayesianism versus conservatism versus Goodhart,['Stuart_Armstrong'],2021-07-16T23:39:18Z,alignmentforum,, 25087,https://www.alignmentforum.org/posts/Xg2YycEfCnLYrCcjy/defining-capability-and-alignment-in-gradient-descent,Defining capability and alignment in gradient descent,['Edouard Harris'],2020-11-05T14:36:53Z,alignmentforum,, 25105,https://www.alignmentforum.org/posts/rxoBY9CMkqDsHt25t/eliciting-latent-knowledge-elk-distillation-summary,Eliciting Latent Knowledge (ELK) - Distillation/Summary,['Marius Hobbhahn'],2022-06-08T13:18:51Z,alignmentforum,, 25131,https://www.alignmentforum.org/posts/PcfHSSAMNFMgdqFyB/can-you-control-the-past,Can you control the past?,['Joe Carlsmith'],2021-08-27T19:39:30Z,alignmentforum,, 25151,https://www.alignmentforum.org/posts/JTzLjARpevuNpGPZm/time-in-cartesian-frames,Time in Cartesian Frames,['Scott Garrabrant'],2020-11-11T20:25:19Z,alignmentforum,, 25168,https://www.alignmentforum.org/posts/GrbeyZzp6NwzSWpds/safety-implications-of-lecun-s-path-to-machine-intelligence,Safety Implications of LeCun's path to machine intelligence,['Ivan Vendrov'],2022-07-15T21:47:44Z,alignmentforum,, 25191,https://www.alignmentforum.org/posts/sunXMY5WyDcrHsNRr/a-world-in-which-the-alignment-problem-seems-lower-stakes,A world in which the alignment problem seems lower-stakes,['TurnTrout'],2021-07-08T02:31:04Z,alignmentforum,, 25208,https://www.alignmentforum.org/posts/5bd75cc58225bf06703753d9/acausal-trade-being-unusual,Acausal trade: being unusual,['Stuart_Armstrong'],2017-05-16T18:38:43Z,alignmentforum,, 25223,https://www.alignmentforum.org/posts/TTn6vTcZ3szBctvgb/simulators-seminar-sequence-2-semiotic-physics-revamped,[Simulators seminar sequence] #2 Semiotic physics - revamped,"['Jan', 'Charlie Steiner', 'Logan Riggs', 'janus', 'jacquesthibs', 'metasemi', 'Michael Oesterle', 'Lucas Teixeira', 'peligrietzer', 'remember']",2023-02-27T00:25:53Z,alignmentforum,,
25242,https://www.alignmentforum.org/posts/rArsypGqq49bk4iRr/can-there-be-an-indescribable-hellworld,Can there be an indescribable hellworld?,['Stuart_Armstrong'],2019-01-29T15:00:54Z,alignmentforum,, 25253,https://www.alignmentforum.org/posts/rhiAvDqc3h29dpG34/instrumental-ignoring-ai-dumb-but-not-useless,"Instrumental ignoring AI, Dumb but not useless.",['Donald Hobson'],2022-10-30T16:55:48Z,alignmentforum,, 25276,https://www.alignmentforum.org/posts/dKAJqBDZRMMsaaYo5/in-logical-time-all-games-are-iterated-games,"In Logical Time, All Games are Iterated Games",['abramdemski'],2018-09-20T02:01:07Z,alignmentforum,, 25299,https://www.alignmentforum.org/posts/dhbLE8BqRvhBtsXhS/mlsn-3-neurips-safety-paper-roundup,[MLSN #3]: NeurIPS Safety Paper Roundup,['Dan H'],2022-03-08T15:17:26Z,alignmentforum,, 25342,https://www.alignmentforum.org/posts/z6QQJbtpkEAX3Aojj/interim-research-report-taking-features-out-of-superposition,[Interim research report] Taking features out of superposition with sparse autoencoders,"['Lee Sharkey', 'Dan Braun', 'beren']",2022-12-13T15:41:49Z,alignmentforum,, 25364,https://www.alignmentforum.org/posts/tD9zEiHfkvakpnNam/a-challenge-for-agi-organizations-and-a-challenge-for-1,"A challenge for AGI organizations, and a challenge for readers","['Rob Bensinger', 'Eliezer Yudkowsky']",2022-12-01T23:11:44Z,alignmentforum,, 25379,https://www.alignmentforum.org/posts/bumgqvRjTadFFkoAd/science-of-deep-learning-a-technical-agenda,Science of Deep Learning - a technical agenda,['Marius Hobbhahn'],2022-10-18T14:54:35Z,alignmentforum,, 25396,https://www.alignmentforum.org/posts/p7mMJvwDbuvo4K7NE/telopheme-telophore-and-telotect,"Telopheme, telophore, and telotect",['TsviBT'],2023-09-17T16:24:03Z,alignmentforum,, 25417,https://www.alignmentforum.org/posts/HyodRjYtiA2xozCrk/1-premise-one-values-are-malleable,1. Premise one: Values are malleable,['Nora_Ammann'],2023-10-26T14:36:41Z,alignmentforum,, 25431,https://www.alignmentforum.org/posts/YJq6R9Wgk5Atjx54D/does-bayes-beat-goodhart,Does Bayes Beat Goodhart?,['abramdemski'],2019-06-03T02:31:23Z,alignmentforum,, 25454,https://www.alignmentforum.org/posts/pW6YJEzoRFe9cshuN/impossible-moral-problems-and-moral-authority,Impossible moral problems and moral authority,['Charlie Steiner'],2019-11-18T09:28:29Z,alignmentforum,, 25465,https://www.alignmentforum.org/posts/YgAKhkBdgeTCn6P53/ai-deception-a-survey-of-examples-risks-and-potential,"AI Deception: A Survey of Examples, Risks, and Potential Solutions","['Simon Goldstein', 'Peter S. Park']",2023-08-29T01:29:51Z,alignmentforum,,
25490,https://www.alignmentforum.org/posts/BSrfDWpHgFpzGRwJS/defusing-agi-danger,Defusing AGI Danger,['Mark Xu'],2020-12-24T22:58:19Z,alignmentforum,, 25518,https://www.alignmentforum.org/posts/5bd75cc58225bf06703754e4/hyperreal-brouwer,Hyperreal Brouwer,['Scott Garrabrant'],2018-11-29T03:15:24Z,alignmentforum,, 25527,https://www.alignmentforum.org/posts/tuwwLQT4wqk25ndxk/thoughts-on-agi-organizations-and-capabilities-work,Thoughts on AGI organizations and capabilities work,"['Rob Bensinger', 'So8res']",2022-12-07T19:46:04Z,alignmentforum,, 25550,https://www.alignmentforum.org/posts/7fBKErNKhtwB4nt4N/some-ideas-for-follow-up-projects-to-redwood-research-s,Some ideas for follow-up projects to Redwood Research’s recent paper,['JanBrauner'],2022-06-06T13:29:09Z,alignmentforum,, 25573,https://www.alignmentforum.org/posts/Ji6hQbwH7tK7mejhk/proposal-method-of-locating-useful-subnets-in-large-models,[Proposal] Method of locating useful subnets in large models,['Quintin Pope'],2021-10-13T20:52:14Z,alignmentforum,, 25600,https://www.alignmentforum.org/posts/3hJhAsZHXP44Ljah5/axrp-episode-17-training-for-very-high-reliability-with,AXRP Episode 17 - Training for Very High Reliability with Daniel Ziegler,['DanielFilan'],2022-08-21T23:50:21Z,alignmentforum,, 25618,https://www.alignmentforum.org/posts/RFtkRXHebkwxygDe2/an-interpretability-illusion-for-activation-patching-of,An Interpretability Illusion for Activation Patching of Arbitrary Subspaces,"['Georg Lange', 'Alex Makelov', 'Neel Nanda']",2023-08-29T01:04:19Z,alignmentforum,, 25629,https://www.alignmentforum.org/posts/r3NHPD3dLFNk9QE2Y/search-versus-design-1,Search versus design,['Alex Flint'],2020-08-16T16:53:19Z,alignmentforum,, 25647,https://www.alignmentforum.org/posts/H9sxfAZGGAsx5BdYD/what-are-the-high-level-approaches-to-ai-alignment,What are the high-level approaches to AI alignment?,['Gordon Seidoh Worley'],2020-06-16T17:10:32Z,alignmentforum,, 25656,https://www.alignmentforum.org/posts/DbZDdupuffc4Xgm7H/1hr-talk-intro-to-agi-safety,1hr talk: Intro to AGI safety,['Steven Byrnes'],2019-06-18T21:41:29Z,alignmentforum,, 25699,https://www.alignmentforum.org/posts/ChtGdxk9mwZ2Rxogt/smartyheadercode-anomalous-tokens-for-gpt3-5-and-gpt-4-1,SmartyHeaderCode: anomalous tokens for GPT3.5 and GPT-4,['AdamYedidia'],2023-04-15T22:35:30Z,alignmentforum,, 25719,https://www.alignmentforum.org/posts/vmfW2qTac4vF3YS3J/probability-is-fake-frequency-is-real,"Probability is fake, frequency is real",['Linda Linsefors'],2018-07-10T22:32:30Z,alignmentforum,, 25735,https://www.alignmentforum.org/posts/RHdoMEbP8MxeAmQo5/why-don-t-quantilizers-also-cut-off-the-upper-end-of-the,Why don't quantilizers also cut off the upper end of the distribution?,['Alex_Altair'],2023-05-15T01:40:50Z,alignmentforum,, 25745,https://www.alignmentforum.org/posts/ZwshvqiqCvXPsZEct/the-learning-theoretic-agenda-status-2023,The Learning-Theoretic Agenda: Status 2023,['Vanessa Kosoy'],2023-04-19T05:21:29Z,alignmentforum,, 25772,https://www.alignmentforum.org/posts/Sn5NiiD5WBi4dLzaB/agi-will-drastically-increase-economies-of-scale,AGI will drastically increase economies of scale,['Wei Dai'],2019-06-07T23:17:39Z,alignmentforum,, 25786,https://www.alignmentforum.org/posts/N2JcFZ3LCCsnK2Fep/the-minimal-latents-approach-to-natural-abstractions,"The ""Minimal Latents"" Approach to Natural Abstractions",['johnswentworth'],2022-12-20T01:22:25Z,alignmentforum,,
25803,https://www.alignmentforum.org/posts/bLr68nrLSwgzqLpzu/axrp-episode-16-preparing-for-debate-ai-with-geoffrey-irving,AXRP Episode 16 - Preparing for Debate AI with Geoffrey Irving,['DanielFilan'],2022-07-01T22:20:18Z,alignmentforum,, 25837,https://www.alignmentforum.org/posts/dJumQtpoKhjDKH9q8/lima-less-is-more-for-alignment,LIMA: Less Is More for Alignment,['Ulisse Mini'],2023-05-30T17:10:32Z,alignmentforum,, 25852,https://www.alignmentforum.org/posts/vvEebH5jEvxnJEvBC/abstractions-as-redundant-information,Abstractions as Redundant Information,['johnswentworth'],2022-02-13T04:17:30Z,alignmentforum,, 25866,https://www.alignmentforum.org/posts/s2KJWLAPyjtmQ9ze3/search-in-territory-vs-search-in-map,Search-in-Territory vs Search-in-Map,['johnswentworth'],2021-06-05T23:22:36Z,alignmentforum,, 25883,https://www.alignmentforum.org/posts/DQBpu6LweoXyxLSsf/some-reasons-why-a-predictor-wants-to-be-a-consequentialist,Some reasons why a predictor wants to be a consequentialist,['Lauro Langosco'],2022-04-15T15:02:44Z,alignmentforum,, 25905,https://www.alignmentforum.org/posts/5bd75cc58225bf067037541c/acausal-trade-universal-utility-or-selling-non-existence-insurance-too-late,"Acausal trade: universal utility, or selling non-existence insurance too late",['Stuart_Armstrong'],2017-06-02T15:33:07Z,alignmentforum,, 25925,https://www.alignmentforum.org/posts/7Zn4BwgsiPFhdB6h8/the-pointers-problem-clarifications-variations,The Pointers Problem: Clarifications/Variations,['abramdemski'],2021-01-05T17:29:46Z,alignmentforum,, 25960,https://www.alignmentforum.org/posts/DwqgLXn5qYC7GqExF/godzilla-strategies,Godzilla Strategies,['johnswentworth'],2022-06-11T15:44:16Z,alignmentforum,, 25973,https://www.alignmentforum.org/posts/sxhfSBej6gdAwcn7X/coordinate-free-interpretability-theory,Coordinate-Free Interpretability Theory,['johnswentworth'],2022-09-14T23:33:50Z,alignmentforum,, 25990,https://www.alignmentforum.org/posts/3kkmXfvCv9DmT3kwx/conditioning-predictive-models-outer-alignment-via-careful,Conditioning Predictive Models: Outer alignment via careful conditioning,"['evhub', 'Adam Jermyn', 'Johannes Treutlein', 'Rubi J. Hudson', 'kcwoolverton']",2023-02-02T20:28:59Z,alignmentforum,,
26020,https://www.alignmentforum.org/posts/HHSuvG2hqAnGT5Wzp/gradient-descent-in-activation-space-a-tale-of-two-papers,Gradient Descent in Activation Space: a Tale of Two Papers,['Blaine'],2023-04-12T04:48:56Z,alignmentforum,, 26036,https://www.alignmentforum.org/posts/QujNmRy3uFyrkfqb7/take-10-fine-tuning-with-rlhf-is-aesthetically-unsatisfying,Take 10: Fine-tuning with RLHF is aesthetically unsatisfying.,['Charlie Steiner'],2022-12-13T07:04:36Z,alignmentforum,, 26051,https://www.alignmentforum.org/posts/5bd75cc58225bf0670375532/logical-counterfactuals-and-differential-privacy,Logical counterfactuals and differential privacy,['Nisan'],2018-02-04T00:17:43Z,alignmentforum,, 26068,https://www.alignmentforum.org/posts/qEjh8rpxjG4qGtfuK/the-backchaining-to-local-search-technique-in-ai-alignment,"The ""Backchaining to Local Search"" Technique in AI Alignment",['adamShimi'],2020-09-18T15:05:03Z,alignmentforum,, 26079,https://www.alignmentforum.org/posts/pY4J2qNaHgKp2nbEd/neurips-ml-safety-workshop-2022,NeurIPS ML Safety Workshop 2022,['Dan H'],2022-07-26T15:28:52Z,alignmentforum,, 26106,https://www.alignmentforum.org/posts/xdtNd8xCdzpgfnGme/clarifying-the-confusion-around-inner-alignment,Clarifying the confusion around inner alignment,['Rauno Arike'],2022-05-13T23:05:27Z,alignmentforum,, 26128,https://www.alignmentforum.org/posts/fzFyCJ6gB9kBL9RqW/axrp-episode-8-assistance-games-with-dylan-hadfield-menell,AXRP Episode 8 - Assistance Games with Dylan Hadfield-Menell,['DanielFilan'],2021-06-08T23:20:12Z,alignmentforum,, 26158,https://www.alignmentforum.org/posts/wMCbo7HX3cFbtHZcM/an-161-creating-generalizable-reward-functions-for-multiple,[AN #161]: Creating generalizable reward functions for multiple tasks by learning a model of functional similarity,['Rohin Shah'],2021-08-20T17:20:04Z,alignmentforum,, 26197,https://www.alignmentforum.org/posts/qwqowdhnMreKQvxLv/paper-large-language-models-can-self-improve-linkpost,Paper: Large Language Models Can Self-improve [Linkpost],['Evan R. Murphy'],2022-10-02T01:29:00Z,alignmentforum,,
26212,https://www.alignmentforum.org/posts/jdMjzFf7tmT6ofLk9/alignment-newsletter-53,Alignment Newsletter #53,['Rohin Shah'],2019-04-18T17:20:03Z,alignmentforum,, 26250,https://www.alignmentforum.org/posts/5bd75cc58225bf06703752c6/my-current-take-on-the-paul-miri-disagreement-on-alignability-of-messy-ai,My current take on the Paul-MIRI disagreement on alignability of messy AI,['jessicata'],2017-01-29T20:52:12Z,alignmentforum,, 26268,https://www.alignmentforum.org/posts/E3vqfD3CLtNDNoeBr/inner-alignment-what-are-we-pointing-at,Inner alignment: what are we pointing at?,['lukehmiles'],2022-09-18T11:09:59Z,alignmentforum,, 26280,https://www.alignmentforum.org/posts/thZdioHTZALRPKmiH/value-extrapolation-partially-resolves-symbol-grounding,Value extrapolation partially resolves symbol grounding,['Stuart_Armstrong'],2022-01-12T16:30:19Z,alignmentforum,, 26292,https://www.alignmentforum.org/posts/GNhMPAWcfBCASy8e6/a-central-ai-alignment-problem-capabilities-generalization,"A central AI alignment problem: capabilities generalization, and the sharp left turn",['So8res'],2022-06-15T13:10:19Z,alignmentforum,, 26311,https://www.alignmentforum.org/posts/CxEbvETK2WNfHw7v9/intuitions-on-universal-behavior-of-information-at-a,Intuitions on Universal Behavior of Information at a Distance,['johnswentworth'],2020-04-20T21:44:42Z,alignmentforum,, 26322,https://www.alignmentforum.org/posts/7Hr8t6xwuuxBTqADK/approval-directed-agents-1,Approval-directed agents,['paulfchristiano'],2018-11-22T21:15:29Z,alignmentforum,, 26340,https://www.alignmentforum.org/posts/cQwT8asti3kyA62zc/automating-auditing-an-ambitious-concrete-technical-research,Automating Auditing: An ambitious concrete technical research proposal,['evhub'],2021-08-11T20:32:41Z,alignmentforum,, 26364,https://www.alignmentforum.org/posts/wZyQSMmrFZizmipho/on-value-in-humans-other-animals-and-ai,"On value in humans, other animals, and AI",['Michele Campolo'],2023-01-31T23:33:56Z,alignmentforum,, 26380,https://www.alignmentforum.org/posts/vZzg8NS7wBtqcwhoJ/nearcast-based-deployment-problem-analysis,"Nearcast-based ""deployment problem"" analysis",['HoldenKarnofsky'],2022-09-21T18:52:23Z,alignmentforum,, 26423,https://www.alignmentforum.org/posts/7im8at9PmhbT4JHsW/ngo-and-yudkowsky-on-alignment-difficulty,Ngo and Yudkowsky on alignment difficulty,"['Eliezer Yudkowsky', 'Richard_Ngo']",2021-11-15T20:31:34Z,alignmentforum,, 26447,https://www.alignmentforum.org/posts/PTcktJADsAmpYEjoP/proofs-section-1-1-initial-results-to-lf-duality,Proofs Section 1.1 (Initial results to LF-duality),['Diffractor'],2020-08-27T07:59:13Z,alignmentforum,, 26463,https://www.alignmentforum.org/posts/TAz44Lb9n9yf52pv8/othello-gpt-reflections-on-the-research-process,Othello-GPT: Reflections on the Research Process,['Neel Nanda'],2023-03-29T22:13:42Z,alignmentforum,, 26484,https://www.alignmentforum.org/posts/qpbYwTqKQG8G7mdFK/the-reasonable-effectiveness-of-mathematics-or-ai-vs,The Reasonable Effectiveness of Mathematics or: AI vs sandwiches,['Vanessa Kosoy'],2020-02-14T18:46:39Z,alignmentforum,, 26498,https://www.alignmentforum.org/posts/kmpNkeqEGvFue7AvA/value-formation-an-overarching-model,Value Formation: An Overarching Model,['Thane Ruthenis'],2022-11-15T17:16:20Z,alignmentforum,, 26522,https://www.alignmentforum.org/posts/27h99G7P6fkucKdkk/what-are-the-best-precedents-for-industries-failing-to,What are the best precedents for industries failing to invest in valuable AI research?,['Daniel Kokotajlo'],2020-12-14T23:57:09Z,alignmentforum,,
26531,https://www.alignmentforum.org/posts/RwYh4grJs4pbJdTh3/motivations-natural-selection-and-curriculum-engineering,"Motivations, Natural Selection, and Curriculum Engineering",['Oliver Sourbut'],2021-12-16T01:07:26Z,alignmentforum,, 26565,https://www.alignmentforum.org/posts/4K52SS7fm9mp5rMdX/three-ways-that-sufficiently-optimized-agents-appear,"Three ways that ""Sufficiently optimized agents appear coherent"" can be false",['Wei Dai'],2019-03-05T21:52:35Z,alignmentforum,, 26586,https://www.alignmentforum.org/posts/JPan54R525D68NoEt/the-date-of-ai-takeover-is-not-the-day-the-ai-takes-over,The date of AI Takeover is not the day the AI takes over,['Daniel Kokotajlo'],2020-10-22T10:41:09Z,alignmentforum,, 26604,https://www.alignmentforum.org/posts/4vrL94CqXuyHQMhqo/integrating-hidden-variables-improves-approximation,Integrating Hidden Variables Improves Approximation,['johnswentworth'],2020-04-16T21:43:05Z,alignmentforum,, 26613,https://www.alignmentforum.org/posts/CjW4axQDqLd2oDCGG/misconceptions-about-continuous-takeoff,Misconceptions about continuous takeoff,['Matthew Barnett'],2019-10-08T21:31:38Z,alignmentforum,, 26627,https://www.alignmentforum.org/posts/SPfZiEwHotPncJBLz/a-framework-to-explain-bayesian-models,A Framework to Explain Bayesian Models,['Jsevillamol'],2021-12-06T10:38:26Z,alignmentforum,, 26646,https://www.alignmentforum.org/posts/HpzHjKjGQ4cKiY3jX/3-levels-of-threat-obfuscation,3 levels of threat obfuscation,['HoldenKarnofsky'],2023-08-02T14:58:33Z,alignmentforum,, 26667,https://www.alignmentforum.org/posts/HekjhtWesBWTQW5eF/agis-as-collectives,AGIs as collectives,['Richard_Ngo'],2020-05-22T20:36:53Z,alignmentforum,, 26693,https://www.alignmentforum.org/posts/7TFJAvjYfMKxKQ4XS/eis-v-blind-spots-in-ai-safety-interpretability-research,EIS V: Blind Spots In AI Safety Interpretability Research,['scasper'],2023-02-16T19:09:11Z,alignmentforum,, 26726,https://www.alignmentforum.org/posts/jwe6jpubuMiuSRqff/usd20-million-in-nsf-grants-for-safety-research,$20 Million in NSF Grants for Safety Research,['Dan H'],2023-02-28T04:44:38Z,alignmentforum,, 26749,https://www.alignmentforum.org/posts/Jgs7LQwmvErxR9BCC/current-themes-in-mechanistic-interpretability-research,Current themes in mechanistic interpretability research,"['Lee Sharkey', 'Sid Black', 'beren']",2022-11-16T14:14:02Z,alignmentforum,, 26783,https://www.alignmentforum.org/posts/L8LHBTMvhLDpxDaqv/research-agenda-formalizing-abstractions-of-computations-1,Research agenda: Formalizing abstractions of computations,['Erik Jenner'],2023-02-02T04:29:07Z,alignmentforum,, 26794,https://www.alignmentforum.org/posts/eax34WBLNmB4Gv6so/designing-agent-incentives-to-avoid-side-effects,Designing agent incentives to avoid side effects,"['Vika', 'TurnTrout']",2019-03-11T20:55:10Z,alignmentforum,, 26818,https://www.alignmentforum.org/posts/TSxAXeHHhgSxR5wGZ/a-summary-of-aligning-narrowly-superhuman-models,A summary of aligning narrowly superhuman models,['gugu'],2022-02-10T18:26:29Z,alignmentforum,, 26842,https://www.alignmentforum.org/posts/fhbb8MGEs3t5dTCLD/logical-counterfactuals-and-proposition-graphs-part-3,"Logical Counterfactuals and Proposition graphs, Part 3",['Donald Hobson'],2019-09-05T15:03:53Z,alignmentforum,, 26856,https://www.alignmentforum.org/posts/fLpuusx9wQyyEBtkJ/power-seeking-can-be-probable-and-predictive-for-trained,Power-seeking can be probable and predictive for trained agents,"['Vika', 'janos']",2023-02-28T21:10:26Z,alignmentforum,,
26877,https://www.alignmentforum.org/posts/tGCyRQigGoqA4oSRo/generalizing-koopman-pitman-darmois,Generalizing Koopman-Pitman-Darmois,['johnswentworth'],2021-07-15T22:33:04Z,alignmentforum,, 26898,https://www.alignmentforum.org/posts/k48vB92mjE9Z28C3s/implied-utilities-of-simulators-are-broad-dense-and-shallow,"Implied ""utilities"" of simulators are broad, dense, and shallow",['porby'],2023-03-01T03:23:23Z,alignmentforum,, 26915,https://www.alignmentforum.org/posts/nyCHnY7T5PHPLjxmN/open-question-are-minimal-circuits-daemon-free,Open question: are minimal circuits daemon-free?,['paulfchristiano'],2018-05-05T22:40:21Z,alignmentforum,, 26929,https://www.alignmentforum.org/posts/7uexzZka8YtQzMwPf/how-smart-are-humans,How Smart Are Humans?,['Joar Skalse'],2023-07-02T15:46:55Z,alignmentforum,, 26939,https://www.alignmentforum.org/posts/fq7Ehb2oWwXtZic8S/reinforcement-learning-in-the-iterated-amplification,Reinforcement Learning in the Iterated Amplification Framework,['William_S'],2019-02-09T00:56:08Z,alignmentforum,, 26964,https://www.alignmentforum.org/posts/fWiYdyicEaCHCcAKx/vingean-agency,Vingean Agency,['abramdemski'],2022-08-24T20:08:53Z,alignmentforum,, 26979,https://www.alignmentforum.org/posts/8W5gNgEKnyAscg8BF/thoughts-on-implementing-corrigible-robust-alignment,Thoughts on implementing corrigible robust alignment,['Steven Byrnes'],2019-11-26T14:06:46Z,alignmentforum,, 27002,https://www.alignmentforum.org/posts/SpDHvbcJsiE5mxBzj/truth-and-advantage-response-to-a-draft-of-ai-safety-seems,"Truth and Advantage: Response to a draft of ""AI safety seems hard to measure""",['So8res'],2023-03-22T03:36:03Z,alignmentforum,, 27016,https://www.alignmentforum.org/posts/fRsjBseRuvRhMPPE5/an-overview-of-11-proposals-for-building-safe-advanced-ai,An overview of 11 proposals for building safe advanced AI,['evhub'],2020-05-29T20:38:02Z,alignmentforum,, 27059,https://www.alignmentforum.org/posts/zCq4ca3tTcfQgrFZM/maths-writer-cowritter-needed-how-you-can-t-distinguish,Maths writer/cowritter needed: how you can't distinguish early exponential from early sigmoid,['Stuart_Armstrong'],2020-05-06T09:41:49Z,alignmentforum,, 27069,https://www.alignmentforum.org/posts/iNGXKB8iExpcLvu55/identifiability-problem-for-superrational-decision-theories,Identifiability Problem for Superrational Decision Theories,['Bunthut'],2021-04-09T20:33:33Z,alignmentforum,, 27079,https://www.alignmentforum.org/posts/3PTfkdLLqZE9vppXC/ais-should-learn-human-preferences-not-biases,"AIs should learn human preferences, not biases",['Stuart_Armstrong'],2022-04-08T13:45:07Z,alignmentforum,, 27091,https://www.alignmentforum.org/posts/k54rgSg7GcjtXnMHX/model-splintering-moving-from-one-imperfect-model-to-another-1,Model splintering: moving from one imperfect model to another,['Stuart_Armstrong'],2020-08-27T11:53:59Z,alignmentforum,, 27104,https://www.alignmentforum.org/posts/omj76gXR67jsG4hxs/web-ai-discussion-groups,Web AI discussion Groups,['Donald Hobson'],2020-06-30T11:22:46Z,alignmentforum,, 27118,https://www.alignmentforum.org/posts/uPWDwFJnxLaDiyv4M/open-minded-updatelessness,Open-minded updatelessness,"['Nicolas Macé', 'JesseClifton', 'Sylvester Kollin']",2023-07-10T11:08:22Z,alignmentforum,, 27137,https://www.alignmentforum.org/posts/EPMnjRdhHNd65XpDt/yet-more-modal-combat,Yet More Modal Combat,['Donald Hobson'],2021-08-24T10:32:49Z,alignmentforum,, 27148,https://www.alignmentforum.org/posts/FuGDYNvA6qh4qyFah/thoughts-on-human-compatible,"Thoughts on ""Human-Compatible""",['TurnTrout'],2019-10-10T05:24:32Z,alignmentforum,, 
27169,https://www.alignmentforum.org/posts/jiXMZHGmEf7qPrKPc/systems-that-cannot-be-unsafe-cannot-be-safe,Systems that cannot be unsafe cannot be safe,['Davidmanheim'],2023-05-02T08:53:35Z,alignmentforum,, 27182,https://www.alignmentforum.org/posts/yXikQ87FFw3oPPaYh/how-common-is-it-for-one-entity-to-have-a-3-year,How common is it for one entity to have a 3+ year technological lead on its nearest competitor?,['Daniel Kokotajlo'],2019-11-17T15:23:37Z,alignmentforum,, 27191,https://www.alignmentforum.org/posts/kwgM5nGe9QXcB4TTu/outperforming-the-human-atari-benchmark,Outperforming the human Atari benchmark,['Vaniver'],2020-03-31T19:33:46Z,alignmentforum,, 27201,https://www.alignmentforum.org/posts/XNjRwEX9kxbpzWFWd/200-cop-in-mi-looking-for-circuits-in-the-wild,200 COP in MI: Looking for Circuits in the Wild,['Neel Nanda'],2022-12-29T20:59:53Z,alignmentforum,, 27223,https://www.alignmentforum.org/posts/aFZju8Kh4MWJaChav/the-voyage-of-novelty,The voyage of novelty,['TsviBT'],2023-04-30T12:52:17Z,alignmentforum,, 27246,https://www.alignmentforum.org/posts/c92YC89tznC7579Ej/conservatism-in-neocortex-like-agis,Conservatism in neocortex-like AGIs,['Steven Byrnes'],2020-12-08T16:37:21Z,alignmentforum,, 27268,https://www.alignmentforum.org/posts/kYNMXjg8Tmcq3vjM6/eis-ix-interpretability-and-adversaries,EIS IX: Interpretability and Adversaries,['scasper'],2023-02-20T18:25:44Z,alignmentforum,, 27290,https://www.alignmentforum.org/posts/LbSWMfbAHQnFpApQ7/concrete-advice-for-forming-inside-views-on-ai-safety,Concrete Advice for Forming Inside Views on AI Safety,['Neel Nanda'],2022-08-17T22:02:02Z,alignmentforum,, 27303,https://www.alignmentforum.org/posts/QEHb8tWLztMyvrv6f/potential-alignment-mental-tool-keeping-track-of-the-types,Potential Alignment mental tool: Keeping track of the types,['Donald Hobson'],2021-11-22T20:05:32Z,alignmentforum,, 27323,https://www.alignmentforum.org/posts/wcNEXDHowiWkRxDNv/inner-alignment-in-salt-starved-rats,Inner Alignment in Salt-Starved Rats,['Steven Byrnes'],2020-11-19T02:40:10Z,alignmentforum,, 27340,https://www.alignmentforum.org/posts/wxBBRzR4FS7nGBjbD/mathematical-mindset,Mathematical Mindset,['komponisto'],2018-07-11T03:03:12Z,alignmentforum,, 27352,https://www.alignmentforum.org/posts/itGmH2AknmjWyAwj8/double-inverse-embedded-agency-problem,(Double-)Inverse Embedded Agency Problem,['shminux'],2020-01-08T04:30:25Z,alignmentforum,, 27363,https://www.alignmentforum.org/posts/xfEsxAtBTLgFe7fSZ/the-sia-population-update-can-be-surprisingly-small,The SIA population update can be surprisingly small,['Stuart_Armstrong'],2021-07-08T10:45:03Z,alignmentforum,, 27375,https://www.alignmentforum.org/posts/9aFpMtpivqPCBfx2w/human-priors-features-and-models-languages-and-solmonoff,"Human priors, features and models, languages, and Solmonoff induction",['Stuart_Armstrong'],2021-05-10T10:55:12Z,alignmentforum,, 27397,https://www.alignmentforum.org/posts/9rjW9rhyhJijHTM92/learning-human-preferences-black-box-white-box-and,"Learning human preferences: black-box, white-box, and structured white-box access",['Stuart_Armstrong'],2020-08-24T11:42:35Z,alignmentforum,, 27413,https://www.alignmentforum.org/posts/yZb5eFvDoaqB337X5/investigating-causal-understanding-in-llms,Investigating causal understanding in LLMs,"['Marius Hobbhahn', 'Tom Lieberum']",2022-06-14T13:57:59Z,alignmentforum,, 27435,https://www.alignmentforum.org/posts/5bd75cc58225bf067037532c/thoughts-on-quantilizers,Thoughts on Quantilizers,['Stuart_Armstrong'],2017-06-02T16:24:37Z,alignmentforum,, 
27456,https://www.alignmentforum.org/posts/wm2rdS3sDY9M5kpWb/the-game-theory-of-blackmail,The Game Theory of Blackmail,['Linda Linsefors'],2019-03-22T17:44:37Z,alignmentforum,, 27472,https://www.alignmentforum.org/posts/99WtcMpsRqZcrocCd/ten-experiments-in-modularity-which-we-d-like-you-to-run,"Ten experiments in modularity, which we'd like you to run!","['TheMcDouglas', 'Lucius Bushnaq', 'Avery']",2022-06-16T09:17:29Z,alignmentforum,, 27496,https://www.alignmentforum.org/posts/foEr8gtkpzmjkvcDp/methodological-therapy-an-agenda-for-tackling-research,Methodological Therapy: An Agenda For Tackling Research Bottlenecks,"['adamShimi', 'Lucas Teixeira', 'remember']",2022-09-22T18:41:03Z,alignmentforum,, 27512,https://www.alignmentforum.org/posts/AcKRB8wDpdaN6v6ru/interpreting-gpt-the-logit-lens,interpreting GPT: the logit lens,['nostalgebraist'],2020-08-31T02:47:08Z,alignmentforum,, 27529,https://www.alignmentforum.org/posts/z3xTDPDsndJBmHLFH/elk-proposal-thinking-via-a-human-imitator,ELK Proposal: Thinking Via A Human Imitator,['TurnTrout'],2022-02-22T01:52:42Z,alignmentforum,, 27551,https://www.alignmentforum.org/posts/urZzJPwHtjewdKKHc/using-expected-utility-for-good-hart,Using expected utility for Good(hart),['Stuart_Armstrong'],2018-08-27T03:32:51Z,alignmentforum,, 27568,https://www.alignmentforum.org/posts/cgqh99SHsCv3jJYDS/we-found-an-neuron-in-gpt-2,We Found An Neuron in GPT-2,"['Joseph Miller', 'Clement Neo']",2023-02-11T18:27:29Z,alignmentforum,, 27579,https://www.alignmentforum.org/posts/bsNXqHgiDA6dAKNun/axrp-episode-24-superalignment-with-jan-leike,AXRP Episode 24 - Superalignment with Jan Leike,['DanielFilan'],2023-07-27T04:00:02Z,alignmentforum,, 27612,https://www.alignmentforum.org/posts/jtMXj24Masrnq3SpS/logical-induction-for-software-engineers,Logical induction for software engineers,['Alex Flint'],2022-12-03T19:55:35Z,alignmentforum,, 27629,https://www.alignmentforum.org/posts/HzSdYWvdrdQqG9tqW/convergence-towards-world-models-a-gears-level-model,Convergence Towards World-Models: A Gears-Level Model,['Thane Ruthenis'],2022-08-04T23:31:33Z,alignmentforum,, 27655,https://www.alignmentforum.org/posts/Kcbo4rXu3jYPnauoK/challenges-with-breaking-into-miri-style-research,Challenges with Breaking into MIRI-Style Research,['Chris_Leong'],2022-01-17T09:23:34Z,alignmentforum,, 27676,https://www.alignmentforum.org/posts/iR4kGzrWEJpXJ39ZB/seri-mats-program-winter-2022-cohort,SERI MATS Program - Winter 2022 Cohort,"['Ryan Kidd', 'Victor Warlop', 'Christian Smith']",2022-10-08T19:09:53Z,alignmentforum,, 27690,https://www.alignmentforum.org/posts/ybThg9nA7u6f8qfZZ/techniques-for-enhancing-human-feedback,Techniques for enhancing human feedback,"['abergal', 'Ajeya Cotra', 'Nick_Beckstead']",2021-10-29T07:27:47Z,alignmentforum,, 27716,https://www.alignmentforum.org/posts/hKMgCaAYS4hnanxBL/musings-on-general-systems-alignment,Musings on general systems alignment,['Alex Flint'],2021-06-30T18:16:27Z,alignmentforum,, 27733,https://www.alignmentforum.org/posts/LkytHQSKbQFf6toW5/anthropomorphisation-vs-value-learning-type-1-vs-type-2,Anthropomorphisation vs value learning: type 1 vs type 2 errors,['Stuart_Armstrong'],2020-09-22T10:46:49Z,alignmentforum,, 27748,https://www.alignmentforum.org/posts/u9CqcufkAJBwXdbx7/an-153-experiments-that-demonstrate-failures-of-objective,[AN #153]: Experiments that demonstrate failures of objective robustness,['Rohin Shah'],2021-06-26T17:10:03Z,alignmentforum,,
27771,https://www.alignmentforum.org/posts/87Y7w73phjBxnPyPD/safe-exploration-and-corrigibility,Safe exploration and corrigibility,['evhub'],2019-12-28T23:12:17Z,alignmentforum,, 27793,https://www.alignmentforum.org/posts/HiutLvY2x7zrsTQkx/possible-research-directions-to-improve-the-mechanistic,Possible research directions to improve the mechanistic explanation of neural networks,['delton137'],2021-11-09T02:36:31Z,alignmentforum,, 27822,https://www.alignmentforum.org/posts/5bd75cc58225bf0670374f8c/a-simple-model-of-the-loebstacle,A simple model of the Löbstacle,['orthonormal'],2015-06-11T16:23:22Z,alignmentforum,, 27833,https://www.alignmentforum.org/posts/behyPgMWFhXpKi73P/axrp-episode-6-debate-and-imitative-generalization-with-beth,AXRP Episode 6 - Debate and Imitative Generalization with Beth Barnes,['DanielFilan'],2021-04-08T21:20:13Z,alignmentforum,, 27863,https://www.alignmentforum.org/posts/gwdwukkc8NfpyPitw/why-are-counterfactuals-elusive-2,Why are counterfactuals elusive?,['Martín Soto'],2023-03-03T20:13:49Z,alignmentforum,, 27873,https://www.alignmentforum.org/posts/rCJQAkPTEypGjSJ8X/how-might-we-align-transformative-ai-if-it-s-developed-very,How might we align transformative AI if it’s developed very soon?,['HoldenKarnofsky'],2022-08-29T15:42:09Z,alignmentforum,, 27914,https://www.alignmentforum.org/posts/AaABQpuoNC8gpHf2n/a-barebones-guide-to-mechanistic-interpretability,A Barebones Guide to Mechanistic Interpretability Prerequisites,['Neel Nanda'],2022-10-24T20:45:28Z,alignmentforum,, 27927,https://www.alignmentforum.org/posts/TPy4RJvzogqqupDKk/a-survey-of-early-impact-measures,A Survey of Early Impact Measures,['Matthew Barnett'],2019-08-06T01:22:27Z,alignmentforum,, 27951,https://www.alignmentforum.org/posts/gAAFzqJkfeSHvcwTw/why-1-boxing-doesn-t-imply-backwards-causation,Why 1-boxing doesn't imply backwards causation,['Chris_Leong'],2021-03-25T02:32:13Z,alignmentforum,, 27966,https://www.alignmentforum.org/posts/MDSQEZeyakzAEKyGk/the-alignment-newsletter-12-06-25-18,The Alignment Newsletter #12: 06/25/18,['Rohin Shah'],2018-06-25T16:00:43Z,alignmentforum,, 27999,https://www.alignmentforum.org/posts/GpSzShaaf8po4rcmA/qapr-5-grokking-is-maybe-not-that-big-a-deal,QAPR 5: grokking is maybe not *that* big a deal?,['Quintin Pope'],2023-07-23T20:14:33Z,alignmentforum,, 28011,https://www.alignmentforum.org/posts/rgh4tdNrQyJYXyNs8/qapr-3-interpretability-guided-training-of-neural-nets,QAPR 3: interpretability-guided training of neural nets,['Quintin Pope'],2022-09-28T16:02:11Z,alignmentforum,, 28052,https://www.alignmentforum.org/posts/3onCb5ph3ywLQZMX2/alignment-newsletter-one-year-retrospective,Alignment Newsletter One Year Retrospective,['Rohin Shah'],2019-04-10T06:58:59Z,alignmentforum,, 28074,https://www.alignmentforum.org/posts/7jNveWML34EsjCD4c/safety-via-selection-for-obedience,Safety via selection for obedience,['Richard_Ngo'],2020-09-10T10:04:50Z,alignmentforum,, 28103,https://www.alignmentforum.org/posts/TSmgTGaLyhL965jX6/why-is-pseudo-alignment-worse-than-other-ways-ml-can-fail-to,"Why is pseudo-alignment ""worse"" than other ways ML can fail to generalize?",['nostalgebraist'],2020-07-18T22:54:51Z,alignmentforum,, 28120,https://www.alignmentforum.org/posts/yyoKYmfFx7zpPyD99/reward-model-hacking-as-a-challenge-for-reward-learning,Reward model hacking as a challenge for reward learning,['Erik Jenner'],2022-04-12T09:39:35Z,alignmentforum,, 28148,https://www.alignmentforum.org/posts/EmnvtFLnQBte66Ydh/mlsn-5-prize-compilation,[MLSN #5]: Prize Compilation,['Dan H'],2022-09-26T21:55:54Z,alignmentforum,,
28185,https://www.alignmentforum.org/posts/gdEDPHjCY5DKsMsvE/the-pragmascope-idea,The Pragmascope Idea,['johnswentworth'],2022-08-04T21:52:15Z,alignmentforum,, 28198,https://www.alignmentforum.org/posts/HSETWwdJnb45jsvT8/autonomy-the-missing-agi-ingredient,autonomy: the missing AGI ingredient?,['nostalgebraist'],2022-05-25T00:33:25Z,alignmentforum,, 28213,https://www.alignmentforum.org/posts/Yj9hW27sMJ4Hx4Bd4/an-149-the-newsletter-s-editorial-policy,[AN #149]: The newsletter's editorial policy,['Rohin Shah'],2021-05-05T17:10:03Z,alignmentforum,, 28243,https://www.alignmentforum.org/posts/vpdJz4k5BgGzuGo7A/intro-to-brain-like-agi-safety-9-takeaways-from-neuro-2-2-on,[Intro to brain-like-AGI safety] 9. Takeaways from neuro 2/2: On AGI motivation,['Steven Byrnes'],2022-03-23T12:48:18Z,alignmentforum,, 28270,https://www.alignmentforum.org/posts/a2Bxq4g2sPZwKiQmK/sticky-goals-a-concrete-experiment-for-understanding,Sticky goals: a concrete experiment for understanding deceptive alignment,['evhub'],2022-09-02T21:57:08Z,alignmentforum,, 28283,https://www.alignmentforum.org/posts/CvKnhXTu9BPcdKE4W/an-untrollable-mathematician-illustrated,An Untrollable Mathematician Illustrated,['abramdemski'],2018-03-20T00:00:00Z,alignmentforum,, 28299,https://www.alignmentforum.org/posts/WGEPBmErv8ufrq8Fc/teleosemantics,Teleosemantics!,['abramdemski'],2023-02-23T23:26:16Z,alignmentforum,, 28313,https://www.alignmentforum.org/posts/zxmzBTwKkPMxQQcfR/let-s-use-ai-to-harden-human-defenses-against-ai,Let’s use AI to harden human defenses against AI manipulation,['Tom Davidson'],2023-05-17T23:33:02Z,alignmentforum,, 28331,https://www.alignmentforum.org/posts/jJf4FrfiQdDGg7uco/the-telephone-theorem-information-at-a-distance-is-mediated,The Telephone Theorem: Information At A Distance Is Mediated By Deterministic Constraints,['johnswentworth'],2021-08-31T16:50:13Z,alignmentforum,, 28345,https://www.alignmentforum.org/posts/voLHQgNncnjjgAPH7/utility-maximization-description-length-minimization,Utility Maximization = Description Length Minimization,['johnswentworth'],2021-02-18T18:04:23Z,alignmentforum,, 28355,https://www.alignmentforum.org/posts/9fL22eBJMtyCLvL7j/soft-optimization-makes-the-value-target-bigger,Soft optimization makes the value target bigger,['Jeremy Gillen'],2023-01-02T16:06:50Z,alignmentforum,, 28374,https://www.alignmentforum.org/posts/8AjDwHp9pvZdm6ZEp/agency-and-the-unreliable-autonomous-car,Agency and the unreliable autonomous car,['Alex Flint'],2021-07-07T14:58:27Z,alignmentforum,, 28386,https://www.alignmentforum.org/posts/fDPsYdDtkzhBp9A8D/intro-to-brain-like-agi-safety-8-takeaways-from-neuro-1-2-on,[Intro to brain-like-AGI safety] 8. Takeaways from neuro 1/2: On AGI development,['Steven Byrnes'],2022-03-16T13:59:20Z,alignmentforum,,
28407,https://www.alignmentforum.org/posts/Hna2P8gcTyRgNDYBY/race-along-rashomon-ridge,Race Along Rashomon Ridge,"['Stephen Fowler', 'Peter S. Park', 'MichaelEinhorn']",2022-07-07T03:21:00Z,alignmentforum,, 28422,https://www.alignmentforum.org/posts/5bd75cc58225bf0670374ebe/multibit-reflective-oracles,Multibit reflective oracles,['Benya_Fallenstein'],2015-01-25T02:23:00Z,alignmentforum,, 28443,https://www.alignmentforum.org/posts/5bd75cc58225bf0670375416/acausal-trade-introduction,Acausal trade: Introduction,['Stuart_Armstrong'],2017-05-11T12:03:35Z,alignmentforum,, 28453,https://www.alignmentforum.org/posts/SRJ5J9Tnyq7bySxbt/answering-questions-honestly-given-world-model-mismatches,Answering questions honestly given world-model mismatches,['paulfchristiano'],2021-06-13T18:00:08Z,alignmentforum,, 28479,https://www.alignmentforum.org/posts/gYaKZeBbSL4y2RLP3/strategic-implications-of-ais-ability-to-coordinate-at-low,"Strategic implications of AIs' ability to coordinate at low cost, for example by merging",['Wei Dai'],2019-04-25T05:08:22Z,alignmentforum,, 28497,https://www.alignmentforum.org/posts/7GGRmAyMzqzidmBbi/alignment-as-constraints,Alignment as Constraints,['Logan Riggs'],2022-05-13T22:07:50Z,alignmentforum,, 28517,https://www.alignmentforum.org/posts/CPr8bRGekTyvh7nGC/infra-bayesian-physicalism-proofs-part-ii,Infra-Bayesian physicalism: proofs part II,['Vanessa Kosoy'],2021-11-30T22:27:05Z,alignmentforum,, 28540,https://www.alignmentforum.org/posts/N6vZEnCn6A95Xn39p/are-we-in-an-ai-overhang,Are we in an AI overhang?,['Andy Jones'],2020-07-27T12:48:05Z,alignmentforum,, 28553,https://www.alignmentforum.org/posts/sD6KuprcS3PFym2eM/three-kinds-of-competitiveness,Three Kinds of Competitiveness,['Daniel Kokotajlo'],2020-03-31T01:00:56Z,alignmentforum,, 28571,https://www.alignmentforum.org/posts/d2jgBurQygbXzhPxc/how-special-are-human-brains-among-animal-brains,How special are human brains among animal brains?,['zhukeepa'],2020-04-01T01:35:37Z,alignmentforum,, 28588,https://www.alignmentforum.org/posts/ursraZGcpfMjCXtnn/autointerpretation-finds-sparse-coding-beats-alternatives,AutoInterpretation Finds Sparse Coding Beats Alternatives,['Hoagy'],2023-07-17T01:41:17Z,alignmentforum,, 28606,https://www.alignmentforum.org/posts/atSHHCSP3NKBtqxes/what-sorts-of-systems-can-be-deceptive,What sorts of systems can be deceptive?,['Andrei Alexandru'],2022-10-31T22:00:27Z,alignmentforum,, 28625,https://www.alignmentforum.org/posts/RLHkSBQ7zmTzAjsio/nlp-position-paper-when-combatting-hype-proceed-with-caution,"NLP Position Paper: When Combatting Hype, Proceed with Caution",['Sam Bowman'],2021-10-15T20:57:46Z,alignmentforum,, 28642,https://www.alignmentforum.org/posts/6tjHf5ykvFqaNCErH/anthropic-s-responsible-scaling-policy-and-long-term-benefit,Anthropic's Responsible Scaling Policy & Long-Term Benefit Trust,['Zac Hatfield-Dodds'],2023-09-19T15:09:27Z,alignmentforum,, 28662,https://www.alignmentforum.org/posts/aymbce8ge9ve2C4Po/eis-xii-summary,EIS XII: Summary,['scasper'],2023-02-23T17:45:56Z,alignmentforum,, 28687,https://www.alignmentforum.org/posts/BwaxYiJ3ZmXHLoZJ6/supplement-to-big-picture-of-phasic-dopamine,"Supplement to ""Big picture of phasic dopamine""",['Steven Byrnes'],2021-06-08T13:08:01Z,alignmentforum,, 28702,https://www.alignmentforum.org/posts/rjvLpRzd8mqDyZmcF/comments-on-allan-dafoe-on-ai-governance,Comments on Allan Dafoe on AI Governance,['Alex Flint'],2021-11-29T16:16:03Z,alignmentforum,, 28718,https://www.alignmentforum.org/posts/cHd6xLX6qeNYQACat/the-alignment-problems-1,The Alignment Problems,['Martín Soto'],2023-01-12T22:29:27Z,alignmentforum,,
28734,https://www.alignmentforum.org/posts/Kr76XzME7TFkN937z/predictors-exist-cdt-going-bonkers-forever,Predictors exist: CDT going bonkers... forever,['Stuart_Armstrong'],2020-01-14T16:19:13Z,alignmentforum,, 28744,https://www.alignmentforum.org/posts/jJApGWG95495pYM7C/how-to-measure-flop-s-for-neural-networks-empirically,How to measure FLOP/s for Neural Networks empirically?,['Marius Hobbhahn'],2021-11-29T15:18:07Z,alignmentforum,, 28764,https://www.alignmentforum.org/posts/qusBXzCpxijTudvBB/my-agi-safety-research-2022-review-23-plans,"My AGI safety research—2022 review, ’23 plans",['Steven Byrnes'],2022-12-14T15:15:52Z,alignmentforum,, 28780,https://www.alignmentforum.org/posts/ejEgaYSaefCevapPa/critique-of-some-recent-philosophy-of-llms-minds,Critique of some recent philosophy of LLMs’ minds,['Roman Leventov'],2023-01-20T12:53:38Z,alignmentforum,, 28795,https://www.alignmentforum.org/posts/5bd75cc58225bf067037553c/strategy-nonconvexity-induced-by-a-choice-of-potential-oracles,Strategy Nonconvexity Induced by a Choice of Potential Oracles,['Diffractor'],2018-01-27T00:41:04Z,alignmentforum,, 28809,https://www.alignmentforum.org/posts/C8XTFtiA5xtje6957/deception-i-ain-t-got-time-for-that,Deception?! I ain’t got time for that!,['Paul Colognese'],2022-07-18T00:06:15Z,alignmentforum,, 28828,https://www.alignmentforum.org/posts/vCQNTuowPcnu6xqQN/distinguishing-test-from-training,Distinguishing test from training,['So8res'],2022-11-29T21:41:20Z,alignmentforum,, 28844,https://www.alignmentforum.org/posts/XGbWaA3gbDphajMHm/elk-proposal-make-the-reporter-care-about-the-predictor-s,ELK Proposal - Make the Reporter care about the Predictor’s beliefs,"['Adam Jermyn', 'Nicholas Schiefer']",2022-06-11T22:53:41Z,alignmentforum,, 28858,https://www.alignmentforum.org/posts/YWwzccGbcHMJMpT45/ai-safety-via-market-making,AI safety via market making,['evhub'],2020-06-26T23:07:27Z,alignmentforum,, 28884,https://www.alignmentforum.org/posts/Tpn2Fx9daLvj28kes/continuing-the-takeoffs-debate,Continuing the takeoffs debate,['Richard_Ngo'],2020-11-23T15:58:48Z,alignmentforum,, 28901,https://www.alignmentforum.org/posts/pLZ3bdeng4u5W8Yft/let-s-talk-about-convergent-rationality-1,"Let's talk about ""Convergent Rationality""",['David Scott Krueger (formerly: capybaralet)'],2019-06-12T21:53:35Z,alignmentforum,, 28922,https://www.alignmentforum.org/posts/iALu99gYbodt4mLqg/should-we-rely-on-the-speed-prior-for-safety,Should we rely on the speed prior for safety?,['Marc-Everin Carauleanu'],2021-12-14T20:45:02Z,alignmentforum,, 28938,https://www.alignmentforum.org/posts/N2NebPD78ioyWHhNm/some-existing-selection-theorems,Some Existing Selection Theorems,['johnswentworth'],2021-09-30T16:13:18Z,alignmentforum,, 28961,https://www.alignmentforum.org/posts/oiftkZnFBqyHGALwv/agents-as-p-b-chain-reactions,Agents as P₂B Chain Reactions,['Daniel Kokotajlo'],2021-12-04T21:35:06Z,alignmentforum,, 28979,https://www.alignmentforum.org/posts/Qup9gorqpd9qKAEav/200-cop-in-mi-studying-learned-features-in-language-models,200 COP in MI: Studying Learned Features in Language Models,['Neel Nanda'],2023-01-19T03:48:24Z,alignmentforum,, 29000,https://www.alignmentforum.org/posts/tdENX8dzdro8PXAzP/short-remark-on-the-subjective-mathematical-naturalness-of,Short Remark on the (subjective) mathematical 'naturalness' of the Nanda--Lieberum addition modulo 113 algorithm,['Spencer Becker-Kahn'],2023-06-01T11:31:38Z,alignmentforum,,
29015,https://www.alignmentforum.org/posts/dYnHLWMXCYdm9xu5j/simulator-framing-and-confusions-about-llms,'simulator' framing and confusions about LLMs,['Beth Barnes'],2022-12-31T23:38:57Z,alignmentforum,, 29038,https://www.alignmentforum.org/posts/oiuZjPfknKsSc5waC/commentary-on-agi-safety-from-first-principles,Commentary on AGI Safety from First Principles,['Richard_Ngo'],2020-11-23T21:37:31Z,alignmentforum,, 29069,https://www.alignmentforum.org/posts/Av3frxNy3y3i2kpaa/does-circuit-analysis-interpretability-scale-evidence-from,Does Circuit Analysis Interpretability Scale? Evidence from Multiple Choice Capabilities in Chinchilla,"['Neel Nanda', 'Tom Lieberum', 'Matthew Rahtz', 'János Kramár', 'Geoffrey Irving', 'Rohin Shah', 'Vlad Mikulik']",2023-07-20T10:50:59Z,alignmentforum,, 29084,https://www.alignmentforum.org/posts/HiufALieNbWHqR9en/you-only-get-one-shot-an-intuition-pump-for-embedded-agency,You Only Get One Shot: an Intuition Pump for Embedded Agency,['Oliver Sourbut'],2022-06-09T21:38:24Z,alignmentforum,, 29093,https://www.alignmentforum.org/posts/CoZhXrhpQxpy9xw9y/where-i-agree-and-disagree-with-eliezer,Where I agree and disagree with Eliezer,['paulfchristiano'],2022-06-19T19:15:56Z,alignmentforum,, 29129,https://www.alignmentforum.org/posts/s5jrfbsGLyEexh4GT/elicit-language-models-as-research-assistants,Elicit: Language Models as Research Assistants,"['stuhlmueller', 'jungofthewon']",2022-04-09T14:56:38Z,alignmentforum,, 29164,https://www.alignmentforum.org/posts/PrLnptfNDg2wBWNyb/paper-the-capacity-for-moral-self-correction-in-large,Paper: The Capacity for Moral Self-Correction in Large Language Models (Anthropic),['LawrenceC'],2023-02-16T19:47:21Z,alignmentforum,, 29182,https://www.alignmentforum.org/posts/Sj9YurD9vwpfPErs2/an-98-understanding-neural-net-training-by-seeing-which,[AN #98]: Understanding neural net training by seeing which gradients were helpful,['Rohin Shah'],2020-05-06T17:10:03Z,alignmentforum,,
29206,https://www.alignmentforum.org/posts/KeHGinpj2WyzDEQAx/5-risks-from-preventing-legitimate-value-change-value,5. Risks from preventing legitimate value change (value collapse),['Nora_Ammann'],2023-10-26T14:38:35Z,alignmentforum,, 29226,https://www.alignmentforum.org/posts/DiEWbwrChuzuhJhGr/benchmark-for-successful-concept-extrapolation-avoiding-goal,Benchmark for successful concept extrapolation/avoiding goal misgeneralization,['Stuart_Armstrong'],2022-07-04T20:48:15Z,alignmentforum,, 29238,https://www.alignmentforum.org/posts/wgHbNZHsqfiXiqofd/anthropics-and-fermi-grabby-visible-zoo-keeping-and-early,"Anthropics and Fermi: grabby, visible, zoo-keeping, and early aliens",['Stuart_Armstrong'],2021-07-08T15:07:31Z,alignmentforum,, 29257,https://www.alignmentforum.org/posts/ebYiodG3MAEqskCDG/a-survey-of-tool-use-and-workflows-in-alignment-research-1,A survey of tool use and workflows in alignment research,"['Logan Riggs', 'Jan', 'janus', 'jacquesthibs']",2022-03-23T23:44:30Z,alignmentforum,, 29282,https://www.alignmentforum.org/posts/tYbusKv4Yci3GaBiM/take-11-aligning-language-models-should-be-weirder,"Take 11: ""Aligning language models"" should be weirder.",['Charlie Steiner'],2022-12-18T14:14:54Z,alignmentforum,, 29303,https://www.alignmentforum.org/posts/dkruhqAEhXnbAk7iJ/the-accumulation-of-knowledge-literature-review,The accumulation of knowledge: literature review,['Alex Flint'],2021-07-10T18:36:18Z,alignmentforum,, 29319,https://www.alignmentforum.org/posts/hePucCfKyiRHECz3e/finite-factored-sets-inferring-time,Finite Factored Sets: Inferring Time,['Scott Garrabrant'],2021-08-31T21:18:36Z,alignmentforum,, 29335,https://www.alignmentforum.org/posts/buhaT2pxsfLrknzxT/preprint-the-computational-limits-of-deep-learning,[Preprint] The Computational Limits of Deep Learning,['Gordon Seidoh Worley'],2020-07-21T21:25:57Z,alignmentforum,, 29354,https://www.alignmentforum.org/posts/RnxkAiGcQpfErjHYT/underlying-model-of-an-imperfect-morphism,Underlying model of an imperfect morphism,['Stuart_Armstrong'],2021-07-16T13:13:10Z,alignmentforum,, 29368,https://www.alignmentforum.org/posts/PfcQguFpT8CDHcozj/finite-factored-sets-in-pictures-6,Finite Factored Sets in Pictures,['Magdalena Wache'],2022-12-11T18:49:00Z,alignmentforum,, 29384,https://www.alignmentforum.org/posts/WXvt8bxYnwBYpy9oT/the-main-sources-of-ai-risk,The Main Sources of AI Risk?,"['Daniel Kokotajlo', 'Wei Dai']",2019-03-21T18:28:33Z,alignmentforum,, 29417,https://www.alignmentforum.org/posts/H4bAKBwnFktvoLtp4/chatgpt-struggles-to-respond-to-the-real-world,ChatGPT struggles to respond to the real world,['Alex Flint'],2023-01-12T16:02:56Z,alignmentforum,, 29433,https://www.alignmentforum.org/posts/gToGqwS9z2QFvwJ7b/an-103-arches-an-agenda-for-existential-safety-and-combining,"[AN #103]: ARCHES: an agenda for existential safety, and combining natural language with deep RL",['Rohin Shah'],2020-06-10T17:20:02Z,alignmentforum,, 29461,https://www.alignmentforum.org/posts/hLKKH9CM6NDiJBabC/what-are-the-most-plausible-ai-safety-warning-shot-scenarios,"What are the most plausible ""AI Safety warning shot"" scenarios?",['Daniel Kokotajlo'],2020-03-26T20:59:58Z,alignmentforum,, 29473,https://www.alignmentforum.org/posts/EzoCZjTdWTMgacKGS/clr-s-recent-work-on-multi-agent-systems,CLR's recent work on multi-agent systems,['JesseClifton'],2021-03-09T02:28:48Z,alignmentforum,,
29505,https://www.alignmentforum.org/posts/bJdaB2Mz4mBvwFBeb/what-i-talk-about-when-i-talk-about-ai-x-risk-3-core-claims-1,What I talk about when I talk about AI x-risk: 3 core claims I want machine learning researchers to address.,['David Scott Krueger (formerly: capybaralet)'],2019-12-02T18:20:48Z,alignmentforum,, 29526,https://www.alignmentforum.org/posts/hxbjEgjNTSbdXqDFE/additive-and-multiplicative-subagents,Additive and Multiplicative Subagents,['Scott Garrabrant'],2020-11-06T14:26:36Z,alignmentforum,, 29546,https://www.alignmentforum.org/posts/Et2pWrj4nWfdNAawh/what-specific-dangers-arise-when-asking-gpt-n-to-write-an,What specific dangers arise when asking GPT-N to write an Alignment Forum post?,['Matthew Barnett'],2020-07-28T02:56:13Z,alignmentforum,, 29561,https://www.alignmentforum.org/posts/k43v47eQjaj6fY7LE/solving-the-mechanistic-interpretability-challenges-eis-vii-1,Solving the Mechanistic Interpretability challenges: EIS VII Challenge 2,"['StefanHex', 'Marius Hobbhahn']",2023-05-25T15:37:55Z,alignmentforum,, 29577,https://www.alignmentforum.org/posts/S3W4Xrmp6AL7nxRHd/formalising-decision-theory-is-hard,Formalising decision theory is hard,['Lukas Finnveden'],2019-08-23T03:27:25Z,alignmentforum,, 29596,https://www.alignmentforum.org/posts/Jo2LWuuGEGHHfGZCM/naturalism-and-ai-alignment,Naturalism and AI alignment,['Michele Campolo'],2021-04-24T16:16:16Z,alignmentforum,, 29616,https://www.alignmentforum.org/posts/4783ufKpx8xvLMPc6/human-ai-interaction,Human-AI Interaction,['Rohin Shah'],2019-01-15T01:57:16Z,alignmentforum,, 29639,https://www.alignmentforum.org/posts/aEtc5GgqJGFtTH2kQ/the-big-picture-of-alignment-talk-part-2-1,The Big Picture Of Alignment (Talk Part 2),['johnswentworth'],2022-02-25T02:53:23Z,alignmentforum,, 29655,https://www.alignmentforum.org/posts/yf4KcTyk2hoXZh9x4/beliefs-at-different-timescales,Beliefs at different timescales,['Nisan'],2018-11-04T20:10:59Z,alignmentforum,, 29665,https://www.alignmentforum.org/posts/hLFD6qSN9MmQxKjG5/embedded-agency-via-abstraction,Embedded Agency via Abstraction,['johnswentworth'],2019-08-26T23:03:50Z,alignmentforum,, 29694,https://www.alignmentforum.org/posts/tZwaGp5wMQqKh3krz/dslt-3-neural-networks-are-singular,DSLT 3. Neural Networks are Singular,['Liam Carroll'],2023-06-20T08:20:56Z,alignmentforum,, 29718,https://www.alignmentforum.org/posts/63nvBi4ooCAsKCphw/alignment-newsletter-16-07-23-18,Alignment Newsletter #16: 07/23/18,['Rohin Shah'],2018-07-23T16:20:03Z,alignmentforum,, 29745,https://www.alignmentforum.org/posts/mJ5oNYnkYrd4sD5uE/clarifying-some-key-hypotheses-in-ai-alignment,Clarifying some key hypotheses in AI alignment,"['Ben Cottier', 'Rohin Shah']",2019-08-15T21:29:07Z,alignmentforum,, 29768,https://www.alignmentforum.org/posts/vzLNnc59LyzqDCyJC/causal-representation-learning-as-a-technique-to-prevent,Causal representation learning as a technique to prevent goal misgeneralization,['PabloAMC'],2023-01-04T00:07:30Z,alignmentforum,, 29787,https://www.alignmentforum.org/posts/fopZesxLCGAXqqaPv/don-t-align-agents-to-evaluations-of-plans,Don't align agents to evaluations of plans,['TurnTrout'],2022-11-26T21:16:23Z,alignmentforum,, 29803,https://www.alignmentforum.org/posts/EjX63wQoMSoCHMrmY/announcing-ai-safety-mentors-and-mentees,Announcing AI safety Mentors and Mentees,['Marius Hobbhahn'],2022-11-23T15:21:13Z,alignmentforum,,
29821,https://www.alignmentforum.org/posts/CuBKm8bkfWhYegcw8/google-ai-integrates-palm-with-robotics-saycan-update,Google AI integrates PaLM with robotics: SayCan update [Linkpost],['Evan R. Murphy'],2022-08-24T20:54:34Z,alignmentforum,, 29840,https://www.alignmentforum.org/posts/4az2cFrJp3ya4y6Wx/resources-for-ai-alignment-cartography,Resources for AI Alignment Cartography,['Gyrodiot'],2020-04-04T14:20:11Z,alignmentforum,, 29857,https://www.alignmentforum.org/posts/xdSDFQs4aC5GrdHNZ/the-big-picture-of-alignment-talk-part-1,The Big Picture Of Alignment (Talk Part 1),['johnswentworth'],2022-02-21T05:49:35Z,alignmentforum,, 29878,https://www.alignmentforum.org/posts/5bd75cc58225bf0670374f11/reflective-oracles-and-the-procrastination-paradox,Reflective oracles and the procrastination paradox,['jessicata'],2015-03-26T22:18:15Z,alignmentforum,, 29889,https://www.alignmentforum.org/posts/HAz7apopTzozrqW2k/strategy-for-conditioning-generative-models,Strategy For Conditioning Generative Models,"['james.lucassen', 'evhub']",2022-09-01T04:34:17Z,alignmentforum,, 29914,https://www.alignmentforum.org/posts/YDSjSpD7yBoivMHay/usd500-bounty-prize-problem-channel-capacity-using,"$500 Bounty/Prize Problem: Channel Capacity Using ""Insensitive"" Functions",['johnswentworth'],2023-05-16T21:31:35Z,alignmentforum,, 29924,https://www.alignmentforum.org/posts/n3w3ww9Xuf8SngBfE/replacement-for-ponr-concept,Replacement for PONR concept,['Daniel Kokotajlo'],2022-09-02T00:09:46Z,alignmentforum,, 29939,https://www.alignmentforum.org/posts/SQ9cZtfrzDJmw9A2m/my-overview-of-the-ai-alignment-landscape-a-bird-s-eye-view,My Overview of the AI Alignment Landscape: A Bird's Eye View,['Neel Nanda'],2021-12-15T23:44:32Z,alignmentforum,, 29976,https://www.alignmentforum.org/posts/9vwekjD6xyuePX7Zr/contrast-pairs-drive-the-empirical-performance-of-contrast,Contrast Pairs Drive the Empirical Performance of Contrast Consistent Search (CCS),['Scott Emmons'],2023-05-31T17:09:02Z,alignmentforum,, 29991,https://www.alignmentforum.org/posts/idipkijjz5PoxAwju/warning-shots-probably-wouldn-t-change-the-picture-much,Warning Shots Probably Wouldn't Change The Picture Much,['So8res'],2022-10-06T05:15:39Z,alignmentforum,, 30009,https://www.alignmentforum.org/posts/xQYF3LR64NYn8vkoy/proofs-section-2-1-theorem-1-lemmas,"Proofs Section 2.1 (Theorem 1, Lemmas)",['Diffractor'],2020-08-27T07:55:00Z,alignmentforum,, 30025,https://www.alignmentforum.org/posts/nbq2bWLcYmSGup9aF/a-transparency-and-interpretability-tech-tree,A transparency and interpretability tech tree,['evhub'],2022-06-16T23:44:15Z,alignmentforum,, 30051,https://www.alignmentforum.org/posts/w2TAEvME2yAG9MHeq/gradient-hacking-is-extremely-difficult,Gradient hacking is extremely difficult,['beren'],2023-01-24T15:45:47Z,alignmentforum,, 30075,https://www.alignmentforum.org/posts/nsygJvidfgidmgKqX/axrp-episode-20-reform-ai-alignment-with-scott-aaronson,AXRP Episode 20 - ‘Reform’ AI Alignment with Scott Aaronson,['DanielFilan'],2023-04-12T21:30:07Z,alignmentforum,, 30097,https://www.alignmentforum.org/posts/a7YgzDYx4FhdB3TmR/an-155-a-minecraft-benchmark-for-algorithms-that-learn,[AN #155]: A Minecraft benchmark for algorithms that learn without reward functions,['Rohin Shah'],2021-07-08T17:20:03Z,alignmentforum,, 30126,https://www.alignmentforum.org/posts/vLepnCxCWW6YTw8eW/competitive-safety-via-gradated-curricula,Competitive safety via gradated curricula,['Richard_Ngo'],2020-05-05T18:11:08Z,alignmentforum,, 30144,https://www.alignmentforum.org/posts/QmdrkuArFHphqANRE/on-the-risks-of-emergent-behavior-in-foundation-models,On The Risks of Emergent Behavior in Foundation Models,['jsteinhardt'],2021-10-18T20:00:16Z,alignmentforum,,
30168,https://www.alignmentforum.org/posts/L896Fp8hLSbh8Ryei/axrp-episode-15-natural-abstractions-with-john-wentworth,AXRP Episode 15 - Natural Abstractions with John Wentworth,['DanielFilan'],2022-05-23T05:40:19Z,alignmentforum,, 30198,https://www.alignmentforum.org/posts/AdXzZDoYFqHCfupDB/a-note-on-semiotic-physics,A note on 'semiotic physics',['metasemi'],2023-02-11T05:12:31Z,alignmentforum,, 30210,https://www.alignmentforum.org/posts/SPa6YYyeam2exxPMy/alignment-newsletter-30,Alignment Newsletter #30,['Rohin Shah'],2018-10-29T16:10:02Z,alignmentforum,, 30237,https://www.alignmentforum.org/posts/HqLxuZ4LhaFhmAHWk/iterated-distillation-and-amplification-1,Iterated Distillation and Amplification,['Ajeya Cotra'],2018-11-30T04:47:14Z,alignmentforum,, 30253,https://www.alignmentforum.org/posts/Zi7nmuSmBFbQWgFBa/infra-bayesianism-unwrapped,Infra-Bayesianism Unwrapped,['adamShimi'],2021-01-20T13:35:04Z,alignmentforum,, 30281,https://www.alignmentforum.org/posts/6hdxTTPWF2iAbXjAb/suggestions-of-posts-on-the-af-to-review,Suggestions of posts on the AF to review,['adamShimi'],2021-02-16T12:40:53Z,alignmentforum,, 30292,https://www.alignmentforum.org/posts/CjFZeDD6iCnNubDoS/humans-provide-an-untapped-wealth-of-evidence-about,Humans provide an untapped wealth of evidence about alignment,"['TurnTrout', 'Quintin Pope']",2022-07-14T02:31:49Z,alignmentforum,, 30308,https://www.alignmentforum.org/posts/aKoeYyHZTRu8izMwA/what-makes-a-good-measurement-device,What Makes A Good Measurement Device?,['johnswentworth'],2022-08-24T22:45:06Z,alignmentforum,, 30317,https://www.alignmentforum.org/posts/X7k23zk9aBjjpgLd3/dutch-booking-cdt-revised-argument,Dutch-Booking CDT: Revised Argument,['abramdemski'],2020-10-27T04:31:16Z,alignmentforum,, 30337,https://www.alignmentforum.org/posts/LBzFCPbG5s95mf43M/where-i-currently-disagree-with-ryan-greenblatt-s-version-of,Where I currently disagree with Ryan Greenblatt’s version of the ELK approach,['So8res'],2022-09-29T21:18:44Z,alignmentforum,, 30360,https://www.alignmentforum.org/posts/KnPN7ett8RszE79PH/demons-in-imperfect-search,Demons in Imperfect Search,['johnswentworth'],2020-02-11T20:25:20Z,alignmentforum,, 30377,https://www.alignmentforum.org/posts/xuYdCDgoBno5haJB6/an-extended-rocket-alignment-analogy,An extended rocket alignment analogy,['remember'],2022-08-13T18:22:04Z,alignmentforum,, 30391,https://www.alignmentforum.org/posts/uMQ3cqWDPHhjtiesc/agi-ruin-a-list-of-lethalities,AGI Ruin: A List of Lethalities,['Eliezer Yudkowsky'],2022-06-05T22:05:52Z,alignmentforum,, 30420,https://www.alignmentforum.org/posts/WhETfFgkfNSTShc4y/breaking-down-goal-directed-behaviour,Breaking Down Goal-Directed Behaviour,['Oliver Sourbut'],2022-06-16T18:45:12Z,alignmentforum,, 30431,https://www.alignmentforum.org/posts/vayxfTSQEDtwhPGpW/refactoring-alignment-attempt-2,Refactoring Alignment (attempt #2),['abramdemski'],2021-07-26T20:12:15Z,alignmentforum,, 30454,https://www.alignmentforum.org/posts/FebeDdcToayY6rHSf/how-bomai-might-fail,How BoMAI Might fail,['Donald Hobson'],2022-04-07T15:32:23Z,alignmentforum,, 30478,https://www.alignmentforum.org/posts/EaZghEwcCJRAuee66/my-thoughts-on-the-social-response-to-ai-risk,My thoughts on the social response to AI risk,['Matthew Barnett'],2023-11-01T21:17:08Z,alignmentforum,, 30506,https://www.alignmentforum.org/posts/qLaShfcnXGnYeKFJW/the-promise-and-peril-of-finite-sets,The Promise and Peril of Finite Sets,['davidad'],2021-12-10T12:29:57Z,alignmentforum,, 
30527,https://www.alignmentforum.org/posts/3JRBqRtHBDyPE3sGa/a-case-for-the-least-forgiving-take-on-alignment,A Case for the Least Forgiving Take On Alignment,['Thane Ruthenis'],2023-05-02T21:34:50Z,alignmentforum,, 30541,https://www.alignmentforum.org/posts/9zpT9dikrrebdq3Jf/will-humans-build-goal-directed-agents,Will humans build goal-directed agents?,['Rohin Shah'],2019-01-05T01:33:37Z,alignmentforum,, 30566,https://www.alignmentforum.org/posts/d2n74bwham8motxyX/optimization-at-a-distance,Optimization at a Distance,['johnswentworth'],2022-05-16T17:58:25Z,alignmentforum,, 30581,https://www.alignmentforum.org/posts/cYJqGWuBwymLdFpLT/non-unitary-quantum-logic-seri-mats-research-sprint,Non-Unitary Quantum Logic -- SERI MATS Research Sprint,['Yegreg'],2023-02-16T19:31:51Z,alignmentforum,, 30594,https://www.alignmentforum.org/posts/WJzsTmsDctYCCyMfy/humans-are-embedded-agents-too,Humans Are Embedded Agents Too,['johnswentworth'],2019-12-23T19:21:16Z,alignmentforum,, 30613,https://www.alignmentforum.org/posts/5bd75cc58225bf06703753a9/formal-open-problem-in-decision-theory,Formal Open Problem in Decision Theory,['Scott Garrabrant'],2018-11-29T03:25:46Z,alignmentforum,, 30624,https://www.alignmentforum.org/posts/hsf7tQgjTZfHjiExn/my-take-on-jacob-cannell-s-take-on-agi-safety,My take on Jacob Cannell’s take on AGI safety,['Steven Byrnes'],2022-11-28T14:01:16Z,alignmentforum,, 30654,https://www.alignmentforum.org/posts/MhudbfBNQcMxBBvj8/there-should-be-more-ai-safety-orgs,There should be more AI safety orgs,['Marius Hobbhahn'],2023-09-21T14:53:53Z,alignmentforum,, 30676,https://www.alignmentforum.org/posts/vQNJrJqebXEWjJfnz/a-note-about-differential-technological-development,A note about differential technological development,['So8res'],2022-07-15T04:46:53Z,alignmentforum,, 30692,https://www.alignmentforum.org/posts/vDGvHBDuMtcPd8Lks/public-static-what-is-abstraction,Public Static: What is Abstraction?,['johnswentworth'],2020-06-09T18:36:50Z,alignmentforum,, 30706,https://www.alignmentforum.org/posts/jnyTqPRHwcieAXgrA/finding-goals-in-the-world-model,Finding Goals in the World Model,"['Jeremy Gillen', 'JamesH', 'Thomas Larsen']",2022-08-22T18:06:48Z,alignmentforum,, 30726,https://www.alignmentforum.org/posts/EHbJ69JDs4suovpLw/testing-palm-prompts-on-gpt3,Testing PaLM prompts on GPT3,['Yitz'],2022-04-06T05:21:07Z,alignmentforum,, 30742,https://www.alignmentforum.org/posts/2mhFMgtAjFJesaSYR/2-d-robustness,2-D Robustness,['vlad_m'],2019-08-30T20:27:34Z,alignmentforum,, 30759,https://www.alignmentforum.org/posts/4iPBctHSeHx8AkS6Z/the-steering-problem,The Steering Problem,['paulfchristiano'],2018-11-13T17:14:57Z,alignmentforum,, 30776,https://www.alignmentforum.org/posts/due5BtsbpTzSZbeKT/logical-counterfactuals-and-proposition-graphs-part-2,"Logical Counterfactuals and Proposition graphs, Part 2",['Donald Hobson'],2019-08-31T20:58:13Z,alignmentforum,, 30785,https://www.alignmentforum.org/posts/vcrXS5DmvBuJaKucp/axrp-episode-11-attainable-utility-and-power-with-alex,AXRP Episode 11 - Attainable Utility and Power with Alex Turner,['DanielFilan'],2021-09-25T21:10:27Z,alignmentforum,, 30820,https://www.alignmentforum.org/posts/iGs2jHc6Mcm3jtefk/ai-problems-shared-by-non-ai-systems,AI Problems Shared by Non-AI Systems,['VojtaKovarik'],2020-12-05T22:15:28Z,alignmentforum,, 30842,https://www.alignmentforum.org/posts/Wigdk6s4xsC8fqhYT/avoiding-xrisk-from-ai-doesn-t-mean-focusing-on-ai-xrisk,Avoiding xrisk from AI doesn't mean focusing on AI xrisk,['Stuart_Armstrong'],2023-05-02T19:27:32Z,alignmentforum,, 
30858,https://www.alignmentforum.org/posts/zFwie6AoPyqGmMSsc/an-147-an-overview-of-the-interpretability-landscape,[AN #147]: An overview of the interpretability landscape,['Rohin Shah'],2021-04-21T17:10:04Z,alignmentforum,, 30877,https://www.alignmentforum.org/posts/ALvnz3DrjHwmLG29F/values-valence-and-alignment,"Values, Valence, and Alignment",['Gordon Seidoh Worley'],2019-12-05T21:06:33Z,alignmentforum,, 30891,https://www.alignmentforum.org/posts/jP9cKxqwqk2qQ6HiM/towards-deconfusing-wireheading-and-reward-maximization,Towards deconfusing wireheading and reward maximization,['leogao'],2022-09-21T00:36:43Z,alignmentforum,, 30907,https://www.alignmentforum.org/posts/KuKaQEu7JjBNzcoj5/explicitness,Explicitness,['TsviBT'],2023-06-12T15:05:05Z,alignmentforum,, 30922,https://www.alignmentforum.org/posts/LDRQ5Zfqwi8GjzPYG/counterarguments-to-the-basic-ai-x-risk-case,Counterarguments to the basic AI x-risk case,['KatjaGrace'],2022-10-14T13:00:06Z,alignmentforum,, 30955,https://www.alignmentforum.org/posts/qrWFvMnRm4SkKnpRZ/formulating-reductive-agency-in-causal-models,Formulating Reductive Agency in Causal Models,['johnswentworth'],2020-01-23T17:03:45Z,alignmentforum,, 30968,https://www.alignmentforum.org/posts/NaZPjaLPCGZWdTyrL/sudt-a-toy-decision-theory-for-updateless-anthropics,SUDT: A toy decision theory for updateless anthropics,['Benya'],2014-02-23T23:50:38Z,alignmentforum,, 30980,https://www.alignmentforum.org/posts/Ky3WnDwQbLAucGrXf/exploratory-analysis-of-rlhf-transformers-with,Exploratory Analysis of RLHF Transformers with TransformerLens,['Curt Tigges'],2023-04-03T16:09:13Z,alignmentforum,, 31000,https://www.alignmentforum.org/posts/H5iePjNKaaYQyZpgR/request-for-proposals-for-projects-in-ai-alignment-that-work,Request for proposals for projects in AI alignment that work with deep learning systems,"['abergal', 'Nick_Beckstead']",2021-10-29T07:26:59Z,alignmentforum,, 31029,https://www.alignmentforum.org/posts/cejX4S3Dex3C2dt79/call-for-contributors-to-the-alignment-newsletter,Call for contributors to the Alignment Newsletter,['Rohin Shah'],2019-08-21T18:21:31Z,alignmentforum,, 31044,https://www.alignmentforum.org/posts/98jCNefEaBBb7jwu6/building-a-transformer-from-scratch-ai-safety-up-skilling,Building a transformer from scratch - AI safety up-skilling challenge,['Marius Hobbhahn'],2022-10-12T15:40:11Z,alignmentforum,,
31059,https://www.alignmentforum.org/posts/pnqkGiGcshtgF2fnQ/working-towards-ai-alignment-is-better,Working towards AI alignment is better,['Johannes C. Mayer'],2022-12-09T15:39:08Z,alignmentforum,, 31069,https://www.alignmentforum.org/posts/DEDcFw6zWfW9nb2YM/how-to-throw-away-information,How to Throw Away Information,['johnswentworth'],2019-09-05T21:10:07Z,alignmentforum,, 31081,https://www.alignmentforum.org/posts/qHvHvBR8L6oycnMXe/we-re-redwood-research-we-do-applied-alignment-research-ama,"We're Redwood Research, we do applied alignment research, AMA",['Nate Thomas'],2021-10-06T05:51:59Z,alignmentforum,, 31099,https://www.alignmentforum.org/posts/D7PumeYTDPfBTp3i7/the-waluigi-effect-mega-post,The Waluigi Effect (mega-post),['Cleo Nardo'],2023-03-03T03:22:09Z,alignmentforum,, 31120,https://www.alignmentforum.org/posts/nzmCvRvPm4xJuqztv/deepmind-is-hiring-for-the-scalable-alignment-and-alignment,DeepMind is hiring for the Scalable Alignment and Alignment Teams,"['Rohin Shah', 'Geoffrey Irving']",2022-05-13T12:17:13Z,alignmentforum,, 31154,https://www.alignmentforum.org/posts/G2Lne2Fi7Qra5Lbuf/selection-theorems-a-program-for-understanding-agents,Selection Theorems: A Program For Understanding Agents,['johnswentworth'],2021-09-28T05:03:19Z,alignmentforum,, 31167,https://www.alignmentforum.org/posts/sEqu6jMgnHG2fvaoQ/simplified-preferences-needed-simplified-preferences,Simplified preferences needed; simplified preferences sufficient,['Stuart_Armstrong'],2019-03-05T19:39:55Z,alignmentforum,, 31176,https://www.alignmentforum.org/posts/wCtegGaWxttfKZsfx/we-don-t-understand-what-happened-with-culture-enough,We don't understand what happened with culture enough,['Jan_Kulveit'],2023-10-09T09:54:20Z,alignmentforum,, 31189,https://www.alignmentforum.org/posts/xPeWJaAzp2LeDdP4Z/extracting-money-from-causal-decision-theorists,Extracting Money from Causal Decision Theorists,['Caspar Oesterheld'],2021-01-28T17:58:44Z,alignmentforum,, 31206,https://www.alignmentforum.org/posts/Wap8sSDoiigrJibHA/garrabrant-and-shah-on-human-modeling-in-agi,Garrabrant and Shah on human modeling in AGI,['Rob Bensinger'],2021-08-04T04:35:11Z,alignmentforum,, 31233,https://www.alignmentforum.org/posts/zdeYiQgwYRs2bEmCK/applying-overoptimization-to-selection-vs-control-optimizing,"Applying Overoptimization to Selection vs. Control (Optimizing and Goodhart Effects - Clarifying Thoughts, Part 3)",['Davidmanheim'],2019-07-28T09:32:26Z,alignmentforum,, 31254,https://www.alignmentforum.org/posts/cqRGZisKbpSjgaJbc/an-ov-coherent-toy-model-of-attention-head-superposition-1,An OV-Coherent Toy Model of Attention Head Superposition,"['LaurenGreenspan', 'keith_wynroe']",2023-08-29T19:44:11Z,alignmentforum,, 31271,https://www.alignmentforum.org/posts/pxiaLFjyr4WPmFdcm/take-2-building-tools-to-help-build-fai-is-a-legitimate,"Take 2: Building tools to help build FAI is a legitimate strategy, but it's dual-use.",['Charlie Steiner'],2022-12-03T00:54:03Z,alignmentforum,, 31287,https://www.alignmentforum.org/posts/zAwvyBJJNu4vHWvfk/maps-and-blueprint-the-two-sides-of-the-alignment-equation,Maps and Blueprint; the Two Sides of the Alignment Equation,['Nora_Ammann'],2022-10-25T16:29:40Z,alignmentforum,, 31303,https://www.alignmentforum.org/posts/EoY6P6mpz7ZozhAxm/an-79-recursive-reward-modeling-as-an-alignment-technique,[AN #79]: Recursive reward modeling as an alignment technique integrated with deep RL,['Rohin Shah'],2020-01-01T18:00:02Z,alignmentforum,,
31335,https://www.alignmentforum.org/posts/zXibERtEWpKuG5XAC/intro-to-brain-like-agi-safety-7-from-hardcoded-drives-to,[Intro to brain-like-AGI safety] 7. From hardcoded drives to foresighted plans: A worked example,['Steven Byrnes'],2022-03-09T14:28:20Z,alignmentforum,, 31356,https://www.alignmentforum.org/posts/KoEY9CjrKe93ErYhd/self-confirming-predictions-can-be-arbitrarily-bad,Self-confirming predictions can be arbitrarily bad,['Stuart_Armstrong'],2019-05-03T11:34:47Z,alignmentforum,, 31370,https://www.alignmentforum.org/posts/SwcyMEgLyd4C3Dern/the-parable-of-predict-o-matic,The Parable of Predict-O-Matic,['abramdemski'],2019-10-15T00:49:20Z,alignmentforum,, 31391,https://www.alignmentforum.org/posts/HjFAp4RiaaGqH6gNC/how-are-you-dealing-with-ontology-identification,How are you dealing with ontology identification?,['Erik Jenner'],2022-10-04T23:28:27Z,alignmentforum,, 31408,https://www.alignmentforum.org/posts/HD2s4mj4fsx6WtFAR/two-problems-with-simulators-as-a-frame,Two problems with ‘Simulators’ as a frame,['ryan_greenblatt'],2023-02-17T23:34:21Z,alignmentforum,, 31425,https://www.alignmentforum.org/posts/rxMmhJbNbuywFqcBk/apologies-for-alignment-forum-server-outage-last-night,(apologies for Alignment Forum server outage last night),['Ruby'],2021-08-25T14:45:07Z,alignmentforum,, 31434,https://www.alignmentforum.org/posts/28zsuPaJpKAGSX4zq/humans-are-very-reliable-agents,Humans are very reliable agents,['alyssavance'],2022-06-16T22:02:11Z,alignmentforum,, 31452,https://www.alignmentforum.org/posts/yFofRxg7RRQYCcwFA/new-report-scheming-ais-will-ais-fake-alignment-during,"New report: ""Scheming AIs: Will AIs fake alignment during training in order to get power?""",['Joe Carlsmith'],2023-11-15T17:16:42Z,alignmentforum,, 31477,https://www.alignmentforum.org/posts/pZHpq6dBQzCZjjMgM/the-computational-anatomy-of-human-values,The Computational Anatomy of Human Values,['beren'],2023-04-06T10:33:25Z,alignmentforum,, 31504,https://www.alignmentforum.org/posts/5WECpYABCT62TJrhY/will-ai-undergo-discontinuous-progress,Will AI undergo discontinuous progress?,['Sammy Martin'],2020-02-21T22:16:59Z,alignmentforum,,
31529,https://www.alignmentforum.org/posts/hE56gYi5d68uux9oM/intro-to-brain-like-agi-safety-3-two-subsystems-learning-and,[Intro to brain-like-AGI safety] 3. Two subsystems: Learning & Steering,['Steven Byrnes'],2022-02-09T13:09:12Z,alignmentforum,, 31545,https://www.alignmentforum.org/posts/X8KQBjszbSDXzBwgP/humans-are-stunningly-rational-and-stunningly-irrational,Humans are stunningly rational and stunningly irrational,['Stuart_Armstrong'],2020-10-23T14:14:00Z,alignmentforum,, 31555,https://www.alignmentforum.org/posts/iNaaBAEkAy9nAgs3o/turning-off-lights-with-model-editing,Turning off lights with model editing,['Sam Marks'],2023-05-12T20:25:12Z,alignmentforum,, 31573,https://www.alignmentforum.org/posts/5bd75cc58225bf067037546b/delegative-inverse-reinforcement-learning,Delegative Inverse Reinforcement Learning,['Vanessa Kosoy'],2017-07-12T12:18:22Z,alignmentforum,, 31598,https://www.alignmentforum.org/posts/kpPnReyBC54KESiSn/optimality-is-the-tiger-and-agents-are-its-teeth,"Optimality is the tiger, and agents are its teeth",['Veedrac'],2022-04-02T00:46:27Z,alignmentforum,, 31609,https://www.alignmentforum.org/posts/ur4yr6WRCmEb5YfuH/minimal-maps-semi-decisions-and-neural-representations,"Minimal Maps, Semi-Decisions, and Neural Representations",['Zachary Robertson'],2020-12-06T15:15:08Z,alignmentforum,, 31619,https://www.alignmentforum.org/posts/hxzQoXjtLGRWPoLkE/models-myths-dreams-and-cheshire-cat-grins,"Models, myths, dreams, and Cheshire cat grins",['Stuart_Armstrong'],2020-06-24T10:50:58Z,alignmentforum,, 31629,https://www.alignmentforum.org/posts/FG6icLPKizEaWHex5/announcing-apollo-research,Announcing Apollo Research,"['Marius Hobbhahn', 'beren', 'Lee Sharkey', 'Lucius Bushnaq', 'Dan Braun', 'Mikita Balesni', 'Jérémy Scheurer']",2023-05-30T16:17:20Z,alignmentforum,, 31650,https://www.alignmentforum.org/posts/RJZ7bwoDB6BWgt6St/the-role-of-bayesian-ml-in-ai-safety-an-overview,The role of Bayesian ML in AI safety - an overview,['Marius Hobbhahn'],2023-01-27T19:40:06Z,alignmentforum,, 31680,https://www.alignmentforum.org/posts/pgpFHLJnv7AdSi3qS/christiano-arc-and-ga-conjecture-discuss-alignment-cruxes,Christiano (ARC) and GA (Conjecture) Discuss Alignment Cruxes,"['Andrea_Miotti', 'paulfchristiano', 'Gabriel Alfour', 'Olivia Jimenez']",2023-02-24T23:03:05Z,alignmentforum,, 31709,https://www.alignmentforum.org/posts/SbC7duHNDHkd3PkgG/alignment-grantmaking-is-funding-limited-right-now,Alignment Grantmaking is Funding-Limited Right Now,['johnswentworth'],2023-07-19T16:49:09Z,alignmentforum,, 31728,https://www.alignmentforum.org/posts/EByDsY9S3EDhhfFzC/some-thoughts-on-metaphilosophy,Some Thoughts on Metaphilosophy,['Wei Dai'],2019-02-10T00:28:29Z,alignmentforum,, 31746,https://www.alignmentforum.org/posts/L9HcyaiWBLYe7vXid/distinguishing-claims-about-training-vs-deployment,Distinguishing claims about training vs deployment,['Richard_Ngo'],2021-02-03T11:30:07Z,alignmentforum,, 31775,https://www.alignmentforum.org/posts/pxWEKPHNBzXZWi2rB/probabilities-weights-sums-pretty-much-the-same-for-reward,"Probabilities, weights, sums: pretty much the same for reward functions",['Stuart_Armstrong'],2020-05-20T15:19:53Z,alignmentforum,, 31785,https://www.alignmentforum.org/posts/T4Mef9ZkL4WftQBqw/the-nature-of-counterfactuals,The Nature of Counterfactuals,['Chris_Leong'],2021-06-05T09:18:08Z,alignmentforum,, 31798,https://www.alignmentforum.org/posts/wEjozSY9rhkpAaABt/the-blackwell-order-as-a-formalization-of-knowledge,The Blackwell order as a formalization of knowledge,['Alex Flint'],2021-09-10T02:51:16Z,alignmentforum,,
31811,https://www.alignmentforum.org/posts/5bd75cc58225bf0670374f20/modal-bargaining-agents,Modal Bargaining Agents,['orthonormal'],2015-04-16T22:19:03Z,alignmentforum,, 31822,https://www.alignmentforum.org/posts/5bd75cc58225bf0670374ebf/third-person-counterfactuals,Third-person counterfactuals,['Benya_Fallenstein'],2015-02-03T01:13:26Z,alignmentforum,, 31837,https://www.alignmentforum.org/posts/dFbfCLZA4pejckeKc/a-mechanistic-explanation-for-solidgoldmagikarp-like-tokens,A mechanistic explanation for SolidGoldMagikarp-like tokens in GPT2,['MadHatter'],2023-02-26T01:10:34Z,alignmentforum,, 31857,https://www.alignmentforum.org/posts/jjiDARiv4hybZXeXL/concept-extrapolation-for-hypothesis-generation,Concept extrapolation for hypothesis generation,"['Stuart_Armstrong', 'patrickleask', 'rgorman']",2022-12-12T22:09:47Z,alignmentforum,, 31868,https://www.alignmentforum.org/posts/TJqfcEyDdLwkDxZZC/an-124-provably-safe-exploration-through-shielding,[AN #124]: Provably safe exploration through shielding,['Rohin Shah'],2020-11-04T18:20:06Z,alignmentforum,, 31903,https://www.alignmentforum.org/posts/FhKkFcojhKZt7nHzG/a-short-critique-of-vanessa-kosoy-s-predca-1,A short critique of Vanessa Kosoy's PreDCA,['Martín Soto'],2022-11-13T16:00:46Z,alignmentforum,, 31923,https://www.alignmentforum.org/posts/Nx4DsTpMaoTiTp4RQ/conceptual-problems-with-utility-functions,Conceptual problems with utility functions,['Dacyn'],2018-07-11T01:29:43Z,alignmentforum,, 31932,https://www.alignmentforum.org/posts/2WpPRrqrFQa6n2x3W/modal-fixpoint-cooperation-without-loeb-s-theorem,Modal Fixpoint Cooperation without Löb's Theorem,['Andrew_Critch'],2023-02-05T00:58:41Z,alignmentforum,, 31944,https://www.alignmentforum.org/posts/b5HNYh9ne5vEkX5ag/one-layer-transformers-aren-t-equivalent-to-a-set-of-skip,One-layer transformers aren’t equivalent to a set of skip-trigrams,['Buck'],2023-02-17T17:26:14Z,alignmentforum,, 31960,https://www.alignmentforum.org/posts/8dJ8LgpjzWJQfHAfx/simple-experiments-with-deceptive-alignment-1,Simple experiments with deceptive alignment,['Andreas_Moe'],2023-05-15T17:41:52Z,alignmentforum,, 31978,https://www.alignmentforum.org/posts/BRsxztzkTzScFQfDW/apply-for-research-internships-at-arc,Apply for research internships at ARC!,['paulfchristiano'],2022-01-03T20:26:18Z,alignmentforum,, 31987,https://www.alignmentforum.org/posts/mRwJce3npmzbKfxws/efficientzero-how-it-works,EfficientZero: How It Works,['1a3orn'],2021-11-26T15:17:08Z,alignmentforum,, 32008,https://www.alignmentforum.org/posts/kcWHfRnLMDDgsbJfd/if-gpt-6-is-human-level-agi-but-costs-usd200-per-page-of,"If GPT-6 is human-level AGI but costs $200 per page of output, what would happen?",['Daniel Kokotajlo'],2020-10-09T12:00:37Z,alignmentforum,, 32017,https://www.alignmentforum.org/posts/bBdfbWfWxHN9Chjcq/robustness-to-scale,Robustness to Scale,['Scott Garrabrant'],2018-02-21T22:55:19Z,alignmentforum,, 32033,https://www.alignmentforum.org/posts/e3j7h4mPHvkRynbco/help-arc-evaluate-capabilities-of-current-language-models,Help ARC evaluate capabilities of current language models (still need people),['Beth Barnes'],2022-07-19T04:55:18Z,alignmentforum,, 32044,https://www.alignmentforum.org/posts/ma5Jc4wPT36j3X84P/udt-can-learn-anthropic-probabilities,UDT can learn anthropic probabilities,['cousin_it'],2018-06-24T18:04:37Z,alignmentforum,, 32060,https://www.alignmentforum.org/posts/Zmwkz2BMvuFFR8bi3/agi-safety-fundamentals-curriculum-and-application,AGI Safety Fundamentals curriculum and application,['Richard_Ngo'],2021-10-20T21:44:24Z,alignmentforum,,
32076,https://www.alignmentforum.org/posts/r3xwHzMmMf25peeHE/the-translucent-thoughts-hypotheses-and-their-implications,The Translucent Thoughts Hypotheses and Their Implications,['Fabien Roger'],2023-03-09T16:30:02Z,alignmentforum,, 32100,https://www.alignmentforum.org/posts/pHWTNMESuAEjZg2Qn/occam-s-razor-may-be-sufficient-to-infer-the-preferences-of,Occam's Razor May Be Sufficient to Infer the Preferences of Irrational Agents: A reply to Armstrong & Mindermann,['Daniel Kokotajlo'],2019-10-07T19:52:19Z,alignmentforum,, 32116,https://www.alignmentforum.org/posts/9uj2Mto9CNdWZudyq/elementary-infra-bayesianism,Elementary Infra-Bayesianism,['Jan'],2022-05-08T12:23:00Z,alignmentforum,, 32125,https://www.alignmentforum.org/posts/Cty2rSMut483QgBQ2/what-should-ai-owe-to-us-accountable-and-aligned-ai-systems,What Should AI Owe To Us? Accountable and Aligned AI Systems via Contractualist AI Alignment,['xuan'],2022-09-08T15:04:46Z,alignmentforum,, 32163,https://www.alignmentforum.org/posts/QpHewJvZJFaQYuLwH/intro-to-brain-like-agi-safety-14-controlled-agi,[Intro to brain-like-AGI safety] 14. Controlled AGI,['Steven Byrnes'],2022-05-11T13:17:55Z,alignmentforum,, 32195,https://www.alignmentforum.org/posts/FWuByzM9T5qq2PF2n/a-correspondence-theorem,A Correspondence Theorem,['johnswentworth'],2020-10-26T23:28:06Z,alignmentforum,, 32205,https://www.alignmentforum.org/posts/QBAjndPuFbhEXKcCr/my-understanding-of-what-everyone-in-technical-alignment-is,(My understanding of) What Everyone in Technical Alignment is Doing and Why,"['Thomas Larsen', 'elifland']",2022-08-29T01:23:58Z,alignmentforum,, 32246,https://www.alignmentforum.org/posts/AWbtbmC6rAg6dh75b/some-thoughts-on-risks-from-narrow-non-agentic-ai,"Some thoughts on risks from narrow, non-agentic AI",['Richard_Ngo'],2021-01-19T00:04:10Z,alignmentforum,, 32284,https://www.alignmentforum.org/posts/PvuuBN39pmjw6wRpj/encultured-ai-part-1-appendix-relevant-research-examples,"Encultured AI, Part 1 Appendix: Relevant Research Examples","['Andrew_Critch', 'Nick Hay']",2022-08-08T22:44:50Z,alignmentforum,, 32321,https://www.alignmentforum.org/posts/Lp4Q9kSGsJHLfoHX3/more-is-different-for-ai,More Is Different for AI,['jsteinhardt'],2022-01-04T19:30:20Z,alignmentforum,, 32342,https://www.alignmentforum.org/posts/i2dNFgbjnqZBfeitT/oracles-sequence-predictors-and-self-confirming-predictions,"Oracles, sequence predictors, and self-confirming predictions",['Stuart_Armstrong'],2019-05-03T14:09:32Z,alignmentforum,, 32360,https://www.alignmentforum.org/posts/2neeoZ7idRbZf4eNC/re-introducing-selection-vs-control-for-optimization,"Re-introducing Selection vs Control for Optimization (Optimizing and Goodhart Effects - Clarifying Thoughts, Part 1)",['Davidmanheim'],2019-07-02T15:36:51Z,alignmentforum,, 
32377,https://www.alignmentforum.org/posts/WvHpsqmwQopnaMh46/less-threat-dependent-bargaining-solutions-3-2,Less Threat-Dependent Bargaining Solutions?? (3/2),['Diffractor'],2022-08-20T02:19:11Z,alignmentforum,, 32394,https://www.alignmentforum.org/posts/cLDcKgvM6KxBhqhGq/when-would-agis-engage-in-conflict,When would AGIs engage in conflict?,"['JesseClifton', 'Sammy Martin', 'antimonyanthony']",2022-09-14T19:38:22Z,alignmentforum,, 32425,https://www.alignmentforum.org/posts/CMnMaTxNAhXfcEtgm/announcing-web-taisu-may-13-17,"Announcing Web-TAISU, May 13-17",['Linda Linsefors'],2020-04-04T11:48:14Z,alignmentforum,, 32434,https://www.alignmentforum.org/posts/cSNaxb8wu564x9n6r/what-are-good-alignment-conference-papers,What are good alignment conference papers?,['adamShimi'],2021-08-28T13:35:38Z,alignmentforum,, 32443,https://www.alignmentforum.org/posts/xxvKhjpcTAJwvtbWM/deepmind-s-gato-generalist-agent,Deepmind's Gato: Generalist Agent,['Daniel Kokotajlo'],2022-05-12T16:01:22Z,alignmentforum,, 32458,https://www.alignmentforum.org/posts/fj8eyc7QzqCaB8Wgm/attainable-utility-landscape-how-the-world-is-changed,Attainable Utility Landscape: How The World Is Changed,['TurnTrout'],2020-02-10T00:58:01Z,alignmentforum,, 32470,https://www.alignmentforum.org/posts/DWFx2Cmsvd4uCKkZ4/inner-alignment-in-the-brain,Inner alignment in the brain,['Steven Byrnes'],2020-04-22T13:14:08Z,alignmentforum,, 32479,https://www.alignmentforum.org/posts/EpR5yTZMaJkDz4hhs/the-problem-with-the-current-state-of-agi-definitions,The Problem With The Current State of AGI Definitions,['Yitz'],2022-05-29T13:58:18Z,alignmentforum,, 32495,https://www.alignmentforum.org/posts/NxF5G6CJiof6cemTw/coherence-arguments-do-not-entail-goal-directed-behavior,Coherence arguments do not entail goal-directed behavior,['Rohin Shah'],2018-12-03T03:26:04Z,alignmentforum,, 32515,https://www.alignmentforum.org/posts/JbcWQCxKWn3y49bNB/disentangling-arguments-for-the-importance-of-ai-safety,Disentangling arguments for the importance of AI safety,['Richard_Ngo'],2019-01-21T12:41:44Z,alignmentforum,, 32557,https://www.alignmentforum.org/posts/W3tZacTRt4koHyxbr/examples-of-ai-increasing-ai-progress,Examples of AI Increasing AI Progress,['ThomasW'],2022-07-17T20:06:41Z,alignmentforum,, 32573,https://www.alignmentforum.org/posts/XPqMbtpbku8aN55wd/an-125-neural-network-scaling-laws-across-multiple,[AN #125]: Neural network scaling laws across multiple modalities,['Rohin Shah'],2020-11-11T18:20:05Z,alignmentforum,, 32606,https://www.alignmentforum.org/posts/CtGH3yEoo4mY2taxe/weak-hch-accesses-exp,Weak HCH accesses EXP,['evhub'],2020-07-22T22:36:44Z,alignmentforum,, 32625,https://www.alignmentforum.org/posts/qoz2ryN4GDqEWPBnQ/how-much-alignment-data-will-we-need-in-the-long-run-1,How much alignment data will we need in the long run?,['Jacob_Hilton'],2022-08-10T21:39:31Z,alignmentforum,, 32651,https://www.alignmentforum.org/posts/rh477a7fmWmzQdLMj/asot-finetuning-rl-and-gpt-s-world-prior,"[ASoT] Finetuning, RL, and GPT's world prior",['Jozdien'],2022-12-02T16:33:41Z,alignmentforum,, 32670,https://www.alignmentforum.org/posts/pRt4E3nmPBtWZiT4A/precursor-checking-for-deceptive-alignment,Precursor checking for deceptive alignment,['evhub'],2022-08-03T22:56:45Z,alignmentforum,, 32691,https://www.alignmentforum.org/posts/TPan9sQFuPP6jgEJo/low-stakes-alignment,Low-stakes alignment,['paulfchristiano'],2021-04-30T00:10:06Z,alignmentforum,, 32711,https://www.alignmentforum.org/posts/TdesHi8kkyokQdDoQ/gradient-descent-doesn-t-select-for-inner-search,Gradient descent doesn't select for inner search,['Ivan Vendrov'],2022-08-13T04:15:19Z,alignmentforum,, 
32733,https://www.alignmentforum.org/posts/gmHiwafywFo33euGz/aligned-foundation-models-don-t-imply-aligned-systems,"""Aligned"" foundation models don't imply aligned systems",['Max H'],2023-04-13T04:13:50Z,alignmentforum,, 32748,https://www.alignmentforum.org/posts/32QD3tRfognNHN9xw/ai-safety-discussion-days,AI Safety Discussion Days,['Linda Linsefors'],2020-05-27T16:54:48Z,alignmentforum,, 32769,https://www.alignmentforum.org/posts/DbWoZNxgwr2NBFdoo/trace-readme,Trace README,['johnswentworth'],2020-03-11T21:08:21Z,alignmentforum,, 32779,https://www.alignmentforum.org/posts/9m2fzjNSJmd3yxxKG/acdt-a-hack-y-acausal-decision-theory,ACDT: a hack-y acausal decision theory,['Stuart_Armstrong'],2020-01-15T17:22:49Z,alignmentforum,, 32795,https://www.alignmentforum.org/posts/jnHxfXgyQj3ALsD5a/intermittent-distillations-3,Intermittent Distillations #3,['Mark Xu'],2021-05-15T07:13:24Z,alignmentforum,, 32824,https://www.alignmentforum.org/posts/3DFBbPFZyscrAiTKS/my-overview-of-the-ai-alignment-landscape-threat-models,My Overview of the AI Alignment Landscape: Threat Models,['Neel Nanda'],2021-12-25T23:07:11Z,alignmentforum,, 32854,https://www.alignmentforum.org/posts/iTpLAaPamcKyjmbFC/robust-delegation,Robust Delegation,"['abramdemski', 'Scott Garrabrant']",2018-11-04T16:38:39Z,alignmentforum,, 32870,https://www.alignmentforum.org/posts/WcWzLSn8ZjJhCZxP4/predca-vanessa-kosoy-s-alignment-protocol,PreDCA: vanessa kosoy's alignment protocol,['Tamsin Leake'],2022-08-20T10:03:11Z,alignmentforum,, 32896,https://www.alignmentforum.org/posts/MWBFnH225LgRxfHgd/take-14-corrigibility-isn-t-that-great,Take 14: Corrigibility isn't that great.,['Charlie Steiner'],2022-12-25T13:04:22Z,alignmentforum,, 32910,https://www.alignmentforum.org/posts/nd692YfFGfZDh9Mwz/an-69-stuart-russell-s-new-book-on-why-we-need-to-replace,[AN #69] Stuart Russell's new book on why we need to replace the standard model of AI,['Rohin Shah'],2019-10-19T00:30:02Z,alignmentforum,, 32932,https://www.alignmentforum.org/posts/7jn5aDadcMH6sFeJe/why-i-m-joining-anthropic,Why I'm joining Anthropic,['evhub'],2023-01-05T01:12:14Z,alignmentforum,, 32960,https://www.alignmentforum.org/posts/Tux9WH4daKcxjEetQ/goal-directed-model-based-rl,Goal-directed = Model-based RL?,['adamShimi'],2020-02-20T19:13:51Z,alignmentforum,, 32978,https://www.alignmentforum.org/posts/bG4PR9uSsZqHg2gYY/utility-reward,Utility ≠ Reward,['vlad_m'],2019-09-05T17:28:13Z,alignmentforum,, 33010,https://www.alignmentforum.org/posts/t5DFpygMqpnFsmJ3b/cartographic-processes,Cartographic Processes,['johnswentworth'],2019-08-27T20:02:45Z,alignmentforum,, 33022,https://www.alignmentforum.org/posts/Z8C29oMAmYjhk2CNN/non-superintelligent-paperclip-maximizers-are-normal,Non-superintelligent paperclip maximizers are normal,['jessicata'],2023-10-10T00:29:53Z,alignmentforum,, 33044,https://www.alignmentforum.org/posts/JxWggFXcKrPKy7p8t/an-64-using-deep-rl-and-reward-uncertainty-to-incentivize,[AN #64]: Using Deep RL and Reward Uncertainty to Incentivize Preference Learning,['Rohin Shah'],2019-09-16T17:10:02Z,alignmentforum,, 33076,https://www.alignmentforum.org/posts/g3PwPgcdcWiP33pYn/counterfactual-mugging-poker-game,Counterfactual Mugging Poker Game,['Scott Garrabrant'],2018-06-13T23:34:59Z,alignmentforum,, 33092,https://www.alignmentforum.org/posts/zo9zKcz47JxDErFzQ/call-for-distillers,Call For Distillers,['johnswentworth'],2022-04-04T18:25:35Z,alignmentforum,, 
33108,https://www.alignmentforum.org/posts/Lv3emECEjkCSHG7L7/solve-corrigibility-week,Solve Corrigibility Week,['Logan Riggs'],2021-11-28T17:00:30Z,alignmentforum,, 33121,https://www.alignmentforum.org/posts/MgLeAWSeLbzx8mkZ2/bounded-oracle-induction,Bounded Oracle Induction,['Diffractor'],2018-11-28T08:11:28Z,alignmentforum,, 33133,https://www.alignmentforum.org/posts/iBBK4j6RWC7znEiDv/history-of-the-development-of-logical-induction,History of the Development of Logical Induction,['Scott Garrabrant'],2018-08-29T03:15:52Z,alignmentforum,, 33142,https://www.alignmentforum.org/posts/JasCkaPtZEJsYDX8H/cartesian-boundary-as-abstraction-boundary,Cartesian Boundary as Abstraction Boundary,['johnswentworth'],2020-06-11T17:38:18Z,alignmentforum,, 33159,https://www.alignmentforum.org/posts/aqhMLqaoHb7uob7fr/if-i-were-a-well-intentioned-ai-iv-mesa-optimising,If I were a well-intentioned AI... IV: Mesa-optimising,['Stuart_Armstrong'],2020-03-02T12:16:16Z,alignmentforum,, 33176,https://www.alignmentforum.org/posts/Si52fuEGSJJTXW9zs/behavioral-and-mechanistic-definitions-often-confuse-ai,Behavioral and mechanistic definitions (often confuse AI alignment discussions),['LawrenceC'],2023-02-20T21:33:01Z,alignmentforum,, 33194,https://www.alignmentforum.org/posts/YEkzeJTrp69DTn8KD/cars-and-elephants-a-handwavy-argument-analogy-against,"""Cars and Elephants"": a handwavy argument/analogy against mechanistic interpretability",['David Scott Krueger (formerly: capybaralet)'],2022-10-31T21:26:05Z,alignmentforum,, 33205,https://www.alignmentforum.org/posts/GzoWcYibWYwJva8aL/parameter-counts-in-machine-learning,Parameter counts in Machine Learning,"['Jsevillamol', 'Pablo Villalobos']",2021-06-19T16:04:35Z,alignmentforum,, 33214,https://www.alignmentforum.org/posts/mnoc3cKY3gXMrTybs/a-list-of-core-ai-safety-problems-and-how-i-hope-to-solve,A list of core AI safety problems and how I hope to solve them,['davidad'],2023-08-26T15:12:18Z,alignmentforum,, 33261,https://www.alignmentforum.org/posts/LhxHcASQwpNa3mRNk/untrusted-smart-models-and-trusted-dumb-models,Untrusted smart models and trusted dumb models,['Buck'],2023-11-04T03:06:38Z,alignmentforum,, 33282,https://www.alignmentforum.org/posts/fLRgddjMTBnpbMeiM/infra-domain-proofs-2,Infra-Domain Proofs 2,['Diffractor'],2021-03-28T09:15:15Z,alignmentforum,, 33299,https://www.alignmentforum.org/posts/4RrLiboiGGKfsanMF/the-qaci-alignment-plan-table-of-contents,the QACI alignment plan: table of contents,['Tamsin Leake'],2023-03-21T20:22:01Z,alignmentforum,, 33315,https://www.alignmentforum.org/posts/k2sBrR4gJX9BNTuoa/an-131-formalizing-the-argument-of-ignored-attributes-in-a,[AN #131]: Formalizing the argument of ignored attributes in a utility function,['Rohin Shah'],2020-12-31T18:20:05Z,alignmentforum,, 33347,https://www.alignmentforum.org/posts/HYERofGZE6j9Tuigi/inner-alignment-failures-which-are-actually-outer-alignment,"""Inner Alignment Failures"" Which Are Actually Outer Alignment Failures",['johnswentworth'],2020-10-31T20:18:36Z,alignmentforum,, 33366,https://www.alignmentforum.org/posts/nRAMpjnb6Z4Qv3imF/the-strategy-stealing-assumption,The strategy-stealing assumption,['paulfchristiano'],2019-09-16T15:23:25Z,alignmentforum,, 33385,https://www.alignmentforum.org/posts/5CApLZiHGkt37nRQ2/an-111-the-circuits-hypotheses-for-deep-learning,[AN #111]: The Circuits hypotheses for deep learning,['Rohin Shah'],2020-08-05T17:40:23Z,alignmentforum,, 33395,https://www.alignmentforum.org/posts/MxHiYZJjYm53ATxhb/an-122-arguing-for-agi-driven-existential-risk-from-first,[AN #122]: Arguing for AGI-driven existential risk from first principles,['Rohin Shah'],2020-10-21T17:10:04Z,alignmentforum,, 
33416,https://www.alignmentforum.org/posts/AfGmsjGPXN97kNp57/arguments-about-fast-takeoff,Arguments about fast takeoff,['paulfchristiano'],2018-02-25T04:53:36Z,alignmentforum,, 33426,https://www.alignmentforum.org/posts/LDsSqXf9Dpu3J3gHD/why-i-m-excited-about-debate,Why I'm excited about Debate,['Richard_Ngo'],2021-01-15T23:37:54Z,alignmentforum,, 33445,https://www.alignmentforum.org/posts/ThtZrHooK7En9mcZr/greed-is-the-root-of-this-evil,Greed Is the Root of This Evil,['Thane Ruthenis'],2022-10-13T20:40:57Z,alignmentforum,, 33466,https://www.alignmentforum.org/posts/YTG348f4pEYcicwQ3/alignment-newsletter-24,Alignment Newsletter #24,['Rohin Shah'],2018-09-17T16:20:02Z,alignmentforum,, 33497,https://www.alignmentforum.org/posts/pjTF49Rnc878jZSAZ/an-107-the-convergent-instrumental-subgoals-of-goal-directed,[AN #107]: The convergent instrumental subgoals of goal-directed agents,['Rohin Shah'],2020-07-16T06:47:56Z,alignmentforum,, 33524,https://www.alignmentforum.org/posts/e8qFDMzs2u9xf5ie6/belief-functions-and-decision-theory,Belief Functions And Decision Theory,['Diffractor'],2020-08-27T08:00:52Z,alignmentforum,, 33548,https://www.alignmentforum.org/posts/S2jsBsZvqjBZa3pKT/approaches-to-gradient-hacking,Approaches to gradient hacking,['adamShimi'],2021-08-14T15:16:56Z,alignmentforum,, 33573,https://www.alignmentforum.org/posts/qvyv72fCiC46sxfPt/on-unfixably-unsafe-agi-architectures,On unfixably unsafe AGI architectures,['Steven Byrnes'],2020-02-19T21:16:20Z,alignmentforum,, 33594,https://www.alignmentforum.org/posts/Tdu3tGT4i24qcLESh/equilibrium-and-prior-selection-problems-in-multipolar-1,Equilibrium and prior selection problems in multipolar deployment,['JesseClifton'],2020-04-02T20:06:14Z,alignmentforum,, 33612,https://www.alignmentforum.org/posts/a65sFvymnoLkBnE8n/idea-open-access-ai-safety-journal,Idea: Open Access AI Safety Journal,['Gordon Seidoh Worley'],2018-03-23T18:27:01Z,alignmentforum,, 33626,https://www.alignmentforum.org/posts/wqRqb7h6ZC48iDgfK/tentatively-found-600-monosemantic-features-in-a-small-lm,(tentatively) Found 600+ Monosemantic Features in a Small LM Using Sparse Autoencoders,['Logan Riggs'],2023-07-05T16:49:44Z,alignmentforum,, 33641,https://www.alignmentforum.org/posts/EhNCnCkmu7MwrQ7yz/future-directions-for-ambitious-value-learning,Future directions for ambitious value learning,['Rohin Shah'],2018-11-11T15:53:53Z,alignmentforum,, 33666,https://www.alignmentforum.org/posts/jnmG5jczvWbeRPcvG/four-usages-of-loss-in-ai,"Four usages of ""loss"" in AI",['TurnTrout'],2022-10-02T00:52:36Z,alignmentforum,, 33681,https://www.alignmentforum.org/posts/REesy8nqvknFFKywm/clarifying-wireheading-terminology,Clarifying wireheading terminology,['leogao'],2022-11-24T04:53:24Z,alignmentforum,, 33701,https://www.alignmentforum.org/posts/FLMyTjuTiGytE6sP2/inner-misalignment-in-simulator-llms,"Inner Misalignment in ""Simulator"" LLMs",['Adam Scherlis'],2023-01-31T08:33:58Z,alignmentforum,, 33723,https://www.alignmentforum.org/posts/sTDfraZab47KiRMmT/views-on-when-agi-comes-and-on-strategy-to-reduce,Views on when AGI comes and on strategy to reduce existential risk,['TsviBT'],2023-07-08T09:00:20Z,alignmentforum,, 33747,https://www.alignmentforum.org/posts/rvxcSc6wdcCfaX6GZ/two-senses-of-optimizer,Two senses of “optimizer”,['Joar Skalse'],2019-08-21T16:02:09Z,alignmentforum,, 
33760,https://www.alignmentforum.org/posts/QqwZ7cwEA2cxFEAun/teaching-ml-to-answer-questions-honestly-instead-of,Teaching ML to answer questions honestly instead of predicting human answers,['paulfchristiano'],2021-05-28T17:30:03Z,alignmentforum,, 33783,https://www.alignmentforum.org/posts/P448hmmAeGepQDREs/review-of-soft-takeoff-can-still-lead-to-dsa,Review of Soft Takeoff Can Still Lead to DSA,['Daniel Kokotajlo'],2021-01-10T18:10:25Z,alignmentforum,, 33807,https://www.alignmentforum.org/posts/vMM6HmSQaKmKadvBi/the-core-of-the-alignment-problem-is-1,The Core of the Alignment Problem is...,"['Thomas Larsen', 'Jeremy Gillen', 'JamesH']",2022-08-17T20:07:35Z,alignmentforum,, 33834,https://www.alignmentforum.org/posts/uDmiEvPtJnRcrbHB6/conjecture-workshop,Conjecture Workshop,['johnswentworth'],2020-05-15T22:41:32Z,alignmentforum,, 33844,https://www.alignmentforum.org/posts/2dKvTYYN4PTT7g4of/knowledge-manipulation-and-free-will,"Knowledge, manipulation, and free will",['Stuart_Armstrong'],2020-10-13T17:47:13Z,alignmentforum,, 33857,https://www.alignmentforum.org/posts/ttRyu8u9vqX3jZFjr/ordering-capability-thresholds,ordering capability thresholds,['Tamsin Leake'],2022-09-16T16:36:59Z,alignmentforum,, 33878,https://www.alignmentforum.org/posts/GgusnG2tiPEa4aYFS/ai-safety-papers-an-app-for-the-tai-safety-database,AI Safety Papers: An App for the TAI Safety Database,['ozziegooen'],2021-08-21T02:02:55Z,alignmentforum,, 33887,https://www.alignmentforum.org/posts/vT4tsttHgYJBoKi4n/some-abstract-non-technical-reasons-to-be-non-maximally,"Some abstract, non-technical reasons to be non-maximally-pessimistic about AI alignment",['Rob Bensinger'],2021-12-12T02:08:09Z,alignmentforum,, 33914,https://www.alignmentforum.org/posts/NdJtfujX4sE6xLCsb/if-i-were-a-well-intentioned-ai-iii-extremal-goodhart,If I were a well-intentioned AI... III: Extremal Goodhart,['Stuart_Armstrong'],2020-02-28T11:24:23Z,alignmentforum,, 33942,https://www.alignmentforum.org/posts/ABNjLr2H39g2oXqGb/the-indexing-problem,The Indexing Problem,['johnswentworth'],2020-06-22T19:11:54Z,alignmentforum,, 33953,https://www.alignmentforum.org/posts/bgQysKL6Luqacw3SN/multimodal-neurons-in-artificial-neural-networks,Multimodal Neurons in Artificial Neural Networks,['Kaj_Sotala'],2021-03-05T09:01:54Z,alignmentforum,, 33971,https://www.alignmentforum.org/posts/ChDH335ckdvpxXaXX/model-organisms-of-misalignment-the-case-for-a-new-pillar-of-1,Model Organisms of Misalignment: The Case for a New Pillar of Alignment Research,"['evhub', 'Nicholas Schiefer', 'Carson Denison', 'Ethan Perez']",2023-08-08T01:30:11Z,alignmentforum,, 34000,https://www.alignmentforum.org/posts/BMghmAxYxeSdAteDc/an-exploration-of-gpt-2-s-embedding-weights,An exploration of GPT-2's embedding weights,['Adam Scherlis'],2022-12-13T00:46:18Z,alignmentforum,, 34016,https://www.alignmentforum.org/posts/3vFmQhHBosnjZXuAJ/an-171-disagreements-between-alignment-optimists-and,"[AN #171]: Disagreements between alignment ""optimists"" and ""pessimists""",['Rohin Shah'],2022-01-21T18:30:05Z,alignmentforum,, 34035,https://www.alignmentforum.org/posts/9hxH2pxffxeeXk8YT/a-test-for-language-model-consciousness,A Test for Language Model Consciousness,['Ethan Perez'],2022-08-25T19:41:23Z,alignmentforum,, 34053,https://www.alignmentforum.org/posts/TWdnCi4kPjTapYjh6/an-109-teaching-neural-nets-to-generalize-the-way-humans,[AN #109]: Teaching neural nets to generalize the way humans would,['Rohin Shah'],2020-07-22T17:10:05Z,alignmentforum,, 34069,https://www.alignmentforum.org/posts/HCibBn3ZCZRwMwNEE/against-time-in-agent-models,Against Time in Agent Models,['johnswentworth'],2022-05-13T19:55:33Z,alignmentforum,, 
34084,https://www.alignmentforum.org/posts/EhkHnNJXwT8RmtfYZ/natural-language-alignment-1,Natural language alignment,['Jacy Reese Anthis'],2023-04-12T19:02:23Z,alignmentforum,, 34100,https://www.alignmentforum.org/posts/JzTfKrgC7Lfz3zcwM/theories-of-modularity-in-the-biological-literature,Theories of Modularity in the Biological Literature,"['TheMcDouglas', 'Avery', 'Lucius Bushnaq']",2022-04-04T12:48:42Z,alignmentforum,, 34124,https://www.alignmentforum.org/posts/5bd75cc58225bf0670375029/quantilizers-maximize-expected-utility-subject-to-a-conservative-cost-constraint,Quantilizers maximize expected utility subject to a conservative cost constraint,['jessicata'],2015-09-28T02:17:38Z,alignmentforum,, 34140,https://www.alignmentforum.org/posts/nCbAHnpi4LGFR32yq/how-promising-are-legal-avenues-to-restrict-ai-training-data,How promising are legal avenues to restrict AI training data?,['thehalliard'],2022-12-10T16:31:46Z,alignmentforum,, 34149,https://www.alignmentforum.org/posts/pg6Z5tiuXotGTWaG8/anthropic-effects-in-estimating-evolution-difficulty,Anthropic Effects in Estimating Evolution Difficulty,['Mark Xu'],2021-07-05T04:02:18Z,alignmentforum,, 34167,https://www.alignmentforum.org/posts/ChierESmenTtCQqZy/subsystem-alignment,Subsystem Alignment,"['abramdemski', 'Scott Garrabrant']",2018-11-06T16:16:46Z,alignmentforum,, 34192,https://www.alignmentforum.org/posts/DE58ifrwYW5ogiSyJ/does-novel-understanding-imply-novel-agency-values,Does novel understanding imply novel agency / values?,['TsviBT'],2023-02-19T14:41:40Z,alignmentforum,, 34213,https://www.alignmentforum.org/posts/chevXfQmRYrTZnj8r/conditioning-prompts-and-fine-tuning,"Conditioning, Prompts, and Fine-Tuning",['Adam Jermyn'],2022-08-17T20:52:53Z,alignmentforum,, 34229,https://www.alignmentforum.org/posts/r3AcHkAXPbjPwXFjc/an-129-explaining-double-descent-by-measuring-bias-and,[AN #129]: Explaining double descent by measuring bias and variance,['Rohin Shah'],2020-12-16T18:10:05Z,alignmentforum,, 34256,https://www.alignmentforum.org/posts/5bd75cc58225bf0670375321/on-motivations-for-miri-s-highly-reliable-agent-design-research,On motivations for MIRI's highly reliable agent design research,['jessicata'],2017-01-29T19:34:37Z,alignmentforum,, 34275,https://www.alignmentforum.org/posts/fKuugaxt2XLTkASkk/open-source-replication-and-commentary-on-anthropic-s,Open Source Replication & Commentary on Anthropic's Dictionary Learning Paper,['Neel Nanda'],2023-10-23T22:38:34Z,alignmentforum,, 34301,https://www.alignmentforum.org/posts/ykvw6sMQD7JXK5cdJ/review-of-learning-normativity-a-research-agenda,"Review of ""Learning Normativity: A Research Agenda""","['Gyrodiot', 'adamShimi', 'Joe_Collman']",2021-06-06T13:33:28Z,alignmentforum,, 34322,https://www.alignmentforum.org/posts/uzb3u3zMTkrSEhCaf/anthropic-probabilities-and-cost-functions,Anthropic probabilities and cost functions,['Stuart_Armstrong'],2018-12-21T17:54:21Z,alignmentforum,, 34331,https://www.alignmentforum.org/posts/zXfqftW8y69YzoXLj/using-gpt-n-to-solve-interpretability-of-neural-networks-a,Using GPT-N to Solve Interpretability of Neural Networks: A Research Agenda,"['Logan Riggs', 'Gurkenglas']",2020-09-03T18:27:06Z,alignmentforum,, 34354,https://www.alignmentforum.org/posts/5bd75cc58225bf067037554e/distributed-cooperation,Distributed Cooperation,['Diffractor'],2018-03-18T05:46:56Z,alignmentforum,, 
34366,https://www.alignmentforum.org/posts/joPoxBpZjLNx8MKaF/syntax-semantics-and-symbol-grounding-simplified,"Syntax, semantics, and symbol grounding, simplified",['Stuart_Armstrong'],2020-11-23T16:12:12Z,alignmentforum,, 34383,https://www.alignmentforum.org/posts/JKSS8GEu7DGX4YuxN/reducing-collective-rationality-to-individual-optimization,Reducing collective rationality to individual optimization in common-payoff games using MCMC,['jessicata'],2018-08-20T00:51:29Z,alignmentforum,, 34396,https://www.alignmentforum.org/posts/bqRD6MS3yCdfM9wRe/side-channels-input-versus-output,Side-channels: input versus output,['davidad'],2022-12-12T12:32:29Z,alignmentforum,, 34412,https://www.alignmentforum.org/posts/6XLyM22PBd9qDtin8/learning-human-preferences-optimistic-and-pessimistic,Learning human preferences: optimistic and pessimistic scenarios,['Stuart_Armstrong'],2020-08-18T13:05:24Z,alignmentforum,, 34435,https://www.alignmentforum.org/posts/9ag5JGBnMsayBidwh/causality-a-brief-introduction,Causality: A Brief Introduction,"['tom4everitt', 'Lewis Hammond', 'Jonathan Richens', 'Francis Rhys Ward', 'RyanCarey', 'sbenthall', 'James Fox']",2023-06-20T15:01:39Z,alignmentforum,, 34455,https://www.alignmentforum.org/posts/3mwfyLpnYqhqvprbb/hedonic-loops-and-taming-rl,Hedonic Loops and Taming RL,['beren'],2023-07-19T15:12:42Z,alignmentforum,, 34483,https://www.alignmentforum.org/posts/EhAbh2pQoAXkm9yor/circumventing-interpretability-how-to-defeat-mind-readers,Circumventing interpretability: How to defeat mind-readers,['Lee Sharkey'],2022-07-14T16:59:22Z,alignmentforum,, 34517,https://www.alignmentforum.org/posts/Epm6CkXrdRyAihMRe/an-66-decomposing-robustness-into-capability-robustness-and,[AN #66]: Decomposing robustness into capability robustness and alignment robustness,['Rohin Shah'],2019-09-30T18:00:03Z,alignmentforum,, 34541,https://www.alignmentforum.org/posts/wdC8fH8kHffYn3kNa/in-defense-of-wrapper-minds,In Defense of Wrapper-Minds,['Thane Ruthenis'],2022-12-28T18:28:26Z,alignmentforum,, 34560,https://www.alignmentforum.org/posts/3zkXPo4ZTrDFZz7Sd/an-56-should-ml-researchers-stop-running-experiments-before,[AN #56] Should ML researchers stop running experiments before making hypotheses?,['Rohin Shah'],2019-05-21T02:20:02Z,alignmentforum,, 34590,https://www.alignmentforum.org/posts/NvqGmLBCtvQxfMs9m/the-artificial-intentional-stance,The Artificial Intentional Stance,['Charlie Steiner'],2019-07-27T07:00:48Z,alignmentforum,, 34611,https://www.alignmentforum.org/posts/MJXwnHbqFYE3N4dP2/aligning-an-h-jepa-agent-via-training-on-the-outputs-of-an,"Aligning an H-JEPA agent via training on the outputs of an LLM-based ""exemplary actor""",['Roman Leventov'],2023-05-29T11:08:36Z,alignmentforum,, 34648,https://www.alignmentforum.org/posts/4C4jha5SdReWgg7dF/a-brief-intro-to-domain-theory,A Brief Intro to Domain Theory,['Diffractor'],2019-11-21T03:24:13Z,alignmentforum,, 34664,https://www.alignmentforum.org/posts/i8sHdLyGQeBTGwTqq/value-extrapolation-concept-extrapolation-model-splintering,"Value extrapolation, concept extrapolation, model splintering",['Stuart_Armstrong'],2022-03-08T22:50:00Z,alignmentforum,, 34684,https://www.alignmentforum.org/posts/FPML8k4QtjJxk3Y4M/confusions-re-higher-level-game-theory,Confusions re: Higher-Level Game Theory,['Diffractor'],2021-07-02T03:15:11Z,alignmentforum,, 34707,https://www.alignmentforum.org/posts/GbAymLbJdGbqTumCN/more-detailed-proposal-for-measuring-alignment-of-current,More detailed proposal for measuring alignment of current models,['Beth Barnes'],2021-11-20T00:03:39Z,alignmentforum,, 
34721,https://www.alignmentforum.org/posts/cpewqG3MjnKJpCr7E/ought-why-it-matters-and-ways-to-help,Ought: why it matters and ways to help,['paulfchristiano'],2019-07-25T18:00:28Z,alignmentforum,, 34734,https://www.alignmentforum.org/posts/9rtWTHsPAf2mLKizi/counterfactuals-as-a-matter-of-social-convention,Counterfactuals as a matter of Social Convention,['Chris_Leong'],2019-11-30T10:35:40Z,alignmentforum,, 34743,https://www.alignmentforum.org/posts/HvqQm6o8KnwxbdmhZ/estimating-training-compute-of-deep-learning-models,Estimating training compute of Deep Learning models,"['lennart', 'Jsevillamol', 'Marius Hobbhahn', 'Tamay Besiroglu', 'anson.ho']",2022-01-20T16:12:43Z,alignmentforum,, 34757,https://www.alignmentforum.org/posts/BeQcPCTAikQihhiaK/intro-to-brain-like-agi-safety-11-safety-alignment-but-they,[Intro to brain-like-AGI safety] 11. Safety ≠ alignment (but they’re close!),['Steven Byrnes'],2022-04-06T13:39:42Z,alignmentforum,, 34794,https://www.alignmentforum.org/posts/yKzyCw5EjabyZRkbJ/existential-ai-safety-is-not-separate-from-near-term,Existential AI Safety is NOT separate from near-term applications,['scasper'],2022-12-13T14:47:07Z,alignmentforum,, 34820,https://www.alignmentforum.org/posts/hRohhttbtpY3SHmmD/takeoff-speeds-have-a-huge-effect-on-what-it-means-to-work-1,Takeoff speeds have a huge effect on what it means to work on AI x-risk,['Buck'],2022-04-13T17:38:12Z,alignmentforum,, 34832,https://www.alignmentforum.org/posts/xhD6SHAAE9ghKZ9HS/safetywashing,Safetywashing,['Adam Scholl'],2022-07-01T11:56:33Z,alignmentforum,, 34842,https://www.alignmentforum.org/posts/rH9sXupnoR8wSmRe9/ai-safety-via-luck-2,AI Safety via Luck,['Jozdien'],2023-04-01T20:13:55Z,alignmentforum,, 34863,https://www.alignmentforum.org/posts/dTxGyKtshbmWsMWhn/criticism-of-the-main-framework-in-ai-alignment,Criticism of the main framework in AI alignment,['Michele Campolo'],2023-01-31T23:01:27Z,alignmentforum,, 34879,https://www.alignmentforum.org/posts/8NPx9FH2Zbv9gd9rX/alignment-newsletter-50,Alignment Newsletter #50,['Rohin Shah'],2019-03-28T18:10:01Z,alignmentforum,, 34902,https://www.alignmentforum.org/posts/XE6LD2c9NtB7gMdEm/an-92-learning-good-representations-with-contrastive,[AN #92]: Learning good representations with contrastive predictive coding,['Rohin Shah'],2020-03-25T17:20:02Z,alignmentforum,, 34925,https://www.alignmentforum.org/posts/pgsevroJ265WcScHu/the-conceptual-doppelgaenger-problem,The conceptual Doppelgänger problem,['TsviBT'],2023-02-12T17:23:56Z,alignmentforum,, 34942,https://www.alignmentforum.org/posts/5bd75cc58225bf0670374fcb/asymptotic-logical-uncertainty-concrete-failure-of-the-solomonoff-approach,Asymptotic Logical Uncertainty: Concrete Failure of the Solomonoff Approach,['Scott Garrabrant'],2015-07-22T19:27:51Z,alignmentforum,, 34952,https://www.alignmentforum.org/posts/nM99oLhRzrmLWozoM/an-134-underspecification-as-a-cause-of-fragility-to,[AN #134]: Underspecification as a cause of fragility to distribution shift,['Rohin Shah'],2021-01-21T18:10:07Z,alignmentforum,, 34982,https://www.alignmentforum.org/posts/yBdDXXmLYejrcPPv2/two-alternatives-to-logical-counterfactuals,Two Alternatives to Logical Counterfactuals,['jessicata'],2020-04-01T09:48:30Z,alignmentforum,, 35003,https://www.alignmentforum.org/posts/9sYzoRnmqmxZm4Whf/conceptual-problems-with-udt-and-policy-selection,Conceptual Problems with UDT and Policy Selection,['abramdemski'],2019-06-28T23:50:23Z,alignmentforum,, 
35024,https://www.alignmentforum.org/posts/gHgs2e2J5azvGFatb/infra-bayesian-physicalism-a-formal-theory-of-naturalized,Infra-Bayesian physicalism: a formal theory of naturalized induction,['Vanessa Kosoy'],2021-11-30T22:25:57Z,alignmentforum,, 35047,https://www.alignmentforum.org/posts/coQCEe962sjbcCqB9/the-gears-of-impact,The Gears of Impact,['TurnTrout'],2019-10-07T14:44:51Z,alignmentforum,, 35065,https://www.alignmentforum.org/posts/8qCKZj24FJotm3EKd/ultimate-ends-may-be-easily-hidable-behind-convergent,Ultimate ends may be easily hidable behind convergent subgoals,['TsviBT'],2023-04-02T14:51:23Z,alignmentforum,, 35088,https://www.alignmentforum.org/posts/CzrF5rsJWvccFdemb/humans-aren-t-fitness-maximizers,Humans aren't fitness maximizers,['So8res'],2022-10-04T01:31:48Z,alignmentforum,, 35103,https://www.alignmentforum.org/posts/FzF4Xok63ZCZNjmGY/blog-post-a-tale-of-two-research-communities,Blog post: A tale of two research communities,['Aryeh Englander'],2020-08-12T20:41:30Z,alignmentforum,, 35129,https://www.alignmentforum.org/posts/dzDKDRJPQ3kGqfER9/you-can-still-fetch-the-coffee-today-if-you-re-dead-tomorrow,You can still fetch the coffee today if you're dead tomorrow,['davidad'],2022-12-09T14:06:48Z,alignmentforum,, 35143,https://www.alignmentforum.org/posts/uRnprGSiLGXv35foX/how-can-interpretability-help-alignment,How can Interpretability help Alignment?,"['RobertKirk', 'Tomáš Gavenčiak', 'axioman']",2020-05-23T16:16:44Z,alignmentforum,, 35180,https://www.alignmentforum.org/posts/xRWsfGfvDAjRWXcnG/dslt-0-distilling-singular-learning-theory,DSLT 0. Distilling Singular Learning Theory,['Liam Carroll'],2023-06-16T09:50:14Z,alignmentforum,, 35198,https://www.alignmentforum.org/posts/CBHpzpzJy98idiSGs/do-humans-derive-values-from-fictitious-imputed-coherence,Do humans derive values from fictitious imputed coherence?,['TsviBT'],2023-03-05T15:23:04Z,alignmentforum,, 35215,https://www.alignmentforum.org/posts/mXKjuquNC8ivpiKWz/applications-open-for-agi-safety-fundamentals-alignment-1,Applications open for AGI Safety Fundamentals: Alignment Course,"['Richard_Ngo', 'Jamie Bernardi']",2022-12-13T18:31:55Z,alignmentforum,, 35227,https://www.alignmentforum.org/posts/4Pi3WhFb4jPphBzme/don-t-accelerate-problems-you-re-trying-to-solve,Don't accelerate problems you're trying to solve,"['Andrea_Miotti', 'remember']",2023-02-15T18:11:31Z,alignmentforum,, 35248,https://www.alignmentforum.org/posts/fJqP9WcnHXBRBeiBg/meta-questions-about-metaphilosophy,Meta Questions about Metaphilosophy,['Wei Dai'],2023-09-01T01:17:58Z,alignmentforum,, 35266,https://www.alignmentforum.org/posts/fZmMLCnZmMF9xgrs5/the-alignment-newsletter-2-04-16-18,The Alignment Newsletter #2: 04/16/18,['Rohin Shah'],2018-04-16T16:00:19Z,alignmentforum,, 35302,https://www.alignmentforum.org/posts/hwxj4gieR7FWNwYfa/ngo-and-yudkowsky-on-ai-capability-gains-1,Ngo and Yudkowsky on AI capability gains,"['Eliezer Yudkowsky', 'Richard_Ngo']",2021-11-18T22:19:06Z,alignmentforum,, 35334,https://www.alignmentforum.org/posts/GS5P7LLLbSSExb3Sk/the-many-faces-of-infra-beliefs,The Many Faces of Infra-Beliefs,['Diffractor'],2021-04-06T10:43:53Z,alignmentforum,, 35361,https://www.alignmentforum.org/posts/98c5WMDb3iKdzD4tM/oversight-misses-100-of-thoughts-the-ai-does-not-think,Oversight Misses 100% of Thoughts The AI Does Not Think,['johnswentworth'],2022-08-12T16:30:24Z,alignmentforum,, 35374,https://www.alignmentforum.org/posts/zcPLNNw4wgBX5k8kQ/decision-theory,Decision Theory,"['abramdemski', 'Scott Garrabrant']",2018-10-31T18:41:58Z,alignmentforum,, 35393,https://www.alignmentforum.org/posts/s8JuDTo8mTcbHMcLW/alignment-newsletter-25,Alignment Newsletter #25,['Rohin Shah'],2018-09-24T16:10:02Z,alignmentforum,, 
35408,https://www.alignmentforum.org/posts/Pxvq2RMAKCuY6SHm9/logical-counterfactuals-and-proposition-graphs-part-1,"Logical Counterfactuals and Proposition graphs, Part 1",['Donald Hobson'],2019-08-22T22:06:02Z,alignmentforum,, 35417,https://www.alignmentforum.org/posts/xsieF8SXw4J5LkzEg/failure-modes-in-a-shard-theory-alignment-plan,Failure modes in a shard theory alignment plan,['Thomas Kwa'],2022-09-27T22:34:07Z,alignmentforum,, 35438,https://www.alignmentforum.org/posts/HeuJZfexbTBRijTs2/an-68-the-attainable-utility-theory-of-impact,[AN #68]: The attainable utility theory of impact,['Rohin Shah'],2019-10-14T17:00:01Z,alignmentforum,, 35459,https://www.alignmentforum.org/posts/fARMR2tiyCem8DD35/managing-risks-of-our-own-work,Managing risks of our own work,['Beth Barnes'],2023-08-18T00:41:31Z,alignmentforum,, 35483,https://www.alignmentforum.org/posts/eBd6WvzhuqduCkYv3/following-human-norms,Following human norms,['Rohin Shah'],2019-01-20T23:59:17Z,alignmentforum,, 35509,https://www.alignmentforum.org/posts/5GFn87cmw7A5hzR89/discussion-on-the-machine-learning-approach-to-ai-safety,Discussion on the machine learning approach to AI safety,['Vika'],2018-11-01T20:54:39Z,alignmentforum,, 35532,https://www.alignmentforum.org/posts/zthDPAjh9w6Ytbeks/deceptive-alignment,Deceptive Alignment,"['evhub', 'Chris van Merwijk', 'vlad_m', 'Joar Skalse', 'Scott Garrabrant']",2019-06-05T20:16:29Z,alignmentforum,, 35557,https://www.alignmentforum.org/posts/4XcADCLDDguyej2N7/orthogonal-s-formal-goal-alignment-theory-of-change,Orthogonal's Formal-Goal Alignment theory of change,['Tamsin Leake'],2023-05-05T22:36:15Z,alignmentforum,, 35576,https://www.alignmentforum.org/posts/qYzqDtoQaZ3eDDyxa/distinguishing-ai-takeover-scenarios,Distinguishing AI takeover scenarios,"['Sam Clarke', 'Sammy Martin']",2021-09-08T16:19:41Z,alignmentforum,, 35600,https://www.alignmentforum.org/posts/JPHeENwRyXn9YFmXc/empowerment-is-almost-all-we-need,Empowerment is (almost) All We Need,['jacob_cannell'],2022-10-23T21:48:55Z,alignmentforum,, 35621,https://www.alignmentforum.org/posts/AFdRGfYDWQqmkdhFq/a-simple-environment-for-showing-mesa-misalignment,A simple environment for showing mesa misalignment,['Matthew Barnett'],2019-09-26T04:44:59Z,alignmentforum,, 35633,https://www.alignmentforum.org/posts/amK9EqxALJXyd9Rb2/paths-to-high-level-machine-intelligence,Paths To High-Level Machine Intelligence,['Daniel_Eth'],2021-09-10T13:21:12Z,alignmentforum,, 35652,https://www.alignmentforum.org/posts/jLAvJt8wuSFySN975/mechanistic-interpretability-quickstart-guide,Mechanistic Interpretability Quickstart Guide,['Neel Nanda'],2023-01-31T16:35:50Z,alignmentforum,, 35671,https://www.alignmentforum.org/posts/WikzbCsFjpLTRQmXn/declustering-reclustering-and-filling-in-thingspace,"Declustering, reclustering, and filling in thingspace",['Stuart_Armstrong'],2021-12-06T20:53:15Z,alignmentforum,, 35683,https://www.alignmentforum.org/posts/qxvihKpFMuc4tvuf4/recall-and-regurgitation-in-gpt2,Recall and Regurgitation in GPT2,['Megan Kinniment'],2022-10-03T19:35:23Z,alignmentforum,, 35705,https://www.alignmentforum.org/posts/5bd75cc58225bf0670375414/acausal-trade-double-decrease,Acausal trade: double decrease,['Stuart_Armstrong'],2017-06-02T15:33:40Z,alignmentforum,, 35719,https://www.alignmentforum.org/posts/RjbTi6ETSo66ygfEY/extortion-and-trade-negotiations,Extortion and trade negotiations,['Stuart_Armstrong'],2016-12-17T21:39:30Z,alignmentforum,, 
35729,https://www.alignmentforum.org/posts/M9aoMixFLf8JFLRaP/appendix-mathematics-of-indexical-impact-measures,Appendix: mathematics of indexical impact measures,['Stuart_Armstrong'],2020-02-17T13:22:44Z,alignmentforum,, 35744,https://www.alignmentforum.org/posts/uR2uWMD9JGnRnYSeM/take-5-another-problem-for-natural-abstractions-is-laziness,Take 5: Another problem for natural abstractions is laziness.,['Charlie Steiner'],2022-12-06T07:00:49Z,alignmentforum,, 35761,https://www.alignmentforum.org/posts/4v3hMuKfsGatLXPgt/investigating-the-learning-coefficient-of-modular-addition-1,Investigating the learning coefficient of modular addition: hackathon project,"['Nina Rimsky', 'Dmitry Vaintrob']",2023-10-17T19:51:30Z,alignmentforum,, 35778,https://www.alignmentforum.org/posts/2Wf3R4NZ77CLczLL2/cryptographic-boxes-for-unfriendly-ai,Cryptographic Boxes for Unfriendly AI,['paulfchristiano'],2010-12-18T08:28:46Z,alignmentforum,, 35798,https://www.alignmentforum.org/posts/5ciYedyQDDqAcrDLr/a-positive-case-for-how-we-might-succeed-at-prosaic-ai,A positive case for how we might succeed at prosaic AI alignment,['evhub'],2021-11-16T01:49:48Z,alignmentforum,, 35826,https://www.alignmentforum.org/posts/x3fNwSe5aWZb5yXEG/reframing-superintelligence-comprehensive-ai-services-as,Reframing Superintelligence: Comprehensive AI Services as General Intelligence,['Rohin Shah'],2019-01-08T07:12:30Z,alignmentforum,, 35856,https://www.alignmentforum.org/posts/HWRR8YzuM63yZyTPG/new-paper-agi-agent-safety-by-iteratively-improving-the,New paper: AGI Agent Safety by Iteratively Improving the Utility Function,['Koen.Holtman'],2020-07-15T14:05:11Z,alignmentforum,, 35871,https://www.alignmentforum.org/posts/nQxqSsHfexivsd6vB/generalised-models-as-a-category,Generalised models as a category,['Stuart_Armstrong'],2021-02-16T16:08:28Z,alignmentforum,, 35891,https://www.alignmentforum.org/posts/QBHxfATzdASQcXwan/the-mechanistic-and-normative-structure-of-agency,The Mechanistic and Normative Structure of Agency,['Gordon Seidoh Worley'],2020-05-18T16:03:35Z,alignmentforum,, 35907,https://www.alignmentforum.org/posts/L7yHdqRiHKd3FhQ7B/alignment-newsletter-three-year-retrospective,Alignment Newsletter Three Year Retrospective,['Rohin Shah'],2021-04-07T14:39:43Z,alignmentforum,, 35941,https://www.alignmentforum.org/posts/DvCLEkr9pXLnWikB8/some-arguments-against-strong-scaling,Some Arguments Against Strong Scaling,['Joar Skalse'],2023-01-13T12:04:27Z,alignmentforum,, 35974,https://www.alignmentforum.org/posts/yT7QdN2wEubR8exAH/finite-factored-sets-orthogonality-and-time,Finite Factored Sets: Orthogonality and Time,['Scott Garrabrant'],2021-06-10T01:22:34Z,alignmentforum,, 35990,https://www.alignmentforum.org/posts/aPeJE8bSo6rAFoLqg/solidgoldmagikarp-plus-prompt-generation,"SolidGoldMagikarp (plus, prompt generation)","['Jessica Rumbelow', 'mwatkins']",2023-02-05T22:02:36Z,alignmentforum,, 36010,https://www.alignmentforum.org/posts/D3hP47pZwXNPRByj8/an-102-meta-learning-by-gpt-3-and-a-list-of-full-proposals,"[AN #102]: Meta learning by GPT-3, and a list of full proposals for AI alignment",['Rohin Shah'],2020-06-03T17:20:02Z,alignmentforum,, 36033,https://www.alignmentforum.org/posts/z2669YMNDt42vHugj/annual-agi-benchmarking-event,Annual AGI Benchmarking Event,['Lawrence Phillips'],2022-08-27T00:06:32Z,alignmentforum,, 
36049,https://www.alignmentforum.org/posts/Zfk6faYvcf5Ht7xDx/compute-thresholds-proposed-rules-to-mitigate-risk-of-a-lab,Compute Thresholds: proposed rules to mitigate risk of a “lab leak” accident during AI training runs,['davidad'],2023-07-22T18:09:04Z,alignmentforum,, 36073,https://www.alignmentforum.org/posts/jHSi6BwDKTLt5dmsG/grokking-the-intentional-stance,Grokking the Intentional Stance,['jbkjr'],2021-08-31T15:49:37Z,alignmentforum,, 36091,https://www.alignmentforum.org/posts/SZM32BdvYgrsBfYnw/short-version-information-loss-greater-than-basin-flatness,[Short version] Information Loss --> Basin flatness,['Vivek Hebbar'],2022-05-21T12:59:11Z,alignmentforum,, 36103,https://www.alignmentforum.org/posts/3ecs6duLmTfyra3Gp/some-lessons-learned-from-studying-indirect-object,Some Lessons Learned from Studying Indirect Object Identification in GPT-2 small,"['KevinRoWang', 'Alexandre Variengien', 'Arthur Conmy', 'Buck', 'jsteinhardt']",2022-10-28T23:55:45Z,alignmentforum,, 36122,https://www.alignmentforum.org/posts/NvwjExA7FcPDoo3L7/are-there-cognitive-realms,Are there cognitive realms?,['TsviBT'],2023-03-12T19:28:53Z,alignmentforum,, 36139,https://www.alignmentforum.org/posts/eGihD5jnD6LFzgDZA/agi-safety-from-first-principles-control,AGI safety from first principles: Control,['Richard_Ngo'],2020-10-02T21:51:21Z,alignmentforum,, 36177,https://www.alignmentforum.org/posts/gWRJDwqHnmJhurXgo/an-105-the-economic-trajectory-of-humanity-and-what-we-might,"[AN #105]: The economic trajectory of humanity, and what we might mean by optimization",['Rohin Shah'],2020-06-24T17:30:03Z,alignmentforum,, 36204,https://www.alignmentforum.org/posts/hJaJw6LK39zpyCKW6/standard-ml-oracles-vs-counterfactual-ones,Standard ML Oracles vs Counterfactual ones,['Stuart_Armstrong'],2018-10-10T20:01:14Z,alignmentforum,, 36220,https://www.alignmentforum.org/posts/5bd75cc58225bf067037523a/logical-inductors-that-trust-their-limits,Logical Inductors that trust their limits,['Scott Garrabrant'],2016-09-20T23:17:55Z,alignmentforum,, 36230,https://www.alignmentforum.org/posts/2NDt9DSPRbDoZPcKT/alignment-newsletter-21,Alignment Newsletter #21,['Rohin Shah'],2018-08-27T16:20:01Z,alignmentforum,, 36264,https://www.alignmentforum.org/posts/Gc9FGtdXhK9sCSEYu/what-a-compute-centric-framework-says-about-ai-takeoff,What a compute-centric framework says about AI takeoff speeds,['Tom Davidson'],2023-01-23T04:02:08Z,alignmentforum,, 36292,https://www.alignmentforum.org/posts/XWPJfgBymBbL3jdFd/an-58-mesa-optimization-what-it-is-and-why-we-should-care,"[AN #58] Mesa optimization: what it is, and why we should care",['Rohin Shah'],2019-06-24T16:10:01Z,alignmentforum,, 36310,https://www.alignmentforum.org/posts/4F8Bg8Z5cePTBofzo/announcing-the-introduction-to-ml-safety-course,Announcing the Introduction to ML Safety course,"['Dan H', 'ThomasW', 'ozhang']",2022-08-06T02:46:00Z,alignmentforum,, 36319,https://www.alignmentforum.org/posts/HEZgGBZTpT4Bov7nH/mapping-the-conceptual-territory-in-ai-existential-safety,Mapping the Conceptual Territory in AI Existential Safety and Alignment,['jbkjr'],2021-02-12T07:55:54Z,alignmentforum,, 36356,https://www.alignmentforum.org/posts/jCZhy3nqH2MoethZQ/an-116-how-to-make-explanations-of-neurons-compositional,[AN #116]: How to make explanations of neurons compositional,['Rohin Shah'],2020-09-09T17:20:05Z,alignmentforum,, 36377,https://www.alignmentforum.org/posts/6XCTppoPAMdKCPFb4/oracles-reject-all-deals-break-superrationality-with-1,"Oracles: reject all deals - break superrationality, with superrationality",['Stuart_Armstrong'],2019-12-05T13:51:27Z,alignmentforum,, 
36389,https://www.alignmentforum.org/posts/L3QDs6of4Rb2TgpRD/the-ai-debate-debate,"The ""AI Debate"" Debate",['michaelcohen'],2020-07-02T10:16:24Z,alignmentforum,, 36404,https://www.alignmentforum.org/posts/axzPYvcmWr2TwvnLi/an-101-why-we-should-rigorously-measure-and-forecast-ai,[AN #101]: Why we should rigorously measure and forecast AI progress,['Rohin Shah'],2020-05-27T17:20:02Z,alignmentforum,, 36431,https://www.alignmentforum.org/posts/jP3vRbtvDtBtgvkeb/clarifying-consequentialists-in-the-solomonoff-prior,Clarifying Consequentialists in the Solomonoff Prior,['vlad_m'],2018-07-11T02:35:57Z,alignmentforum,, 36444,https://www.alignmentforum.org/posts/8GdPargak863xaebm/an-analytic-perspective-on-ai-alignment,An Analytic Perspective on AI Alignment,['DanielFilan'],2020-03-01T04:10:03Z,alignmentforum,, 36467,https://www.alignmentforum.org/posts/9x5mtYjHYfr4T7KLj/learning-the-smooth-prior,Learning the smooth prior,"['Geoffrey Irving', 'Rohin Shah', 'evhub']",2022-04-29T21:10:18Z,alignmentforum,, 36493,https://www.alignmentforum.org/posts/Ntmbm79zQakr29XLw/understanding-the-two-head-strategy-for-teaching-ml-to,Understanding the two-head strategy for teaching ML to answer questions honestly,['Adam Scherlis'],2022-01-11T23:24:22Z,alignmentforum,, 36509,https://www.alignmentforum.org/posts/AJ6GHm5n6fBRJbMhq/announcing-epoch-a-research-organization-investigating-the,Announcing Epoch: A research organization investigating the road to Transformative AI,"['Jsevillamol', 'Pablo Villalobos', 'Tamay', 'lennart', 'Marius Hobbhahn', 'anson.ho']",2022-06-27T13:55:51Z,alignmentforum,, 36523,https://www.alignmentforum.org/posts/PF58wEdztZFX2dSue/how-truthful-is-gpt-3-a-benchmark-for-language-models,How truthful is GPT-3? A benchmark for language models,['Owain_Evans'],2021-09-16T10:09:53Z,alignmentforum,, 36543,https://www.alignmentforum.org/posts/Hqahetrx6g8FncokC/control,Control,['TsviBT'],2023-02-05T16:16:41Z,alignmentforum,, 36559,https://www.alignmentforum.org/posts/2J6fFHQZkWxFcjL6c/tracr-compiled-transformers-as-a-laboratory-for-1,Tracr: Compiled Transformers as a Laboratory for Interpretability | DeepMind,['DragonGod'],2023-01-13T16:53:10Z,alignmentforum,, 36568,https://www.alignmentforum.org/posts/6mysMAqvo9giHC4iX/what-s-general-purpose-search-and-why-might-we-expect-to-see,"What's General-Purpose Search, And Why Might We Expect To See It In Trained ML Systems?",['johnswentworth'],2022-08-15T22:48:39Z,alignmentforum,, 36584,https://www.alignmentforum.org/posts/YzbQeCiwoLBHrvAh4/palm-in-extrapolating-gpt-n-performance,"PaLM in ""Extrapolating GPT-N performance""",['Lukas Finnveden'],2022-04-06T13:05:13Z,alignmentforum,, 36607,https://www.alignmentforum.org/posts/nbhTzEosM9sqEvr6P/corrigibility-doesn-t-always-have-a-good-action-to-take,Corrigibility doesn't always have a good action to take,['Stuart_Armstrong'],2018-08-28T20:30:12Z,alignmentforum,, 36616,https://www.alignmentforum.org/posts/GPADepj6yP8zqSbJh/an-65-learning-useful-skills-by-watching-humans-play,[AN #65]: Learning useful skills by watching humans “play”,['Rohin Shah'],2019-09-23T17:30:02Z,alignmentforum,, 36661,https://www.alignmentforum.org/posts/3oQCY4he4zRzF6vQb/introduction-to-towards-causal-foundations-of-safe-agi,Introduction to Towards Causal Foundations of Safe AGI,"['tom4everitt', 'Lewis Hammond', 'Francis Rhys Ward', 'RyanCarey', 'James Fox', 'mattmacdermott', 'sbenthall']",2023-06-12T17:55:24Z,alignmentforum,, 36683,https://www.alignmentforum.org/posts/Q7WiHdSSShkNsgDpa/how-much-can-value-learning-be-disentangled,How much can value learning be disentangled?,['Stuart_Armstrong'],2019-01-29T14:17:01Z,alignmentforum,, 
36704,https://www.alignmentforum.org/posts/wzPzPmAsG3BwrBrwy/test-cases-for-impact-regularisation-methods,Test Cases for Impact Regularisation Methods,['DanielFilan'],2019-02-06T21:50:01Z,alignmentforum,, 36733,https://www.alignmentforum.org/posts/jP4cx3TCweDngSLS6/goal-directedness-what-success-looks-like,Goal-Directedness: What Success Looks Like,['adamShimi'],2020-08-16T18:33:29Z,alignmentforum,, 36750,https://www.alignmentforum.org/posts/iydwbZhATANhjoGP7/more-variations-on-pseudo-alignment,More variations on pseudo-alignment,['evhub'],2019-11-04T23:24:20Z,alignmentforum,, 36772,https://www.alignmentforum.org/posts/XpCnhaAQrssq8tJBG/an-interactive-introduction-to-grokking-and-mechanistic,An interactive introduction to grokking and mechanistic interpretability,"['Adam Pearce', 'Asma Ghandeharioun']",2023-08-07T19:09:19Z,alignmentforum,, 36794,https://www.alignmentforum.org/posts/J7Rnt8aJPH7MALkmq/vaniver-s-view-on-factored-cognition,Vaniver's View on Factored Cognition,['Vaniver'],2019-08-23T02:54:01Z,alignmentforum,, 36817,https://www.alignmentforum.org/posts/kYgWmKJnqq8QkbjFj/bayesian-utility-representing-preference-by-probability,Bayesian Utility: Representing Preference by Probability Measures,['Vladimir_Nesov'],2009-07-27T14:28:55Z,alignmentforum,, 36828,https://www.alignmentforum.org/posts/WLYBy5Cus4oRFY3mu/thoughts-on-open-source-ai,Thoughts on open source AI,['Sam Marks'],2023-11-03T15:35:42Z,alignmentforum,, 36845,https://www.alignmentforum.org/posts/aBRS3x4sPSJ9G6xkj/underspecification-of-oracle-ai,Underspecification of Oracle AI,"['Rubi J. Hudson', 'Adam Jermyn', 'Johannes Treutlein']",2023-01-15T20:10:42Z,alignmentforum,, 36874,https://www.alignmentforum.org/posts/MiWBC2A2cADspausG/counterfactuals-are-confusing-because-of-an-ontological,Counterfactuals are Confusing because of an Ontological Shift,['Chris_Leong'],2022-08-05T19:03:47Z,alignmentforum,, 36889,https://www.alignmentforum.org/posts/tyyPoKWxpitEcAkw2/new-canada-ai-safety-and-governance-community,*New* Canada AI Safety & Governance community,"[""Wyatt Tessari L'Allié""]",2022-08-29T18:46:00Z,alignmentforum,, 36900,https://www.alignmentforum.org/posts/6chtMKXpLcJ26t7n5/integrating-three-models-of-human-cognition,Integrating Three Models of (Human) Cognition,['jbkjr'],2021-11-23T01:06:49Z,alignmentforum,, 36932,https://www.alignmentforum.org/posts/5sWNnbHRkExfLaS49/before-smart-ai-there-will-be-many-mediocre-or-specialized,"Before smart AI, there will be many mediocre or specialized AIs",['Lukas Finnveden'],2023-05-26T01:38:42Z,alignmentforum,, 36957,https://www.alignmentforum.org/posts/LAHXvi4qwXogmdTHd/sub-sums-and-sub-tensors-1,Sub-Sums and Sub-Tensors,['Scott Garrabrant'],2020-11-05T18:06:44Z,alignmentforum,, 36971,https://www.alignmentforum.org/posts/7e5tyFnpzGCdfT4mR/research-agenda-supervising-ais-improving-ais,Research agenda: Supervising AIs improving AIs,"['Quintin Pope', 'Owen Dudney', 'Roman Engeler', 'jacquesthibs']",2023-04-29T17:09:21Z,alignmentforum,, 37014,https://www.alignmentforum.org/posts/2NaAhMPGub8F2Pbr7/the-fusion-power-generator-scenario,The Fusion Power Generator Scenario,['johnswentworth'],2020-08-08T18:31:39Z,alignmentforum,, 37027,https://www.alignmentforum.org/posts/5bd75cc58225bf06703753d4/two-major-obstacles-for-logical-inductor-decision-theory,Two Major Obstacles for Logical Inductor Decision Theory,['Scott Garrabrant'],2017-04-17T21:10:55Z,alignmentforum,, 
37045,https://www.alignmentforum.org/posts/tDDDZ2nZdvyziwSvv/an-113-checking-the-ethical-intuitions-of-large-language,[AN #113]: Checking the ethical intuitions of large language models,['Rohin Shah'],2020-08-19T17:10:04Z,alignmentforum,, 37073,https://www.alignmentforum.org/posts/nFv2buafNc9jSaxAH/siren-worlds-and-the-perils-of-over-optimised-search,Siren worlds and the perils of over-optimised search,['Stuart_Armstrong'],2014-04-07T11:00:19Z,alignmentforum,, 37092,https://www.alignmentforum.org/posts/pAXDrFTMCJtkrfREc/intelligent-behaviour-across-systems-scales-and-substrates,"Intelligent behaviour across systems, scales and substrates",['Nora_Ammann'],2022-10-21T17:09:33Z,alignmentforum,, 37110,https://www.alignmentforum.org/posts/6Fpvch8RR29qLEWNH/chinchilla-s-wild-implications,chinchilla's wild implications,['nostalgebraist'],2022-07-31T01:18:28Z,alignmentforum,, 37133,https://www.alignmentforum.org/posts/z8afQRsH9wWsB4iMD/harsanyi-s-social-aggregation-theorem-and-what-it-means-for,Harsanyi's Social Aggregation Theorem and what it means for CEV,['AlexMennen'],2013-01-05T21:38:43Z,alignmentforum,, 37149,https://www.alignmentforum.org/posts/bXTNKjsD4y3fabhwR/conjecture-a-retrospective-after-8-months-of-work-1,Conjecture: a retrospective after 8 months of work,"['Connor Leahy', 'Sid Black', 'Gabriel Alfour', 'Chris Scammell']",2022-11-23T17:10:24Z,alignmentforum,, 37184,https://www.alignmentforum.org/posts/2drtQcFoyRCjYaJFe/reducing-goodhart-announcement-executive-summary,"Reducing Goodhart: Announcement, Executive Summary",['Charlie Steiner'],2022-08-20T09:49:24Z,alignmentforum,, 37203,https://www.alignmentforum.org/posts/WknLjywekGajwD2fp/an-97-are-there-historical-examples-of-large-robust,"[AN #97]: Are there historical examples of large, robust discontinuities?",['Rohin Shah'],2020-04-29T17:30:02Z,alignmentforum,, 37235,https://www.alignmentforum.org/posts/rPC9Y9b5vkTqakywC/an-93-the-precipice-we-re-standing-at-and-how-we-can-back,"[AN #93]: The Precipice we’re standing at, and how we can back away from it",['Rohin Shah'],2020-04-01T17:10:02Z,alignmentforum,, 37263,https://www.alignmentforum.org/posts/YRis8ZDstqnaW2erL/some-quick-follow-up-experiments-to-taken-out-of-context-on,Some Quick Follow-Up Experiments to “Taken out of context: On measuring situational awareness in LLMs”,['miles'],2023-10-03T02:22:00Z,alignmentforum,, 37288,https://www.alignmentforum.org/posts/Bpw2HXjMa3GaouDnC/what-are-red-flags-for-neural-network-suffering,What are red flags for Neural Network suffering?,['Marius Hobbhahn'],2021-11-08T12:51:28Z,alignmentforum,, 37311,https://www.alignmentforum.org/posts/dKTh9Td3KaJ8QW6gw/why-assume-agis-will-optimize-for-fixed-goals,why assume AGIs will optimize for fixed goals?,['nostalgebraist'],2022-06-10T01:28:11Z,alignmentforum,, 37324,https://www.alignmentforum.org/posts/TnkDtTAqCGetvLsgr/a-possible-resolution-to-spurious-counterfactuals,A Possible Resolution To Spurious Counterfactuals,['JoshuaOSHickman'],2021-12-06T18:26:41Z,alignmentforum,, 37341,https://www.alignmentforum.org/posts/5p4ynEJQ8nXxp2sxC/parsing-chris-mingard-on-neural-networks,Parsing Chris Mingard on Neural Networks,['Alex Flint'],2021-05-06T22:16:15Z,alignmentforum,, 37353,https://www.alignmentforum.org/posts/RuDD3aQWLDSb4eTXP/what-selection-theorems-do-we-expect-want,What Selection Theorems Do We Expect/Want?,['johnswentworth'],2021-10-01T16:03:49Z,alignmentforum,, 37376,https://www.alignmentforum.org/posts/w7mS6syTderWihHPM/looking-for-adversarial-collaborators-to-test-our-debate,Looking for adversarial collaborators to test our Debate protocol,['Beth Barnes'],2020-08-19T03:15:27Z,alignmentforum,, 
37392,https://www.alignmentforum.org/posts/Cu7yv4eM6dCeA67Af/minimization-of-prediction-error-as-a-foundation-for-human,Minimization of prediction error as a foundation for human values in AI alignment,['Gordon Seidoh Worley'],2019-10-09T18:23:42Z,alignmentforum,, 37409,https://www.alignmentforum.org/posts/wt7HXaCWzuKQipqz3/eis-vi-critiques-of-mechanistic-interpretability-work-in-ai,EIS VI: Critiques of Mechanistic Interpretability Work in AI Safety,['scasper'],2023-02-17T20:48:26Z,alignmentforum,, 37430,https://www.alignmentforum.org/posts/a2sw7HKyjnAAp2oZ4/conditioning-predictive-models-open-problems-conclusion-and,"Conditioning Predictive Models: Open problems, Conclusion, and Appendix","['evhub', 'Adam Jermyn', 'Johannes Treutlein', 'Rubi J. Hudson', 'kcwoolverton']",2023-02-10T19:21:20Z,alignmentforum,, 37462,https://www.alignmentforum.org/posts/pKSmEkSQJsCSTK6nH/an-open-agency-architecture-for-safe-transformative-ai,An Open Agency Architecture for Safe Transformative AI,['davidad'],2022-12-20T13:04:06Z,alignmentforum,, 37491,https://www.alignmentforum.org/posts/8CvkNa6FKSrK4Nj83/ban-development-of-unpredictable-powerful-models,Ban development of unpredictable powerful models?,['TurnTrout'],2023-06-20T01:43:12Z,alignmentforum,, 37511,https://www.alignmentforum.org/posts/LayCHnetp2jBmmpDx/openai-microsoft-announce-next-generation-language-model,"OpenAI/Microsoft announce ""next generation language model"" integrated into Bing/Edge",['LawrenceC'],2023-02-07T20:38:09Z,alignmentforum,, 37520,https://www.alignmentforum.org/posts/BfN88BfZQ4XGeZkda/concrete-reasons-for-hope-about-ai,Concrete Reasons for Hope about AI,['Zac Hatfield-Dodds'],2023-01-14T01:22:19Z,alignmentforum,, 37550,https://www.alignmentforum.org/posts/5bd75cc58225bf06703750e8/speculations-on-information-under-logical-uncertainty,Speculations on information under logical uncertainty,['TsviBT'],2016-02-24T21:58:57Z,alignmentforum,, 37573,https://www.alignmentforum.org/posts/8viQEp8KBg2QSW4Yc/solidgoldmagikarp-iii-glitch-token-archaeology,SolidGoldMagikarp III: Glitch token archaeology,"['mwatkins', 'Jessica Rumbelow']",2023-02-14T10:17:51Z,alignmentforum,, 37587,https://www.alignmentforum.org/posts/SsuqYoyBnheSj7jLw/principles-of-privacy-for-alignment-research,Principles of Privacy for Alignment Research,['johnswentworth'],2022-07-27T19:53:28Z,alignmentforum,, 37610,https://www.alignmentforum.org/posts/XvhrmTog2bkf5s2qu/how-toy-models-of-ontology-changes-can-be-misleading,How toy models of ontology changes can be misleading,['Stuart_Armstrong'],2023-10-21T21:13:56Z,alignmentforum,, 37621,https://www.alignmentforum.org/posts/8ccTZ9ZxpJrvnxt4F/shard-theory-in-nine-theses-a-distillation-and-critical,Shard Theory in Nine Theses: a Distillation and Critical Appraisal,['LawrenceC'],2022-12-19T22:52:20Z,alignmentforum,, 37644,https://www.alignmentforum.org/posts/dWMzzd6hfimTQk8yk/how-to-solve-deception-and-still-fail,How to solve deception and still fail.,['Charlie Steiner'],2023-10-04T19:56:56Z,alignmentforum,, 37666,https://www.alignmentforum.org/posts/HCv2uwgDGf5dyX5y6/preface-to-the-sequence-on-iterated-amplification,Preface to the sequence on iterated amplification,['paulfchristiano'],2018-11-10T13:24:13Z,alignmentforum,, 37680,https://www.alignmentforum.org/posts/F9vcbEMKW48j4Z6h9/non-consequentialist-cooperation,Non-Consequentialist Cooperation?,['abramdemski'],2019-01-11T09:15:37Z,alignmentforum,, 
37697,https://www.alignmentforum.org/posts/smDeWfgeYDg9eGq5G/environments-for-measuring-deception-resource-acquisition,"Environments for Measuring Deception, Resource Acquisition, and Ethical Violations",['Dan H'],2023-04-07T18:40:21Z,alignmentforum,, 37711,https://www.alignmentforum.org/posts/vpvLqinp4FoigqvKy/reflective-bayesianism,Reflective Bayesianism,['abramdemski'],2021-04-06T19:48:44Z,alignmentforum,, 37725,https://www.alignmentforum.org/posts/Q8Z8yoG4tBaowBHwk/critiquing-what-failure-looks-like,"Critiquing ""What failure looks like""",['Grue_Slinky'],2019-12-27T23:59:50Z,alignmentforum,, 37742,https://www.alignmentforum.org/posts/fcnFddKjKZdDXt5cp/knowledge-is-not-just-digital-abstraction-layers,Knowledge is not just digital abstraction layers,['Alex Flint'],2021-06-15T03:49:55Z,alignmentforum,, 37752,https://www.alignmentforum.org/posts/GEPX7jgLMB8vR2qaK/opinions-on-interpretable-machine-learning-and-70-summaries,Opinions on Interpretable Machine Learning and 70 Summaries of Recent Papers,['Peter Hase'],2021-04-09T19:19:43Z,alignmentforum,, 37784,https://www.alignmentforum.org/posts/hgSKz3RkSSgZXrXNp/causality-adds-up-to-normality,Causality Adds Up to Normality,['johnswentworth'],2020-06-15T17:19:58Z,alignmentforum,, 37801,https://www.alignmentforum.org/posts/kqxEJkq5Big9nNKxy/beyond-kolmogorov-and-shannon,Beyond Kolmogorov and Shannon,"['Alexander Gietelink Oldenziel', 'Adam Shai']",2022-10-25T15:13:56Z,alignmentforum,, 37812,https://www.alignmentforum.org/posts/rP66bz34crvDudzcJ/decision-theory-does-not-imply-that-we-get-to-have-nice,Decision theory does not imply that we get to have nice things,['So8res'],2022-10-18T03:04:49Z,alignmentforum,, 37847,https://www.alignmentforum.org/posts/eDxhEDnKLfhtc28XK/a-toy-model-of-gradient-hacking,A Toy Model of Gradient Hacking,['Oam Patel'],2022-06-20T22:01:25Z,alignmentforum,, 37867,https://www.alignmentforum.org/posts/EnRLAnRLG5zyJ9sAf/cataloguing-priors-in-theory-and-practice,Cataloguing Priors in Theory and Practice,['Paul Bricman'],2022-10-13T12:36:40Z,alignmentforum,, 37891,https://www.alignmentforum.org/posts/5eY6A4Zfu6rfeMfJS/learning-normativity-language,Learning Normativity: Language,['Bunthut'],2021-02-05T22:26:15Z,alignmentforum,, 37909,https://www.alignmentforum.org/posts/5Zfyktwgz3rvAvZyL/paper-discovering-novel-algorithms-with-alphatensor-deepmind,Paper: Discovering novel algorithms with AlphaTensor [Deepmind],['LawrenceC'],2022-10-05T16:20:12Z,alignmentforum,, 37919,https://www.alignmentforum.org/posts/CphfDP4ynz3QQ4AKY/introducing-the-ml-safety-scholars-program,Introducing the ML Safety Scholars Program,"['Dan H', 'ThomasW', 'Mantas Mazeika', 'ozhang', 'Sidney Hough', 'Kevin Liu']",2022-05-04T16:01:52Z,alignmentforum,, 37929,https://www.alignmentforum.org/posts/wjQkQ8bgWWFym8zF9/distilled-representations-research-agenda-1,Distilled Representations Research Agenda,"['Hoagy', 'mishajw']",2022-10-18T20:59:20Z,alignmentforum,, 37950,https://www.alignmentforum.org/posts/vavnqwYbc8jMu3dTY/ai-coordination-needs-clear-wins,AI coordination needs clear wins,['evhub'],2022-09-01T23:41:48Z,alignmentforum,, 37961,https://www.alignmentforum.org/posts/vwLxd6hhFvPbvKmBH/yudkowsky-and-christiano-discuss-takeoff-speeds,"Yudkowsky and Christiano discuss ""Takeoff Speeds""",['Eliezer Yudkowsky'],2021-11-22T19:35:28Z,alignmentforum,, 
37982,https://www.alignmentforum.org/posts/pH3eKEAEupx8c2ep9/update-on-ought-s-experiments-on-factored-evaluation-of,Update on Ought's experiments on factored evaluation of arguments,['Owain_Evans'],2020-01-12T21:20:42Z,alignmentforum,, 37992,https://www.alignmentforum.org/posts/FpokmCnbP3CEZ5h4t/ml-alignment-theory-program-under-evan-hubinger,ML Alignment Theory Program under Evan Hubinger,"['ozhang', 'evhub', 'Victor W']",2021-12-06T00:03:15Z,alignmentforum,, 38004,https://www.alignmentforum.org/posts/WiXePTj7KeEycbiwK/survey-on-ai-existential-risk-scenarios,Survey on AI existential risk scenarios,"['Sam Clarke', 'Alexis Carlier', 'Jonas Schuett']",2021-06-08T17:12:42Z,alignmentforum,, 38025,https://www.alignmentforum.org/posts/MyvkTKfndx9t4zknh/eis-ii-what-is-interpretability,EIS II: What is “Interpretability”?,['scasper'],2023-02-09T16:48:35Z,alignmentforum,, 38040,https://www.alignmentforum.org/posts/nwLQt4e7bstCyPEXs/internal-interfaces-are-a-high-priority-interpretability,Internal Interfaces Are a High-Priority Interpretability Target,['Thane Ruthenis'],2022-12-29T17:49:27Z,alignmentforum,, 38054,https://www.alignmentforum.org/posts/ZddY8BZbvoXHEvDHf/selfishness-preference-falsification-and-ai-alignment,"Selfishness, preference falsification, and AI alignment",['jessicata'],2021-10-28T00:16:47Z,alignmentforum,, 38075,https://www.alignmentforum.org/posts/HcJPJxkyCsrpSdCii/statement-on-ai-extinction-signed-by-agi-labs-top-academics,"Statement on AI Extinction - Signed by AGI Labs, Top Academics, and Many Other Notable Figures",['Dan H'],2023-05-30T09:05:26Z,alignmentforum,, 38086,https://www.alignmentforum.org/posts/dBmfb76zx6wjPsBC7/when-can-we-trust-model-evaluations,When can we trust model evaluations?,['evhub'],2023-07-28T19:42:22Z,alignmentforum,, 38116,https://www.alignmentforum.org/posts/5bd75cc58225bf0670374ed9/oracle-machines-for-automated-philosophy,Oracle machines for automated philosophy,['Nisan'],2015-02-17T15:10:04Z,alignmentforum,, 38131,https://www.alignmentforum.org/posts/5bd75cc58225bf06703754d4/the-doomsday-argument-in-anthropic-decision-theory,The Doomsday argument in anthropic decision theory,['Stuart_Armstrong'],2017-08-31T13:20:26Z,alignmentforum,, 38142,https://www.alignmentforum.org/posts/aw5nqamqtnDnW8w9u/reward-hacking-from-a-causal-perspective,Reward Hacking from a Causal Perspective,"['tom4everitt', 'Francis Rhys Ward', 'sbenthall', 'James Fox', 'mattmacdermott', 'RyanCarey']",2023-07-21T18:27:40Z,alignmentforum,, 38165,https://www.alignmentforum.org/posts/ydtdwWSCCihms5Jeo/catastrophic-risks-from-ai-6-discussion-and-faq,Catastrophic Risks from AI #6: Discussion and FAQ,"['Dan H', 'Mantas Mazeika', 'ThomasW']",2023-06-27T23:23:59Z,alignmentforum,, 38204,https://www.alignmentforum.org/posts/sMhJsRfLXAg87EEqT/section-7-foundations-of-rational-agency,Section 7: Foundations of Rational Agency,['JesseClifton'],2019-12-22T02:05:24Z,alignmentforum,, 38231,https://www.alignmentforum.org/posts/zkF9PNSyDKusoyLkP/investigating-ai-takeover-scenarios,Investigating AI Takeover Scenarios,['Sammy Martin'],2021-09-17T18:47:22Z,alignmentforum,, 38256,https://www.alignmentforum.org/posts/zkfmhWQXsZweijmzi/causality-and-a-cost-semantics-for-neural-networks,Causality and a Cost Semantics for Neural Networks,['scottviteri'],2023-08-21T21:02:01Z,alignmentforum,, 38272,https://www.alignmentforum.org/posts/ZDZmopKquzHYPRNxq/selection-vs-control,Selection vs Control,['abramdemski'],2019-06-02T07:01:40Z,alignmentforum,, 38283,https://www.alignmentforum.org/posts/uqAdqrvxqGqeBHjTP/towards-understanding-based-safety-evaluations,Towards understanding-based safety evaluations,['evhub'],2023-03-15T18:18:01Z,alignmentforum,, 
38295,https://www.alignmentforum.org/posts/b9sGz74ayftqPBDYv/the-space-of-systems-and-the-space-of-maps,The space of systems and the space of maps,"['Jan_Kulveit', 'rosehadshar', 'Nora_Ammann', 'clem_acs']",2023-03-22T14:59:05Z,alignmentforum,, 38310,https://www.alignmentforum.org/posts/3xF66BNSC5caZuKyC/why-subagents,Why Subagents?,['johnswentworth'],2019-08-01T22:17:26Z,alignmentforum,, 38320,https://www.alignmentforum.org/posts/o6ptPu7arZrqRCxyz/200-cop-in-mi-exploring-polysemanticity-and-superposition,200 COP in MI: Exploring Polysemanticity and Superposition,['Neel Nanda'],2023-01-03T01:52:46Z,alignmentforum,, 38342,https://www.alignmentforum.org/posts/q9GZyfm8xKAD2BGdi/strong-implication-of-preference-uncertainty,Strong implication of preference uncertainty,['Stuart_Armstrong'],2020-08-12T19:02:50Z,alignmentforum,, 38356,https://www.alignmentforum.org/posts/SBahPHStddcFJnyft/some-constructions-for-proof-based-cooperation-without-loeb,Some constructions for proof-based cooperation without Löb,['James Payor'],2023-03-21T16:12:17Z,alignmentforum,, 38373,https://www.alignmentforum.org/posts/LxNwBNxXktvzAko65/reframing-superintelligence-llms-4-years,“Reframing Superintelligence” + LLMs + 4 years,['Eric Drexler'],2023-07-10T13:42:10Z,alignmentforum,, 38410,https://www.alignmentforum.org/posts/b9b4y2azGjthGBEFb/an-120-tracing-the-intellectual-roots-of-ai-and-ai-alignment,[AN #120]: Tracing the intellectual roots of AI and AI alignment,['Rohin Shah'],2020-10-07T17:10:07Z,alignmentforum,, 38446,https://www.alignmentforum.org/posts/BKM8uQS6QdJPZLqCr/towards-a-mechanistic-understanding-of-corrigibility,Towards a mechanistic understanding of corrigibility,['evhub'],2019-08-22T23:20:57Z,alignmentforum,, 38479,https://www.alignmentforum.org/posts/n767Q8HqbrteaPA25/complex-systems-for-ai-safety-pragmatic-ai-safety-3,Complex Systems for AI Safety [Pragmatic AI Safety #3],"['Dan H', 'ThomasW']",2022-05-24T00:00:59Z,alignmentforum,, 38516,https://www.alignmentforum.org/posts/nTiAyxFybZ7jgtWvn/towards-a-mechanistic-understanding-of-goal-directedness,Towards a Mechanistic Understanding of Goal-Directedness,['Mark Xu'],2021-03-09T20:17:26Z,alignmentforum,, 38533,https://www.alignmentforum.org/posts/27AWRKbKyXuzQoaSk/some-conceptual-alignment-research-projects,Some conceptual alignment research projects,['Richard_Ngo'],2022-08-25T22:51:33Z,alignmentforum,, 38555,https://www.alignmentforum.org/posts/oqzasmQ9Lye45QDMZ/causality-transformative-ai-and-alignment-part-i,"Causality, Transformative AI and alignment - part I",['Marius Hobbhahn'],2022-01-27T16:18:58Z,alignmentforum,, 38578,https://www.alignmentforum.org/posts/ktJ9rCsotdqEoBtof/asot-some-thoughts-on-human-abstractions,[ASoT] Some thoughts on human abstractions,['leogao'],2023-03-16T05:42:13Z,alignmentforum,, 38596,https://www.alignmentforum.org/posts/L5JSMZQvkBAx9MD5A/to-what-extent-is-gpt-3-capable-of-reasoning,To what extent is GPT-3 capable of reasoning?,['TurnTrout'],2020-07-20T17:10:50Z,alignmentforum,, 38622,https://www.alignmentforum.org/posts/nLhfRpDutEdgr6PKe/tradeoff-between-desirable-properties-for-baseline-choices,Tradeoff between desirable properties for baseline choices in impact measures,['Vika'],2020-07-04T11:56:04Z,alignmentforum,, 38643,https://www.alignmentforum.org/posts/cnjWN4mzmWzggRnCJ/practical-consequences-of-impossibility-of-value-learning,Practical consequences of impossibility of value learning,['Stuart_Armstrong'],2019-08-02T23:06:03Z,alignmentforum,, 
38661,https://www.alignmentforum.org/posts/ux93sLHcqmBfsRTvg/gpt-can-write-quines-now-gpt-4,GPT can write Quines now (GPT-4),['Andrew_Critch'],2023-03-14T19:18:52Z,alignmentforum,, 38677,https://www.alignmentforum.org/posts/ikN9qQEkrFuPtYd6Y/safely-and-usefully-spectating-on-ais-optimizing-over-toy,Safely and usefully spectating on AIs optimizing over toy worlds,['AlexMennen'],2018-07-31T18:30:38Z,alignmentforum,, 38696,https://www.alignmentforum.org/posts/jh6dkqN2wd7fCRfB5/meta-learning-to-gradient-hack,Meta learning to gradient hack,['Quintin Pope'],2021-10-01T19:25:30Z,alignmentforum,, 38713,https://www.alignmentforum.org/posts/89qWCy6yi2eeFGsRu/technical-model-refinement-formalism,Technical model refinement formalism,['Stuart_Armstrong'],2020-08-27T11:54:23Z,alignmentforum,, 38730,https://www.alignmentforum.org/posts/75oMAADr4265AGK3L/attainable-utility-preservation-concepts,Attainable Utility Preservation: Concepts,['TurnTrout'],2020-02-17T05:20:10Z,alignmentforum,, 38747,https://www.alignmentforum.org/posts/S7csET9CgBtpi7sCh/challenges-to-christiano-s-capability-amplification-proposal,Challenges to Christiano’s capability amplification proposal,['Eliezer Yudkowsky'],2018-05-19T18:18:55Z,alignmentforum,, 38771,https://www.alignmentforum.org/posts/GY49CKBkEs3bEpteM/parametrically-retargetable-decision-makers-tend-to-seek,Parametrically retargetable decision-makers tend to seek power,['TurnTrout'],2023-02-18T18:41:39Z,alignmentforum,, 38785,https://www.alignmentforum.org/posts/DZk6mRo9vhCXN9Rfn/a-walkthrough-of-interpretability-in-the-wild-w-authors,"A Walkthrough of Interpretability in the Wild (w/ authors Kevin Wang, Arthur Conmy & Alexandre Variengien)",['Neel Nanda'],2022-11-07T22:39:17Z,alignmentforum,, 38799,https://www.alignmentforum.org/posts/xBoBmPtgvwdfqm2r5/counterfactual-induction-algorithm-sketch-fixpoint-proof,"Counterfactual Induction (Algorithm Sketch, Fixpoint proof)",['Diffractor'],2019-12-17T05:04:25Z,alignmentforum,, 38816,https://www.alignmentforum.org/posts/MZJxtzjSeezEkedWn/anthropic-decision-theory-for-self-locating-beliefs,Anthropic decision theory for self-locating beliefs,['Stuart_Armstrong'],2021-07-12T14:11:41Z,alignmentforum,, 38831,https://www.alignmentforum.org/posts/ajQzejMYizfX4dMWK/how-does-iterated-amplification-exceed-human-abilities,How does iterated amplification exceed human abilities?,['riceissa'],2020-05-02T23:44:31Z,alignmentforum,, 38843,https://www.alignmentforum.org/posts/YhQr36yGkhe6x8Fyn/learning-the-prior-and-generalization,Learning the prior and generalization,['evhub'],2020-07-29T22:49:43Z,alignmentforum,, 38857,https://www.alignmentforum.org/posts/NSCBF7MTLF2HdhEnD/an-75-solving-atari-and-go-with-learned-game-models-and,"[AN #75]: Solving Atari and Go with learned game models, and thoughts from a MIRI employee",['Rohin Shah'],2019-11-27T18:10:01Z,alignmentforum,, 38894,https://www.alignmentforum.org/posts/PX8BB7Rqw7HedrSJd/by-default-avoid-ambiguous-distant-situations,"By default, avoid ambiguous distant situations",['Stuart_Armstrong'],2019-05-21T14:48:15Z,alignmentforum,, 38905,https://www.alignmentforum.org/posts/mLfPHv4QjmeQrsSva/paper-on-measuring-situational-awareness-in-llms,Paper: On measuring situational awareness in LLMs,"['Owain_Evans', 'Daniel Kokotajlo', 'Mikita Balesni', 'Tomek Korbak', 'berglund', 'Asa Cooper Stickland', 'Meg', 'Maximilian Kaufmann']",2023-09-04T12:54:21Z,alignmentforum,, 
38926,https://www.alignmentforum.org/posts/DG7asvufKgaqEknKd/ai-alignment-writing-day-roundup-2,AI Alignment Writing Day Roundup #2,['Ben Pace'],2019-10-07T23:36:36Z,alignmentforum,, 38946,https://www.alignmentforum.org/posts/KbCHcb8yyjAMFAAPJ/when-wishful-thinking-works,When wishful thinking works,['AlexMennen'],2018-09-01T23:43:01Z,alignmentforum,, 38962,https://www.alignmentforum.org/posts/cHnQ4bBFr3cX6rBxh/positive-values-seem-more-robust-and-lasting-than,Positive values seem more robust and lasting than prohibitions,['TurnTrout'],2022-12-17T21:43:32Z,alignmentforum,, 38971,https://www.alignmentforum.org/posts/iXuJLARFBZbaBGxW3/a-conversation-about-katja-s-counterarguments-to-ai-risk,A conversation about Katja's counterarguments to AI risk,"['Matthew Barnett', 'Ege Erdil', 'Brangus Brangus']",2022-10-18T18:40:37Z,alignmentforum,, 38998,https://www.alignmentforum.org/posts/Dx9LoqsEh3gHNJMDk/fixing-the-good-regulator-theorem,Fixing The Good Regulator Theorem,['johnswentworth'],2021-02-09T20:30:17Z,alignmentforum,, 39012,https://www.alignmentforum.org/posts/mdau2DBSMi5bWXPGA/useful-does-not-mean-secure,Useful Does Not Mean Secure,['Ben Pace'],2019-11-30T02:05:14Z,alignmentforum,, 39029,https://www.alignmentforum.org/posts/gLfHp8XaWpfsmXyWZ/conservative-agency-with-multiple-stakeholders,Conservative Agency with Multiple Stakeholders,['TurnTrout'],2021-06-08T00:30:53Z,alignmentforum,, 39047,https://www.alignmentforum.org/posts/X5WTgfX5Ly4ZNHWZD/focus-you-are-allowed-to-be-bad-at-accomplishing-your-goals,Focus: you are allowed to be bad at accomplishing your goals,['adamShimi'],2020-06-03T21:04:29Z,alignmentforum,, 39058,https://www.alignmentforum.org/posts/Jo89KvfAs9z7owoZp/pivotal-act-intentions-negative-consequences-and-fallacious,“Pivotal Act” Intentions: Negative Consequences and Fallacious Arguments,['Andrew_Critch'],2022-04-19T20:25:35Z,alignmentforum,, 39079,https://www.alignmentforum.org/posts/PgsxXNSDsyz4DFEuw/anthropic-paradoxes-transposed-into-anthropic-decision,Anthropic paradoxes transposed into Anthropic Decision Theory,['Stuart_Armstrong'],2018-12-19T18:07:42Z,alignmentforum,, 39089,https://www.alignmentforum.org/posts/ZM63n353vh2ag7z4p/radical-probabilism-transcript,Radical Probabilism [Transcript],"['abramdemski', 'Ben Pace']",2020-06-26T22:14:14Z,alignmentforum,, 39101,https://www.alignmentforum.org/posts/5R9dRqTREZriN9iL7/eight-definitions-of-observability,Eight Definitions of Observability,['Scott Garrabrant'],2020-11-10T23:37:08Z,alignmentforum,, 
39117,https://www.alignmentforum.org/posts/A48amesEmqD8KNSmY/conditional-prediction-with-zero-sum-training-solves-self,Conditional Prediction with Zero-Sum Training Solves Self-Fulfilling Prophecies,"['Rubi J. Hudson', 'Johannes Treutlein']",2023-05-26T17:44:36Z,alignmentforum,, 39132,https://www.alignmentforum.org/posts/oYbJ2vprujdSP9rcL/the-alignment-newsletter-9-06-04-18,The Alignment Newsletter #9: 06/04/18,['Rohin Shah'],2018-06-04T16:00:42Z,alignmentforum,, 39153,https://www.alignmentforum.org/posts/SxQJWw8RtXJdngBtS/qapr-4-inductive-biases,QAPR 4: Inductive biases,['Quintin Pope'],2022-10-10T22:08:52Z,alignmentforum,, 39183,https://www.alignmentforum.org/posts/nxmo2cyREteqvLMss/agi-goal-space-is-big-but-narrowing-might-not-be-as-hard-as-1,"AGI goal space is big, but narrowing might not be as hard as it seems.",['Jacy Reese Anthis'],2023-04-12T19:03:27Z,alignmentforum,, 39209,https://www.alignmentforum.org/posts/cq5x4XDnLcBrYbb66/will-capabilities-generalise-more,Will Capabilities Generalise More?,['Ramana Kumar'],2022-06-29T17:12:56Z,alignmentforum,, 39239,https://www.alignmentforum.org/posts/2NncxDQ3KBDCxiJiP/cosmopolitan-values-don-t-come-free,Cosmopolitan values don't come free,['So8res'],2023-05-31T15:58:17Z,alignmentforum,, 39254,https://www.alignmentforum.org/posts/AyNHoTWWAJ5eb99ji/another-outer-alignment-failure-story,Another (outer) alignment failure story,['paulfchristiano'],2021-04-07T20:12:32Z,alignmentforum,, 39275,https://www.alignmentforum.org/posts/TxcYSRQ9giC6zmKov/value-impact,Value Impact,['TurnTrout'],2019-09-23T00:47:13Z,alignmentforum,, 39288,https://www.alignmentforum.org/posts/ZrCsaCXrMTgrX9GzK/an-83-sample-efficient-deep-learning-with-remixmatch,[AN #83]: Sample-efficient deep learning with ReMixMatch,['Rohin Shah'],2020-01-22T18:10:01Z,alignmentforum,, 39312,https://www.alignmentforum.org/posts/6Ks6p33LQyfFkNtYE/paper-superposition-memorization-and-double-descent,"Paper: Superposition, Memorization, and Double Descent (Anthropic)",['LawrenceC'],2023-01-05T17:54:38Z,alignmentforum,, 39331,https://www.alignmentforum.org/posts/PDx4ueLpvz5gxPEus/why-i-m-not-working-on-debate-rrm-elk-natural-abstractions,"Why I’m not working on {debate, RRM, ELK, natural abstractions}",['Steven Byrnes'],2023-02-10T19:22:38Z,alignmentforum,, 39354,https://www.alignmentforum.org/posts/an29DQgYaKbyQprns/early-thoughts-on-ontology-grounding-problems,Early Thoughts on Ontology/Grounding Problems,['johnswentworth'],2020-11-14T23:19:36Z,alignmentforum,, 39373,https://www.alignmentforum.org/posts/roA83jDvq7F2epnHK/better-priors-as-a-safety-problem,Better priors as a safety problem,['paulfchristiano'],2020-07-05T21:20:03Z,alignmentforum,, 39389,https://www.alignmentforum.org/posts/eG3WhHS8CLNxuH6rT/agi-safety-from-first-principles-superintelligence,AGI safety from first principles: Superintelligence,['Richard_Ngo'],2020-09-28T19:53:41Z,alignmentforum,, 39410,https://www.alignmentforum.org/posts/5GzqD7fgtxjepPERp/deliberation-everywhere-simple-examples,Deliberation Everywhere: Simple Examples,['Oliver Sourbut'],2022-06-27T17:26:21Z,alignmentforum,, 39430,https://www.alignmentforum.org/posts/GkC6YTu4DWp2zwf9k/giant-in-scrutable-matrices-maybe-the-best-of-all-possible,Giant (In)scrutable Matrices: (Maybe) the Best of All Possible Worlds,['1a3orn'],2023-04-04T17:39:40Z,alignmentforum,, 39450,https://www.alignmentforum.org/posts/DQhwrir3nCcMtqA2j/toy-model-of-preference-bias-and-extra-information,"Toy model of preference, bias, and extra information",['Stuart_Armstrong'],2021-03-24T10:14:35Z,alignmentforum,, 39461,https://www.alignmentforum.org/posts/4g29JgtbJ283iJ3Bh/petrov-corrigibility,Petrov corrigibility,['Stuart_Armstrong'],2018-09-11T13:50:51Z,alignmentforum,, 
39470,https://www.alignmentforum.org/posts/qXtbBAxmFkAQLQEJE/interpretability-tool-ness-alignment-corrigibility-are-not,Interpretability/Tool-ness/Alignment/Corrigibility are not Composable,['johnswentworth'],2022-08-08T18:05:12Z,alignmentforum,, 39492,https://www.alignmentforum.org/posts/pvE6NdPoMCyk55Lxn/watermarking-considered-overrated,Watermarking considered overrated?,['DanielFilan'],2023-07-31T21:36:05Z,alignmentforum,, 39510,https://www.alignmentforum.org/posts/bnDGt4Y9Hfx62Bgmk/under-a-week-left-to-win-usd1-000-by-questioning-oracle-ais,"Under a week left to win $1,000! By questioning Oracle AIs.",['Stuart_Armstrong'],2019-08-25T17:02:47Z,alignmentforum,, 39519,https://www.alignmentforum.org/posts/d5m3G3ov5phZu7FX3/mundane-solutions-to-exotic-problems,Mundane solutions to exotic problems,['paulfchristiano'],2021-05-04T18:20:05Z,alignmentforum,, 39534,https://www.alignmentforum.org/posts/Ajcq9xWi2fmgn8RBJ/the-credit-assignment-problem,The Credit Assignment Problem,['abramdemski'],2019-11-08T02:50:30Z,alignmentforum,, 39550,https://www.alignmentforum.org/posts/9ezkEb9oGvEi6WoB3/concrete-steps-to-get-started-in-transformer-mechanistic,Concrete Steps to Get Started in Transformer Mechanistic Interpretability,['Neel Nanda'],2022-12-25T22:21:50Z,alignmentforum,, 39568,https://www.alignmentforum.org/posts/m5or8yzrw9GLavz9b/collection-of-arguments-to-expect-outer-and-inner-alignment,Collection of arguments to expect (outer and inner) alignment failure?,['Sam Clarke'],2021-09-28T16:55:28Z,alignmentforum,, 39581,https://www.alignmentforum.org/posts/CzZ6Fch4JSpwCpu6C/interpretability,Interpretability,"['abergal', 'Nick_Beckstead']",2021-10-29T07:28:03Z,alignmentforum,, 39603,https://www.alignmentforum.org/posts/Fjoy5SxgBmxfy7FNB/gradient-surfing-the-hidden-role-of-regularization,Gradient surfing: the hidden role of regularization,['Jesse Hoogland'],2023-02-06T03:50:40Z,alignmentforum,, 39621,https://www.alignmentforum.org/posts/yGFiw23pJ32obgLbw/finite-factored-sets-applications,Finite Factored Sets: Applications,['Scott Garrabrant'],2021-08-31T21:19:04Z,alignmentforum,, 39650,https://www.alignmentforum.org/posts/suy5w8cWZJZsv2XES/an-167-concrete-ml-safety-problems-and-their-relevance-to-x,[AN #167]: Concrete ML safety problems and their relevance to x-risk,['Rohin Shah'],2021-10-20T17:10:04Z,alignmentforum,, 39677,https://www.alignmentforum.org/posts/MHHzLfAQBZzieGBjq/fully-acausal-trade,"""Fully"" acausal trade",['Stuart_Armstrong'],2019-12-04T16:39:46Z,alignmentforum,, 39686,https://www.alignmentforum.org/posts/9pi2XqZP6PYif5wGL/the-alignment-newsletter-6-05-14-18,The Alignment Newsletter #6: 05/14/18,['Rohin Shah'],2018-05-14T16:00:49Z,alignmentforum,, 39701,https://www.alignmentforum.org/posts/p3s8RvkcyTwzu27ps/brainstorm-of-things-that-could-force-an-ai-team-to-burn,Brainstorm of things that could force an AI team to burn their lead,['So8res'],2022-07-24T23:58:17Z,alignmentforum,, 39740,https://www.alignmentforum.org/posts/LfGzAduBWzY5gq6FE/how-low-should-fruit-hang-before-we-pick-it,How Low Should Fruit Hang Before We Pick It?,['TurnTrout'],2020-02-25T02:08:53Z,alignmentforum,, 39754,https://www.alignmentforum.org/posts/rwkkcgSpnAyE8oNo3/alexander-and-yudkowsky-on-agi-goals,Alexander and Yudkowsky on AGI goals,"['Scott Alexander', 'Eliezer Yudkowsky']",2023-01-24T21:09:17Z,alignmentforum,, 39778,https://www.alignmentforum.org/posts/BPJLzkEpx8Btz9ywq/the-dumbest-possible-gets-there-first,The Dumbest Possible Gets There First,['Artaxerxes'],2022-08-13T10:20:27Z,alignmentforum,, 
39799,https://www.alignmentforum.org/posts/8MZ72PYa3kRe4yRDD/axrp-episode-1-adversarial-policies-with-adam-gleave,AXRP Episode 1 - Adversarial Policies with Adam Gleave,['DanielFilan'],2020-12-29T20:41:52Z,alignmentforum,, 39824,https://www.alignmentforum.org/posts/jsy8t64Jp5xcuFrXX/agisf-adaptation-for-in-person-groups,AGISF adaptation for in-person groups,"['Sam Marks', 'Xander Davies', 'Richard_Ngo']",2023-01-13T03:24:58Z,alignmentforum,, 39839,https://www.alignmentforum.org/posts/c6KFvQcZggQKZzxr9/trends-in-gpu-price-performance,Trends in GPU price-performance,"['Marius Hobbhahn', 'Tamay']",2022-07-01T15:51:11Z,alignmentforum,, 39849,https://www.alignmentforum.org/posts/GyZR24j8buR7D2n8D/cfp-for-rebellion-and-disobedience-in-ai-workshop,CFP for Rebellion and Disobedience in AI workshop,['Ram Rachum'],2022-12-29T16:08:05Z,alignmentforum,, 39859,https://www.alignmentforum.org/posts/Aet2mbnK7GDDfrEQu/contra-shard-theory-in-the-context-of-the-diamond-maximizer,"Contra shard theory, in the context of the diamond maximizer problem",['So8res'],2022-10-13T23:51:30Z,alignmentforum,, 39876,https://www.alignmentforum.org/posts/TR3eqQ2fnfKWzxxHL/research-agenda-in-reverse-what-would-a-solution-look-like,Research Agenda in reverse: what *would* a solution look like?,['Stuart_Armstrong'],2019-06-25T13:52:49Z,alignmentforum,, 39893,https://www.alignmentforum.org/posts/WfdxXhszxFc3BxZ8r/more-findings-on-maximal-data-dimension,More findings on maximal data dimension,['Marius Hobbhahn'],2023-02-02T18:33:54Z,alignmentforum,, 39914,https://www.alignmentforum.org/posts/8Gv5zSCnGeLxK5FAF/mlsn-1-iclr-safety-paper-roundup,[MLSN #1]: ICLR Safety Paper Roundup,['Dan H'],2021-10-18T15:20:00Z,alignmentforum,, 39937,https://www.alignmentforum.org/posts/LYdvzXF6E4iXM2ZSD/an-77-double-descent-a-unification-of-statistical-theory-and,[AN #77]: Double descent: a unification of statistical theory and modern ML practice,['Rohin Shah'],2019-12-18T18:30:02Z,alignmentforum,, 39966,https://www.alignmentforum.org/posts/fYf9JAwa6BYMt8GBj/link-a-minimal-viable-product-for-alignment,[Link] A minimal viable product for alignment,['janleike'],2022-04-06T15:38:30Z,alignmentforum,, 39975,https://www.alignmentforum.org/posts/DfewqowdzDdCD7S9y/agents-that-learn-from-human-behavior-can-t-learn-human,Agents That Learn From Human Behavior Can't Learn Human Values That Humans Haven't Learned Yet,['steven0461'],2018-07-11T02:59:12Z,alignmentforum,, 39985,https://www.alignmentforum.org/posts/i7Z9gt6RwzFhokdJ7/the-alignment-newsletter-5-05-07-18,The Alignment Newsletter #5: 05/07/18,['Rohin Shah'],2018-05-07T16:00:11Z,alignmentforum,, 40011,https://www.alignmentforum.org/posts/keiYkaeoLHoKK4LYA/six-dimensions-of-operational-adequacy-in-agi-projects,Six Dimensions of Operational Adequacy in AGI Projects,['Eliezer Yudkowsky'],2022-05-30T17:00:31Z,alignmentforum,, 40044,https://www.alignmentforum.org/posts/yxzrKb2vFXRkwndQ4/understanding-iterated-distillation-and-amplification-claims,Understanding Iterated Distillation and Amplification: Claims and Oversight,['William_S'],2018-04-17T22:36:30Z,alignmentforum,, 40069,https://www.alignmentforum.org/posts/YnGRBADQwpYRbuCbz/towards-hodge-podge-alignment-1,Towards Hodge-podge Alignment,['Cleo Nardo'],2022-12-19T20:12:15Z,alignmentforum,, 40086,https://www.alignmentforum.org/posts/3dBtgKCkJh5yCHbag/projecting-compute-trends-in-machine-learning-2,Projecting compute trends in Machine Learning,"['Tamay', 'lennart', 'Jsevillamol']",2022-03-07T15:32:13Z,alignmentforum,, 
40097,https://www.alignmentforum.org/posts/x5aTiznxJ4o9EGdj9/uncertainty-about-the-future-does-not-imply-that-agi-will-go,Uncertainty about the future does not imply that AGI will go well,['Lauro Langosco'],2023-06-01T17:38:10Z,alignmentforum,, 40112,https://www.alignmentforum.org/posts/KLNDgqQLfpFXbhQak/wireheading-and-discontinuity,Wireheading and discontinuity,['Michele Campolo'],2020-02-18T10:49:42Z,alignmentforum,, 40121,https://www.alignmentforum.org/posts/ADMWDDKGgivgghxWf/productive-mistakes-not-perfect-answers,"Productive Mistakes, Not Perfect Answers",['adamShimi'],2022-04-07T16:41:50Z,alignmentforum,, 40138,https://www.alignmentforum.org/posts/FkMPXiomjGBjMfosg/axrp-episode-5-infra-bayesianism-with-vanessa-kosoy,AXRP Episode 5 - Infra-Bayesianism with Vanessa Kosoy,['DanielFilan'],2021-03-10T04:30:10Z,alignmentforum,, 40160,https://www.alignmentforum.org/posts/33kpQK3poHGNXJXf8/doing-oversight-from-the-very-start-of-training-seems-hard-1,Doing oversight from the very start of training seems hard,['peterbarnett'],2022-09-20T17:21:07Z,alignmentforum,, 40178,https://www.alignmentforum.org/posts/z2BPxcFfhKho89D8L/goodhart-ethology,Goodhart Ethology,['Charlie Steiner'],2021-09-17T17:31:34Z,alignmentforum,, 40210,https://www.alignmentforum.org/posts/tKwJQbo6SfWF2ifKh/toward-a-new-technical-explanation-of-technical-explanation,Toward a New Technical Explanation of Technical Explanation,['abramdemski'],2018-02-16T00:44:29Z,alignmentforum,, 40227,https://www.alignmentforum.org/posts/2PDC69DDJuAx6GANa/verification-is-not-easier-than-generation-in-general,Verification Is Not Easier Than Generation In General,['johnswentworth'],2022-12-06T05:20:49Z,alignmentforum,, 40236,https://www.alignmentforum.org/posts/HS2E8woaF5h5QSptP/the-ai-is-the-model,The AI is the model,['Charlie Steiner'],2019-10-04T08:11:49Z,alignmentforum,, 40246,https://www.alignmentforum.org/posts/Cj4hWE2xBf7t8nKkk/how-interpretability-can-be-impactful,How Interpretability can be Impactful,['Connall Garrod'],2022-07-18T00:06:06Z,alignmentforum,, 40283,https://www.alignmentforum.org/posts/pz84sQKsgg3GBHQpd/supervised-learning-and-self-modeling-what-s-superhuman,"Supervised learning and self-modeling: What's ""superhuman?""",['Charlie Steiner'],2021-12-09T12:44:14Z,alignmentforum,, 40313,https://www.alignmentforum.org/posts/K5ikTdaNymfWXQHFb/model-based-rl-desires-brains-wireheading,"Model-based RL, Desires, Brains, Wireheading",['Steven Byrnes'],2021-07-14T15:11:13Z,alignmentforum,, 40344,https://www.alignmentforum.org/posts/TYTEJxzeK3jBMq2TZ/your-posts-should-be-on-arxiv,Your posts should be on arXiv,['JanBrauner'],2022-08-25T10:35:12Z,alignmentforum,, 40357,https://www.alignmentforum.org/posts/dLoK6KGcHAoudtwdo/arc-is-hiring,ARC is hiring!,"['paulfchristiano', 'Mark Xu']",2021-12-14T20:09:34Z,alignmentforum,, 40372,https://www.alignmentforum.org/posts/amBsmfFK4NFDtkHiT/seven-strategies-for-tackling-the-hard-part-of-the-alignment,Seven Strategies for Tackling the Hard Part of the Alignment Problem,['scasper'],2023-07-08T18:55:22Z,alignmentforum,, 40394,https://www.alignmentforum.org/posts/Ez4zZQKWgC6fE3h9G/plausibly-almost-every-powerful-algorithm-would-be,"Plausibly, almost every powerful algorithm would be manipulative",['Stuart_Armstrong'],2020-02-06T11:50:16Z,alignmentforum,, 40403,https://www.alignmentforum.org/posts/7jSvfeyh8ogu8GcE6/decoupling-deliberation-from-competition,Decoupling deliberation from competition,['paulfchristiano'],2021-05-25T18:50:04Z,alignmentforum,, 
40426,https://www.alignmentforum.org/posts/X5L9g4fXmhPdQrBCA/a-library-and-tutorial-for-factored-cognition-with-language,A Library and Tutorial for Factored Cognition with Language Models,"['stuhlmueller', 'justin_dan', 'goodgravy']",2022-09-28T18:15:11Z,alignmentforum,, 40439,https://www.alignmentforum.org/posts/F759WQ8iKjqBncDki/intro-to-brain-like-agi-safety-5-the-long-term-predictor-and,"[Intro to brain-like-AGI safety] 5. The “long-term predictor”, and TD learning",['Steven Byrnes'],2022-02-23T14:44:53Z,alignmentforum,, 40463,https://www.alignmentforum.org/posts/CsMQ7zsprBqWaeSvk/the-alignment-newsletter-1-04-09-18,The Alignment Newsletter #1: 04/09/18,['Rohin Shah'],2018-04-09T16:00:39Z,alignmentforum,, 40487,https://www.alignmentforum.org/posts/HXJaKcuQqaZDJq9xz/announcing-the-cnn-interpretability-competition,Announcing the CNN Interpretability Competition,['scasper'],2023-09-26T16:21:50Z,alignmentforum,, 40502,https://www.alignmentforum.org/posts/dmcJr2NBg5QuGSHFC/i-missed-the-crux-of-the-alignment-problem-the-whole-time,I missed the crux of the alignment problem the whole time,['zeshen'],2022-08-13T10:11:25Z,alignmentforum,, 40517,https://www.alignmentforum.org/posts/f69LK7CndhSNA7oPn/ai-safety-research-project-ideas,AI Safety Research Project Ideas,['Owain_Evans'],2021-05-21T13:39:40Z,alignmentforum,, 40546,https://www.alignmentforum.org/posts/5bd75cc58225bf067037555e/resource-limited-reflective-oracles,Resource-Limited Reflective Oracles,['Diffractor'],2018-06-06T02:50:42Z,alignmentforum,, 40556,https://www.alignmentforum.org/posts/a7jnbtoKFyvu5qfkd/formal-inner-alignment-prospectus,"Formal Inner Alignment, Prospectus",['abramdemski'],2021-05-12T19:57:37Z,alignmentforum,, 40574,https://www.alignmentforum.org/posts/wpsGprQCRffRKG92v/catastrophic-risks-from-ai-4-organizational-risks,Catastrophic Risks from AI #4: Organizational Risks,"['Dan H', 'Mantas Mazeika', 'ThomasW']",2023-06-26T19:36:41Z,alignmentforum,, 40608,https://www.alignmentforum.org/posts/8oMF8Lv5jiGaQSFvo/boundaries-part-1-a-key-missing-concept-from-utility-theory,"«Boundaries», Part 1: a key missing concept from utility theory",['Andrew_Critch'],2022-07-26T23:03:56Z,alignmentforum,, 40624,https://www.alignmentforum.org/posts/zqmAMst8hmsdJqrpR/shell-games,Shell games,['TsviBT'],2023-03-19T10:43:44Z,alignmentforum,, 40639,https://www.alignmentforum.org/posts/maBNBgopYxb9YZP8B/sparsity-and-interpretability-1,Sparsity and interpretability?,"['Ada Böhm', 'RobertKirk', 'Tomáš Gavenčiak']",2020-06-01T13:25:47Z,alignmentforum,, 40666,https://www.alignmentforum.org/posts/mFWBsCy2SHQ97RArb/expanding-the-domain-of-discourse-reveals-structure-already,Expanding the domain of discourse reveals structure already there but hidden,['TsviBT'],2023-04-09T13:36:29Z,alignmentforum,, 40682,https://www.alignmentforum.org/posts/NcahJFd5S5RNupxFT/how-to-get-value-learning-and-reference-wrong,How to get value learning and reference wrong,['Charlie Steiner'],2019-02-26T20:22:43Z,alignmentforum,, 
40703,https://www.alignmentforum.org/posts/FrFZjkdRsmsbnQEm8/interpretability-s-alignment-solving-potential-analysis-of-7,Interpretability’s Alignment-Solving Potential: Analysis of 7 Scenarios,['Evan R. Murphy'],2022-05-12T20:01:56Z,alignmentforum,, 40725,https://www.alignmentforum.org/posts/Xts5wm3akbemk4pDa/non-obstruction-a-simple-concept-motivating-corrigibility,Non-Obstruction: A Simple Concept Motivating Corrigibility,['TurnTrout'],2020-11-21T19:35:40Z,alignmentforum,, 40755,https://www.alignmentforum.org/posts/od6zKB5swBzGL3LqE/trying-to-find-the-underlying-structure-of-computational,Trying to find the underlying structure of computational systems,['Matthias G. Mayer'],2022-09-13T21:16:50Z,alignmentforum,, 40766,https://www.alignmentforum.org/posts/hWtpqjYXAvFExmAsD/arguments-about-highly-reliable-agent-designs-as-a-useful,Arguments about Highly Reliable Agent Designs as a Useful Path to Artificial Intelligence Safety,"['riceissa', 'Davidmanheim']",2022-01-27T13:13:11Z,alignmentforum,, 40783,https://www.alignmentforum.org/posts/i9okkiKQ4rY8eawmT/an-evangelion-dialogue-explaining-the-qaci-alignment-plan,an Evangelion dialogue explaining the QACI alignment plan,['Tamsin Leake'],2023-06-10T03:28:47Z,alignmentforum,, 40814,https://www.alignmentforum.org/posts/Rhbac7CfRodMrs77F/asot-consequentialist-models-as-a-superset-of-mesaoptimizers,[ASoT] Consequentialist models as a superset of mesaoptimizers,['leogao'],2022-04-23T17:57:40Z,alignmentforum,, 40830,https://www.alignmentforum.org/posts/m7oGxvouzzeQKiGJH/how-should-ai-debate-be-judged,How should AI debate be judged?,['abramdemski'],2020-07-15T22:20:34Z,alignmentforum,, 40847,https://www.alignmentforum.org/posts/y2XyxomuEpMaRYDQw/ai-learn-to-be-conservative-then-learn-to-be-less-so,"AI, learn to be conservative, then learn to be less so: reducing side-effects, learning preserved features, and going beyond conservatism",['Stuart_Armstrong'],2021-09-20T11:56:57Z,alignmentforum,, 40865,https://www.alignmentforum.org/posts/uKujHaJd2ckAKAevo/an-exercise-to-build-intuitions-on-agi-risk,An Exercise to Build Intuitions on AGI Risk,['Lauro Langosco'],2023-06-07T18:35:48Z,alignmentforum,, 40877,https://www.alignmentforum.org/posts/w6d7XBCegc96kz4n3/the-argument-from-philosophical-difficulty,The Argument from Philosophical Difficulty,['Wei Dai'],2019-02-10T00:28:07Z,alignmentforum,, 
40902,https://www.alignmentforum.org/posts/odtMt7zbMuuyavaZB/when-do-brains-beat-brawn-in-chess-an-experiment,"When do ""brains beat brawn"" in Chess? An experiment",['titotal'],2023-06-28T13:33:24Z,alignmentforum,, 40915,https://www.alignmentforum.org/posts/6ayQbR5opoTN4AgFb/hierarchical-planning-context-agents,Hierarchical planning: context agents,['Charlie Steiner'],2020-12-19T11:24:09Z,alignmentforum,, 40937,https://www.alignmentforum.org/posts/du92yeHQn9iE5vorj/no-one-size-fit-all-epistemic-strategy,No One-Size-Fit-All Epistemic Strategy,['adamShimi'],2022-08-20T12:56:23Z,alignmentforum,, 40948,https://www.alignmentforum.org/posts/HByDKLLdaWEcA2QQD/applying-superintelligence-without-collusion,Applying superintelligence without collusion,['Eric Drexler'],2022-11-08T18:08:32Z,alignmentforum,, 40964,https://www.alignmentforum.org/posts/hxuKtHH4jTdtmEAbK/exploring-finite-factored-sets-with-some-toy-examples,Exploring Finite Factored Sets with some toy examples,['Thomas Kehrenberg'],2022-03-19T22:08:56Z,alignmentforum,, 40974,https://www.alignmentforum.org/posts/8xKhCbNrdP4gaA8c3/sections-3-and-4-credibility-peaceful-bargaining-mechanisms,"Sections 3 & 4: Credibility, Peaceful Bargaining Mechanisms",['JesseClifton'],2019-12-17T21:46:49Z,alignmentforum,, 41001,https://www.alignmentforum.org/posts/oijJc8Mu2jPNgpuvy/asot-some-thoughts-about-deceptive-mesaoptimization,[ASoT] Some thoughts about deceptive mesaoptimization,['leogao'],2022-03-28T21:14:27Z,alignmentforum,, 41028,https://www.alignmentforum.org/posts/XsYAL4jvzztomSjKy/humans-reflecting-on-hrh,Humans Reflecting on HRH,['leogao'],2022-07-29T21:56:54Z,alignmentforum,, 41045,https://www.alignmentforum.org/posts/idP5E5XhJGh9T5Yq9/less-basic-inframeasure-theory,Less Basic Inframeasure Theory,['Diffractor'],2020-12-16T03:52:48Z,alignmentforum,, 41064,https://www.alignmentforum.org/posts/Wi4RAJCbh3qD9fynj/ai-neorealism-a-threat-model-and-success-criterion-for,AI Neorealism: a threat model & success criterion for existential safety,['davidad'],2022-12-15T13:42:11Z,alignmentforum,, 41094,https://www.alignmentforum.org/posts/vXzM5L6njDZSf4Ftk/defining-ai-wireheading,Defining AI wireheading,['Stuart_Armstrong'],2019-11-21T13:04:49Z,alignmentforum,, 41111,https://www.alignmentforum.org/posts/5bd75cc58225bf067037505e/reflexive-oracles-and-superrationality-prisoner-s-dilemma,Reflexive Oracles and superrationality: prisoner's dilemma,['Stuart_Armstrong'],2017-05-24T08:34:07Z,alignmentforum,, 41130,https://www.alignmentforum.org/posts/jMRuwXdC6NPFw8HLq/quintin-s-alignment-papers-roundup-week-2,Quintin's alignment papers roundup - week 2,['Quintin Pope'],2022-09-19T13:41:27Z,alignmentforum,, 41167,https://www.alignmentforum.org/posts/RBcKeY8B5mvxiCN37/state-of-my-alignment-research-and-what-needs-work,"state of my alignment research, and what needs work",['Tamsin Leake'],2023-03-03T10:28:34Z,alignmentforum,, 41180,https://www.alignmentforum.org/posts/iCfdcxiyr2Kj8m8mT/the-shard-theory-of-human-values,The shard theory of human values,"['Quintin Pope', 'TurnTrout']",2022-09-04T04:28:12Z,alignmentforum,, 
41200,https://www.alignmentforum.org/posts/svpnmmeJresYs23rY/counting-down-vs-counting-up-coherence,Counting-down vs. counting-up coherence,['TsviBT'],2023-02-27T14:59:39Z,alignmentforum,, 41215,https://www.alignmentforum.org/posts/NEa3puQB23FyiifnW/linkpost-treacherous-turns-in-the-wild,[Linkpost] Treacherous turns in the wild,['Mark Xu'],2021-04-26T22:51:44Z,alignmentforum,, 41225,https://www.alignmentforum.org/posts/3qXE6fK47JhSfkpnB/do-sufficiently-advanced-agents-use-logic,Do Sufficiently Advanced Agents Use Logic?,['abramdemski'],2019-09-13T19:53:36Z,alignmentforum,, 41240,https://www.alignmentforum.org/posts/sZa5LQg6rrWgMR4Jx/finite-factored-sets-introduction-and-factorizations,Finite Factored Sets: Introduction and Factorizations,['Scott Garrabrant'],2021-06-04T17:41:35Z,alignmentforum,, 41255,https://www.alignmentforum.org/posts/Y4YHTBziAscS5WPN7/epistemological-framing-for-ai-alignment-research,Epistemological Framing for AI Alignment Research,['adamShimi'],2021-03-08T22:05:29Z,alignmentforum,, 41276,https://www.alignmentforum.org/posts/wAAvP8RG6EwzCvHJy/reasons-for-excitement-about-impact-of-impact-measure,Reasons for Excitement about Impact of Impact Measure Research,['TurnTrout'],2020-02-27T21:42:19Z,alignmentforum,, 41296,https://www.alignmentforum.org/posts/zCcmJzbenAXu6qugS/tabooing-agent-for-prosaic-alignment,Tabooing 'Agent' for Prosaic Alignment,['Hjalmar_Wijk'],2019-08-23T02:55:58Z,alignmentforum,, 41320,https://www.alignmentforum.org/posts/6rZroG3mztowktJzp/how-many-bits-of-optimization-can-one-bit-of-observation,How Many Bits Of Optimization Can One Bit Of Observation Unlock?,['johnswentworth'],2023-04-26T00:26:23Z,alignmentforum,, 41332,https://www.alignmentforum.org/posts/gnvrixhDfG7S2TpNL/latent-variables-and-model-mis-specification,Latent Variables and Model Mis-Specification,['jsteinhardt'],2018-11-07T14:48:40Z,alignmentforum,, 41349,https://www.alignmentforum.org/posts/A5YQqDEz9QKGAZvn6/agi-is-easier-than-robotaxis,AGI is easier than robotaxis,['Daniel Kokotajlo'],2023-08-13T17:00:30Z,alignmentforum,, 
41364,https://www.alignmentforum.org/posts/WerwgmeYZYGC2hKXN/promising-posts-on-af-that-have-fallen-through-the-cracks,Promising posts on AF that have fallen through the cracks,['Evan R. Murphy'],2022-01-04T15:39:07Z,alignmentforum,, 41379,https://www.alignmentforum.org/posts/CvibiLyHj3n3Aigez/scarce-channels-and-abstraction-coupling,Scarce Channels and Abstraction Coupling,['johnswentworth'],2023-02-28T23:26:04Z,alignmentforum,, 41392,https://www.alignmentforum.org/posts/by5NkEoSC4gvo9bQ2/an-144-how-language-models-can-also-be-finetuned-for-non,[AN #144]: How language models can also be finetuned for non-language tasks,['Rohin Shah'],2021-04-02T17:20:04Z,alignmentforum,, 41413,https://www.alignmentforum.org/posts/L4anhrxjv8j2yRKKp/how-discovering-latent-knowledge-in-language-models-without,"How ""Discovering Latent Knowledge in Language Models Without Supervision"" Fits Into a Broader Alignment Scheme",['Collin'],2022-12-15T18:22:40Z,alignmentforum,, 41425,https://www.alignmentforum.org/posts/ho63vCb2MNFijinzY/agi-safety-career-advice,AGI safety career advice,['Richard_Ngo'],2023-05-02T07:36:09Z,alignmentforum,, 41458,https://www.alignmentforum.org/posts/dMBmZNwdjQ6yHvWZ5/you-re-not-a-simulation-cause-you-re-hallucinating,"You're not a simulation, 'cause you're hallucinating",['Stuart_Armstrong'],2023-02-21T12:12:22Z,alignmentforum,, 41467,https://www.alignmentforum.org/posts/sdrCBWpqyvNBJSZH5/the-no-free-lunch-theorems-and-their-razor,The No Free Lunch theorems and their Razor,['Adrià Garriga-alonso'],2022-05-24T06:40:33Z,alignmentforum,, 41489,https://www.alignmentforum.org/posts/amoS8fGYsRKo6Wsdd/research-request-alignment-strategy-deep-dive-on-making-ai,"Research request (alignment strategy): Deep dive on ""making AI solve alignment for us""",['JanBrauner'],2022-12-01T14:55:24Z,alignmentforum,, 41501,https://www.alignmentforum.org/posts/4DegbDJJiMX2b3EKm/tai-safety-bibliographic-database,TAI Safety Bibliographic Database,['JessRiedel'],2020-12-22T17:42:06Z,alignmentforum,, 41523,https://www.alignmentforum.org/posts/9pZtvjegYKBALFnLk/characterizing-real-world-agents-as-a-research-meta-strategy,Characterizing Real-World Agents as a Research Meta-Strategy,['johnswentworth'],2019-10-08T15:32:28Z,alignmentforum,, 41537,https://www.alignmentforum.org/posts/GGttHsdWHjh94cX5G/experiments-in-evaluating-steering-vectors,Experiments in Evaluating Steering Vectors,['Gytis Daujotas'],2023-06-19T15:11:52Z,alignmentforum,, 41552,https://www.alignmentforum.org/posts/Pd53Mip7Aa3TsdA7E/insufficient-values,Insufficient Values,"['Jozdien', 'Jacob Abraham', 'Abraham Francis']",2021-06-16T14:33:28Z,alignmentforum,, 41580,https://www.alignmentforum.org/posts/K686EFdXysfRBdob2/musings-on-cumulative-cultural-evolution-and-ai,Musings on Cumulative Cultural Evolution and AI,['calebo'],2019-07-07T16:46:45Z,alignmentforum,, 41592,https://www.alignmentforum.org/posts/5bd75cc58225bf0670374e61/using-modal-fixed-points-to-formalize-logical-causality,Using modal fixed points to formalize logical causality,['cousin_it'],2017-08-24T14:33:09Z,alignmentforum,, 41604,https://www.alignmentforum.org/posts/Z2rkdEAJ9MvYPBeYW/thoughts-on-iason-gabriel-s-artificial-intelligence-values,"Thoughts on Iason Gabriel’s Artificial Intelligence, Values, and Alignment",['Alex Flint'],2021-01-14T12:58:37Z,alignmentforum,, 41621,https://www.alignmentforum.org/posts/8whGos5JCdBzDbZhH/framings-of-deceptive-alignment,Framings of Deceptive Alignment,['peterbarnett'],2022-04-26T04:25:56Z,alignmentforum,, 
41641,https://www.alignmentforum.org/posts/oCWk8QpjgyqbFHKtK/finding-the-multiple-ground-truths-of-coinrun-and-image,Finding the multiple ground truths of CoinRun and image classification,['Stuart_Armstrong'],2021-12-08T18:13:02Z,alignmentforum,, 41660,https://www.alignmentforum.org/posts/Z8BWP6CEQuARcbNZu/self-regulation-of-safety-in-ai-research,Self-regulation of safety in AI research,['Gordon Seidoh Worley'],2018-02-25T23:17:45Z,alignmentforum,, 41675,https://www.alignmentforum.org/posts/CTh3JHBmEfHjE7WP5/axrp-episode-25-cooperative-ai-with-caspar-oesterheld,AXRP Episode 25 - Cooperative AI with Caspar Oesterheld,['DanielFilan'],2023-10-03T21:50:08Z,alignmentforum,, 41712,https://www.alignmentforum.org/posts/dBMC63hjkc5wPqTC7/human-ai-collaboration,Human-AI Collaboration,['Rohin Shah'],2019-10-22T06:32:21Z,alignmentforum,, 41733,https://www.alignmentforum.org/posts/aKhzD8m53oCiE38K7/the-alignment-newsletter-11-06-18-18,The Alignment Newsletter #11: 06/18/18,['Rohin Shah'],2018-06-18T16:00:47Z,alignmentforum,, 41748,https://www.alignmentforum.org/posts/E8imxQo96WgDCMxkA/replication-conjecture-s-sparse-coding-in-toy-models,[Replication] Conjecture's Sparse Coding in Toy Models,"['Hoagy', 'Logan Riggs']",2023-06-02T17:34:25Z,alignmentforum,, 41758,https://www.alignmentforum.org/posts/42z4k8Co5BuHMBvER/breaking-oracles-superrationality-and-acausal-trade,Breaking Oracles: superrationality and acausal trade,['Stuart_Armstrong'],2019-11-25T10:40:18Z,alignmentforum,, 41776,https://www.alignmentforum.org/posts/hpAbfXtqYC2BrpeiC/troll-bridge-5,Troll Bridge,['abramdemski'],2019-08-23T18:36:40Z,alignmentforum,, 41795,https://www.alignmentforum.org/posts/QFuypcQGZK59TaKos/limiting-causality-by-complexity-class,Limiting Causality by Complexity Class,['Bunthut'],2021-01-30T12:18:38Z,alignmentforum,, 41806,https://www.alignmentforum.org/posts/y9JeNZ2WAkR6MbBZH/an-88-how-the-principal-agent-literature-relates-to-ai-risk,[AN #88]: How the principal-agent literature relates to AI risk,['Rohin Shah'],2020-02-27T09:10:02Z,alignmentforum,, 41843,https://www.alignmentforum.org/posts/Ya9LzwEbfaAMY8ABo/solidgoldmagikarp-ii-technical-details-and-more-recent,SolidGoldMagikarp II: technical details and more recent findings,"['mwatkins', 'Jessica Rumbelow']",2023-02-06T19:09:01Z,alignmentforum,, 41860,https://www.alignmentforum.org/posts/kWTko53s2DqTeprjz/asot-observations-about-elk,[ASoT] Observations about ELK,['leogao'],2022-03-26T00:42:21Z,alignmentforum,, 41874,https://www.alignmentforum.org/posts/6YNZt5xbBT5dJXknC/take-9-no-rlhf-ida-debate-doesn-t-solve-outer-alignment,"Take 9: No, RLHF/IDA/debate doesn't solve outer alignment.",['Charlie Steiner'],2022-12-12T11:51:43Z,alignmentforum,, 41883,https://www.alignmentforum.org/posts/rQGW2GqHAFprupYkf/intermittent-distillations-4-semiconductors-economics,"Intermittent Distillations #4: Semiconductors, Economics, Intelligence, and Technological Progress.",['Mark Xu'],2021-07-08T22:14:23Z,alignmentforum,, 41907,https://www.alignmentforum.org/posts/Kx9K7tLFf8rxcnNLT/seeking-interns-ras-for-mechanistic-interpretability,Seeking Interns/RAs for Mechanistic Interpretability Projects,['Neel Nanda'],2022-08-15T07:11:25Z,alignmentforum,, 
41921,https://www.alignmentforum.org/posts/8Hr95c37nadXCxzh7/counterfactuals-smoking-lesion-vs-newcomb-s,Counterfactuals: Smoking Lesion vs. Newcomb's,['Chris_Leong'],2019-12-08T21:02:06Z,alignmentforum,, 41938,https://www.alignmentforum.org/posts/JNtGxrusJRpx53Q8L/announcing-ai-alignment-awards-usd100k-research-contests,Announcing AI Alignment Awards: $100k research contests about goal misgeneralization & corrigibility,"['Akash', 'Olivia Jimenez']",2022-11-22T22:19:09Z,alignmentforum,, 41953,https://www.alignmentforum.org/posts/mcnWZBnbeDz7KKtjJ/rsps-are-pauses-done-right,RSPs are pauses done right,['evhub'],2023-10-14T04:06:03Z,alignmentforum,, 41971,https://www.alignmentforum.org/posts/tCex9F9YptGMpk2sT/normativity,Normativity,['abramdemski'],2020-11-18T16:52:00Z,alignmentforum,, 41991,https://www.alignmentforum.org/posts/SfNwpyL7o49ohYyWB/dehumanisation-errors,Dehumanisation *errors*,['Stuart_Armstrong'],2020-09-23T09:51:53Z,alignmentforum,, 42009,https://www.alignmentforum.org/posts/Sd4QvG4ZyjynZuHGt/intro-to-brain-like-agi-safety-12-two-paths-forward,[Intro to brain-like-AGI safety] 12. Two paths forward: “Controlled AGI” and “Social-instinct AGI”,['Steven Byrnes'],2022-04-20T12:58:33Z,alignmentforum,, 42028,https://www.alignmentforum.org/posts/fc9KjZeSLuHN7HfW6/making-nanobots-isn-t-a-one-shot-process-even-for-an,"Making Nanobots isn't a one-shot process, even for an artificial superintelligance",['dankrad'],2023-04-25T00:39:25Z,alignmentforum,, 42054,https://www.alignmentforum.org/posts/uky9nAtnw9WrAjziD/connecting-the-good-regulator-theorem-with-semantics-and,Connecting the good regulator theorem with semantics and symbol grounding,['Stuart_Armstrong'],2021-03-04T14:35:40Z,alignmentforum,, 42072,https://www.alignmentforum.org/posts/FRRb6Gqem8k69ocbi/externalized-reasoning-oversight-a-research-direction-for,Externalized reasoning oversight: a research direction for language model alignment,['tamera'],2022-08-03T12:03:17Z,alignmentforum,, 42095,https://www.alignmentforum.org/posts/xqkGmfikqapbJ2YMj/shard-theory-an-overview,Shard Theory: An Overview,['David Udell'],2022-08-11T05:44:53Z,alignmentforum,, 42118,https://www.alignmentforum.org/posts/y5GftLezdozEHdXkL/an-intuitive-guide-to-garrabrant-induction,An Intuitive Guide to Garrabrant Induction,['Mark Xu'],2021-06-03T22:21:42Z,alignmentforum,, 42133,https://www.alignmentforum.org/posts/b9jubzqz866CModHB/proofs-section-1-2-mixtures-updates-pushforwards,"Proofs Section 1.2 (Mixtures, Updates, Pushforwards)",['Diffractor'],2020-08-27T07:57:28Z,alignmentforum,, 42152,https://www.alignmentforum.org/posts/EbL5W5ccwfbqFiYBJ/auditing-games-for-high-level-interpretability-1,Auditing games for high-level interpretability,['Paul Colognese'],2022-11-01T10:44:08Z,alignmentforum,, 42171,https://www.alignmentforum.org/posts/saRRRdMnMPXXtQBNi/unsupervised-translation-as-an-intent-alignment-problem,“Unsupervised” translation as an (intent) alignment problem,['paulfchristiano'],2020-09-30T00:50:06Z,alignmentforum,, 42187,https://www.alignmentforum.org/posts/inALbAqdx63KTaGgs/benchmarks-for-detecting-measurement-tampering-redwood,Benchmarks for Detecting Measurement Tampering [Redwood Research],"['ryan_greenblatt', 'Fabien Roger']",2023-09-05T16:44:48Z,alignmentforum,, 42208,https://www.alignmentforum.org/posts/LnnMPNHEpqtaqonCM/gato-s-generalisation-predictions-and-experiments-i-d-like,Gato's Generalisation: Predictions and Experiments I'd Like to See,['Oliver Sourbut'],2022-05-18T07:15:51Z,alignmentforum,, 
42230,https://www.alignmentforum.org/posts/jS2iiDPqMvZ2tnik2/ai-alignment-philosophical-pluralism-and-the-relevance-of,"AI Alignment, Philosophical Pluralism, and the Relevance of Non-Western Philosophy",['xuan'],2021-01-01T00:08:07Z,alignmentforum,, 42267,https://www.alignmentforum.org/posts/EFrsvnF6uZieZr3uG/adversarial-training-importance-sampling-and-anti,"Adversarial training, importance sampling, and anti-adversarial training for AI whistleblowing",['Buck'],2022-06-02T23:48:30Z,alignmentforum,, 42282,https://www.alignmentforum.org/posts/SqjQFhn5KTarfW8v7/lessons-learned-from-talking-to-greater-than-100-academics,Lessons learned from talking to >100 academics about AI safety,['Marius Hobbhahn'],2022-10-10T13:16:38Z,alignmentforum,, 42303,https://www.alignmentforum.org/posts/5bd75cc58225bf0670375286/vector-valued-reinforcement-learning,Vector-Valued Reinforcement Learning,['orthonormal'],2016-11-01T00:21:55Z,alignmentforum,, 42318,https://www.alignmentforum.org/posts/FDJnZt8Ks2djouQTZ/how-do-we-become-confident-in-the-safety-of-a-machine,How do we become confident in the safety of a machine learning system?,['evhub'],2021-11-08T22:49:41Z,alignmentforum,, 42342,https://www.alignmentforum.org/posts/nGqzNC6uNueum2w8T/inductive-biases-stick-around,Inductive biases stick around,['evhub'],2019-12-18T19:52:36Z,alignmentforum,, 42353,https://www.alignmentforum.org/posts/Fx8gCJu5zuLdZezTN/jitters-no-evidence-of-stupidity-in-rl,Jitters No Evidence of Stupidity in RL,['1a3orn'],2021-09-16T22:43:58Z,alignmentforum,, 42370,https://www.alignmentforum.org/posts/CpvyhFy9WvCNsifkY/discussion-with-eliezer-yudkowsky-on-agi-interventions,Discussion with Eliezer Yudkowsky on AGI interventions,"['Rob Bensinger', 'Eliezer Yudkowsky']",2021-11-11T03:01:11Z,alignmentforum,, 42399,https://www.alignmentforum.org/posts/SrsH2MyyH8MqH9QSr/parallels-between-ai-safety-by-debate-and-evidence-law,Parallels Between AI Safety by Debate and Evidence Law,['Cullen'],2020-07-20T22:52:09Z,alignmentforum,, 42410,https://www.alignmentforum.org/posts/kaxqjCKJL6RNHwJLD/scalable-and-transferable-black-box-jailbreaks-for-language-2,Scalable And Transferable Black-Box Jailbreaks For Language Models Via Persona Modulation,"['Soroush Pour', 'rusheb', 'Quentin FEUILLADE--MONTIXI', 'Arush', 'scasper']",2023-11-07T17:59:37Z,alignmentforum,, 42432,https://www.alignmentforum.org/posts/zvrZi95EHqJPxdgps/why-we-need-a-theory-of-human-values,Why we need a *theory* of human values,['Stuart_Armstrong'],2018-12-05T16:00:14Z,alignmentforum,, 
42451,https://www.alignmentforum.org/posts/Aufg88v7mQ2RuEXkS/proper-scoring-rules-don-t-guarantee-predicting-fixed-points,Proper scoring rules don’t guarantee predicting fixed points,"['Johannes Treutlein', 'Rubi J. Hudson', 'Caspar Oesterheld']",2022-12-16T18:22:24Z,alignmentforum,, 42472,https://www.alignmentforum.org/posts/pqkdsqd6s6w2HtT9g/intermittent-distillations-1,Intermittent Distillations #1,['Mark Xu'],2021-03-17T05:15:27Z,alignmentforum,, 42511,https://www.alignmentforum.org/posts/5bd75cc58225bf0670374ecc/un-manipulable-counterfactuals,Un-manipulable counterfactuals,['Stuart_Armstrong'],2015-02-12T19:51:14Z,alignmentforum,, 42521,https://www.alignmentforum.org/posts/wiQeYuQPwSypXXFar/cartesian-frames-as-generalised-models,Cartesian frames as generalised models,['Stuart_Armstrong'],2021-02-16T16:09:20Z,alignmentforum,, 42544,https://www.alignmentforum.org/posts/sdxZdGFtAwHGFGKhg/truthful-and-honest-ai,Truthful and honest AI,"['abergal', 'Nick_Beckstead', 'Owain_Evans']",2021-10-29T07:28:36Z,alignmentforum,, 42582,https://www.alignmentforum.org/posts/pTYDdcag9pTzFQ7vw/2020-ai-alignment-literature-review-and-charity-comparison,2020 AI Alignment Literature Review and Charity Comparison,['Larks'],2020-12-21T15:27:19Z,alignmentforum,, 42603,https://www.alignmentforum.org/posts/CtGwGgxfoefiwfcor/disentangling-perspectives-on-strategy-stealing-in-ai-safety,Disentangling Perspectives On Strategy-Stealing in AI Safety,['shawnghu'],2021-12-18T20:13:09Z,alignmentforum,, 42626,https://www.alignmentforum.org/posts/pZaPhGg2hmmPwByHc/future-ml-systems-will-be-qualitatively-different,Future ML Systems Will Be Qualitatively Different,['jsteinhardt'],2022-01-11T19:50:11Z,alignmentforum,, 42647,https://www.alignmentforum.org/posts/noxJrzXcdz738uqMi/i-don-t-find-the-lie-detection-results-that-surprising-by-an,I don’t find the lie detection results that surprising (by an author of the paper),['JanBrauner'],2023-10-04T17:10:51Z,alignmentforum,, 42666,https://www.alignmentforum.org/posts/b44zed5fBWyyQwBHL/trying-to-make-a-treacherous-mesa-optimizer,Trying to Make a Treacherous Mesa-Optimizer,['MadHatter'],2022-11-09T18:07:03Z,alignmentforum,, 42686,https://www.alignmentforum.org/posts/ervaGwJ2ZcwqfCcLx/agi-ruin-scenarios-are-likely-and-disjunctive,AGI ruin scenarios are likely (and disjunctive),['So8res'],2022-07-27T03:21:58Z,alignmentforum,, 42706,https://www.alignmentforum.org/posts/gtLLBhzQTG6nKTeCZ/attribution-patching-activation-patching-at-industrial-scale,Attribution Patching: Activation Patching At Industrial Scale,['Neel Nanda'],2023-03-16T21:44:55Z,alignmentforum,, 42726,https://www.alignmentforum.org/posts/4kYkYSKSALH4JaQ99/toy-problem-detective-story-alignment,Toy Problem: Detective Story Alignment,['johnswentworth'],2020-10-13T21:02:52Z,alignmentforum,, 42739,https://www.alignmentforum.org/posts/XyBWkoaqfnuEyNWXi/reflection-mechanisms-as-an-alignment-target-a-survey-1,Reflection Mechanisms as an Alignment target: A survey,"['Marius Hobbhahn', 'elandgre', 'Beth Barnes']",2022-06-22T15:05:56Z,alignmentforum,, 42756,https://www.alignmentforum.org/posts/nqwzrpkPvviLHWXaE/apply-to-the-redwood-research-mechanistic-interpretability,"Apply to the Redwood Research Mechanistic Interpretability Experiment (REMIX), a research program in Berkeley","['maxnadeau', 'Xander Davies', 'Buck', 'Nate Thomas']",2022-10-27T01:32:45Z,alignmentforum,, 
42776,https://www.alignmentforum.org/posts/aKBAYN5LpaQMrPqMj/dslt-4-phase-transitions-in-neural-networks,DSLT 4. Phase Transitions in Neural Networks,['Liam Carroll'],2023-06-24T17:22:38Z,alignmentforum,, 42793,https://www.alignmentforum.org/posts/pYcFPMBtQveAjcSfH/supervise-process-not-outcomes,"Supervise Process, not Outcomes","['stuhlmueller', 'jungofthewon']",2022-04-05T22:18:20Z,alignmentforum,, 42804,https://www.alignmentforum.org/posts/FuToH2KHxKmJLGk2B/ai-alignment-as-navigating-the-space-of-intelligent,AI alignment as “navigating the space of intelligent behaviour”,['Nora_Ammann'],2022-08-23T13:28:15Z,alignmentforum,, 42819,https://www.alignmentforum.org/posts/NeJnFmeNAXACASX8P/alignment-newsletter-46,Alignment Newsletter #46,['Rohin Shah'],2019-02-22T00:10:04Z,alignmentforum,, 42852,https://www.alignmentforum.org/posts/oBFMbhQMt9HkmfF6d/why-deceptive-alignment-matters-for-agi-safety,Why deceptive alignment matters for AGI safety,['Marius Hobbhahn'],2022-09-15T13:38:53Z,alignmentforum,, 42869,https://www.alignmentforum.org/posts/wkNQdYj47HX33noKv/dutch-booking-cdt,Dutch-Booking CDT,['abramdemski'],2019-01-13T00:10:08Z,alignmentforum,, 42880,https://www.alignmentforum.org/posts/2BG49yHpgEL46eioZ/mlsn-9-verifying-large-training-runs-security-risks-from-llm,"[MLSN #9] Verifying large training runs, security risks from LLM access to APIs, why natural selection may favor AIs over humans","['Dan H', 'ThomasW']",2023-04-11T16:03:31Z,alignmentforum,, 42913,https://www.alignmentforum.org/posts/JFoQ3echrH2pfjKuP/non-directed-conceptual-founding,Non-directed conceptual founding,['TsviBT'],2023-01-15T14:56:37Z,alignmentforum,, 42923,https://www.alignmentforum.org/posts/H6L7fuEN9qXDanQ6W/how-much-chess-engine-progress-is-about-adapting-to-bigger,How much chess engine progress is about adapting to bigger computers?,['paulfchristiano'],2021-07-07T22:35:29Z,alignmentforum,, 42933,https://www.alignmentforum.org/posts/FnwqLB7A9PenRdg4Z/for-alignment-we-should-simultaneously-use-multiple-theories,"For alignment, we should simultaneously use multiple theories of cognition and value",['Roman Leventov'],2023-04-24T10:37:15Z,alignmentforum,, 42958,https://www.alignmentforum.org/posts/H5iGhDhQBtoDpCBZ2/announcing-the-alignment-of-complex-systems-research-group,Announcing the Alignment of Complex Systems Research Group,"['Jan_Kulveit', 'technicalities']",2022-06-04T04:10:14Z,alignmentforum,, 42981,https://www.alignmentforum.org/posts/ncsxcf8CkDveXBCrA/ai-safety-in-a-world-of-vulnerable-machine-learning-systems-1,AI Safety in a World of Vulnerable Machine Learning Systems,"['AdamGleave', 'EuanMcLean']",2023-03-08T02:40:43Z,alignmentforum,, 43010,https://www.alignmentforum.org/posts/3Eq5Rq5uQ97kt8B8f/modeling-failure-modes-of-high-level-machine-intelligence,Modeling Failure Modes of High-Level Machine Intelligence,"['Ben Cottier', 'Daniel_Eth', 'Sammy Martin']",2021-12-06T13:54:38Z,alignmentforum,, 43049,https://www.alignmentforum.org/posts/hRuE66SXobGhrbwxR/note-on-algorithms-with-multiple-trained-components,Note on algorithms with multiple trained components,['Steven Byrnes'],2022-12-20T17:08:24Z,alignmentforum,, 43064,https://www.alignmentforum.org/posts/n2Gseb3XFpMyc2FEb/response-to-what-does-the-universal-prior-actually-look-like,"Response to ""What does the universal prior actually look like?""",['michaelcohen'],2021-05-20T16:12:30Z,alignmentforum,, 43081,https://www.alignmentforum.org/posts/Djs38EWYZG8o7JMWY/paul-s-research-agenda-faq,Paul's research agenda FAQ,['zhukeepa'],2018-07-01T06:25:14Z,alignmentforum,, 
43123,https://www.alignmentforum.org/posts/LmHJPWRMEvmCWe382/jesse-hoogland-on-developmental-interpretability-and,Jesse Hoogland on Developmental Interpretability and Singular Learning Theory,['Michaël Trazzi'],2023-07-06T15:46:00Z,alignmentforum,, 43143,https://www.alignmentforum.org/posts/fnjKpBoWJXcSDwhZk/what-s-the-backward-forward-flop-ratio-for-neural-networks,What’s the backward-forward FLOP ratio for Neural Networks?,"['Marius Hobbhahn', 'Jsevillamol']",2021-12-13T08:54:48Z,alignmentforum,, 43154,https://www.alignmentforum.org/posts/YdxG2D3bvG5YsuHpG/problems-facing-a-correspondence-theory-of-knowledge,Problems facing a correspondence theory of knowledge,['Alex Flint'],2021-05-24T16:02:38Z,alignmentforum,, 43177,https://www.alignmentforum.org/posts/RzAmPDNciirWKdtc7/pessimism-about-unknown-unknowns-inspires-conservatism,Pessimism About Unknown Unknowns Inspires Conservatism,['michaelcohen'],2020-02-03T14:48:15Z,alignmentforum,, 43200,https://www.alignmentforum.org/posts/BTApNmv7s6RTGxeP4/cyborg-periods-there-will-be-multiple-ai-transitions,Cyborg Periods: There will be multiple AI transitions,"['Jan_Kulveit', 'rosehadshar']",2023-02-22T16:09:05Z,alignmentforum,, 43218,https://www.alignmentforum.org/posts/FRv7ryoqtvSuqBxuT/understanding-deep-double-descent,Understanding “Deep Double Descent”,['evhub'],2019-12-06T00:00:10Z,alignmentforum,, 43250,https://www.alignmentforum.org/posts/oBdfDvmrBKoTq3x85/apply-to-the-constellation-visiting-researcher-program-and,"Apply to the Constellation Visiting Researcher Program and Astra Fellowship, in Berkeley this Winter",['Nate Thomas'],2023-10-26T03:07:34Z,alignmentforum,, 43265,https://www.alignmentforum.org/posts/aAzApjEpdYwAxnsAS/reinforcement-learning-with-imperceptible-rewards,Reinforcement learning with imperceptible rewards,['Vanessa Kosoy'],2019-04-07T10:27:34Z,alignmentforum,, 43289,https://www.alignmentforum.org/posts/qKvn7rxP2mzJbKfcA/persuasion-tools-ai-takeover-without-agi-or-agency,Persuasion Tools: AI takeover without AGI or agency?,['Daniel Kokotajlo'],2020-11-20T16:54:01Z,alignmentforum,, 43312,https://www.alignmentforum.org/posts/ryCfHod3eFhkYipW9/quantitative-cruxes-in-alignment,Quantitative cruxes in Alignment,['Martín Soto'],2023-07-02T20:38:19Z,alignmentforum,, 43340,https://www.alignmentforum.org/posts/HhWhaSzQr6xmBki8F/birds-brains-planes-and-ai-against-appeals-to-the-complexity,"Birds, Brains, Planes, and AI: Against Appeals to the Complexity/Mysteriousness/Efficiency of the Brain",['Daniel Kokotajlo'],2021-01-18T12:08:13Z,alignmentforum,, 43355,https://www.alignmentforum.org/posts/oH3XmScSFnZt6x2eN/the-economy-as-an-analogy-for-advanced-ai-systems-2,The economy as an analogy for advanced AI systems,"['rosehadshar', 'particlemania']",2022-11-15T11:16:04Z,alignmentforum,, 43379,https://www.alignmentforum.org/posts/kguLeJTt6LnGuYX4E/the-limits-of-ai-safety-via-debate,The limits of AI safety via debate,['Marius Hobbhahn'],2022-05-10T13:33:28Z,alignmentforum,, 43402,https://www.alignmentforum.org/posts/5Hc4R6rj5yJ3xBhiX/power-seeking-for-successive-choices,Power-seeking for successive choices,['adamShimi'],2021-08-12T20:37:53Z,alignmentforum,, 43415,https://www.alignmentforum.org/posts/BeeirdrMXCPYZwgfj/the-blue-minimising-robot-and-model-splintering,The blue-minimising robot and model splintering,['Stuart_Armstrong'],2021-05-28T15:09:55Z,alignmentforum,, 43438,https://www.alignmentforum.org/posts/o2BkyQLrscbLbQSJn/series-of-absurd-upgrades-in-nature-s-great-search,Series of absurd upgrades in nature's great search,['lukehmiles'],2023-09-03T09:35:21Z,alignmentforum,, 
43458,https://www.alignmentforum.org/posts/mTuQDeuiXKnk972WR/a-rephrasing-of-and-footnote-to-an-embedded-agency-proposal,A Rephrasing Of and Footnote To An Embedded Agency Proposal,['JoshuaOSHickman'],2022-03-09T18:13:23Z,alignmentforum,, 43477,https://www.alignmentforum.org/posts/XJqtRWnNRLaqJ8RCx/an-138-why-ai-governance-should-find-problems-rather-than,[AN #138]: Why AI governance should find problems rather than just solving them,['Rohin Shah'],2021-02-17T18:50:03Z,alignmentforum,, 43504,https://www.alignmentforum.org/posts/Hw26MrLuhGWH7kBLm/ai-alignment-is-distinct-from-its-near-term-applications,AI alignment is distinct from its near-term applications,['paulfchristiano'],2022-12-13T07:10:04Z,alignmentforum,, 43529,https://www.alignmentforum.org/posts/ax695frGJEzGxFBK4/biology-inspired-agi-timelines-the-trick-that-never-works,Biology-Inspired AGI Timelines: The Trick That Never Works,['Eliezer Yudkowsky'],2021-12-01T22:35:28Z,alignmentforum,, 43548,https://www.alignmentforum.org/posts/PRaxzmDJdvie46ahL/benign-model-free-rl-1,Benign model-free RL,['paulfchristiano'],2018-12-02T04:10:45Z,alignmentforum,, 43566,https://www.alignmentforum.org/posts/zwRQW9gEyszmHwff8/endo-dia-para-and-ecto-systemic-novelty,"Endo-, Dia-, Para-, and Ecto-systemic novelty",['TsviBT'],2023-04-23T12:25:13Z,alignmentforum,, 43585,https://www.alignmentforum.org/posts/zKkZanEQc4AZBEKx9/tasra-a-taxonomy-and-analysis-of-societal-scale-risks-from,TASRA: A Taxonomy and Analysis of Societal-Scale Risks from AI,['Andrew_Critch'],2023-06-13T05:04:47Z,alignmentforum,, 43622,https://www.alignmentforum.org/posts/jRHLCyRKsQv5u2Lph/how-conservative-should-the-partial-maximisers-be,"""How conservative"" should the partial maximisers be?",['Stuart_Armstrong'],2020-04-13T15:50:00Z,alignmentforum,, 43633,https://www.alignmentforum.org/posts/RhAxxPXrkcEaNArnd/notes-on-can-you-control-the-past,"Notes on ""Can you control the past""",['So8res'],2022-10-20T03:41:44Z,alignmentforum,, 43649,https://www.alignmentforum.org/posts/m2bwD87ctjJDXC3SZ/ultra-simplified-research-agenda,Ultra-simplified research agenda,['Stuart_Armstrong'],2019-11-22T14:29:41Z,alignmentforum,, 43663,https://www.alignmentforum.org/posts/5pHqQwCDWrvZGp8pX/structure-creativity-and-novelty,"Structure, creativity, and novelty",['TsviBT'],2023-01-29T14:30:19Z,alignmentforum,, 43679,https://www.alignmentforum.org/posts/5bd75cc58225bf067037531b/a-measure-theoretic-generalization-of-logical-induction,A measure-theoretic generalization of logical induction,['Vanessa Kosoy'],2017-01-18T13:56:20Z,alignmentforum,, 43692,https://www.alignmentforum.org/posts/BNoHokwCiPGmFHnp8/phil-trammell-on-economic-growth-under-transformative-ai,Phil Trammell on Economic Growth Under Transformative AI,['Michaël Trazzi'],2021-10-24T18:11:18Z,alignmentforum,, 43717,https://www.alignmentforum.org/posts/5bd75cc58225bf06703752da/pursuing-convergent-instrumental-subgoals-on-the-user-s-behalf-doesn-t-always-require-good-priors,Pursuing convergent instrumental subgoals on the user's behalf doesn't always require good priors,['jessicata'],2016-12-30T02:36:48Z,alignmentforum,, 43732,https://www.alignmentforum.org/posts/AanbbjYr5zckMKde7/specification-gaming-examples-in-ai-1,Specification gaming examples in AI,['Vika'],2018-04-03T12:30:48Z,alignmentforum,, 43742,https://www.alignmentforum.org/posts/bG7yKSRWBaMou7t93/my-current-outlook-on-ai-risk-mitigation,my current outlook on AI risk mitigation,['Tamsin Leake'],2022-10-03T20:06:49Z,alignmentforum,, 
43765,https://www.alignmentforum.org/posts/76cReK4Mix3zKCWNT/ntk-gp-models-of-neural-nets-can-t-learn-features,NTK/GP Models of Neural Nets Can't Learn Features,['interstice'],2021-04-22T03:01:44Z,alignmentforum,, 43775,https://www.alignmentforum.org/posts/YvbbkPYH77xhdqvKt/the-alignment-newsletter-3-04-23-18,The Alignment Newsletter #3: 04/23/18,['Rohin Shah'],2018-04-23T16:00:33Z,alignmentforum,, 43796,https://www.alignmentforum.org/posts/8cr9fJnay97GEYPt3/bounded-complexity-of-solving-elk-and-its-implications,Bounded complexity of solving ELK and its implications,['Rubi J. Hudson'],2022-07-19T06:56:18Z,alignmentforum,, 43817,https://www.alignmentforum.org/posts/4jFnquoHuoaTqdphu/ai-x-risk-reduction-why-i-chose-academia-over-industry,AI x-risk reduction: why I chose academia over industry,['David Scott Krueger (formerly: capybaralet)'],2021-03-14T17:25:13Z,alignmentforum,, 43838,https://www.alignmentforum.org/posts/eqov4SEYEbeFMXegR/power-as-easily-exploitable-opportunities,Power as Easily Exploitable Opportunities,['TurnTrout'],2020-08-01T02:14:27Z,alignmentforum,, 43855,https://www.alignmentforum.org/posts/bzmLC3J8PsknwRZbr/why-not-subagents,Why Not Subagents?,"['johnswentworth', 'David Lorell']",2023-06-22T22:16:55Z,alignmentforum,, 43869,https://www.alignmentforum.org/posts/h3ejmEeNniDNFXTgp/fractional-progress-estimates-for-ai-timelines-and-implied,Fractional progress estimates for AI timelines and implied resource requirements,"['Mark Xu', 'CarlShulman']",2021-07-15T18:43:10Z,alignmentforum,, 43879,https://www.alignmentforum.org/posts/CBvebC9FgSMtsD5T9/response-to-holden-s-alignment-plan,Response to Holden’s alignment plan,['Alex Flint'],2022-12-22T16:08:53Z,alignmentforum,, 43904,https://www.alignmentforum.org/posts/ofx82Y9a4zcETfT6Q/extortion-beats-brinksmanship-but-the-audience-matters,"Extortion beats brinksmanship, but the audience matters",['Stuart_Armstrong'],2020-11-16T21:13:19Z,alignmentforum,, 43924,https://www.alignmentforum.org/posts/PT8vSxsusqWuN7JXp/my-understanding-of-paul-christiano-s-iterated-amplification,My Understanding of Paul Christiano's Iterated Amplification AI Safety Research Agenda,['Chi Nguyen'],2020-08-15T20:02:00Z,alignmentforum,, 43967,https://www.alignmentforum.org/posts/3dEKykLBvCszNunzB/ai-assisted-list-of-ten-concrete-alignment-things-to-do,AI-assisted list of ten concrete alignment things to do right now,['lukehmiles'],2022-09-07T08:38:30Z,alignmentforum,, 44018,https://www.alignmentforum.org/posts/BuRt2igbFx9KaB5QG/steering-behaviour-testing-for-non-myopia-in-language-models,Steering Behaviour: Testing for (Non-)Myopia in Language Models,"['Evan R. Murphy', 'Megan Kinniment']",2022-12-05T20:28:33Z,alignmentforum,, 44036,https://www.alignmentforum.org/posts/p62bkNAciLsv6WFnR/how-do-we-align-an-agi-without-getting-socially-engineered,How Do We Align an AGI Without Getting Socially Engineered? (Hint: Box It),"['Peter S. Park', 'NickyP', 'Stephen Fowler']",2022-08-10T18:14:09Z,alignmentforum,, 44067,https://www.alignmentforum.org/posts/c2RzFadrxkzyRAFXa/who-models-the-models-that-model-models-an-exploration-of,Who models the models that model models? An exploration of GPT-3's in-context model fitting ability,['Lovre'],2022-06-07T19:37:49Z,alignmentforum,, 
44084,https://www.alignmentforum.org/posts/A9d8npg59GRd83c5k/towards-a-formalisation-of-logical-counterfactuals,Towards a Formalisation of Logical Counterfactuals,['Bunthut'],2020-08-08T22:14:28Z,alignmentforum,, 44093,https://www.alignmentforum.org/posts/QskBy5uDd2oeEGkBB/risk-map-of-ai-systems,Risk Map of AI Systems,"['VojtaKovarik', 'Jan_Kulveit']",2020-12-15T09:16:47Z,alignmentforum,, 44118,https://www.alignmentforum.org/posts/ig62gqK2xRPv79Wsr/usd500-bounty-contest-explain-infra-bayes-in-the-language-of,$500 Bounty/Contest: Explain Infra-Bayes In The Language Of Game Theory,['johnswentworth'],2023-03-25T17:29:51Z,alignmentforum,, 44127,https://www.alignmentforum.org/posts/Dan6iKFruioYmZafw/thoughts-on-refusing-harmful-requests-to-large-language,Thoughts on refusing harmful requests to large language models,['William_S'],2023-01-19T19:49:23Z,alignmentforum,, 44146,https://www.alignmentforum.org/posts/6ReBeYwsDeNgv6Dr5/the-defender-s-advantage-of-interpretability,The Defender’s Advantage of Interpretability,['Marius Hobbhahn'],2022-09-14T14:05:38Z,alignmentforum,, 44165,https://www.alignmentforum.org/posts/Bxxh9GbJ6WuW5Hmkj/what-s-the-dream-for-giving-natural-language-commands-to-ai,What's the dream for giving natural language commands to AI?,['Charlie Steiner'],2019-10-08T13:42:39Z,alignmentforum,, 44188,https://www.alignmentforum.org/posts/DSEwkvj8W7y8C3jau/simulators-constraints-and-goal-agnosticism-porbynotes-vol-1,"Simulators, constraints, and goal agnosticism: porbynotes vol. 1",['porby'],2022-11-23T04:22:26Z,alignmentforum,, 44237,https://www.alignmentforum.org/posts/Q76CpqHeEMykKpFdB/really-strong-features-found-in-residual-stream,Really Strong Features Found in Residual Stream,['Logan Riggs'],2023-07-08T19:40:15Z,alignmentforum,, 44249,https://www.alignmentforum.org/posts/CknHb67jutFfBwWz3/squeezing-foundations-research-assistance-out-of-formal,Squeezing foundations research assistance out of formal logic narrow AI.,['Donald Hobson'],2023-03-08T09:38:17Z,alignmentforum,, 44278,https://www.alignmentforum.org/posts/NpJkFLBJEq7JQt7oy/clarifying-mesa-optimization,Clarifying mesa-optimization,"['Marius Hobbhahn', 'Pierre Peigné']",2023-03-21T15:53:34Z,alignmentforum,, 44301,https://www.alignmentforum.org/posts/FhqZZFydyQG9WTSKR/announcing-mechanism-design-for-ai-safety-reading-group,Announcing: Mechanism Design for AI Safety - Reading Group,['Rubi J. Hudson'],2022-08-09T04:21:51Z,alignmentforum,, 
44316,https://www.alignmentforum.org/posts/PKy8NuNPknenkDY74/soft-takeoff-can-still-lead-to-decisive-strategic-advantage,Soft takeoff can still lead to decisive strategic advantage,['Daniel Kokotajlo'],2019-08-23T16:39:31Z,alignmentforum,, 44333,https://www.alignmentforum.org/posts/GC69Hmc6ZQDM9xC3w/musings-on-the-speed-prior,Musings on the Speed Prior,['evhub'],2022-03-02T04:04:42Z,alignmentforum,, 44356,https://www.alignmentforum.org/posts/pt8Sf2kvRZ8BBW5b5/speculation-on-path-dependance-in-large-language-models,Speculation on Path-Dependance in Large Language Models.,['NickyP'],2023-01-15T20:42:48Z,alignmentforum,, 44378,https://www.alignmentforum.org/posts/FBbHEjkZzdupcjkna/miri-op-exchange-about-decision-theory-1,MIRI/OP exchange about decision theory,['Rob Bensinger'],2021-08-25T22:44:10Z,alignmentforum,, 44393,https://www.alignmentforum.org/posts/XjMkPyaPYTf7LrKiT/prisoners-dilemma-with-costs-to-modeling,Prisoners' Dilemma with Costs to Modeling,['Scott Garrabrant'],2018-06-05T04:51:31Z,alignmentforum,, 44402,https://www.alignmentforum.org/posts/ySLYSsNeFL5CoAQzN/a-critique-of-functional-decision-theory,A Critique of Functional Decision Theory,['wdmacaskill'],2019-09-13T19:23:23Z,alignmentforum,, 44423,https://www.alignmentforum.org/posts/AsNjqggQQ4yJcbsWn/ai-safety-endgame-stories,AI Safety Endgame Stories,['Ivan Vendrov'],2022-09-28T16:58:03Z,alignmentforum,, 44448,https://www.alignmentforum.org/posts/2QuAcx8XQw7rrXzGC/beware-over-use-of-the-agent-model,Beware over-use of the agent model,['Alex Flint'],2021-04-25T22:19:06Z,alignmentforum,, 44457,https://www.alignmentforum.org/posts/uK6sQCNMw8WKzJeCQ/a-longlist-of-theories-of-impact-for-interpretability,A Longlist of Theories of Impact for Interpretability,['Neel Nanda'],2022-03-11T14:55:35Z,alignmentforum,, 44481,https://www.alignmentforum.org/posts/yamkYG5dSzGcEdFTf/a-strong-mind-continues-its-trajectory-of-creativity,A strong mind continues its trajectory of creativity,['TsviBT'],2023-05-14T17:24:00Z,alignmentforum,, 44493,https://www.alignmentforum.org/posts/LRgM9cuLNPbsjwEdN/implications-of-automated-ontology-identification,Implications of automated ontology identification,"['Alex Flint', 'adamShimi', 'Robert Miles']",2022-02-18T03:30:54Z,alignmentforum,, 44510,https://www.alignmentforum.org/posts/PTkd8nazvH9HQpwP8/building-brain-inspired-agi-is-infinitely-easier-than,Building brain-inspired AGI is infinitely easier than understanding the brain,['Steven Byrnes'],2020-06-02T14:13:32Z,alignmentforum,, 44523,https://www.alignmentforum.org/posts/5bd75cc58225bf0670374ec0/the-odd-counterfactuals-of-playing-chicken,The odd counterfactuals of playing chicken,['Benya_Fallenstein'],2015-02-02T07:15:50Z,alignmentforum,, 44539,https://www.alignmentforum.org/posts/8H5JbowLTJoNHzLuH/an-117-how-neural-nets-would-fare-under-the-tevv-framework,[AN #117]: How neural nets would fare under the TEVV framework,['Rohin Shah'],2020-09-16T17:20:14Z,alignmentforum,, 44573,https://www.alignmentforum.org/posts/5F5Tz3u6kJbTNMqsb/intro-to-brain-like-agi-safety-13-symbol-grounding-and-human,[Intro to brain-like-AGI safety] 13. Symbol grounding & human social instincts,['Steven Byrnes'],2022-04-27T13:30:34Z,alignmentforum,, 
44591,https://www.alignmentforum.org/posts/roZvoF6tRH6xYtHMF/avoiding-the-instrumental-policy-by-hiding-information-about,Avoiding the instrumental policy by hiding information about humans,['paulfchristiano'],2021-06-13T20:00:52Z,alignmentforum,, 44607,https://www.alignmentforum.org/posts/DARiTSTx5xDLQGrrz/inverse-scaling-prize-second-round-winners,Inverse Scaling Prize: Second Round Winners,"['Ian McKenzie', 'Sam Bowman', 'Ethan Perez']",2023-01-24T20:12:48Z,alignmentforum,, 44629,https://www.alignmentforum.org/posts/3zfFPjMv9fioDAeHi/survey-of-nlp-researchers-nlp-is-contributing-to-agi,Survey of NLP Researchers: NLP is contributing to AGI progress; major catastrophe plausible,['Sam Bowman'],2022-08-31T01:39:55Z,alignmentforum,, 44650,https://www.alignmentforum.org/posts/xMQ7vwFACQX3gZouv/reframing-inner-alignment,Reframing inner alignment,['davidad'],2022-12-11T13:53:23Z,alignmentforum,, 44672,https://www.alignmentforum.org/posts/NK2CeDNKMEY9gRZp2/ai-that-shouldn-t-work-yet-kind-of-does,"AI that shouldn't work, yet kind of does",['Donald Hobson'],2023-02-23T23:18:55Z,alignmentforum,, 44688,https://www.alignmentforum.org/posts/B4vgbeXMGxEnEwY8d/who-is-harry-potter-some-predictions,Who is Harry Potter? Some predictions.,['Donald Hobson'],2023-10-24T16:14:18Z,alignmentforum,, 44701,https://www.alignmentforum.org/posts/vERGLBpDE8m5mpT6t/autonomous-replication-and-adaptation-an-attempt-at-a,Autonomous replication and adaptation: an attempt at a concrete danger threshold,['Hjalmar_Wijk'],2023-08-17T01:31:11Z,alignmentforum,, 44718,https://www.alignmentforum.org/posts/KWmrz9WbGntMGMb73/comparing-four-approaches-to-inner-alignment,Comparing Four Approaches to Inner Alignment,['Lucas Teixeira'],2022-07-29T21:06:09Z,alignmentforum,, 44745,https://www.alignmentforum.org/posts/Mj259G5n5BxXXrZ7C/an-85-the-normative-questions-we-should-be-asking-for-ai,"[AN #85]: The normative questions we should be asking for AI alignment, and a surprisingly good chatbot",['Rohin Shah'],2020-02-05T18:20:02Z,alignmentforum,, 44772,https://www.alignmentforum.org/posts/uTPetRBbkP4p6FhGf/universality-and-hidden-information-in-concept-bottleneck,Universality and Hidden Information in Concept Bottleneck Models,['Hoagy'],2023-04-05T14:00:36Z,alignmentforum,, 44795,https://www.alignmentforum.org/posts/adiszfnFgPEnRsGSr/conditioning-generative-models-with-restrictions,Conditioning Generative Models with Restrictions,['Adam Jermyn'],2022-07-21T20:33:19Z,alignmentforum,, 44817,https://www.alignmentforum.org/posts/5bd75cc58225bf06703754e8/humans-can-be-assigned-any-values-whatsoever,Humans can be assigned any values whatsoever...,['Stuart_Armstrong'],2017-10-24T12:03:02Z,alignmentforum,, 44835,https://www.alignmentforum.org/posts/6zwW9oGaHbuMuvnmX/would-i-think-for-ten-thousand-years,Would I think for ten thousand years?,['Stuart_Armstrong'],2019-02-11T19:37:54Z,alignmentforum,, 44845,https://www.alignmentforum.org/posts/NtX7LKhCXMW2vjWx6/thoughts-on-reward-engineering,Thoughts on reward engineering,['paulfchristiano'],2019-01-24T20:15:05Z,alignmentforum,, 44884,https://www.alignmentforum.org/posts/FwYMuD2sNcaEpE5on/finding-gliders-in-the-game-of-life,Finding gliders in the game of life,['paulfchristiano'],2022-12-01T20:40:04Z,alignmentforum,, 44900,https://www.alignmentforum.org/posts/Htu55gzoiYHS6TREB/sentience-matters,Sentience matters,['So8res'],2023-05-29T21:25:31Z,alignmentforum,, 
44923,https://www.alignmentforum.org/posts/4Tjz4EJ8DozE9z5nQ/introducing-the-principles-of-intelligent-behaviour-in,Introducing the Principles of Intelligent Behaviour in Biological and Social Systems (PIBBSS) Fellowship,['adamShimi'],2021-12-18T15:23:27Z,alignmentforum,, 44942,https://www.alignmentforum.org/posts/yW3Tct2iyBMzYhTw7/how-does-bee-learning-compare-with-machine-learning,How does bee learning compare with machine learning?,['eleni'],2021-03-04T01:59:06Z,alignmentforum,, 44955,https://www.alignmentforum.org/posts/c9NSeCapaKtP6kvQD/gradient-descent-is-not-just-more-efficient-genetic,Gradient descent is not just more efficient genetic algorithms,['leogao'],2021-09-08T16:23:47Z,alignmentforum,, 44969,https://www.alignmentforum.org/posts/fSC98Cy3zR9GsEPnT/curiosity-killed-the-cat-and-the-asymptotically-optimal,Curiosity Killed the Cat and the Asymptotically Optimal Agent,['michaelcohen'],2020-02-20T17:28:42Z,alignmentforum,, 44987,https://www.alignmentforum.org/posts/Jiy3n5KMsGGJ6NNYH/asot-some-thoughts-about-lm-monologue-limitations-and-elk,[ASoT] Some thoughts about LM monologue limitations and ELK,['leogao'],2022-03-30T14:26:15Z,alignmentforum,, 45012,https://www.alignmentforum.org/posts/ivpKSjM4D6FbqF4pZ/cortes-pizarro-and-afonso-as-precedents-for-takeover,"Cortés, Pizarro, and Afonso as Precedents for Takeover",['Daniel Kokotajlo'],2020-03-01T03:49:45Z,alignmentforum,, 45025,https://www.alignmentforum.org/posts/i3pkxN43NgkLRaAGZ/reflection-mechanisms-as-an-alignment-target-a-follow-up,Reflection Mechanisms as an Alignment target: A follow-up survey,"['Marius Hobbhahn', 'elandgre', 'Beth Barnes']",2022-10-05T14:03:20Z,alignmentforum,, 45041,https://www.alignmentforum.org/posts/aqTAd7KzsYmHWYdei/why-copilot-accelerates-timelines,Why Copilot Accelerates Timelines,['Michaël Trazzi'],2022-04-26T22:06:20Z,alignmentforum,, 45057,https://www.alignmentforum.org/posts/oeCXS2ZCn4rPyq7LQ/ai-learns-betrayal-and-how-to-avoid-it,AI learns betrayal and how to avoid it,['Stuart_Armstrong'],2021-09-30T09:39:10Z,alignmentforum,, 45078,https://www.alignmentforum.org/posts/DkfGaZTgwsE7XZq9k/research-agenda-update,Research agenda update,['Steven Byrnes'],2021-08-06T19:24:54Z,alignmentforum,, 45107,https://www.alignmentforum.org/posts/35748mXjzwxDrX7yQ/optimal-play-in-human-judged-debate-usually-won-t-answer,Optimal play in human-judged Debate usually won't answer your question,['Joe_Collman'],2021-01-27T07:34:19Z,alignmentforum,, 45130,https://www.alignmentforum.org/posts/sAJnZY8pp2W3DR4mx/breaking-down-the-training-deployment-dichotomy,Breaking down the training/deployment dichotomy,['Erik Jenner'],2022-08-28T21:45:50Z,alignmentforum,, 45148,https://www.alignmentforum.org/posts/CNrz9uy5Y4ELypzca/take-6-cais-is-actually-orwellian,Take 6: CAIS is actually Orwellian.,['Charlie Steiner'],2022-12-07T13:50:38Z,alignmentforum,, 45166,https://www.alignmentforum.org/posts/4qY9zEHLa2su4PkQ4/can-hch-epistemically-dominate-ramanujan,Can HCH epistemically dominate Ramanujan?,['zhukeepa'],2019-02-23T22:00:33Z,alignmentforum,, 45180,https://www.alignmentforum.org/posts/XLx3mpdi7HSp4rytF/prize-for-alignment-research-tasks,Prize for Alignment Research Tasks,"['stuhlmueller', 'William_S']",2022-04-29T08:57:04Z,alignmentforum,, 45197,https://www.alignmentforum.org/posts/uL74oQv5PsnotGzt7/all-i-know-is-goodhart,All I know is Goodhart,['Stuart_Armstrong'],2019-10-21T12:12:53Z,alignmentforum,, 45209,https://www.alignmentforum.org/posts/GdR5v7nCfKuybHHng/gradient-hacker-design-principles-from-biology,Gradient Hacker Design Principles From Biology,['johnswentworth'],2022-09-01T19:03:17Z,alignmentforum,, 
45226,https://www.alignmentforum.org/posts/TE5nJ882s5dCMkBB8/conclusion-to-the-sequence-on-value-learning,Conclusion to the sequence on value learning,['Rohin Shah'],2019-02-03T21:05:12Z,alignmentforum,, 45259,https://www.alignmentforum.org/posts/KzwB4ovzrZ8DYWgpw/more-findings-on-memorization-and-double-descent,More findings on Memorization and double descent,['Marius Hobbhahn'],2023-02-01T18:26:41Z,alignmentforum,, 45274,https://www.alignmentforum.org/posts/yTvBSFrXhZfL8vr5a/worst-case-thinking-in-ai-alignment,Worst-case thinking in AI alignment,['Buck'],2021-12-23T01:29:48Z,alignmentforum,, 45300,https://www.alignmentforum.org/posts/cj3PRu8QoFm4BA8oc/infra-bayesian-physicalism-proofs-part-i,Infra-Bayesian physicalism: proofs part I,['Vanessa Kosoy'],2021-11-30T22:26:33Z,alignmentforum,, 45310,https://www.alignmentforum.org/posts/iDFTmb8HSGtL4zTvf/how-could-we-know-that-an-agi-system-will-have-good,How could we know that an AGI system will have good consequences?,['So8res'],2022-11-07T22:42:27Z,alignmentforum,, 45328,https://www.alignmentforum.org/posts/5bd75cc58225bf067037536b/counterfactually-uninfluenceable-agents,Counterfactually uninfluenceable agents,['Stuart_Armstrong'],2017-06-02T16:17:10Z,alignmentforum,, 45337,https://www.alignmentforum.org/posts/jWkqACmDes6SoAiyE/truthful-lms-as-a-warm-up-for-aligned-agi,Truthful LMs as a warm-up for aligned AGI,['Jacob_Hilton'],2022-01-17T16:49:24Z,alignmentforum,, 45358,https://www.alignmentforum.org/posts/qdqYrcGZTh9Lp49Nj/creating-environments-to-design-and-test-embedded-agents,Creating Environments to Design and Test Embedded Agents,['lukehmiles'],2019-08-23T03:17:33Z,alignmentforum,, 45380,https://www.alignmentforum.org/posts/XxX2CAoFskuQNkBDy/discovering-agents,Discovering Agents,['zac_kenton'],2022-08-18T17:33:43Z,alignmentforum,, 45397,https://www.alignmentforum.org/posts/S9GxuAEeQomnLkeNt/a-space-of-proposals-for-building-safe-advanced-ai,A space of proposals for building safe advanced AI,['Richard_Ngo'],2020-07-10T16:58:34Z,alignmentforum,, 45418,https://www.alignmentforum.org/posts/gYfgWSxCpFdk2cZfE/the-alignment-problem-machine-learning-and-human-values,The Alignment Problem: Machine Learning and Human Values,['Rohin Shah'],2020-10-06T17:41:21Z,alignmentforum,, 45455,https://www.alignmentforum.org/posts/tpHB69eXorChEsix3/bits-of-optimization-can-only-be-lost-over-a-distance,Bits of Optimization Can Only Be Lost Over A Distance,['johnswentworth'],2022-05-23T18:55:17Z,alignmentforum,, 45466,https://www.alignmentforum.org/posts/heXcGuJqbx3HBmero/people-care-about-each-other-even-though-they-have-imperfect,People care about each other even though they have imperfect motivational pointers?,['TurnTrout'],2022-11-08T18:15:32Z,alignmentforum,, 45484,https://www.alignmentforum.org/posts/rrpnEDpLPxsmmsLzs/open-technical-problem-a-quinean-proof-of-loeb-s-theorem-for,"Open technical problem: A Quinean proof of Löb's theorem, for an easier cartoon guide",['Andrew_Critch'],2022-11-24T21:16:44Z,alignmentforum,, 45499,https://www.alignmentforum.org/posts/dJSD5RK6Qoidb3QY5/synthesizing-amplification-and-debate,Synthesizing amplification and debate,['evhub'],2020-02-05T22:53:57Z,alignmentforum,, 45511,https://www.alignmentforum.org/posts/JkCPkMxuftohieb8B/compact-vs-wide-models,Compact vs. Wide Models,['Vaniver'],2018-07-16T04:09:10Z,alignmentforum,, 
45529,https://www.alignmentforum.org/posts/5bd75cc58225bf0670374e6a/from-halting-oracles-to-modal-logic,From halting oracles to modal logic,['Benya_Fallenstein'],2015-02-03T19:26:17Z,alignmentforum,, 45539,https://www.alignmentforum.org/posts/y5jAuKqkShdjMNZab/morality-is-scary,Morality is Scary,['Wei Dai'],2021-12-02T06:35:07Z,alignmentforum,, 45556,https://www.alignmentforum.org/posts/bEKW5gBawZirJXREb/pathways-google-s-agi,Pathways: Google's AGI,['Lê Nguyên Hoang'],2021-09-25T07:02:30Z,alignmentforum,, 45570,https://www.alignmentforum.org/posts/hWag6E7XPCbdfaoKZ/epistemic-strategies-of-safety-capabilities-tradeoffs,Epistemic Strategies of Safety-Capabilities Tradeoffs,['adamShimi'],2021-10-22T08:22:51Z,alignmentforum,, 45580,https://www.alignmentforum.org/posts/5bd75cc58225bf0670375441/cooperative-oracles-stratified-pareto-optima-and-almost-stratified-pareto-optima,Cooperative Oracles: Stratified Pareto Optima and Almost Stratified Pareto Optima,['Scott Garrabrant'],2017-06-03T00:38:46Z,alignmentforum,, 45596,https://www.alignmentforum.org/posts/yAiqLmLFxvyANSfs2/counterfactual-oracles-online-supervised-learning-with,Counterfactual Oracles = online supervised learning with random selection of training episodes,['Wei Dai'],2019-09-10T08:29:08Z,alignmentforum,, 45617,https://www.alignmentforum.org/posts/xHnuX42WNZ9hq53bz/attempted-gears-analysis-of-agi-intervention-discussion-with-1,Attempted Gears Analysis of AGI Intervention Discussion With Eliezer,['Zvi'],2021-11-15T03:50:01Z,alignmentforum,, 45647,https://www.alignmentforum.org/posts/svhQMdsefdYFDq5YM/evaluations-project-arc-is-hiring-a-researcher-and-a-webdev-1,Evaluations project @ ARC is hiring a researcher and a webdev/engineer,['Beth Barnes'],2022-09-09T22:46:48Z,alignmentforum,, 45668,https://www.alignmentforum.org/posts/MG4ZjWQDrdpgeu8wG/zoom-in-an-introduction-to-circuits,Zoom In: An Introduction to Circuits,['evhub'],2020-03-10T19:36:14Z,alignmentforum,, 45686,https://www.alignmentforum.org/posts/BoA3agdkAzL6HQtQP/clarifying-and-predicting-agi,Clarifying and predicting AGI,['Richard_Ngo'],2023-05-04T15:55:26Z,alignmentforum,, 45702,https://www.alignmentforum.org/posts/8e3FmHY4598SJ9PNL/ai-benefits-post-5-outstanding-questions-on-governing,AI Benefits Post 5: Outstanding Questions on Governing Benefits,['Cullen'],2020-07-21T16:46:11Z,alignmentforum,, 45732,https://www.alignmentforum.org/posts/SgkaXQn3xqJkGQ2D8/cooperative-oracles,Cooperative Oracles,['Diffractor'],2018-09-01T08:05:56Z,alignmentforum,, 45749,https://www.alignmentforum.org/posts/27fiu3ur57HmfEhTv/the-alignment-newsletter-4-04-30-18,The Alignment Newsletter #4: 04/30/18,['Rohin Shah'],2018-04-30T16:00:13Z,alignmentforum,, 45776,https://www.alignmentforum.org/posts/huNvfttDpxCApC3xZ/an-132-complex-and-subtly-incorrect-arguments-as-an-obstacle,[AN #132]: Complex and subtly incorrect arguments as an obstacle to debate,['Rohin Shah'],2021-01-06T18:20:06Z,alignmentforum,, 45793,https://www.alignmentforum.org/posts/afCcihytsFtKdwSvp/announcing-aisic-2022-the-ai-safety-israel-conference,"Announcing AISIC 2022 - the AI Safety Israel Conference, October 19-20",['Davidmanheim'],2022-09-21T19:32:36Z,alignmentforum,, 45803,https://www.alignmentforum.org/posts/5bd75cc58225bf0670375415/acausal-trade-different-utilities-different-trades,"Acausal trade: different utilities, different trades",['Stuart_Armstrong'],2017-06-02T15:33:31Z,alignmentforum,, 
45830,https://www.alignmentforum.org/posts/KH8fcM6SK8EkcKcZr/autonomy-as-taking-responsibility-for-reference-maintenance,Autonomy as taking responsibility for reference maintenance,['Ramana Kumar'],2022-08-17T12:50:30Z,alignmentforum,, 45841,https://www.alignmentforum.org/posts/N64THGX7XNCqRtvPG/alignment-proposals-and-complexity-classes,Alignment proposals and complexity classes,['evhub'],2020-07-16T00:27:37Z,alignmentforum,, 45868,https://www.alignmentforum.org/posts/5bd75cc58225bf0670375427/acausal-trade-conclusion-theory-vs-practice,Acausal trade: conclusion: theory vs practice,['Stuart_Armstrong'],2017-05-16T19:33:51Z,alignmentforum,, 45883,https://www.alignmentforum.org/posts/ybmDkJAj3rdrrauuu/connectomics-seems-great-from-an-ai-x-risk-perspective,Connectomics seems great from an AI x-risk perspective,['Steven Byrnes'],2023-04-30T14:38:40Z,alignmentforum,, 45903,https://www.alignmentforum.org/posts/xxnPxELC4jLKaFKqG/learning-biases-and-rewards-simultaneously,Learning biases and rewards simultaneously,['Rohin Shah'],2019-07-06T01:45:50Z,alignmentforum,, 45917,https://www.alignmentforum.org/posts/Zj2PgP5A8vY2G3gYw/optimization-provenance,Optimization Provenance,['Adele Lopez'],2019-08-23T20:08:13Z,alignmentforum,, 45944,https://www.alignmentforum.org/posts/fsbcq9z7korjBTP8Z/understanding-strategic-deception-and-deceptive-alignment,Understanding strategic deception and deceptive alignment,"['Marius Hobbhahn', 'Mikita Balesni', 'Jérémy Scheurer', 'Dan Braun']",2023-09-25T16:27:47Z,alignmentforum,, 45959,https://www.alignmentforum.org/posts/S8WZ2rav9BqFAZoRM/causal-abstraction-toy-model-medical-sensor,Causal Abstraction Toy Model: Medical Sensor,['johnswentworth'],2019-12-11T21:12:51Z,alignmentforum,, 45975,https://www.alignmentforum.org/posts/MR5wJpE27ymE7M7iv/formalizing-the-qaci-alignment-formal-goal,formalizing the QACI alignment formal-goal,"['Tamsin Leake', 'JuliaHP']",2023-06-10T03:28:30Z,alignmentforum,, 45992,https://www.alignmentforum.org/posts/q9BmNh35xgXPRgJhm/why-you-should-care-about-goal-directedness,Why You Should Care About Goal-Directedness,['adamShimi'],2020-11-09T12:48:35Z,alignmentforum,, 46015,https://www.alignmentforum.org/posts/i32eyaARtFn6eD9Cg/oversight-leagues-the-training-game-as-a-feature,Oversight Leagues: The Training Game as a Feature,['Paul Bricman'],2022-09-09T10:08:03Z,alignmentforum,, 46034,https://www.alignmentforum.org/posts/9Dy5YRaoCxH9zuJqa/relaxed-adversarial-training-for-inner-alignment,Relaxed adversarial training for inner alignment,['evhub'],2019-09-10T23:03:08Z,alignmentforum,, 46062,https://www.alignmentforum.org/posts/rzkCTPnkydQxfkZsX/levels-of-goals-and-alignment,Levels of goals and alignment,['zeshen'],2022-09-16T16:44:50Z,alignmentforum,, 46089,https://www.alignmentforum.org/posts/pdaGN6pQyQarFHXF4/reward-is-not-the-optimization-target,Reward is not the optimization target,['TurnTrout'],2022-07-25T00:03:18Z,alignmentforum,, 46114,https://www.alignmentforum.org/posts/W6wBmQheDiFmfJqZy/brain-inspired-agi-and-the-lifetime-anchor,"Brain-inspired AGI and the ""lifetime anchor""",['Steven Byrnes'],2021-09-29T13:09:44Z,alignmentforum,, 46132,https://www.alignmentforum.org/posts/523mueiug9RapHtWb/language-models-can-be-utility-maximising-agents,Language Models can be Utility-Maximising Agents,['Raymond D'],2023-02-01T18:13:35Z,alignmentforum,, 46150,https://www.alignmentforum.org/posts/rZs6ddqNnW8LXuJqA/password-locked-models-a-stress-case-for-capabilities,Password-locked models: a stress case for capabilities evaluation,['Fabien Roger'],2023-08-03T14:53:12Z,alignmentforum,, 
46177,https://www.alignmentforum.org/posts/pneKTZG9KqnSe2RdQ/two-types-of-updatelessness,Two Types of Updatelessness,['abramdemski'],2018-02-15T20:19:55Z,alignmentforum,, 46189,https://www.alignmentforum.org/posts/TrvkWBwYvvJjSqSCj/a-broad-basin-of-attraction-around-human-values,A broad basin of attraction around human values?,['Wei Dai'],2022-04-12T05:15:15Z,alignmentforum,, 46208,https://www.alignmentforum.org/posts/gvzW46Z3BsaZsLc25/natural-abstractions-key-claims-theorems-and-critiques-1,"Natural Abstractions: Key claims, Theorems, and Critiques","['LawrenceC', 'Leon Lang', 'Erik Jenner']",2023-03-16T16:37:40Z,alignmentforum,, 46234,https://www.alignmentforum.org/posts/ngwNHAy5TjStZnJzQ/how-transparency-changed-over-time,How transparency changed over time,['ViktoriaMalyasova'],2022-07-30T04:36:32Z,alignmentforum,, 46264,https://www.alignmentforum.org/posts/AwMb7C72etphiRvah/unsolved-ml-safety-problems,Unsolved ML Safety Problems,['jsteinhardt'],2021-09-29T16:00:19Z,alignmentforum,, 46308,https://www.alignmentforum.org/posts/YAgQysWqoBJ7eA7Np/priorities-for-the-uk-foundation-models-taskforce,Priorities for the UK Foundation Models Taskforce,['Andrea_Miotti'],2023-07-21T15:23:34Z,alignmentforum,, 46348,https://www.alignmentforum.org/posts/EzAt4SbtQcXtDNhHK/confused-why-a-capabilities-research-is-good-for-alignment,"Confused why a ""capabilities research is good for alignment progress"" position isn't discussed more",['Kaj_Sotala'],2022-06-02T21:41:45Z,alignmentforum,, 46369,https://www.alignmentforum.org/posts/bqyCd38tACvKgqmXG/counterfactuals-versus-the-laws-of-physics,Counterfactuals versus the laws of physics,['Stuart_Armstrong'],2020-02-18T13:21:02Z,alignmentforum,, 46379,https://www.alignmentforum.org/posts/Ni8ocGupB2kGG2fA7/agi-safety-from-first-principles-conclusion,AGI safety from first principles: Conclusion,['Richard_Ngo'],2020-10-04T23:06:59Z,alignmentforum,, 46398,https://www.alignmentforum.org/posts/5bd75cc58225bf067037540c/infinite-ethics-comparisons,Infinite ethics comparisons,['Stuart_Armstrong'],2017-05-06T19:24:24Z,alignmentforum,, 46415,https://www.alignmentforum.org/posts/wydAtj6FkPDHkdtzS/my-take-on-michael-littman-on-the-hci-of-hai,"My take on Michael Littman on ""The HCI of HAI""",['Alex Flint'],2021-04-02T19:51:44Z,alignmentforum,, 46435,https://www.alignmentforum.org/posts/uvEyizLAGykH8LwMx/fundamental-vs-applied-mechanistic-interpretability-research,'Fundamental' vs 'applied' mechanistic interpretability research,['Lee Sharkey'],2023-05-23T18:26:18Z,alignmentforum,, 46445,https://www.alignmentforum.org/posts/XFL3vaA69mHxATWM7/frame-for-take-off-speeds-to-inform-compute-governance-and,Frame for Take-Off Speeds to inform compute governance & scaling alignment,['Logan Riggs'],2022-05-13T22:23:12Z,alignmentforum,, 46466,https://www.alignmentforum.org/posts/GWCgZrzWCZCuzGktv/200-cop-in-mi-the-case-for-analysing-toy-language-models,200 COP in MI: The Case for Analysing Toy Language Models,['Neel Nanda'],2022-12-28T21:07:04Z,alignmentforum,, 46484,https://www.alignmentforum.org/posts/DFarDnQjMnjsKvW8s/practical-pitfalls-of-causal-scrubbing,Practical Pitfalls of Causal Scrubbing,"['Jérémy Scheurer', 'Phil3', 'tony', 'jacquesthibs', 'David Lindner']",2023-03-27T07:47:31Z,alignmentforum,, 46503,https://www.alignmentforum.org/posts/zupqBxrNKpT5dhFQb/exploring-decision-theories-with-counterfactuals-and-dynamic,Exploring Decision Theories With Counterfactuals and Dynamic Agent Self-Pointers,['JoshuaOSHickman'],2021-12-18T21:50:14Z,alignmentforum,, 
46515,https://www.alignmentforum.org/posts/ocWqg2Pf2br4jMmKA/does-sgd-produce-deceptive-alignment,Does SGD Produce Deceptive Alignment?,['Mark Xu'],2020-11-06T23:48:10Z,alignmentforum,, 46539,https://www.alignmentforum.org/posts/ZbjyCuqpwCMMND4fv/robustness-of-model-graded-evaluations-and-automated,Robustness of Model-Graded Evaluations and Automated Interpretability,"['Simon Lermen', 'viluon']",2023-07-15T19:12:49Z,alignmentforum,, 46561,https://www.alignmentforum.org/posts/L2h9nAtPqEFK6atSJ/an-anthropomorphic-ai-dilemma,An anthropomorphic AI dilemma,['TsviBT'],2023-05-07T12:44:48Z,alignmentforum,, 46581,https://www.alignmentforum.org/posts/t2yeWvpGvzQ9sFrWc/an-151-how-sparsity-in-the-final-layer-makes-a-neural-net,[AN #151]: How sparsity in the final layer makes a neural net debuggable,['Rohin Shah'],2021-05-19T17:20:04Z,alignmentforum,, 46602,https://www.alignmentforum.org/posts/pTm6aEvmepJEA5cuK/parsing-abram-on-gradations-of-inner-alignment-obstacles,Parsing Abram on Gradations of Inner Alignment Obstacles,['Alex Flint'],2021-05-04T17:44:17Z,alignmentforum,, 46628,https://www.alignmentforum.org/posts/MygKP4iwdRL24eNsY/introduction-to-the-sequence-interpretability-research-for-1,Introduction to the sequence: Interpretability Research for the Most Important Century,['Evan R. Murphy'],2022-05-12T19:59:53Z,alignmentforum,, 46643,https://www.alignmentforum.org/posts/dkeDMktXtSjfoWnan/an-160-building-ais-that-learn-and-think-like-people,[AN #160]: Building AIs that learn and think like people,['Rohin Shah'],2021-08-13T17:10:04Z,alignmentforum,, 46673,https://www.alignmentforum.org/posts/fS7Zdj2e2xMqE6qja/more-christiano-cotra-and-yudkowsky-on-ai-progress,"More Christiano, Cotra, and Yudkowsky on AI progress","['Eliezer Yudkowsky', 'Ajeya Cotra']",2021-12-06T20:33:12Z,alignmentforum,, 46689,https://www.alignmentforum.org/posts/9aSi7koXHCakb82Fz/law-following-ai-2-intent-alignment-superintelligence,Law-Following AI 2: Intent Alignment + Superintelligence → Lawless AI (By Default),['Cullen'],2022-04-27T17:27:24Z,alignmentforum,, 46705,https://www.alignmentforum.org/posts/HDmcJv6SdyEFpFbcD/law-following-ai-4-don-t-rely-on-vicarious-liability,Law-Following AI 4: Don't Rely on Vicarious Liability,['Cullen'],2022-08-02T23:26:00Z,alignmentforum,, 46717,https://www.alignmentforum.org/posts/nA3n2vfCy3ffnjapw/models-modeling-models,Models Modeling Models,['Charlie Steiner'],2021-11-02T07:08:45Z,alignmentforum,, 46734,https://www.alignmentforum.org/posts/p7x32SEt43ZMC9r7r/embedded-agents,Embedded Agents,"['abramdemski', 'Scott Garrabrant']",2018-10-29T19:53:02Z,alignmentforum,, 46755,https://www.alignmentforum.org/posts/oH8KMnXHnw964QyS6/preface-to-the-sequence-on-value-learning,Preface to the sequence on value learning,['Rohin Shah'],2018-10-30T22:04:16Z,alignmentforum,, 46765,https://www.alignmentforum.org/posts/FNmzn33akiSesc4Ke/how-model-editing-could-help-with-the-alignment-problem,How model editing could help with the alignment problem,['Michael Ripa'],2023-09-30T17:47:25Z,alignmentforum,, 46800,https://www.alignmentforum.org/posts/XTgkhjNTEi97WHMi6/pavlov-generalizes,Pavlov Generalizes,['abramdemski'],2019-02-20T09:03:11Z,alignmentforum,, 46820,https://www.alignmentforum.org/posts/yXPT4nr4as7JvxLQa/classifying-specification-problems-as-variants-of-goodhart-s,Classifying specification problems as variants of Goodhart's Law,['Vika'],2019-08-19T20:40:29Z,alignmentforum,, 
46852,https://www.alignmentforum.org/posts/jRf4WENQnhssCb6mJ/davidad-s-bold-plan-for-alignment-an-in-depth-explanation,Davidad's Bold Plan for Alignment: An In-Depth Explanation,"['Charbel-Raphaël', 'Gabin']",2023-04-19T16:09:01Z,alignmentforum,, 46889,https://www.alignmentforum.org/posts/SbxWdhhwJWCpifTst/asot-some-ways-elk-could-still-be-solvable-in-practice,[ASoT] Some ways ELK could still be solvable in practice,['leogao'],2022-03-27T01:15:17Z,alignmentforum,, 46903,https://www.alignmentforum.org/posts/FzqKXpTDaouMF6Chj/paper-walkthrough-automated-circuit-discovery-with-arthur,Paper Walkthrough: Automated Circuit Discovery with Arthur Conmy,['Neel Nanda'],2023-08-29T22:07:04Z,alignmentforum,, 46913,https://www.alignmentforum.org/posts/rauMEna2ddf26BqiE/alignment-allows-nonrobust-decision-influences-and-doesn-t,"Alignment allows ""nonrobust"" decision-influences and doesn't require robust grading",['TurnTrout'],2022-11-29T06:23:00Z,alignmentforum,, 46930,https://www.alignmentforum.org/posts/Q7XWGqL4HjjRmhEyG/internal-independent-review-for-language-model-agent,Internal independent review for language model agent alignment,['Seth Herd'],2023-07-07T06:54:12Z,alignmentforum,, 46956,https://www.alignmentforum.org/posts/SL9mKhgdmDKXmxwE4/learning-the-prior,Learning the prior,['paulfchristiano'],2020-07-05T21:00:01Z,alignmentforum,, 46974,https://www.alignmentforum.org/posts/X2fRsTjd2kQ89pipE/an-74-separating-beneficial-ai-into-competence-alignment-and,"[AN #74]: Separating beneficial AI into competence, alignment, and coping with impacts",['Rohin Shah'],2019-11-20T18:20:02Z,alignmentforum,, 46989,https://www.alignmentforum.org/posts/qoHwKgLFfPcEuwaba/conditioning-predictive-models-making-inner-alignment-as,Conditioning Predictive Models: Making inner alignment as easy as possible,"['evhub', 'Adam Jermyn', 'Johannes Treutlein', 'Rubi J. Hudson', 'kcwoolverton']",2023-02-07T20:04:20Z,alignmentforum,, 
47027,https://www.alignmentforum.org/posts/42YykiTqtGMyJAjDM/alignment-as-translation,Alignment as Translation,['johnswentworth'],2020-03-19T21:40:01Z,alignmentforum,, 47044,https://www.alignmentforum.org/posts/A9tJFJY7DsGTFKKkh/high-stakes-alignment-via-adversarial-training-redwood,High-stakes alignment via adversarial training [Redwood Research report],"['dmz', 'LawrenceC', 'Nate Thomas']",2022-05-05T00:59:19Z,alignmentforum,, 47070,https://www.alignmentforum.org/posts/9a2asxypuNjCmga3p/iteration-fixed-point-exercises,Iteration Fixed Point Exercises,"['Scott Garrabrant', 'SamEisenstat']",2018-11-22T00:35:10Z,alignmentforum,, 47089,https://www.alignmentforum.org/posts/Kf6sKZudduhJmykTg/the-preference-fulfillment-hypothesis,The Preference Fulfillment Hypothesis,['Kaj_Sotala'],2023-02-26T10:55:13Z,alignmentforum,, 47112,https://www.alignmentforum.org/posts/DsYe7TKc4NhyJuPEy/benchmarking-proposals-on-risk-scenarios,Benchmarking Proposals on Risk Scenarios,['Paul Bricman'],2022-08-20T10:01:53Z,alignmentforum,, 47135,https://www.alignmentforum.org/posts/HbtRFDiyTDpPfRLqm/an-123-inferring-what-is-valuable-in-order-to-align,[AN #123]: Inferring what is valuable in order to align recommender systems,['Rohin Shah'],2020-10-28T17:00:06Z,alignmentforum,, 47159,https://www.alignmentforum.org/posts/KpD2fJa6zo8o2MBxg/consciousness-as-a-conflationary-alliance-term,Consciousness as a conflationary alliance term,['Andrew_Critch'],2023-07-10T08:09:49Z,alignmentforum,, 47172,https://www.alignmentforum.org/posts/GAbEcLDQLtHZE78TY/contingency-a-conceptual-tool-from-evolutionary-biology-for-2,Contingency: A Conceptual Tool from Evolutionary Biology for Alignment,['clem_acs'],2023-06-12T20:54:04Z,alignmentforum,, 47189,https://www.alignmentforum.org/posts/X7S3u5E4KktLp7gHz/tessellating-hills-a-toy-model-for-demons-in-imperfect,Tessellating Hills: a toy model for demons in imperfect search,['DaemonicSigil'],2020-02-20T00:12:50Z,alignmentforum,, 47201,https://www.alignmentforum.org/posts/hrYvdrqMyCnw3pBkd/take-7-you-should-talk-about-the-human-s-utility-function,"Take 7: You should talk about ""the human's utility function"" less.",['Charlie Steiner'],2022-12-08T08:14:17Z,alignmentforum,, 47212,https://www.alignmentforum.org/posts/cGLgs3t9md7v7cCm4/corrigibility-as-constrained-optimisation,Corrigibility as Constrained Optimisation,['Henrik Åslund'],2019-04-11T20:09:52Z,alignmentforum,, 47226,https://www.alignmentforum.org/posts/tHChCJB9piCTD7HEx/beyond-the-human-training-distribution-would-the-ai-ceo,Beyond the human training distribution: would the AI CEO create almost-illegal teddies?,['Stuart_Armstrong'],2021-10-18T21:10:53Z,alignmentforum,, 47237,https://www.alignmentforum.org/posts/f2C4CWNmrSKMs6SaK/linkpost-github-copilot-productivity-experiment,Linkpost: Github Copilot productivity experiment,['Daniel Kokotajlo'],2022-09-08T04:41:41Z,alignmentforum,, 47247,https://www.alignmentforum.org/posts/5HMqSGQ9ad9r9Hibw/committing-assuming-externalizing-and-internalizing,"Committing, Assuming, Externalizing, and Internalizing",['Scott Garrabrant'],2020-11-09T16:59:02Z,alignmentforum,, 47257,https://www.alignmentforum.org/posts/3r44dhh3uK7s9Pveq/rfc-philosophical-conservatism-in-ai-alignment-research,RFC: Philosophical Conservatism in AI Alignment Research,['Gordon Seidoh Worley'],2018-05-15T03:29:02Z,alignmentforum,, 47267,https://www.alignmentforum.org/posts/YuJNoCEgeWJfBtdtQ/distance-functions-are-hard-1,Distance Functions are Hard,['Grue_Slinky'],2019-08-13T17:33:15Z,alignmentforum,, 
47288,https://www.alignmentforum.org/posts/5bd75cc58225bf06703750a5/logical-counterfactuals-for-random-algorithms,Logical counterfactuals for random algorithms,['Vanessa Kosoy'],2016-01-06T13:29:52Z,alignmentforum,, 47299,https://www.alignmentforum.org/posts/QTL5tRz7Q54bpcwdE/ai-risk-hub-in-singapore-1,AI risk hub in Singapore?,['Daniel Kokotajlo'],2020-10-29T11:45:16Z,alignmentforum,, 47317,https://www.alignmentforum.org/posts/qpJbFta7RwpHcFarc/can-we-make-peace-with-moral-indeterminacy,Can we make peace with moral indeterminacy?,['Charlie Steiner'],2019-10-03T12:56:44Z,alignmentforum,, 47331,https://www.alignmentforum.org/posts/3nDR23ksSQJ98WNDm/developmental-stages-of-gpts,Developmental Stages of GPTs,['orthonormal'],2020-07-26T22:03:20Z,alignmentforum,, 47357,https://www.alignmentforum.org/posts/6kgBAJBGp5Yum8oGj/an-139-how-the-simplicity-of-reality-explains-the-success-of,[AN #139]: How the simplicity of reality explains the success of neural nets,['Rohin Shah'],2021-02-24T18:30:04Z,alignmentforum,, 47381,https://www.alignmentforum.org/posts/z2ofM2oZQwmcWFt8N/ai-services-as-a-research-paradigm,AI Services as a Research Paradigm,['VojtaKovarik'],2020-04-20T13:00:40Z,alignmentforum,, 47400,https://www.alignmentforum.org/posts/JMpERTz9TcnMfEapF/knowledge-is-not-just-precipitation-of-action,Knowledge is not just precipitation of action,['Alex Flint'],2021-06-18T23:26:17Z,alignmentforum,, 47419,https://www.alignmentforum.org/posts/mgjHS6ou7DgwhKPpu/a-rough-and-incomplete-review-of-some-of-john-wentworth-s,A rough and incomplete review of some of John Wentworth's research,['So8res'],2023-03-28T18:52:51Z,alignmentforum,, 47443,https://www.alignmentforum.org/posts/RorXWkriXwErvJtvn/agi-will-have-learnt-utility-functions,AGI will have learnt utility functions,['beren'],2023-01-25T19:42:11Z,alignmentforum,, 47469,https://www.alignmentforum.org/posts/RihYwmskuJT9Rkbjq/the-longest-training-run,The longest training run,"['Jsevillamol', 'Tamay', 'Owen Dudney', 'anson.ho']",2022-08-17T17:18:40Z,alignmentforum,, 47487,https://www.alignmentforum.org/posts/RmPKdMqSr2xRwrqyE/the-dualist-predict-o-matic-usd100-prize,The Dualist Predict-O-Matic ($100 prize),['John_Maxwell'],2019-10-17T06:45:46Z,alignmentforum,, 47510,https://www.alignmentforum.org/posts/5bd75cc58225bf06703754b2/the-three-levels-of-goodhart-s-curse,The Three Levels of Goodhart's Curse,['Scott Garrabrant'],2017-12-30T16:41:25Z,alignmentforum,, 47531,https://www.alignmentforum.org/posts/bvdbx6tW9yxfxAJxe/catastrophic-risks-from-ai-1-summary,Catastrophic Risks from AI #1: Summary,"['Dan H', 'Mantas Mazeika', 'ThomasW']",2023-06-22T17:09:41Z,alignmentforum,, 47561,https://www.alignmentforum.org/posts/tHxXdAn8Yuiy9y2pZ/ai-safety-without-goal-directed-behavior,AI safety without goal-directed behavior,['Rohin Shah'],2019-01-07T07:48:19Z,alignmentforum,, 47575,https://www.alignmentforum.org/posts/6m5qqkeBTrqQsegGi/inner-alignment-requires-making-assumptions-about-human,Inner alignment requires making assumptions about human values,['Matthew Barnett'],2020-01-20T18:38:27Z,alignmentforum,, 47586,https://www.alignmentforum.org/posts/nyDnLif4cjeRe9DSv/generalizing-the-power-seeking-theorems,Generalizing the Power-Seeking Theorems,['TurnTrout'],2020-07-27T00:28:26Z,alignmentforum,, 47603,https://www.alignmentforum.org/posts/9bpACZn6kG2Ec6CPu/how-i-think-about-alignment,How I think about alignment,['Linda Linsefors'],2022-08-13T10:01:01Z,alignmentforum,, 
47618,https://www.alignmentforum.org/posts/svhbnSdxW3XmFXXTK/anthropic-decision-theory-i-sleeping-beauty-and-selflessness,Anthropic decision theory I: Sleeping beauty and selflessness,['Stuart_Armstrong'],2011-11-01T11:41:33Z,alignmentforum,, 47627,https://www.alignmentforum.org/posts/5bd75cc58225bf067037503c/chatbots-or-set-answers-not-wbes,"Chatbots or set answers, not WBEs",['Stuart_Armstrong'],2015-10-09T15:48:36Z,alignmentforum,, 47640,https://www.alignmentforum.org/posts/WqYSmjSsE3hi8Lgot/experiences-and-learnings-from-both-sides-of-the-ai-safety,Experiences and learnings from both sides of the AI safety job market,['Marius Hobbhahn'],2023-11-15T15:40:32Z,alignmentforum,, 47663,https://www.alignmentforum.org/posts/oqghwKKifztYWLsea/four-motivations-for-learning-normativity,Four Motivations for Learning Normativity,['abramdemski'],2021-03-11T20:13:40Z,alignmentforum,, 47690,https://www.alignmentforum.org/posts/FWvzwCDRgcjb9sigb/why-agent-foundations-an-overly-abstract-explanation,Why Agent Foundations? An Overly Abstract Explanation,['johnswentworth'],2022-03-25T23:17:10Z,alignmentforum,, 47707,https://www.alignmentforum.org/posts/nDHbgjdddG5EN6ocg/announcement-ai-alignment-prize-round-4-winners,Announcement: AI alignment prize round 4 winners,['cousin_it'],2019-01-20T14:46:48Z,alignmentforum,, 47720,https://www.alignmentforum.org/posts/w6BtMqKRLxG9bNLMr/the-catastrophic-convergence-conjecture,The Catastrophic Convergence Conjecture,['TurnTrout'],2020-02-14T21:16:59Z,alignmentforum,, 47740,https://www.alignmentforum.org/posts/kuQfnotjkQA4Kkfou/inference-time-intervention-eliciting-truthful-answers-from,Inference-Time Intervention: Eliciting Truthful Answers from a Language Model,['likenneth'],2023-06-11T05:38:35Z,alignmentforum,, 47750,https://www.alignmentforum.org/posts/w4aeAFzSAguvqA5qu/how-to-go-from-interpretability-to-alignment-just-retarget,How To Go From Interpretability To Alignment: Just Retarget The Search,['johnswentworth'],2022-08-10T16:08:11Z,alignmentforum,, 47759,https://www.alignmentforum.org/posts/4sEK5mtDYWJo2gHJn/catastrophic-risks-from-ai-3-ai-race,Catastrophic Risks from AI #3: AI Race,"['Dan H', 'Mantas Mazeika', 'ThomasW']",2023-06-23T19:21:07Z,alignmentforum,, 47790,https://www.alignmentforum.org/posts/2N7eEKDuL5sHQou3N/spooky-action-at-a-distance-in-the-loss-landscape,Spooky action at a distance in the loss landscape,"['Jesse Hoogland', 'Filip Sondej']",2023-01-28T00:22:47Z,alignmentforum,, 47805,https://www.alignmentforum.org/posts/bxt7uCiHam4QXrQAA/cyborgism,Cyborgism,"['NicholasKees', 'janus']",2023-02-10T14:47:48Z,alignmentforum,, 47821,https://www.alignmentforum.org/posts/ex2qcux8TQXigGAfv/usd100-usd50-rewards-for-good-references,$100/$50 rewards for good references,['Stuart_Armstrong'],2021-12-03T16:55:57Z,alignmentforum,, 47831,https://www.alignmentforum.org/posts/5bd75cc58225bf0670374f87/agent-simulates-predictor-using-second-level-oracles,Agent Simulates Predictor using Second-Level Oracles,['orthonormal'],2015-06-06T22:08:37Z,alignmentforum,, 47846,https://www.alignmentforum.org/posts/6t9F5cS3JjtSspbAZ/finite-factored-sets-lw-transcript-with-running-commentary,Finite Factored Sets: LW transcript with running commentary,"['Rob Bensinger', 'Scott Garrabrant']",2021-06-27T16:02:06Z,alignmentforum,, 47870,https://www.alignmentforum.org/posts/wkhfytDQvfx3Jeie9/speculations-against-gpt-n-writing-alignment-papers,Speculations against GPT-n writing alignment papers,['Donald Hobson'],2021-06-07T21:13:17Z,alignmentforum,, 
47890,https://www.alignmentforum.org/posts/nvkiGW4vH8CCHfoNi/specification-gaming-examples-in-ai,Specification gaming examples in AI,['Samuel Rødal'],2018-11-10T12:00:29Z,alignmentforum,, 47903,https://www.alignmentforum.org/posts/yXfka98pZXAmXiyDp/my-current-take-on-counterfactuals,My Current Take on Counterfactuals,['abramdemski'],2021-04-09T17:51:07Z,alignmentforum,, 47927,https://www.alignmentforum.org/posts/wG5KTj5jFiibydgmk/arc-paper-formalizing-the-presumption-of-independence,ARC paper: Formalizing the presumption of independence,['Erik Jenner'],2022-11-20T01:22:55Z,alignmentforum,, 47943,https://www.alignmentforum.org/posts/pihmQv5XezwkxJk2a/aligned-ai-via-monitoring-objectives-in-autogpt-like-systems,Aligned AI via monitoring objectives in AutoGPT-like systems,['Paul Colognese'],2023-05-24T15:59:14Z,alignmentforum,, 47969,https://www.alignmentforum.org/posts/yFQkFNCszoJPZTnK6/analogies-and-general-priors-on-intelligence,Analogies and General Priors on Intelligence,"['riceissa', 'Sammy Martin']",2021-08-20T21:03:19Z,alignmentforum,, 47986,https://www.alignmentforum.org/posts/5GxLiJJEzvqmTNyCK/the-alignment-problem-from-a-deep-learning-perspective-major,The Alignment Problem from a Deep Learning Perspective (major rewrite),"['SoerenMind', 'Richard_Ngo', 'LawrenceC']",2023-01-10T16:06:05Z,alignmentforum,, 48021,https://www.alignmentforum.org/posts/zvEbeZ6opjPJiQnFE/emergent-modularity-and-safety,Emergent modularity and safety,['Richard_Ngo'],2021-10-21T01:54:09Z,alignmentforum,, 48039,https://www.alignmentforum.org/posts/HJMQg8MksHq5ipDpN/an-136-how-well-will-gpt-n-perform-on-downstream-tasks,[AN #136]: How well will GPT-N perform on downstream tasks?,['Rohin Shah'],2021-02-03T18:10:04Z,alignmentforum,, 48064,https://www.alignmentforum.org/posts/S5oWwZMJBvfChSquW/idealized-factored-cognition,Idealized Factored Cognition,['Rafael Harth'],2020-11-30T18:49:47Z,alignmentforum,, 48085,https://www.alignmentforum.org/posts/egzqHKkzhuZuivHZ4/thoughts-on-gradient-hacking,Thoughts on gradient hacking,['Richard_Ngo'],2021-09-03T13:02:44Z,alignmentforum,, 48100,https://www.alignmentforum.org/posts/mAwxebLw3nYbDivmt/scaffolded-llms-less-obvious-concerns,Scaffolded LLMs: Less Obvious Concerns,['Stephen Fowler'],2023-06-16T10:39:59Z,alignmentforum,, 48130,https://www.alignmentforum.org/posts/PJLABqQ962hZEqhdB/debate-update-obfuscated-arguments-problem,Debate update: Obfuscated arguments problem,['Beth Barnes'],2020-12-23T03:24:38Z,alignmentforum,, 48157,https://www.alignmentforum.org/posts/9Hxa6pxRrxkwjBKib/an-163-using-finite-factored-sets-for-causal-and-temporal,[AN #163]: Using finite factored sets for causal and temporal inference,['Rohin Shah'],2021-09-08T17:20:05Z,alignmentforum,, 48172,https://www.alignmentforum.org/posts/mxXcPzpgGx4f8eK7v/2019-review-rewrite-seeking-power-is-often-robustly,2019 Review Rewrite: Seeking Power is Often Robustly Instrumental in MDPs,['TurnTrout'],2020-12-23T17:16:10Z,alignmentforum,, 48185,https://www.alignmentforum.org/posts/N5Jm6Nj4HkNKySA5Z/finite-factored-sets,Finite Factored Sets,['Scott Garrabrant'],2021-05-23T20:52:49Z,alignmentforum,, 48203,https://www.alignmentforum.org/posts/hmPCPyr6JFLEigHJx/goal-direction-for-simulated-agents,Goal-Direction for Simulated Agents,['Raymond D'],2023-07-12T17:06:29Z,alignmentforum,, 48216,https://www.alignmentforum.org/posts/Hx48HgHzDTsSFoJui/a-short-dialogue-on-the-meaning-of-reward-functions,A Short Dialogue on the Meaning of Reward Functions,"['Leon Lang', 'Quintin Pope', 'peligrietzer']",2022-11-19T21:04:30Z,alignmentforum,, 
48240,https://www.alignmentforum.org/posts/ogHr8SvGqg9pW5wsT/capabilities-and-alignment-of-llm-cognitive-architectures,Capabilities and alignment of LLM cognitive architectures,['Seth Herd'],2023-04-18T16:29:30Z,alignmentforum,, 48268,https://www.alignmentforum.org/posts/KQfYieur2DFRZDamd/why-not-just-build-weak-ai-tools-for-ai-alignment-research,Why Not Just... Build Weak AI Tools For AI Alignment Research?,['johnswentworth'],2023-03-05T00:12:34Z,alignmentforum,, 48288,https://www.alignmentforum.org/posts/a2NZr87sGYpXhzsth/debate-minus-factored-cognition,Debate Minus Factored Cognition,['abramdemski'],2020-12-29T22:59:20Z,alignmentforum,, 48307,https://www.alignmentforum.org/posts/mDTded2Dn7BKRBEPX/penalizing-impact-via-attainable-utility-preservation,Penalizing Impact via Attainable Utility Preservation,['TurnTrout'],2018-12-28T21:46:01Z,alignmentforum,, 48324,https://www.alignmentforum.org/posts/8q2ySr7yxx7MSR35i/tournesol-youtube-and-ai-risk,"Tournesol, YouTube and AI Risk",['adamShimi'],2021-02-12T18:56:18Z,alignmentforum,, 48338,https://www.alignmentforum.org/posts/aEjckcqHZZny9L2zy/emergent-deception-and-emergent-optimization,Emergent Deception and Emergent Optimization,['jsteinhardt'],2023-02-20T02:40:10Z,alignmentforum,, 48360,https://www.alignmentforum.org/posts/4nZRzoGTqg8xy5rr8/the-reward-engineering-problem,The reward engineering problem,['paulfchristiano'],2019-01-16T18:47:24Z,alignmentforum,, 48381,https://www.alignmentforum.org/posts/HE5DL6XeomYxFab74/modeling-agi-safety-frameworks-with-causal-influence-1,Modeling AGI Safety Frameworks with Causal Influence Diagrams,['Ramana Kumar'],2019-06-21T12:50:08Z,alignmentforum,, 48391,https://www.alignmentforum.org/posts/MRFXpedeKJRa324dL/brute-force-searching-for-alignment,Brute force searching for alignment,['Donald Hobson'],2021-06-27T21:54:27Z,alignmentforum,, 48402,https://www.alignmentforum.org/posts/5bd75cc58225bf0670374e76/simplicity-priors-with-reflective-oracles,Simplicity priors with reflective oracles,['Benya_Fallenstein'],2014-11-15T06:39:19Z,alignmentforum,, 48423,https://www.alignmentforum.org/posts/d4NgfKY3cq9yiBLSM/goals-and-short-descriptions,Goals and short descriptions,['Michele Campolo'],2020-07-02T17:41:53Z,alignmentforum,, 48443,https://www.alignmentforum.org/posts/LWmmfTvptiJp7wvFg/epistemic-strategies-of-selection-theorems,Epistemic Strategies of Selection Theorems,['adamShimi'],2021-10-18T08:57:23Z,alignmentforum,, 48465,https://www.alignmentforum.org/posts/kcZZAsEjwrbczxN2i/causal-scrubbing-appendix,Causal scrubbing: Appendix,"['LawrenceC', 'Adrià Garriga-alonso', 'Nicholas Goldowsky-Dill', 'ryan_greenblatt', 'jenny', 'Ansh Radhakrishnan', 'Buck', 'Nate Thomas']",2022-12-03T00:58:46Z,alignmentforum,, 48487,https://www.alignmentforum.org/posts/rQDYQrDjPGqjrf8Mk/bridging-expected-utility-maximization-and-optimization,Bridging Expected Utility Maximization and Optimization,['Whispermute'],2022-08-05T08:18:26Z,alignmentforum,, 48498,https://www.alignmentforum.org/posts/5bd75cc58225bf0670375072/a-sketch-of-a-value-learning-sovereign,A sketch of a value-learning sovereign,['jessicata'],2015-12-20T21:32:45Z,alignmentforum,, 48523,https://www.alignmentforum.org/posts/wJK944YqvFwjdbqCP/four-ways-an-impact-measure-could-help-alignment,Four Ways An Impact Measure Could Help Alignment,['Matthew Barnett'],2019-08-08T00:10:14Z,alignmentforum,, 
48553,https://www.alignmentforum.org/posts/tEf8fEFCkFtPyg9pm/axrp-episode-13-first-principles-of-agi-safety-with-richard,AXRP Episode 13 - First Principles of AGI Safety with Richard Ngo,['DanielFilan'],2022-03-31T05:20:18Z,alignmentforum,, 48601,https://www.alignmentforum.org/posts/khFC2a4pLPvGtXAGG/how-to-catch-an-ai-liar-lie-detection-in-black-box-llms-by,How to Catch an AI Liar: Lie Detection in Black-Box LLMs by Asking Unrelated Questions,"['JanBrauner', 'Owain_Evans', 'SoerenMind']",2023-09-28T18:53:59Z,alignmentforum,, 48618,https://www.alignmentforum.org/posts/E9G5JYbXZ3QK9aXTm/an-63-how-architecture-search-meta-learning-and-environment,"[AN #63] How architecture search, meta learning, and environment design could lead to general intelligence",['Rohin Shah'],2019-09-10T19:10:01Z,alignmentforum,, 48640,https://www.alignmentforum.org/posts/pNcFYZnPdXyL2RfgA/using-gpt-eliezer-against-chatgpt-jailbreaking,Using GPT-Eliezer against ChatGPT Jailbreaking,"['Stuart_Armstrong', 'rgorman']",2022-12-06T19:54:55Z,alignmentforum,, 48657,https://www.alignmentforum.org/posts/ejtFsvyhRkMofKAFy/200-cop-in-mi-interpreting-algorithmic-problems,200 COP in MI: Interpreting Algorithmic Problems,['Neel Nanda'],2022-12-31T19:55:39Z,alignmentforum,, 48674,https://www.alignmentforum.org/posts/jYdAxH8BarPT4fqnb/a-dilemma-for-prosaic-ai-alignment,A dilemma for prosaic AI alignment,['Daniel Kokotajlo'],2019-12-17T22:11:02Z,alignmentforum,, 48700,https://www.alignmentforum.org/posts/qqG2PdZ7pEcM6ev3S/instrumental-occam,Instrumental Occam?,['abramdemski'],2020-01-31T19:27:11Z,alignmentforum,, 48711,https://www.alignmentforum.org/posts/FnZws8NuKw6BJzmvZ/counterexamples-to-some-elk-proposals,Counterexamples to some ELK proposals,['paulfchristiano'],2021-12-31T17:05:11Z,alignmentforum,, 48737,https://www.alignmentforum.org/posts/54jDNkonygFheKL9H/alignment-newsletter-44,Alignment Newsletter #44,['Rohin Shah'],2019-02-06T08:30:01Z,alignmentforum,, 48765,https://www.alignmentforum.org/posts/BMj6uMuyBidrdZkiD/corrigibility-as-outside-view,Corrigibility as outside view,['TurnTrout'],2020-05-08T21:56:18Z,alignmentforum,, 48781,https://www.alignmentforum.org/posts/9AXSrp5MAThZZEfTc/ai-safety-camp-virtual-edition-2023,"AI Safety Camp, Virtual Edition 2023",['Linda Linsefors'],2023-01-06T11:09:07Z,alignmentforum,, 48790,https://www.alignmentforum.org/posts/tiKG7gvQ33vf8QAgy/wittgenstein-and-ml-parameters-vs-architecture,Wittgenstein and ML — parameters vs architecture,['Cleo Nardo'],2023-03-24T04:54:08Z,alignmentforum,, 48801,https://www.alignmentforum.org/posts/eDicGjD9yte6FLSie/interpreting-neural-networks-through-the-polytope-lens,Interpreting Neural Networks through the Polytope Lens,"['Sid Black', 'Lee Sharkey', 'Connor Leahy', 'beren', 'CRG', 'merizian', 'Eric Winsor', 'Dan Braun']",2022-09-23T17:58:31Z,alignmentforum,, 48822,https://www.alignmentforum.org/posts/amGDvs2ztj35mBF4Y/switching-hosting-providers-today-there-probably-will-be,"Switching hosting providers today, there probably will be some hiccups",['habryka'],2018-11-15T19:45:59Z,alignmentforum,, 48831,https://www.alignmentforum.org/posts/bAAtiG8og7CxH3cXG/review-of-fun-with-12-ooms-of-compute,"Review of ""Fun with +12 OOMs of Compute""","['adamShimi', 'Joe_Collman', 'Gyrodiot']",2021-03-28T14:55:37Z,alignmentforum,, 48849,https://www.alignmentforum.org/posts/QvwSr5LsxyDeaPK5s/existential-risk-from-ai-survey-results,"""Existential risk from AI"" survey results",['Rob Bensinger'],2021-06-01T20:02:06Z,alignmentforum,, 
48863,https://www.alignmentforum.org/posts/rFhxbWCdECxuT9xa2/notes-on-the-mathematics-of-llm-architectures,Notes on the Mathematics of LLM Architectures,['Spencer Becker-Kahn'],2023-02-09T01:45:49Z,alignmentforum,, 48872,https://www.alignmentforum.org/posts/32sm7diYTky5KhF6w/distributed-decisions,Distributed Decisions,['johnswentworth'],2022-05-29T02:43:59Z,alignmentforum,, 48890,https://www.alignmentforum.org/posts/vHcGGrnzcshybrCJD/language-model-alignment-research-internships,Language Model Alignment Research Internships,['Ethan Perez'],2021-12-13T19:53:32Z,alignmentforum,, 48914,https://www.alignmentforum.org/posts/pcomQ4Fwi7FnfBZBR/how-does-gradient-descent-interact-with-goodhart,How does Gradient Descent Interact with Goodhart?,['Scott Garrabrant'],2019-02-02T00:14:52Z,alignmentforum,, 48928,https://www.alignmentforum.org/posts/Mrz2srZWc7EzbADSo/wrapper-minds-are-the-enemy,wrapper-minds are the enemy,['nostalgebraist'],2022-06-17T01:58:05Z,alignmentforum,, 48945,https://www.alignmentforum.org/posts/qgK7smTvJ4DB8rZ6h/othello-gpt-future-work-i-am-excited-about,Othello-GPT: Future Work I Am Excited About,['Neel Nanda'],2023-03-29T22:13:27Z,alignmentforum,, 48968,https://www.alignmentforum.org/posts/yRAo2KEGWenKYZG9K/discovering-language-model-behaviors-with-model-written,Discovering Language Model Behaviors with Model-Written Evaluations,"['evhub', 'Ethan Perez']",2022-12-20T20:08:12Z,alignmentforum,, 48989,https://www.alignmentforum.org/posts/5bd75cc58225bf0670375432/futarchy-fix,Futarchy Fix,['abramdemski'],2017-05-30T05:46:39Z,alignmentforum,, 49013,https://www.alignmentforum.org/posts/Cw84NJXAma85AmpnH/my-take-on-higher-order-game-theory,My take on higher-order game theory,['Nisan'],2021-11-30T05:56:01Z,alignmentforum,, 49025,https://www.alignmentforum.org/posts/ASoGszmr9C5MPLtpC/definitions-of-objective-should-be-probable-and-predictive,Definitions of “objective” should be Probable and Predictive,['Rohin Shah'],2023-01-06T15:40:31Z,alignmentforum,, 49042,https://www.alignmentforum.org/posts/QZiGEDiobFz8ropA5/inferring-utility-functions-from-locally-non-transitive,Inferring utility functions from locally non-transitive preferences,['Jan'],2022-02-10T10:33:18Z,alignmentforum,, 49053,https://www.alignmentforum.org/posts/rxQbX2JpigjnbnL3A/two-challenges-for-elk,Two Challenges for ELK,['derek shiller'],2022-02-21T05:49:15Z,alignmentforum,, 49073,https://www.alignmentforum.org/posts/waAfXvcmbqaPHRA7B/alignment-newsletter-39,Alignment Newsletter #39,['Rohin Shah'],2019-01-01T08:10:01Z,alignmentforum,, 49100,https://www.alignmentforum.org/posts/72scWeZRta2ApsKja/epistemological-vigilance-for-alignment,Epistemological Vigilance for Alignment,['adamShimi'],2022-06-06T00:27:44Z,alignmentforum,, 49137,https://www.alignmentforum.org/posts/4RH5cMSBLZcv8DEw2/the-reverse-goodhart-problem,The reverse Goodhart problem,['Stuart_Armstrong'],2021-06-08T15:48:03Z,alignmentforum,, 49148,https://www.alignmentforum.org/posts/u6KXXmKFbXfWzoAXn/a-circuit-for-python-docstrings-in-a-4-layer-attention-only,A circuit for Python docstrings in a 4-layer attention-only transformer,"['StefanHex', 'Jett']",2023-02-20T19:35:14Z,alignmentforum,, 49167,https://www.alignmentforum.org/posts/a2io2mcxTWS4mxodF/results-from-a-survey-on-tool-use-and-workflows-in-alignment,Results from a survey on tool use and workflows in alignment research,"['jacquesthibs', 'Jan', 'janus', 'Logan Riggs']",2022-12-19T15:19:53Z,alignmentforum,, 
49192,https://www.alignmentforum.org/posts/BKjJJH2cRpJcAnP7T/thoughts-on-human-models,Thoughts on Human Models,"['Ramana Kumar', 'Scott Garrabrant']",2019-02-21T09:10:44Z,alignmentforum,, 49213,https://www.alignmentforum.org/posts/hnzHrdqn3nrjveayv/how-to-transformer-mechanistic-interpretability-in-50-lines,How-to Transformer Mechanistic Interpretability—in 50 lines of code or less!,['StefanHex'],2023-01-24T18:45:01Z,alignmentforum,, 49234,https://www.alignmentforum.org/posts/R6gPKJAq6dbuLNkwG/an-99-doubling-times-for-the-efficiency-of-ai-algorithms,[AN #99]: Doubling times for the efficiency of AI algorithms,['Rohin Shah'],2020-05-13T17:20:03Z,alignmentforum,, 49261,https://www.alignmentforum.org/posts/ZqfT5xTuNf6okrepY/a-critique-of-non-obstruction,A Critique of Non-Obstruction,['Joe_Collman'],2021-02-03T08:45:42Z,alignmentforum,, 49280,https://www.alignmentforum.org/posts/gzWb5kWwzhdaqmyTt/if-i-were-a-well-intentioned-ai-i-image-classifier,If I were a well-intentioned AI... I: Image classifier,['Stuart_Armstrong'],2020-02-26T12:39:59Z,alignmentforum,, 49313,https://www.alignmentforum.org/posts/cecqH7PvsNkrxFvwe/new-tool-for-exploring-ea-forum-lesswrong-and-alignment-1,"New tool for exploring EA Forum, LessWrong and Alignment Forum - Tree of Tags",['Filip Sondej'],2022-09-13T17:33:55Z,alignmentforum,, 49335,https://www.alignmentforum.org/posts/G3tuxF4X5R5BY7fut/want-to-predict-explain-control-the-output-of-gpt-4-then,"Want to predict/explain/control the output of GPT-4? Then learn about the world, not about transformers.",['Cleo Nardo'],2023-03-16T03:08:53Z,alignmentforum,, 49357,https://www.alignmentforum.org/posts/svE3S6NKdPYoGepzq/topological-fixed-point-exercises,Topological Fixed Point Exercises,"['Scott Garrabrant', 'SamEisenstat']",2018-11-17T01:40:06Z,alignmentforum,, 49378,https://www.alignmentforum.org/posts/5HtDzRAk7ePWsiL2L/open-problems-in-ai-x-risk-pais-5,Open Problems in AI X-Risk [PAIS #5],"['Dan H', 'ThomasW']",2022-06-10T02:08:06Z,alignmentforum,, 49423,https://www.alignmentforum.org/posts/LYxWrxram2JFBaeaq/when-most-vnm-coherent-preference-orderings-have-convergent,When Most VNM-Coherent Preference Orderings Have Convergent Instrumental Incentives,['TurnTrout'],2021-08-09T17:22:24Z,alignmentforum,, 49443,https://www.alignmentforum.org/posts/mkbGjzxD8d8XqKHzA/the-singular-value-decompositions-of-transformer-weight,The Singular Value Decompositions of Transformer Weight Matrices are Highly Interpretable,"['beren', 'Sid Black']",2022-11-28T12:54:52Z,alignmentforum,, 49460,https://www.alignmentforum.org/posts/bzLpXZMGAiMdfLNKy/asot-some-thoughts-about-imperfect-world-modeling,[ASoT] Some thoughts about imperfect world modeling,['leogao'],2022-04-07T15:42:10Z,alignmentforum,, 49484,https://www.alignmentforum.org/posts/LbrPTJ4fmABEdEnLf/200-concrete-open-problems-in-mechanistic-interpretability,200 Concrete Open Problems in Mechanistic Interpretability: Introduction,['Neel Nanda'],2022-12-28T21:06:54Z,alignmentforum,, 49499,https://www.alignmentforum.org/posts/5cWtwATHL6KyzChck/risks-from-ai-persuasion,Risks from AI persuasion,['Beth Barnes'],2021-12-24T01:48:17Z,alignmentforum,, 49521,https://www.alignmentforum.org/posts/b2Jk3dAmerjyNDzWf/an-email-with-a-bunch-of-links-i-sent-an-experienced-ml,[An email with a bunch of links I sent an experienced ML researcher interested in learning about Alignment / x-safety.],['David Scott Krueger (formerly: capybaralet)'],2022-09-08T22:28:55Z,alignmentforum,, 
49550,https://www.alignmentforum.org/posts/kygEPBDrGGoM8rz9a/conjecture-internal-survey-agi-timelines-and-probability-of,Conjecture internal survey: AGI timelines and probability of human extinction from advanced AI,['Maris Sala'],2023-05-22T14:31:59Z,alignmentforum,, 49560,https://www.alignmentforum.org/posts/NLqAQzAhE9u87TvNz/eli-s-review-of-is-power-seeking-ai-an-existential-risk-1,"Eli's review of ""Is power-seeking AI an existential risk?""",['elifland'],2022-09-30T12:21:19Z,alignmentforum,, 49575,https://www.alignmentforum.org/posts/jYNT3Qihn2aAYaaPb/efficientzero-human-ale-sample-efficiency-w-muzero-self,EfficientZero: human ALE sample-efficiency w/MuZero+self-supervised,['gwern'],2021-11-02T02:32:42Z,alignmentforum,, 49586,https://www.alignmentforum.org/posts/WrsQfBRqAPKiyGygT/a-review-of-agents-and-devices,"A review of ""Agents and Devices""",['adamShimi'],2021-08-13T08:42:41Z,alignmentforum,, 49599,https://www.alignmentforum.org/posts/kgsaSbJqWLtJfiCcz/naturalized-induction-a-challenge-for-evidential-and-causal,Naturalized induction – a challenge for evidential and causal decision theory,['Caspar Oesterheld'],2017-09-22T08:15:10Z,alignmentforum,, 49612,https://www.alignmentforum.org/posts/eqvvDM25MXLGqumnf/200-cop-in-mi-interpreting-reinforcement-learning,200 COP in MI: Interpreting Reinforcement Learning,['Neel Nanda'],2023-01-10T17:37:45Z,alignmentforum,, 49643,https://www.alignmentforum.org/posts/ZKzAjKSeNRtiaeJns/if-i-were-a-well-intentioned-ai-ii-acting-in-a-world,If I were a well-intentioned AI... II: Acting in a world,['Stuart_Armstrong'],2020-02-27T11:58:32Z,alignmentforum,, 49663,https://www.alignmentforum.org/posts/Pe3aqWXJWLHoB6vc4/alignment-newsletter-35,Alignment Newsletter #35,['Rohin Shah'],2018-12-04T01:10:01Z,alignmentforum,, 49695,https://www.alignmentforum.org/posts/xKvzpodBGcPMq7TqE/supervising-strong-learners-by-amplifying-weak-experts,Supervising strong learners by amplifying weak experts,['paulfchristiano'],2019-01-06T07:00:59Z,alignmentforum,, 49704,https://www.alignmentforum.org/posts/kpFxkXBbpF5pWDRrc/some-of-my-disagreements-with-list-of-lethalities,Some of my disagreements with List of Lethalities,['TurnTrout'],2023-01-24T00:25:28Z,alignmentforum,, 49723,https://www.alignmentforum.org/posts/opsfYWNxBYF5sJujB/announcing-epoch-s-dashboard-of-key-trends-and-figures-in,Announcing Epoch’s dashboard of key trends and figures in Machine Learning,['Jsevillamol'],2023-04-13T07:33:07Z,alignmentforum,, 49733,https://www.alignmentforum.org/posts/aFaKhG86tTrKvtAnT/against-gdp-as-a-metric-for-timelines-and-takeoff-speeds,Against GDP as a metric for timelines and takeoff speeds,['Daniel Kokotajlo'],2020-12-29T17:42:25Z,alignmentforum,, 49756,https://www.alignmentforum.org/posts/ASZco85chGouu2LKk/the-fraught-voyage-of-aligned-novelty,The fraught voyage of aligned novelty,['TsviBT'],2023-06-26T19:10:42Z,alignmentforum,, 
49778,https://www.alignmentforum.org/posts/J9D6Bi3eFDDhCaovi/will-transparency-help-catch-deception-perhaps-not,Will transparency help catch deception? Perhaps not,['Matthew Barnett'],2019-11-04T20:52:53Z,alignmentforum,, 49794,https://www.alignmentforum.org/posts/TDqvQFks6TWutJEKu/towards-monosemanticity-decomposing-language-models-with,Towards Monosemanticity: Decomposing Language Models With Dictionary Learning,['Zac Hatfield-Dodds'],2023-10-05T21:01:40Z,alignmentforum,, 49811,https://www.alignmentforum.org/posts/JGByt8TrxREo4twaw/an-142-the-quest-to-understand-a-network-well-enough-to,[AN #142]: The quest to understand a network well enough to reimplement it by hand,['Rohin Shah'],2021-03-17T17:10:04Z,alignmentforum,, 49836,https://www.alignmentforum.org/posts/nPauymrHwpoNr6ipx/conversation-on-technology-forecasting-and-gradualism,Conversation on technology forecasting and gradualism,"['Richard_Ngo', 'Eliezer Yudkowsky', 'Rohin Shah', 'Rob Bensinger']",2021-12-09T21:23:21Z,alignmentforum,, 49858,https://www.alignmentforum.org/posts/esjMWREvj3WKZpBZd/an-128-prioritizing-research-on-ai-existential-safety-based,[AN #128]: Prioritizing research on AI existential safety based on its application to governance demands,['Rohin Shah'],2020-12-09T18:20:08Z,alignmentforum,, 49887,https://www.alignmentforum.org/posts/wgio8E758y9XWsi8j/grokking-forecasting-tai-with-biological-anchors,Grokking “Forecasting TAI with biological anchors”,['anson.ho'],2022-06-06T18:58:32Z,alignmentforum,, 49903,https://www.alignmentforum.org/posts/AdyqGnvhdqDMYJaug/what-is-a-definition-how-can-it-be-extrapolated,"What is a definition, how can it be extrapolated?",['Stuart_Armstrong'],2023-03-14T18:08:13Z,alignmentforum,, 49924,https://www.alignmentforum.org/posts/ostLZyhnBPndno2zP/active-inference-as-a-formalisation-of-instrumental,Active Inference as a formalisation of instrumental convergence,['Roman Leventov'],2022-07-26T17:55:58Z,alignmentforum,, 49940,https://www.alignmentforum.org/posts/uyk5nn93HxJMsio98/metaai-less-is-less-for-alignment-1,MetaAI: less is less for alignment.,['Cleo Nardo'],2023-06-13T14:08:45Z,alignmentforum,, 49973,https://www.alignmentforum.org/posts/WCX3EwnWAx7eyucqH/a-certain-formalization-of-corrigibility-is-vnm-incoherent,A Certain Formalization of Corrigibility Is VNM-Incoherent,['TurnTrout'],2021-11-20T00:30:49Z,alignmentforum,, 49990,https://www.alignmentforum.org/posts/tNtiJp8dA6jMbgKbf/hands-on-experience-is-not-magic,Hands-On Experience Is Not Magic,['Thane Ruthenis'],2023-05-27T16:57:11Z,alignmentforum,, 50005,https://www.alignmentforum.org/posts/WWcPFBZqpwA5kzE5y/linkpost-a-survey-on-over-300-works-about-interpretability,[Linkpost] A survey on over 300 works about interpretability in deep networks,['scasper'],2022-09-12T19:07:09Z,alignmentforum,, 50032,https://www.alignmentforum.org/posts/KDMLJEXTWtkZWheXt/consequentialism-and-corrigibility,Consequentialism & corrigibility,['Steven Byrnes'],2021-12-14T13:23:03Z,alignmentforum,, 50048,https://www.alignmentforum.org/posts/RMhs2fXtK5hLAjDQv/evaluating-existing-approaches-to-agi-alignment,Evaluating Existing Approaches to AGI Alignment,['Gordon Seidoh Worley'],2018-03-27T19:57:39Z,alignmentforum,, 50074,https://www.alignmentforum.org/posts/K8FTuEdAbsHDDw3hR/reflective-aixi-and-anthropics,Reflective AIXI and Anthropics,['Diffractor'],2018-09-24T02:15:18Z,alignmentforum,, 50099,https://www.alignmentforum.org/posts/dt4z82hpvvPFTDTfZ/six-ai-risk-strategy-ideas,Six AI Risk/Strategy Ideas,['Wei Dai'],2019-08-27T00:40:39Z,alignmentforum,, 
50125,https://www.alignmentforum.org/posts/XW6Qi2LitMDb2MF8c/a-rationality-condition-for-cdt-is-that-it-equal-edt-part-1,A Rationality Condition for CDT Is That It Equal EDT (Part 1),['abramdemski'],2018-10-04T04:32:49Z,alignmentforum,, 50140,https://www.alignmentforum.org/posts/k7oxdbNaGATZbtEg3/redwood-research-s-current-project,Redwood Research’s current project,['Buck'],2021-09-21T23:30:37Z,alignmentforum,, 50159,https://www.alignmentforum.org/posts/nZY8Np759HYFawdjH/satisficers-tend-to-seek-power-instrumental-convergence-via,Satisficers Tend To Seek Power: Instrumental Convergence Via Retargetability,['TurnTrout'],2021-11-18T01:54:34Z,alignmentforum,, 50183,https://www.alignmentforum.org/posts/ALkH4o53ofm862vxc/announcing-encultured-ai-building-a-video-game,Announcing Encultured AI: Building a Video Game,"['Andrew_Critch', 'Nick Hay']",2022-08-18T02:16:27Z,alignmentforum,, 50211,https://www.alignmentforum.org/posts/d7RFGPMkbKurQjW6o/agi-timelines-are-mostly-not-strategically-relevant-to,AGI Timelines Are Mostly Not Strategically Relevant To Alignment,['johnswentworth'],2022-08-23T20:15:22Z,alignmentforum,, 50224,https://www.alignmentforum.org/posts/n3LAgnHg6ashQK3fF/takeaways-from-our-robust-injury-classifier-project-redwood,Takeaways from our robust injury classifier project [Redwood Research],['dmz'],2022-09-17T03:55:26Z,alignmentforum,, 50245,https://www.alignmentforum.org/posts/R3HAvMGFNJGXstckQ/relating-hch-and-logical-induction,Relating HCH and Logical Induction,['abramdemski'],2020-06-16T22:08:10Z,alignmentforum,, 50266,https://www.alignmentforum.org/posts/dDzHJGmyeQa2tGmqH/the-greedy-doctor-problem-turns-out-to-be-relevant-to-the,The Greedy Doctor Problem... turns out to be relevant to the ELK problem?,['Jan'],2022-01-14T11:58:05Z,alignmentforum,, 50290,https://www.alignmentforum.org/posts/8Ziz5BQjtuhr9orm4/what-decision-theory-is-implied-by-predictive-processing,What Decision Theory is Implied By Predictive Processing?,['johnswentworth'],2020-09-28T17:20:52Z,alignmentforum,, 50300,https://www.alignmentforum.org/posts/XKtybmbjhC6mXDm5z/compute-trends-across-three-eras-of-machine-learning,Compute Trends Across Three eras of Machine Learning,"['Jsevillamol', 'Pablo Villalobos', 'lennart', 'Marius Hobbhahn', 'Tamay Besiroglu', 'anson.ho']",2022-02-16T14:18:30Z,alignmentforum,, 50311,https://www.alignmentforum.org/posts/ZwEcvG3whyBqBdqSw/formal-alignment-what-it-is-and-some-proposals,"formal alignment: what it is, and some proposals",['Tamsin Leake'],2023-01-29T11:32:33Z,alignmentforum,, 50332,https://www.alignmentforum.org/posts/ZFT78ezD2yxLjo6QM/alignment-newsletter-34,Alignment Newsletter #34,['Rohin Shah'],2018-11-26T23:10:03Z,alignmentforum,, 50356,https://www.alignmentforum.org/posts/TxHBeEMC7SBZvxCk8/ai-and-evolution,AI and Evolution,['Dan H'],2023-03-30T12:56:27Z,alignmentforum,, 50374,https://www.alignmentforum.org/posts/6x7oExXi32ot6HjJv/approval-directed-bootstrapping,Approval-directed bootstrapping,['paulfchristiano'],2018-11-25T23:18:48Z,alignmentforum,, 50385,https://www.alignmentforum.org/posts/xh85KbTFhbCz7taD4/how-to-think-about-activation-patching,How to Think About Activation Patching,['Neel Nanda'],2023-06-04T14:17:42Z,alignmentforum,, 50407,https://www.alignmentforum.org/posts/teCsd4Aqg9KDxkaC9/bootstrapped-alignment,Bootstrapped Alignment,['Gordon Seidoh Worley'],2021-02-27T15:46:30Z,alignmentforum,, 50432,https://www.alignmentforum.org/posts/9dNxz2kjNvPtiZjxj/an-overview-of-catastrophic-ai-risks-summary,An Overview of Catastrophic AI Risks: Summary,"['Dan H', 'Mantas Mazeika', 'ThomasW']",2023-08-18T01:21:25Z,alignmentforum,, 
50476,https://www.alignmentforum.org/posts/wsBpJn7HWEPCJxYai/excerpt-from-arbital-solomonoff-induction-dialogue,Excerpt from Arbital Solomonoff induction dialogue,['Richard_Ngo'],2021-01-17T03:49:47Z,alignmentforum,, 50486,https://www.alignmentforum.org/posts/4GXqbNvpJ4hJtcoSX/meta-execution,Meta-execution,['paulfchristiano'],2018-11-01T22:18:11Z,alignmentforum,, 50505,https://www.alignmentforum.org/posts/diutNaWF669WgEt3v/the-scaling-inconsistency-openai-s-new-insight,the scaling “inconsistency”: openAI’s new insight,['nostalgebraist'],2020-11-07T07:40:07Z,alignmentforum,, 50519,https://www.alignmentforum.org/posts/uKbxi2EJ3KBNRDGpL/comment-on-decision-theory,Comment on decision theory,['Rob Bensinger'],2018-09-09T20:13:10Z,alignmentforum,, 50536,https://www.alignmentforum.org/posts/bcFhPHcDRbWKcAEfk/modeling-naturalized-decision-problems-in-linear-logic,Modeling naturalized decision problems in linear logic,['jessicata'],2020-05-06T00:15:15Z,alignmentforum,, 50552,https://www.alignmentforum.org/posts/dMoaBvcxpBE7LcES4/tinystories-small-language-models-that-still-speak-coherent,TinyStories: Small Language Models That Still Speak Coherent English,['Ulisse Mini'],2023-05-28T22:23:31Z,alignmentforum,, 50571,https://www.alignmentforum.org/posts/NK4XxyrjFWt83m3dx/take-4-one-problem-with-natural-abstractions-is-there-s-too,Take 4: One problem with natural abstractions is there's too many of them.,['Charlie Steiner'],2022-12-05T10:39:42Z,alignmentforum,, 50581,https://www.alignmentforum.org/posts/5bd75cc58225bf0670375068/reflexive-oracles-and-superrationality-pareto,Reflexive Oracles and superrationality: Pareto,['Stuart_Armstrong'],2017-05-24T08:35:16Z,alignmentforum,, 50596,https://www.alignmentforum.org/posts/4wa9XGnJHB3apPqoq/if-you-don-t-design-for-extrapolation-you-ll-extrapolate,"If you don't design for extrapolation, you'll extrapolate poorly - possibly fatally",['Stuart_Armstrong'],2021-04-08T18:10:52Z,alignmentforum,, 50617,https://www.alignmentforum.org/posts/EeXSjvyQge5FZPeuL/implications-of-evidential-cooperation-in-large-worlds,Implications of evidential cooperation in large worlds,['Lukas Finnveden'],2023-08-23T00:43:45Z,alignmentforum,, 50653,https://www.alignmentforum.org/posts/bBuBDJBYHt39Q5zZy/decision-transformer-interpretability,Decision Transformer Interpretability,"['Joseph Bloom', 'Paul Colognese']",2023-02-06T07:29:02Z,alignmentforum,, 50701,https://www.alignmentforum.org/posts/j9CbmSsnprxB2uFY9/embedded-curiosities,Embedded Curiosities,"['Scott Garrabrant', 'abramdemski']",2018-11-08T14:19:33Z,alignmentforum,, 50716,https://www.alignmentforum.org/posts/P3Yt66Wh5g7SbkKuT/how-to-get-into-independent-research-on-alignment-agency,How To Get Into Independent Research On Alignment/Agency,['johnswentworth'],2021-11-19T00:00:22Z,alignmentforum,, 50733,https://www.alignmentforum.org/posts/KJRBb43nDxk6mwLcR/ai-doom-from-an-llm-plateau-ist-perspective,AI doom from an LLM-plateau-ist perspective,['Steven Byrnes'],2023-04-27T13:58:11Z,alignmentforum,, 50757,https://www.alignmentforum.org/posts/FF8i6SLfKb4g7C4EL/inside-the-mind-of-a-superhuman-go-model-how-does-leela-zero-2,Inside the mind of a superhuman Go model: How does Leela Zero read ladders?,['Haoxing Du'],2023-03-01T01:47:21Z,alignmentforum,, 50774,https://www.alignmentforum.org/posts/5bd75cc58225bf0670375085/logical-counterfactuals-consistent-under-self-modification,Logical Counterfactuals Consistent Under Self-Modification,['abramdemski'],2015-12-15T06:38:00Z,alignmentforum,, 
50795,https://www.alignmentforum.org/posts/WCpv3KH7hvRqLTKqw/alignment-newsletter-19,Alignment Newsletter #19,['Rohin Shah'],2018-08-14T02:10:02Z,alignmentforum,, 50810,https://www.alignmentforum.org/posts/iK2F9QDZvwWinsBYB/non-adversarial-goodhart-and-ai-risks,Non-Adversarial Goodhart and AI Risks,['Davidmanheim'],2018-03-27T01:39:31Z,alignmentforum,, 50840,https://www.alignmentforum.org/posts/w8PNjCS8ZsQuqYWhD/instrumental-convergence-draft,Instrumental Convergence? [Draft],['J. Dmitri Gallow'],2023-06-14T20:21:41Z,alignmentforum,, 50857,https://www.alignmentforum.org/posts/mSPsyEwaymS74unND/acknowledging-human-preference-types-to-support-value,Acknowledging Human Preference Types to Support Value Learning,['Nandi Sabrina Erin'],2018-11-13T18:57:53Z,alignmentforum,, 50881,https://www.alignmentforum.org/posts/rSMbGFfsLMB3GWZtX/what-is-interpretability,What is Interpretability?,"['RobertKirk', 'Tomáš Gavenčiak', 'Ada Böhm']",2020-03-17T20:23:33Z,alignmentforum,, 50900,https://www.alignmentforum.org/posts/YbAZ4nSA8itL2EkDb/an-166-is-it-crazy-to-claim-we-re-in-the-most-important,[AN #166]: Is it crazy to claim we're in the most important century?,['Rohin Shah'],2021-10-08T17:30:12Z,alignmentforum,, 50917,https://www.alignmentforum.org/posts/5LyKxJJfz7cYdkZfm/why-would-ai-aim-to-defeat-humanity,"Why Would AI ""Aim"" To Defeat Humanity?",['HoldenKarnofsky'],2022-11-29T19:30:08Z,alignmentforum,, 50940,https://www.alignmentforum.org/posts/YSH3RFSFESzsa5Nrg/counterfactuals-thick-and-thin,"Counterfactuals, thick and thin",['Nisan'],2018-07-31T15:43:59Z,alignmentforum,, 50950,https://www.alignmentforum.org/posts/QeetPm8yvFf7mAGj9/applications-for-ai-safety-camp-2022-now-open,Applications for AI Safety Camp 2022 Now Open!,['adamShimi'],2021-11-17T21:42:32Z,alignmentforum,, 50959,https://www.alignmentforum.org/posts/cgJ447adbMAeoKTSt/transparency-trichotomy,Transparency Trichotomy,['Mark Xu'],2021-03-28T20:26:35Z,alignmentforum,, 50983,https://www.alignmentforum.org/posts/kFCu3batN8k8mwtmh/the-cave-allegory-revisited-understanding-gpt-s-worldview,The Cave Allegory Revisited: Understanding GPT's Worldview,['Jan_Kulveit'],2023-02-14T16:00:09Z,alignmentforum,, 50994,https://www.alignmentforum.org/posts/3pinFH3jerMzAvmza/on-how-various-plans-miss-the-hard-bits-of-the-alignment,On how various plans miss the hard bits of the alignment challenge,['So8res'],2022-07-12T02:49:50Z,alignmentforum,, 51024,https://www.alignmentforum.org/posts/AFJgo99YckQnhbF8Z/thinking-about-maximization-and-corrigibility,Thinking about maximization and corrigibility,['James Payor'],2023-04-21T21:22:52Z,alignmentforum,, 51054,https://www.alignmentforum.org/posts/GeabLEXYP7oBMivmF/acceptability-verification-a-research-agenda,Acceptability Verification: A Research Agenda,"['David Udell', 'evhub']",2022-07-12T20:11:35Z,alignmentforum,, 51065,https://www.alignmentforum.org/posts/LvKmjKMvozpdmiQhP/inverse-scaling-can-become-u-shaped,Inverse scaling can become U-shaped,['Edouard Harris'],2022-11-08T19:04:55Z,alignmentforum,, 51082,https://www.alignmentforum.org/posts/WKGZBCYAbZ6WGsKHc/love-in-a-simbox-is-all-you-need,LOVE in a simbox is all you need,['jacob_cannell'],2022-09-28T18:25:31Z,alignmentforum,, 51113,https://www.alignmentforum.org/posts/q5Gox77ReFAy5i2YQ/in-defense-of-probably-wrong-mechanistic-models,In defense of probably wrong mechanistic models,['evhub'],2022-12-06T23:24:21Z,alignmentforum,, 
51125,https://www.alignmentforum.org/posts/vwt3wKXWaCvqZyF74/mechanistic-anomaly-detection-and-elk,Mechanistic anomaly detection and ELK,['paulfchristiano'],2022-11-25T18:50:04Z,alignmentforum,, 51145,https://www.alignmentforum.org/posts/JiMAMNAb55Qq24nES/announcing-alignmentforum-org-beta,Announcing AlignmentForum.org Beta,['Raemon'],2018-07-10T20:19:41Z,alignmentforum,, 51155,https://www.alignmentforum.org/posts/HTgakSs6JpnogD6c2/two-neglected-problems-in-human-ai-safety,Two Neglected Problems in Human-AI Safety,['Wei Dai'],2018-12-16T22:13:29Z,alignmentforum,, 51175,https://www.alignmentforum.org/posts/etNJcXCsKC6izQQZj/pivotal-outcomes-and-pivotal-processes,Pivotal outcomes and pivotal processes,['Andrew_Critch'],2022-06-17T23:43:19Z,alignmentforum,, 51191,https://www.alignmentforum.org/posts/z8BF9GwcCjeXShC4q/compute-governance-the-role-of-commodity-hardware,Compute Governance: The Role of Commodity Hardware,['Jan'],2022-03-26T10:08:08Z,alignmentforum,, 51211,https://www.alignmentforum.org/posts/iWXQgwpksstozSDeA/kelly-bettors,Kelly bettors,['DanielFilan'],2018-11-13T00:40:01Z,alignmentforum,, 51220,https://www.alignmentforum.org/posts/RAnb2A5vML95rBMyd/formalizing-policy-modification-corrigibility,Formalizing Policy-Modification Corrigibility,['TurnTrout'],2021-12-03T01:31:42Z,alignmentforum,, 51240,https://www.alignmentforum.org/posts/5uyRB4CGAvB2eLHvm/possibilizing-vs-actualizing,Possibilizing vs. actualizing,['TsviBT'],2023-04-16T15:55:40Z,alignmentforum,, 51257,https://www.alignmentforum.org/posts/5ie8Mfwq5tEBCBDEC/provisionality,Provisionality,['TsviBT'],2023-06-19T11:49:07Z,alignmentforum,, 51273,https://www.alignmentforum.org/posts/dWJNFHnC4bkdbovug/training-goals-for-large-language-models,Training goals for large language models,['Johannes Treutlein'],2022-07-18T07:09:43Z,alignmentforum,, 51302,https://www.alignmentforum.org/posts/avvXAvGhhGgkJDDso/caution-when-interpreting-deepmind-s-in-context-rl-paper,Caution when interpreting Deepmind's In-context RL paper,['Sam Marks'],2022-11-01T02:42:07Z,alignmentforum,, 51317,https://www.alignmentforum.org/posts/Azqmzp5JoXJihMcr4/competition-amplify-rohin-s-prediction-on-agi-researchers,Competition: Amplify Rohin’s Prediction on AGI researchers & Safety Concerns,['stuhlmueller'],2020-07-21T20:06:09Z,alignmentforum,, 51327,https://www.alignmentforum.org/posts/SzecSPYxqRa5GCaSF/clarifying-inner-alignment-terminology,Clarifying inner alignment terminology,['evhub'],2020-11-09T20:40:27Z,alignmentforum,, 51351,https://www.alignmentforum.org/posts/jghQXYvifXxhzkhHM/axrp-episode-23-mechanistic-anomaly-detection-with-mark-xu,AXRP Episode 23 - Mechanistic Anomaly Detection with Mark Xu,['DanielFilan'],2023-07-27T01:50:03Z,alignmentforum,, 51373,https://www.alignmentforum.org/posts/ndcqdTxkMnFF7gQFh/gradations-of-agency-1,Gradations of Agency,['Daniel Kokotajlo'],2022-05-23T01:10:38Z,alignmentforum,, 51390,https://www.alignmentforum.org/posts/2GycxikGnepJbxfHT/towards-an-empirical-investigation-of-inner-alignment,Towards an empirical investigation of inner alignment,['evhub'],2019-09-23T20:43:59Z,alignmentforum,, 51403,https://www.alignmentforum.org/posts/ukidKsEio8hfB9uHT/notes-on-learning-the-prior,Notes on Learning the Prior,['Spencer Becker-Kahn'],2022-07-15T17:28:37Z,alignmentforum,, 51426,https://www.alignmentforum.org/posts/aPsdGPCpcyPqkatgc/of-arguments-and-wagers,Of arguments and wagers,['paulfchristiano'],2020-01-10T22:20:02Z,alignmentforum,, 51436,https://www.alignmentforum.org/posts/Tr7tAyt5zZpdTwTQK/the-solomonoff-prior-is-malign,The Solomonoff Prior is Malign,['Mark Xu'],2020-10-14T01:33:58Z,alignmentforum,, 
51457,https://www.alignmentforum.org/posts/WckCfXpfrb9Bms8aA/my-understanding-of-the-alignment-problem,My understanding of the alignment problem,['danieldewey'],2021-11-15T18:13:20Z,alignmentforum,, 51472,https://www.alignmentforum.org/posts/TEDT4SJDfBwXezfgC/paradigms-and-theory-choice-in-ai-adaptivity-economy-and,"Paradigms and Theory Choice in AI: Adaptivity, Economy and Control",['particlemania'],2023-08-28T22:19:11Z,alignmentforum,, 51493,https://www.alignmentforum.org/posts/5Nz4PJgvLCpJd6YTA/looking-deeper-at-deconfusion,Looking Deeper at Deconfusion,['adamShimi'],2021-06-13T21:29:08Z,alignmentforum,, 51507,https://www.alignmentforum.org/posts/gQY6LrTWJNkTv8YJR/the-pointers-problem-human-values-are-a-function-of-humans,The Pointers Problem: Human Values Are A Function Of Humans' Latent Variables,['johnswentworth'],2020-11-18T17:47:41Z,alignmentforum,, 51522,https://www.alignmentforum.org/posts/WjY9y7r52vaNZ2WmH/three-mental-images-from-thinking-about-agi-debate-and,Three mental images from thinking about AGI debate & corrigibility,['Steven Byrnes'],2020-08-03T14:29:19Z,alignmentforum,, 51548,https://www.alignmentforum.org/posts/6EMdmeosYPdn74wuG/wireheading-as-a-potential-problem-with-the-new-impact,Wireheading as a potential problem with the new impact measure,['Stuart_Armstrong'],2018-09-25T14:15:38Z,alignmentforum,, 51560,https://www.alignmentforum.org/posts/qRtD4WqKRYEtT5pi3/the-next-decades-might-be-wild,The next decades might be wild,['Marius Hobbhahn'],2022-12-15T16:10:05Z,alignmentforum,, 51590,https://www.alignmentforum.org/posts/SvuLhtREMy8wRBzpC/ambitious-vs-narrow-value-learning,Ambitious vs. narrow value learning,['paulfchristiano'],2019-01-12T06:18:22Z,alignmentforum,, 51605,https://www.alignmentforum.org/posts/DqfpcFwfeZHFe5J8h/mediation-from-a-distance,Mediation From a Distance,['johnswentworth'],2020-03-20T22:02:47Z,alignmentforum,, 51620,https://www.alignmentforum.org/posts/eCWkJrFff7oMLwjEp/clarifying-factored-cognition,Clarifying Factored Cognition,['Rafael Harth'],2020-12-13T20:02:38Z,alignmentforum,, 51632,https://www.alignmentforum.org/posts/Gi8HPM8iYZcdAEteJ/some-thoughts-on-why-adversarial-training-might-be-useful,Some thoughts on why adversarial training might be useful,['Beth Barnes'],2021-12-08T01:28:23Z,alignmentforum,, 51650,https://www.alignmentforum.org/posts/g5rABd5qbp8B4g3DE/towards-understanding-sycophancy-in-language-models,Towards Understanding Sycophancy in Language Models,"['Ethan Perez', 'mrinank_sharma', 'Meg', 'Tomek Korbak']",2023-10-24T00:30:49Z,alignmentforum,, 51669,https://www.alignmentforum.org/posts/9dJgvGh4wNyPbQfrt/categorical-measure-theoretic-approach-to-optimal-policies,Categorical-measure-theoretic approach to optimal policies tending to seek power,['jacek'],2023-01-12T00:32:51Z,alignmentforum,, 51683,https://www.alignmentforum.org/posts/L5Rua9aTndviy8dvc/eis-xi-moving-forward,EIS XI: Moving Forward,['scasper'],2023-02-22T19:05:53Z,alignmentforum,, 51719,https://www.alignmentforum.org/posts/pu3ddLSZjjmiiqQfh/another-take-on-agent-foundations-formalizing-zero-shot,Another take on agent foundations: formalizing zero-shot reasoning,['zhukeepa'],2018-07-01T06:12:57Z,alignmentforum,, 51738,https://www.alignmentforum.org/posts/QEYWkRoCn4fZxXQAY/prizes-for-elk-proposals,Prizes for ELK proposals,['paulfchristiano'],2022-01-03T20:23:26Z,alignmentforum,, 
51754,https://www.alignmentforum.org/posts/dxgEaDrEBkkE96CXr/thoughts-on-responsible-scaling-policies-and,Thoughts on responsible scaling policies and regulation,['paulfchristiano'],2023-10-24T22:21:18Z,alignmentforum,, 51778,https://www.alignmentforum.org/posts/WFnyxSL543c9bxGMm/how-to-get-into-ai-safety-research,How to get into AI safety research,['Stuart_Armstrong'],2022-05-18T18:05:07Z,alignmentforum,, 51787,https://www.alignmentforum.org/posts/YbYFeZQWncy9Tzzq9/results-of-usd1-000-oracle-contest,"Results of $1,000 Oracle contest!",['Stuart_Armstrong'],2020-06-17T17:44:45Z,alignmentforum,, 51800,https://www.alignmentforum.org/posts/E4GvMdELt6s6CaXrb/the-additive-summary-equation,The Additive Summary Equation,['johnswentworth'],2021-07-13T18:23:06Z,alignmentforum,, 51810,https://www.alignmentforum.org/posts/sGkRDrpphsu6Jhega/a-model-based-approach-to-ai-existential-risk,A Model-based Approach to AI Existential Risk,"['Sammy Martin', 'Lonnie Chrisman', 'Aryeh Englander']",2023-08-25T10:32:17Z,alignmentforum,, 51829,https://www.alignmentforum.org/posts/DbuCdEbkh4wL5cjJ5/preface-to-clr-s-research-agenda-on-cooperation-conflict-and,"Preface to CLR's Research Agenda on Cooperation, Conflict, and TAI",['JesseClifton'],2019-12-13T21:02:49Z,alignmentforum,, 51861,https://www.alignmentforum.org/posts/yTvZFzcgt7rGYMxP5/topological-metaphysics-relating-point-set-topology-and,Topological metaphysics: relating point-set topology and locale theory,['jessicata'],2020-05-01T03:57:12Z,alignmentforum,, 51877,https://www.alignmentforum.org/posts/QQdmb3TjrvmB9Pfzp/we-can-do-better-than-dowhatimean,We can do better than DoWhatIMean,['lukehmiles'],2023-08-19T05:41:47Z,alignmentforum,, 51900,https://www.alignmentforum.org/posts/HJ4EHPG5qPbbbk5nK/gemini-modeling,Gemini modeling,['TsviBT'],2023-01-22T14:28:21Z,alignmentforum,, 51917,https://www.alignmentforum.org/posts/ztnkDKD5odorWt5dB/elk-computational-complexity-three-levels-of-difficulty,ELK Computational Complexity: Three Levels of Difficulty,['abramdemski'],2022-03-30T20:56:37Z,alignmentforum,, 51941,https://www.alignmentforum.org/posts/5bd75cc58225bf067037541b/why-i-am-not-currently-working-on-the-aamls-agenda,Why I am not currently working on the AAMLS agenda,['jessicata'],2017-06-01T17:57:24Z,alignmentforum,, 51962,https://www.alignmentforum.org/posts/znfkdCoHMANwqc2WE/the-ground-of-optimization-1,The ground of optimization,['Alex Flint'],2020-06-20T00:38:16Z,alignmentforum,, 51982,https://www.alignmentforum.org/posts/ky988ePJvCRhmCwGo/using-vector-fields-to-visualise-preferences-and-make-them,Using vector fields to visualise preferences and make them consistent,"['MichaelA', 'JustinShovelain']",2020-01-28T19:44:43Z,alignmentforum,, 52000,https://www.alignmentforum.org/posts/ZANm3Sbu5RxRca2zt/alignment-newsletter-26,Alignment Newsletter #26,['Rohin Shah'],2018-10-02T16:10:03Z,alignmentforum,, 52023,https://www.alignmentforum.org/posts/SXoHj7DTAjAsfJrcs/an-76-how-dataset-size-affects-robustness-and-benchmarking,"[AN #76]: How dataset size affects robustness, and benchmarking safe exploration by measuring constraint violations",['Rohin Shah'],2019-12-04T18:10:02Z,alignmentforum,, 52047,https://www.alignmentforum.org/posts/5spBue2z2tw4JuDCx/steering-gpt-2-xl-by-adding-an-activation-vector,Steering GPT-2-XL by adding an activation vector,"['TurnTrout', 'Monte M', 'David Udell', 'lisathiergart', 'Ulisse Mini']",2023-05-13T18:42:41Z,alignmentforum,, 
52062,https://www.alignmentforum.org/posts/wBHSYwqssBGCnwvHg/intro-to-brain-like-agi-safety-2-learning-from-scratch-in,[Intro to brain-like-AGI safety] 2. “Learning from scratch” in the brain,['Steven Byrnes'],2022-02-02T13:22:59Z,alignmentforum,, 52074,https://www.alignmentforum.org/posts/kpmaEevZ2KehZo2tp/some-advice-on-independent-research,Some advice on independent research,['Marius Hobbhahn'],2022-11-08T14:46:19Z,alignmentforum,, 52097,https://www.alignmentforum.org/posts/QJEmnYKJt4kMeDhfy/the-linguistic-blind-spot-of-value-aligned-agency-natural,"The Linguistic Blind Spot of Value-Aligned Agency, Natural and Artificial",['Roman Leventov'],2023-02-14T06:57:58Z,alignmentforum,, 52109,https://www.alignmentforum.org/posts/AkGvmJ6WE5sXjuwnC/loeb-s-lemma-an-easier-approach-to-loeb-s-theorem,Löb's Lemma: an easier approach to Löb's Theorem,['Andrew_Critch'],2022-12-24T02:02:42Z,alignmentforum,, 52119,https://www.alignmentforum.org/posts/GFGNwCwkffBevyXR2/a-second-example-of-conditional-orthogonality-in-finite,A second example of conditional orthogonality in finite factored sets,['DanielFilan'],2021-07-07T01:40:02Z,alignmentforum,, 52129,https://www.alignmentforum.org/posts/JC7aJZjt2WvxxffGz/paradigms-of-ai-alignment-components-and-enablers,Paradigms of AI alignment: components and enablers,['Vika'],2022-06-02T06:19:59Z,alignmentforum,, 52190,https://www.alignmentforum.org/posts/Y59AYj5keDYHf29LK/experiment-idea-rl-agents-evading-learned-shutdownability,Experiment Idea: RL Agents Evading Learned Shutdownability,['Leon Lang'],2023-01-16T22:46:03Z,alignmentforum,, 52204,https://www.alignmentforum.org/posts/uQHAJ7TdBbweRR5iS/conditioning-counterfactuals-exploration-and-gears,"Conditioning, Counterfactuals, Exploration, and Gears",['Diffractor'],2018-07-10T22:11:52Z,alignmentforum,, 52224,https://www.alignmentforum.org/posts/pXLqpguHJzxSjDdx7/why-i-m-excited-about-redwood-research-s-current-project,Why I'm excited about Redwood Research's current project,['paulfchristiano'],2021-11-12T19:26:26Z,alignmentforum,, 
52241,https://www.alignmentforum.org/posts/HXxHcRCxR4oHrAsEr/an-update-on-academia-vs-industry-one-year-into-my-faculty,An Update on Academia vs. Industry (one year into my faculty job),['David Scott Krueger (formerly: capybaralet)'],2022-09-03T20:43:38Z,alignmentforum,, 52278,https://www.alignmentforum.org/posts/6E6D3qLPM3urXDPpK/what-makes-counterfactuals-comparable-1,What makes counterfactuals comparable?,['Chris_Leong'],2020-04-24T22:47:38Z,alignmentforum,, 52288,https://www.alignmentforum.org/posts/NkSpukDkm9pjRdMdB/human-instincts-symbol-grounding-and-the-blank-slate,"Human instincts, symbol grounding, and the blank-slate neocortex",['Steven Byrnes'],2019-10-02T12:06:35Z,alignmentforum,, 52311,https://www.alignmentforum.org/posts/8GoynCn4jaXKsiDky/did-they-or-didn-t-they-learn-tool-use,Did they or didn't they learn tool use?,['Daniel Kokotajlo'],2021-07-29T13:26:32Z,alignmentforum,, 52328,https://www.alignmentforum.org/posts/oKYWbXioKaANATxKY/soares-tallinn-and-yudkowsky-discuss-agi-cognition,"Soares, Tallinn, and Yudkowsky discuss AGI cognition","['So8res', 'Eliezer Yudkowsky', 'jaan']",2021-11-29T19:26:33Z,alignmentforum,, 52351,https://www.alignmentforum.org/posts/GctJD5oCDRxCspEaZ/clarifying-ai-x-risk,Clarifying AI X-risk,"['zac_kenton', 'Rohin Shah', 'David Lindner', 'Vikrant Varma', 'Vika', 'Mary Phuong', 'Ramana Kumar', 'Elliot Catt']",2022-11-01T11:03:01Z,alignmentforum,, 52375,https://www.alignmentforum.org/posts/26eupx3Byc8swRS7f/bottle-caps-aren-t-optimisers,Bottle Caps Aren't Optimisers,['DanielFilan'],2018-08-31T18:30:01Z,alignmentforum,, 52386,https://www.alignmentforum.org/posts/BvctuKocyWR4YYea3/wireheading-is-in-the-eye-of-the-beholder,Wireheading is in the eye of the beholder,['Stuart_Armstrong'],2019-01-30T18:23:07Z,alignmentforum,, 52395,https://www.alignmentforum.org/posts/uSdPa9nrSgmXCtdKN/concrete-experiments-in-inner-alignment,Concrete experiments in inner alignment,['evhub'],2019-09-06T22:16:16Z,alignmentforum,, 52428,https://www.alignmentforum.org/posts/xhkxv6qnnGmqmxxsz/a-walkthrough-of-in-context-learning-and-induction-heads-w,A Walkthrough of In-Context Learning and Induction Heads (w/ Charles Frye) Part 1 of 2,['Neel Nanda'],2022-11-22T17:12:03Z,alignmentforum,, 52444,https://www.alignmentforum.org/posts/3kwwDieE9SmFoXz9F/non-poisonous-cake-anthropic-updates-are-normal,Non-poisonous cake: anthropic updates are normal,['Stuart_Armstrong'],2021-06-18T14:51:43Z,alignmentforum,, 52454,https://www.alignmentforum.org/posts/pv7Qpu8WSge8NRbpB/larger-language-models-may-disappoint-you-or-an-eternally,"larger language models may disappoint you [or, an eternally unfinished draft]",['nostalgebraist'],2021-11-26T23:08:56Z,alignmentforum,, 52483,https://www.alignmentforum.org/posts/pHPmMGEMYefk9jLeh/llm-basics-embedding-spaces-transformer-token-vectors-are,LLM Basics: Embedding Spaces - Transformer Token Vectors Are Not Points in Space,['NickyP'],2023-02-13T18:52:37Z,alignmentforum,, 52499,https://www.alignmentforum.org/posts/RozggPiqQxzzDaNYF/introduction-to-reducing-goodhart,Introduction to Reducing Goodhart,['Charlie Steiner'],2021-08-26T18:38:52Z,alignmentforum,, 52517,https://www.alignmentforum.org/posts/ysQEJ8tvm8KYc76D5/mathematical-models-of-progress,Mathematical Models of Progress?,['abramdemski'],2021-02-16T00:21:44Z,alignmentforum,, 52533,https://www.alignmentforum.org/posts/xqqhwbH2mq6i4iLmK/we-have-promising-alignment-plans-with-low-taxes,We have promising alignment plans with low taxes,['Seth Herd'],2023-11-10T18:51:39Z,alignmentforum,, 
52553,https://www.alignmentforum.org/posts/d99ikjqdxpMiAFnch/arc-is-hiring-theoretical-researchers-1,ARC is hiring theoretical researchers,"['paulfchristiano', 'Jacob_Hilton', 'Mark Xu']",2023-06-12T18:50:08Z,alignmentforum,, 52569,https://www.alignmentforum.org/posts/Neh76ueECviJ6p75o/large-language-models-learn-to-represent-the-world,Large language models learn to represent the world,['gjm'],2023-01-22T13:10:39Z,alignmentforum,, 52585,https://www.alignmentforum.org/posts/CRMhhnKs7bymY4kbb/my-thoughts-on-the-ml-safety-course,My Thoughts on the ML Safety Course,['zeshen'],2022-09-27T13:15:03Z,alignmentforum,, 52618,https://www.alignmentforum.org/posts/JK2QGfNGLjuFnrEvz/explaining-grokking-through-circuit-efficiency,Explaining grokking through circuit efficiency,"['Vikrant Varma', 'Rohin Shah']",2023-09-08T14:39:24Z,alignmentforum,, 52636,https://www.alignmentforum.org/posts/FTk7ufqK2D4dkdBDr/notes-on-openai-s-alignment-plan,Notes on OpenAI’s alignment plan,['Alex Flint'],2022-12-08T19:13:59Z,alignmentforum,, 52651,https://www.alignmentforum.org/posts/vdXNPzuh3fwgykvKY/coherent-extrapolated-dreaming,Coherent extrapolated dreaming,['Alex Flint'],2022-12-26T17:29:14Z,alignmentforum,, 52678,https://www.alignmentforum.org/posts/NB9QrBa335GDijuyn/there-is-essentially-one-best-validated-theory-of-cognition,There is essentially one best-validated theory of cognition.,['abramdemski'],2021-12-10T15:51:06Z,alignmentforum,, 52699,https://www.alignmentforum.org/posts/GYQwJsChoRosjdW2r/functors-and-coarse-worlds,Functors and Coarse Worlds,['Scott Garrabrant'],2020-10-30T15:19:23Z,alignmentforum,, 52716,https://www.alignmentforum.org/posts/eLNo7b56kQQerCzp2/mech-interp-puzzle-1-suspiciously-similar-embeddings-in-gpt,Mech Interp Puzzle 1: Suspiciously Similar Embeddings in GPT-Neo,['Neel Nanda'],2023-07-16T22:02:15Z,alignmentforum,, 52731,https://www.alignmentforum.org/posts/JSkqkgYcyYt8oHsFi/large-language-models-can-provide-normative-assumptions-for,"Large language models can provide ""normative assumptions"" for learning human preferences",['Stuart_Armstrong'],2023-01-02T19:39:01Z,alignmentforum,, 52751,https://www.alignmentforum.org/posts/u9Azdu6Z7zFAhd4rK/bayesian-evolving-to-extinction,Bayesian Evolving-to-Extinction,['abramdemski'],2020-02-14T23:55:27Z,alignmentforum,, 52771,https://www.alignmentforum.org/posts/BbrsgHPJmGxeg7nXG/meta-do-you-want-ais-webinars,[Meta] Do you want AIS Webinars?,['Linda Linsefors'],2020-03-21T16:01:03Z,alignmentforum,, 52780,https://www.alignmentforum.org/posts/YbahERfcjTu7LZNQ6/summary-of-the-acausal-attack-issue-for-aixi,Summary of the Acausal Attack Issue for AIXI,['Diffractor'],2021-12-13T08:16:26Z,alignmentforum,, 52794,https://www.alignmentforum.org/posts/pA3F9oejzvGg6Kf3a/robustness-to-scaling-down-more-important-than-i-thought,Robustness to Scaling Down: More Important Than I Thought,['adamShimi'],2022-07-23T11:40:04Z,alignmentforum,, 52806,https://www.alignmentforum.org/posts/kWp4R9SYgKJFHAufB/polysemanticity-and-capacity-in-neural-networks,Polysemanticity and Capacity in Neural Networks,"['Buck', 'Adam Jermyn', 'Kshitij Sachan']",2022-10-07T17:51:07Z,alignmentforum,, 52826,https://www.alignmentforum.org/posts/nXeLPcT9uhfG3TMPS/conditioning-generative-models,Conditioning Generative Models,['Adam Jermyn'],2022-06-25T22:15:59Z,alignmentforum,, 52849,https://www.alignmentforum.org/posts/3uHgw2uW6BtR74yhQ/new-paper-corrigibility-with-utility-preservation,New paper: Corrigibility with Utility Preservation,['Koen.Holtman'],2019-08-06T19:04:26Z,alignmentforum,, 
52862,https://www.alignmentforum.org/posts/FyChg3kYG54tEN3u6/evolution-is-a-bad-analogy-for-agi-inner-alignment,Evolution is a bad analogy for AGI: inner alignment,['Quintin Pope'],2022-08-13T22:15:57Z,alignmentforum,, 52877,https://www.alignmentforum.org/posts/DC8a8aYXoHtc8bBaB/agent-level-parallelism-2,Agent level parallelism,['Johannes C. Mayer'],2022-06-18T20:56:12Z,alignmentforum,, 52888,https://www.alignmentforum.org/posts/MeYeLEr4RNGreJZcB/on-the-role-of-counterfactuals-in-learning,On the Role of Counterfactuals in Learning,['Max Kanwal'],2018-07-11T02:45:36Z,alignmentforum,, 52901,https://www.alignmentforum.org/posts/qezBTig6p6p5xtL6G/a-theory-of-human-values,A theory of human values,['Stuart_Armstrong'],2019-03-13T15:22:45Z,alignmentforum,, 52919,https://www.alignmentforum.org/posts/5bd75cc58225bf06703753b9/the-ubiquitous-converse-lawvere-problem,The Ubiquitous Converse Lawvere Problem,['Scott Garrabrant'],2018-11-29T03:16:16Z,alignmentforum,, 52934,https://www.alignmentforum.org/posts/CAwwFpbteYBQw2Gkp/p-b-plan-to-p-b-better,P₂B: Plan to P₂B Better,"['Ramana Kumar', 'Daniel Kokotajlo']",2021-10-24T15:21:10Z,alignmentforum,, 52946,https://www.alignmentforum.org/posts/25288usP5B5ytnzA4/random-thoughts-on-predict-o-matic,Random Thoughts on Predict-O-Matic,['abramdemski'],2019-10-17T23:39:33Z,alignmentforum,, 52975,https://www.alignmentforum.org/posts/cyTP4ZMnN6RFu9L62/an-157-measuring-misalignment-in-the-technology-underlying,[AN #157]: Measuring misalignment in the technology underlying Copilot,['Rohin Shah'],2021-07-23T17:20:03Z,alignmentforum,, 53006,https://www.alignmentforum.org/posts/fj8faDDQEfvN2LQcW/conditioning-predictive-models-the-case-for-competitiveness,Conditioning Predictive Models: The case for competitiveness,"['evhub', 'Adam Jermyn', 'Johannes Treutlein', 'Rubi J. Hudson', 'kcwoolverton']",2023-02-06T20:08:55Z,alignmentforum,, 53032,https://www.alignmentforum.org/posts/mTi8TQEyP5Pr7oczd/machine-unlearning-evaluations-as-interpretability,Machine Unlearning Evaluations as Interpretability Benchmarks,"['NickyP', 'Nandi']",2023-10-23T16:33:05Z,alignmentforum,, 53048,https://www.alignmentforum.org/posts/gwG9uqw255gafjYN4/eis-iii-broad-critiques-of-interpretability-research,EIS III: Broad Critiques of Interpretability Research,['scasper'],2023-02-14T18:24:44Z,alignmentforum,, 53074,https://www.alignmentforum.org/posts/FgjcHiWvADgsocE34/a-descriptive-not-prescriptive-overview-of-current-ai,"A descriptive, not prescriptive, overview of current AI Alignment Research","['Jan', 'Logan Riggs', 'jacquesthibs', 'janus']",2022-06-06T21:59:22Z,alignmentforum,, 53094,https://www.alignmentforum.org/posts/uFNgRumrDTpBfQGrs/let-s-think-about-slowing-down-ai,Let’s think about slowing down AI,['KatjaGrace'],2022-12-22T17:40:05Z,alignmentforum,, 53123,https://www.alignmentforum.org/posts/5msxxQiTDmcDNBnkF/a-push-towards-interactive-transformer-decoding,A push towards interactive transformer decoding,['R0bk'],2023-05-31T17:56:59Z,alignmentforum,, 53142,https://www.alignmentforum.org/posts/YDF7XhMThhNfHfim9/ai-safety-needs-great-engineers,AI Safety Needs Great Engineers,['Andy Jones'],2021-11-23T15:40:18Z,alignmentforum,, 53161,https://www.alignmentforum.org/posts/zRn6cLtxyNodudzhw/visible-thoughts-project-and-bounty-announcement,Visible Thoughts Project and Bounty Announcement,['So8res'],2021-11-30T00:19:08Z,alignmentforum,, 53171,https://www.alignmentforum.org/posts/xPrtsLAyMcZkkszvw/ethan-perez-on-the-inverse-scaling-prize-language-feedback,"Ethan Perez on the Inverse Scaling Prize, Language Feedback and Red Teaming",['Michaël Trazzi'],2022-08-24T16:35:43Z,alignmentforum,, 
53199,https://www.alignmentforum.org/posts/NqsNYsyoA2YSbb3py/fundamental-question-what-determines-a-mind-s-effects,Fundamental question: What determines a mind's effects?,['TsviBT'],2023-09-03T17:15:42Z,alignmentforum,, 53214,https://www.alignmentforum.org/posts/YNuJjRuxsWWzfvder/recursive-quantilizers-ii,Recursive Quantilizers II,['abramdemski'],2020-12-02T15:26:30Z,alignmentforum,, 53248,https://www.alignmentforum.org/posts/v6Q7T335KCMxujhZu/clarifying-what-failure-looks-like,Clarifying “What failure looks like”,['Sam Clarke'],2020-09-20T20:40:48Z,alignmentforum,, 53277,https://www.alignmentforum.org/posts/7GEviErBXcjJsbSeD/ai-alignment-research-overview-by-jacob-steinhardt,AI Alignment Research Overview (by Jacob Steinhardt),['Ben Pace'],2019-11-06T19:24:50Z,alignmentforum,, 53304,https://www.alignmentforum.org/posts/Xd9FLs4geRAWxkQPE/writing-causal-models-like-we-write-programs,Writing Causal Models Like We Write Programs,['johnswentworth'],2020-05-05T18:05:38Z,alignmentforum,, 53319,https://www.alignmentforum.org/posts/pWRRBtLSncELQLfrg/disentangling-inner-alignment-failures,Disentangling inner alignment failures,['Erik Jenner'],2022-10-10T18:50:30Z,alignmentforum,, 53340,https://www.alignmentforum.org/posts/Hpam4RrJKfufXrmAi/anthropics-over-simplified-it-s-about-priors-not-updates,"Anthropics over-simplified: it's about priors, not updates",['Stuart_Armstrong'],2020-03-02T13:45:12Z,alignmentforum,, 53355,https://www.alignmentforum.org/posts/8mizBCm3dyc432nK8/residual-stream-norms-grow-exponentially-over-the-forward,Residual stream norms grow exponentially over the forward pass,"['StefanHex', 'TurnTrout']",2023-05-07T00:46:03Z,alignmentforum,, 53373,https://www.alignmentforum.org/posts/6qrLfAG7mDTyrHmh7/time-is-homogeneous-sequentially-composable-determination,Time is homogeneous sequentially-composable determination,['TsviBT'],2023-10-08T14:58:16Z,alignmentforum,, 53390,https://www.alignmentforum.org/posts/n3YRDJYCnQcDAw29G/verification-and-transparency,Verification and Transparency,['DanielFilan'],2019-08-08T01:50:01Z,alignmentforum,, 53405,https://www.alignmentforum.org/posts/aKpqwtZN6ifAhqJYK/what-success-looks-like,What success looks like,"['Marius Hobbhahn', 'MaxRa', 'JasperGeh', 'Yannick_Muehlhaeuser']",2022-06-28T14:38:43Z,alignmentforum,, 53430,https://www.alignmentforum.org/posts/jtK7FpsqpboAfr7Td/conjecture-second-hiring-round,Conjecture Second Hiring Round,"['Connor Leahy', 'Sid Black', 'Gabriel Alfour', 'Chris Scammell']",2022-11-23T17:11:43Z,alignmentforum,, 53445,https://www.alignmentforum.org/posts/FsxPNRJ5NQkrSKyDx/preferences-from-real-and-hypothetical-psychology-papers,Preferences from (real and hypothetical) psychology papers,['Stuart_Armstrong'],2021-10-06T09:06:08Z,alignmentforum,, 53455,https://www.alignmentforum.org/posts/h7QETH7GMk9HcMnHH/the-no-sandbagging-on-checkable-tasks-hypothesis,The “no sandbagging on checkable tasks” hypothesis,['Joe Carlsmith'],2023-07-31T23:06:03Z,alignmentforum,, 53474,https://www.alignmentforum.org/posts/dQvxMZkfgqGitWdkb/can-we-efficiently-explain-model-behaviors,Can we efficiently explain model behaviors?,['paulfchristiano'],2022-12-16T19:40:06Z,alignmentforum,, 53492,https://www.alignmentforum.org/posts/sCCdCLPN9E3YvdZhj/shulman-and-yudkowsky-on-ai-progress,Shulman and Yudkowsky on AI progress,"['Eliezer Yudkowsky', 'CarlShulman']",2021-12-03T20:05:23Z,alignmentforum,, 
53515,https://www.alignmentforum.org/posts/ptmmK9PWgYTuWToaZ/what-i-ll-be-doing-at-miri,What I’ll be doing at MIRI,['evhub'],2019-11-12T23:19:16Z,alignmentforum,, 53524,https://www.alignmentforum.org/posts/sEyWufriufTnBKnTG/incidental-polysemanticity,Incidental polysemanticity,"['Victor Lecomte', 'Kushal Thaman', 'tmychow', 'Rylan Schaeffer']",2023-11-15T04:00:00Z,alignmentforum,, 53538,https://www.alignmentforum.org/posts/qnA6paRwMky3Q6ktk/modelling-transformative-ai-risks-mtair-project-introduction,Modelling Transformative AI Risks (MTAIR) Project: Introduction,"['Davidmanheim', 'Aryeh Englander']",2021-08-16T07:12:22Z,alignmentforum,, 53553,https://www.alignmentforum.org/posts/4ztqncYBakD6DWuXC/an-llm-based-exemplary-actor,An LLM-based “exemplary actor”,['Roman Leventov'],2023-05-29T11:12:51Z,alignmentforum,, 53587,https://www.alignmentforum.org/posts/Rcwv6SPsmhkgzfkDw/edt-solves-5-and-10-with-conditional-oracles,EDT solves 5 and 10 with conditional oracles,['jessicata'],2018-09-30T07:57:35Z,alignmentforum,, 53602,https://www.alignmentforum.org/posts/5sRK4rXH2EeSQJCau/corrigibility-at-some-small-length-by-dath-ilan,"""Corrigibility at some small length"" by dath ilan",['Christopher King'],2023-04-05T01:47:23Z,alignmentforum,, 53648,https://www.alignmentforum.org/posts/jotEekAQxmrwcMf9e/sydney-ai-safety-fellowship,Sydney AI Safety Fellowship,['Chris_Leong'],2021-12-02T07:34:29Z,alignmentforum,, 53657,https://www.alignmentforum.org/posts/cnC2RMWEGiGpJv8go/model-mis-specification-and-inverse-reinforcement-learning,Model Mis-specification and Inverse Reinforcement Learning,"['Owain_Evans', 'jsteinhardt']",2018-11-09T15:33:03Z,alignmentforum,, 53688,https://www.alignmentforum.org/posts/QgZAbFHtgSGjx4aTS/a-proof-of-inner-loeb-s-theorem,A proof of inner Löb's theorem,['James Payor'],2023-02-21T21:11:41Z,alignmentforum,, 53698,https://www.alignmentforum.org/posts/FM49gHBrs5GTx7wFf/rogue-agi-embodies-valuable-intellectual-property,Rogue AGI Embodies Valuable Intellectual Property,"['Mark Xu', 'CarlShulman']",2021-06-03T20:37:31Z,alignmentforum,, 53718,https://www.alignmentforum.org/posts/K4FrKRTrmyxrw5Dip/formalizing-objections-against-surrogate-goals,Formalizing Objections against Surrogate Goals,['VojtaKovarik'],2021-09-02T16:24:40Z,alignmentforum,, 53740,https://www.alignmentforum.org/posts/NuhsBLxxswinm2JKZ/what-is-the-alternative-to-intent-alignment-called,What is the alternative to intent alignment called?,['Richard_Ngo'],2020-04-30T02:16:03Z,alignmentforum,, 53749,https://www.alignmentforum.org/posts/5uiQkyKdejX3aEHLM/how-to-diversify-conceptual-alignment-the-model-behind,How to Diversify Conceptual Alignment: the Model Behind Refine,['adamShimi'],2022-07-20T10:44:03Z,alignmentforum,, 53771,https://www.alignmentforum.org/posts/BCyK2GQKiiuYdvkst/it-matters-when-the-first-sharp-left-turn-happens,It matters when the first sharp left turn happens,['Adam Jermyn'],2022-09-29T20:12:17Z,alignmentforum,, 53790,https://www.alignmentforum.org/posts/5YDczJcLZ6RmN5SSz/beren-s-deconfusing-direct-vs-amortised-optimisation-2,"Beren's ""Deconfusing Direct vs Amortised Optimisation""",['DragonGod'],2023-04-07T08:58:00Z,alignmentforum,, 53806,https://www.alignmentforum.org/posts/ZZDHoqpHmChxEYMme/an-127-rethinking-agency-cartesian-frames-as-a-formalization,[AN #127]: Rethinking agency: Cartesian frames as a formalization of ways to carve up the world into an agent and its environment,['Rohin Shah'],2020-12-02T18:20:05Z,alignmentforum,, 
53826,https://www.alignmentforum.org/posts/8vLvpxzpc6ntfBWNo/seri-ml-alignment-theory-scholars-program-2022,SERI ML Alignment Theory Scholars Program 2022,"['Ryan Kidd', 'Victor Warlop', 'ozhang']",2022-04-27T00:43:38Z,alignmentforum,, 53841,https://www.alignmentforum.org/posts/ZWhJcHPmRaXAPAK5k/probabilistic-payor-lemma,Probabilistic Payor Lemma?,['abramdemski'],2023-03-19T17:57:04Z,alignmentforum,, 53860,https://www.alignmentforum.org/posts/MMAK6eeMCH3JGuqeZ/everything-i-need-to-know-about-takeoff-speeds-i-learned,Everything I Need To Know About Takeoff Speeds I Learned From Air Conditioner Ratings On Amazon,['johnswentworth'],2022-04-15T19:05:46Z,alignmentforum,, 53879,https://www.alignmentforum.org/posts/eXNy48LxxfgETdtYB/comparing-ai-alignment-approaches-to-minimize-false-positive,Comparing AI Alignment Approaches to Minimize False Positive Risk,['Gordon Seidoh Worley'],2020-06-30T19:34:57Z,alignmentforum,, 53898,https://www.alignmentforum.org/posts/vX7KirQwHsBaSEdfK/what-is-narrow-value-learning,What is narrow value learning?,['Rohin Shah'],2019-01-10T07:05:30Z,alignmentforum,, 53917,https://www.alignmentforum.org/posts/iNFZG4d9W848zsgch/the-goldbach-conjecture-is-probably-correct-so-was-fermat-s,The Goldbach conjecture is probably correct; so was Fermat's last theorem,['Stuart_Armstrong'],2020-07-14T19:30:15Z,alignmentforum,, 53926,https://www.alignmentforum.org/posts/4eZtmwaqhAgdJQDEg/dslt-1-the-rlct-measures-the-effective-dimension-of-neural,DSLT 1. The RLCT Measures the Effective Dimension of Neural Networks,['Liam Carroll'],2023-06-16T09:50:10Z,alignmentforum,, 53941,https://www.alignmentforum.org/posts/DFkGStzvj3jgXibFG/factored-cognition,Factored Cognition,['stuhlmueller'],2018-12-05T01:01:44Z,alignmentforum,, 53959,https://www.alignmentforum.org/posts/hNNM6gP5yZcHffmpD/the-case-for-a-journal-of-ai-alignment,The Case for a Journal of AI Alignment,['adamShimi'],2021-01-09T18:13:28Z,alignmentforum,, 53974,https://www.alignmentforum.org/posts/PADPJ3xac5ogjEGwA/defeating-goodhart-and-the-closest-unblocked-strategy,"Defeating Goodhart and the ""closest unblocked strategy"" problem",['Stuart_Armstrong'],2019-04-03T14:46:42Z,alignmentforum,, 53991,https://www.alignmentforum.org/posts/LCLBnmwdxkkz5fNvH/open-problems-with-myopia,Open Problems with Myopia,"['Mark Xu', 'evhub']",2021-03-10T18:38:09Z,alignmentforum,, 54009,https://www.alignmentforum.org/posts/EeAgytDZbDjRznPMA/gradient-hacking-definitions-and-examples,Gradient hacking: definitions and examples,['Richard_Ngo'],2022-06-29T21:35:37Z,alignmentforum,, 54035,https://www.alignmentforum.org/posts/hBJCMWELaW6MxinYW/intertheoretic-utility-comparison,Intertheoretic utility comparison,['Stuart_Armstrong'],2018-07-03T13:44:18Z,alignmentforum,, 54047,https://www.alignmentforum.org/posts/2SrzejmaxnwJBNkFE/a-possible-preference-algorithm,A possible preference algorithm,['Stuart_Armstrong'],2021-04-08T18:25:26Z,alignmentforum,, 54062,https://www.alignmentforum.org/posts/bd2K3Jdz82csjCFob/a-list-of-good-heuristics-that-the-case-for-ai-x-risk-fails,A list of good heuristics that the case for AI x-risk fails,['David Scott Krueger (formerly: capybaralet)'],2019-12-02T19:26:29Z,alignmentforum,, 54072,https://www.alignmentforum.org/posts/xRyLxfytmLFZ6qz5s/the-theory-practice-gap,The theory-practice gap,['Buck'],2021-09-17T22:51:46Z,alignmentforum,, 54095,https://www.alignmentforum.org/posts/L4e7CqqpDxea2x4Gg/disentangling-shard-theory-into-atomic-claims,Disentangling Shard Theory into Atomic Claims,['Leon Lang'],2023-01-13T04:23:52Z,alignmentforum,, 54125,https://www.alignmentforum.org/posts/3zmKzbMPjPvEcZfkn/an-150-the-subtypes-of-cooperative-ai-research,[AN #150]: The subtypes of Cooperative AI research,['Rohin 
Shah'],2021-05-12T17:20:27Z,alignmentforum,, 54148,https://www.alignmentforum.org/posts/3ejHFgQihLG4L6WQf/announcing-the-alignment-research-center,Announcing the Alignment Research Center,['paulfchristiano'],2021-04-26T23:30:03Z,alignmentforum,, 54160,https://www.alignmentforum.org/posts/D8ds9idKWbwzCseCh/zero-sum-is-a-misnomer,"""Zero Sum"" is a misnomer.",['abramdemski'],2020-09-30T18:25:31Z,alignmentforum,, 54170,https://www.alignmentforum.org/posts/LRYwpq8i9ym7Wuyoc/other-versions-of-no-free-lunch-in-value-learning,"Other versions of ""No free lunch in value learning""",['Stuart_Armstrong'],2020-02-25T14:25:01Z,alignmentforum,, 54184,https://www.alignmentforum.org/posts/jGB7Pd5q8ivBor8Ee/impact-measurement-and-value-neutrality-verification-1,Impact measurement and value-neutrality verification,['evhub'],2019-10-15T00:06:52Z,alignmentforum,, 54201,https://www.alignmentforum.org/posts/eeEEgNeTepZb6F6NF/scalar-reward-is-not-enough-for-aligned-agi,Scalar reward is not enough for aligned AGI,['Peter Vamplew'],2022-01-17T21:02:16Z,alignmentforum,, 54223,https://www.alignmentforum.org/posts/2Xfv3GQgo2kGER8vA/alignment-research-conceptual-alignment-research-applied,Alignment Research = Conceptual Alignment Research + Applied Alignment Research,['adamShimi'],2021-08-30T21:13:58Z,alignmentforum,, 54239,https://www.alignmentforum.org/posts/yEa7kwoMpsBgaBCgb/towards-a-new-impact-measure,Towards a New Impact Measure,['TurnTrout'],2018-09-18T17:21:34Z,alignmentforum,, 54263,https://www.alignmentforum.org/posts/Qsay72ct2KTmJ2hxc/forecasting-ai-progress-a-research-agenda,Forecasting AI Progress: A Research Agenda,"['rossg', 'axioman']",2020-08-10T01:04:21Z,alignmentforum,, 54272,https://www.alignmentforum.org/posts/fFF3G4W8FbXigS4gr/cognitive-biases-in-large-language-models,Cognitive Biases in Large Language Models,['Jan'],2021-09-25T20:59:52Z,alignmentforum,, 54308,https://www.alignmentforum.org/posts/RxutizkDNKzYCcNRv/does-iterated-amplification-tackle-the-inner-alignment,Does iterated amplification tackle the inner alignment problem?,['JanBrauner'],2020-02-15T12:58:03Z,alignmentforum,, 54318,https://www.alignmentforum.org/posts/mQqFNbvD5mYrCQPKE/an-73-detecting-catastrophic-failures-by-learning-how-agents,[AN #73]: Detecting catastrophic failures by learning how agents tend to break,['Rohin Shah'],2019-11-13T18:10:02Z,alignmentforum,, 54333,https://www.alignmentforum.org/posts/6TxmJRDGzDbwcLE3w/on-agent-incentives-to-manipulate-human-feedback-in-multi,On Agent Incentives to Manipulate Human Feedback in Multi-Agent Reward Learning Scenarios,['Francis Rhys Ward'],2022-04-03T18:20:37Z,alignmentforum,, 54346,https://www.alignmentforum.org/posts/srTD8DgTCR27udzoe/multiplicative-operations-on-cartesian-frames,Multiplicative Operations on Cartesian Frames,['Scott Garrabrant'],2020-11-03T19:27:15Z,alignmentforum,, 54367,https://www.alignmentforum.org/posts/XkXL96H6GknCbT5QH/mdp-models-are-determined-by-the-agent-architecture-and-the,MDP models are determined by the agent architecture and the environmental dynamics,['TurnTrout'],2021-05-26T00:14:01Z,alignmentforum,, 54384,https://www.alignmentforum.org/posts/LFNXiQuGrar3duBzJ/what-does-it-take-to-defend-the-world-against-out-of-control,What does it take to defend the world against out-of-control AGIs?,['Steven Byrnes'],2022-10-25T14:47:42Z,alignmentforum,, 54420,https://www.alignmentforum.org/posts/qunrsimS2cECxyCKy/link-why-i-m-excited-about-ai-assisted-human-feedback,[Link] Why I’m excited about AI-assisted human 
feedback,['janleike'],2022-04-06T15:37:58Z,alignmentforum,, 54429,https://www.alignmentforum.org/posts/iHmsJdxgMEWmAfNne/red-teaming-language-models-via-activation-engineering,Red-teaming language models via activation engineering,['Nina Rimsky'],2023-08-26T05:52:01Z,alignmentforum,, 54442,https://www.alignmentforum.org/posts/yCuzmCsE86BTu9PfA/there-are-no-coherence-theorems,There are no coherence theorems,"['Dan H', 'EJT']",2023-02-20T21:25:48Z,alignmentforum,, 54460,https://www.alignmentforum.org/posts/4BpeHPXMjRzopgAZd/mosaic-and-palimpsests-two-shapes-of-research,Mosaic and Palimpsests: Two Shapes of Research,['adamShimi'],2022-07-12T09:05:29Z,alignmentforum,, 54479,https://www.alignmentforum.org/posts/xzFQp7bmkoKfnae9R/but-exactly-how-complex-and-fragile,But exactly how complex and fragile?,['KatjaGrace'],2019-11-03T18:20:01Z,alignmentforum,, 54499,https://www.alignmentforum.org/posts/5bd75cc58225bf06703751eb/modeling-the-capabilities-of-advanced-ai-systems-as-episodic-reinforcement-learning,Modeling the capabilities of advanced AI systems as episodic reinforcement learning,['jessicata'],2016-08-19T02:52:13Z,alignmentforum,, 54521,https://www.alignmentforum.org/posts/xFotXGEotcKouifky/worlds-where-iterative-design-fails,Worlds Where Iterative Design Fails,['johnswentworth'],2022-08-30T20:48:29Z,alignmentforum,, 54551,https://www.alignmentforum.org/posts/pFXEG9C5m2X5h2yiq/drug-addicts-and-deceptively-aligned-agents-a-comparative,Drug addicts and deceptively aligned agents - a comparative analysis,['Jan'],2021-11-05T21:42:49Z,alignmentforum,, 54573,https://www.alignmentforum.org/posts/8Be6ZQvzhsigt5fkk/a-reply-to-byrnes-on-the-free-energy-principle,A reply to Byrnes on the Free Energy Principle,['Roman Leventov'],2023-03-03T13:03:49Z,alignmentforum,, 54601,https://www.alignmentforum.org/posts/Brr84ZmvK3kwy2eGJ/truthfulness-standards-and-credibility,"Truthfulness, standards and credibility",['Joe_Collman'],2022-04-07T10:31:53Z,alignmentforum,, 54623,https://www.alignmentforum.org/posts/nwjtqoox7JAcvMynx/difficulties-in-making-powerful-aligned-ai,Difficulties in making powerful aligned AI,['DanielFilan'],2023-05-14T20:50:05Z,alignmentforum,, 54650,https://www.alignmentforum.org/posts/jFCK9JRLwkoJX4aJA/don-t-design-agents-which-exploit-adversarial-inputs,Don't design agents which exploit adversarial inputs,"['TurnTrout', 'Garrett Baker']",2022-11-18T01:48:38Z,alignmentforum,, 54665,https://www.alignmentforum.org/posts/kwpvEpDXsivbmdYhr/transformer-inductive-biases-and-rasp,Transformer inductive biases & RASP,['Vivek Hebbar'],2022-02-24T00:42:30Z,alignmentforum,, 54681,https://www.alignmentforum.org/posts/pRkFkzwKZ2zfa3R6H/without-specific-countermeasures-the-easiest-path-to,"Without specific countermeasures, the easiest path to transformative AI likely leads to AI takeover",['Ajeya Cotra'],2022-07-18T19:06:15Z,alignmentforum,, 54708,https://www.alignmentforum.org/posts/rYdRiaA3cuioJxmBv/speculative-inferences-about-path-dependence-in-llm,Speculative inferences about path dependence in LLM supervised fine-tuning from results on linear mode connectivity and model souping,['RobertKirk'],2023-07-20T09:56:06Z,alignmentforum,, 54730,https://www.alignmentforum.org/posts/bP6KA2JJQMke8H4Au/an-112-engineering-a-safer-world,[AN #112]: Engineering a Safer World,['Rohin Shah'],2020-08-13T17:20:04Z,alignmentforum,, 54756,https://www.alignmentforum.org/posts/f6oWbqxEwktfPrKJw/computing-natural-abstractions-linear-approximation,Computing Natural Abstractions: Linear 
Approximation,['johnswentworth'],2021-04-15T17:47:10Z,alignmentforum,, 54766,https://www.alignmentforum.org/posts/HdA2nDKQ5FJtnTuxP/wildfire-of-strategicness,Wildfire of strategicness,['TsviBT'],2023-06-05T13:59:17Z,alignmentforum,, 54783,https://www.alignmentforum.org/posts/6DwprCdC7eErCRZkx/reading-the-ethicists-2-hunting-for-ai-alignment-papers,Reading the ethicists 2: Hunting for AI alignment papers,['Charlie Steiner'],2022-06-06T15:49:03Z,alignmentforum,, 54810,https://www.alignmentforum.org/posts/PDLfpRwSynu73mxGw/basic-facts-about-language-model-internals-1,Basic Facts about Language Model Internals,"['beren', 'Eric Winsor']",2023-01-04T13:01:35Z,alignmentforum,, 54822,https://www.alignmentforum.org/posts/YLoXcquNkNdsteZYd/knowledge-is-not-just-map-territory-resemblance,Knowledge is not just map/territory resemblance,['Alex Flint'],2021-05-25T17:58:09Z,alignmentforum,, 54831,https://www.alignmentforum.org/posts/HaHcsrDSZ3ZC2b4fK/world-model-interpretability-is-all-we-need,World-Model Interpretability Is All We Need,['Thane Ruthenis'],2023-01-14T19:37:15Z,alignmentforum,, 54857,https://www.alignmentforum.org/posts/yLTpo828duFQqPJfy/builder-breaker-for-deconfusion,Builder/Breaker for Deconfusion,['abramdemski'],2022-09-29T17:36:38Z,alignmentforum,, 54877,https://www.alignmentforum.org/posts/Fut8dtFsBYRz8atFF/the-natural-abstraction-hypothesis-implications-and-evidence,The Natural Abstraction Hypothesis: Implications and Evidence,['TheMcDouglas'],2021-12-14T23:14:25Z,alignmentforum,, 54908,https://www.alignmentforum.org/posts/gRp6FAWcQiCWkouN5/maze-solving-agents-add-a-top-right-vector-make-the-agent-go,"Maze-solving agents: Add a top-right vector, make the agent go to the top-right","['TurnTrout', 'peligrietzer', 'lisathiergart']",2023-03-31T19:20:49Z,alignmentforum,, 54925,https://www.alignmentforum.org/posts/tmyTb4bQQi7C47sde/safety-capabilities-tradeoff-dials-are-inevitable-in-agi,Safety-capabilities tradeoff dials are inevitable in AGI,['Steven Byrnes'],2021-10-07T19:03:22Z,alignmentforum,, 54947,https://www.alignmentforum.org/posts/ANupXf8XfZo2EJxGv/humans-can-be-assigned-any-values-whatsoever,Humans can be assigned any values whatsoever…,['Stuart_Armstrong'],2018-11-05T14:26:41Z,alignmentforum,, 54958,https://www.alignmentforum.org/posts/wgdfBtLmByaKYovYe/what-does-it-mean-to-apply-decision-theory,What does it mean to apply decision theory?,['abramdemski'],2020-07-08T20:31:06Z,alignmentforum,, 54972,https://www.alignmentforum.org/posts/gEKHX8WKrXGM4roRC/saving-time,Saving Time,['Scott Garrabrant'],2021-05-18T20:11:15Z,alignmentforum,, 54988,https://www.alignmentforum.org/posts/MMvbNuAis3SSphu7D/an-59-how-arguments-for-ai-risk-have-changed-over-time,[AN #59] How arguments for AI risk have changed over time,['Rohin Shah'],2019-07-08T17:20:02Z,alignmentforum,, 55010,https://www.alignmentforum.org/posts/AR6mfydDJiGksj6Co/encultured-ai-pre-planning-part-1-enabling-new-benchmarks,"Encultured AI Pre-planning, Part 1: Enabling New Benchmarks","['Andrew_Critch', 'Nick Hay']",2022-08-08T22:44:09Z,alignmentforum,, 55030,https://www.alignmentforum.org/posts/pGXR2ynhe5bBCCNqn/takeoff-speeds-and-discontinuities,Takeoff Speeds and Discontinuities,"['Sammy Martin', 'Daniel_Eth']",2021-09-30T13:50:35Z,alignmentforum,, 55053,https://www.alignmentforum.org/posts/gHefoxiznGfsbiAu9/inner-and-outer-alignment-decompose-one-hard-problem-into,Inner and outer alignment decompose one hard problem into two extremely hard problems,['TurnTrout'],2022-12-02T02:43:21Z,alignmentforum,, 
55077,https://www.alignmentforum.org/posts/3RSq3bfnzuL3sp46J/acausal-normalcy,Acausal normalcy,['Andrew_Critch'],2023-03-03T23:34:34Z,alignmentforum,, 55097,https://www.alignmentforum.org/posts/owdBiF8pj6Lpwwdup/addressing-three-problems-with-counterfactual-corrigibility,"Addressing three problems with counterfactual corrigibility: bad bets, defending against backstops, and overconfidence.",['RyanCarey'],2018-10-21T12:03:12Z,alignmentforum,, 55115,https://www.alignmentforum.org/posts/AfH2oPHCApdKicM4m/two-year-update-on-my-personal-ai-timelines,Two-year update on my personal AI timelines,['Ajeya Cotra'],2022-08-02T23:07:49Z,alignmentforum,, 55138,https://www.alignmentforum.org/posts/LAR2ajpFDueNg45Mk/what-to-do-with-imitation-humans-other-than-asking-them-what,"What to do with imitation humans, other than asking them what the right thing to do is?",['Charlie Steiner'],2020-09-27T21:51:37Z,alignmentforum,, 55159,https://www.alignmentforum.org/posts/jyeYdBXAwsc4LPs7m/logical-uncertainty-and-functional-decision-theory,Logical Uncertainty and Functional Decision Theory,['swordsintoploughshares'],2018-07-10T23:08:10Z,alignmentforum,, 55174,https://www.alignmentforum.org/posts/EEPdbtvW8ei9Yi2e8/bridging-syntax-and-semantics-empirically,"Bridging syntax and semantics, empirically",['Stuart_Armstrong'],2018-09-19T16:48:32Z,alignmentforum,, 55191,https://www.alignmentforum.org/posts/jxE47v3aezEYaydg9/compositional-language-for-hypotheses-about-computations,Compositional language for hypotheses about computations,['Vanessa Kosoy'],2023-03-11T19:43:40Z,alignmentforum,, 55215,https://www.alignmentforum.org/posts/hzeLSQ9nwDkPc4KNt/seeking-power-is-convergently-instrumental-in-a-broad-class,Seeking Power is Convergently Instrumental in a Broad Class of Environments,['TurnTrout'],2021-08-08T02:02:19Z,alignmentforum,, 55242,https://www.alignmentforum.org/posts/RQn45KzN5cojLLb3L/why-you-might-expect-homogeneous-take-off-evidence-from-ml,Why you might expect homogeneous take-off: evidence from ML research,['Andrei Alexandru'],2022-07-17T20:31:19Z,alignmentforum,, 55259,https://www.alignmentforum.org/posts/WbdLYgbpxfrSXCBS6/excessive-ai-growth-rate-yields-little-socio-economic,Excessive AI growth-rate yields little socio-economic benefit.,['Cleo Nardo'],2023-04-04T19:13:51Z,alignmentforum,, 55273,https://www.alignmentforum.org/posts/95i5B78uhqyB3d6Xc/assuming-we-ve-solved-x-could-we-do-y,"Assuming we've solved X, could we do Y...",['Stuart_Armstrong'],2018-12-11T18:13:56Z,alignmentforum,, 55286,https://www.alignmentforum.org/posts/upYKjwjC67ovWKKMo/ai-benefits-post-4-outstanding-questions-on-selecting,AI Benefits Post 4: Outstanding Questions on Selecting Benefits,['Cullen'],2020-07-14T17:26:13Z,alignmentforum,, 55312,https://www.alignmentforum.org/posts/Hi7zurzkCog336EC2/plan-for-mediocre-alignment-of-brain-like-model-based-rl-agi,Plan for mediocre alignment of brain-like [model-based RL] AGI,['Steven Byrnes'],2023-03-13T14:11:33Z,alignmentforum,, 55334,https://www.alignmentforum.org/posts/jNrDzyc8PJ9HXtGFm/supervised-learning-of-outputs-in-the-brain,Supervised learning of outputs in the brain,['Steven Byrnes'],2020-10-26T14:32:54Z,alignmentforum,, 55351,https://www.alignmentforum.org/posts/wNrbHbhgPJBD2d9v6/language-models-are-a-potentially-safe-path-to-human-level,Language Models are a Potentially Safe Path to Human-Level AGI,['Nadav Brandes'],2023-04-20T00:40:16Z,alignmentforum,, 
55373,https://www.alignmentforum.org/posts/MbWWKbyD5gLhJgfwn/meta-level-adversarial-evaluation-of-oversight-techniques-1,Meta-level adversarial evaluation of oversight techniques might allow robust measurement of their adequacy,"['Buck', 'ryan_greenblatt']",2023-07-26T17:02:56Z,alignmentforum,, 55397,https://www.alignmentforum.org/posts/jfq2BH5kfQqu2vYv3/we-are-conjecture-a-new-alignment-research-startup,"We Are Conjecture, A New Alignment Research Startup",['Connor Leahy'],2022-04-08T11:40:14Z,alignmentforum,, 55421,https://www.alignmentforum.org/posts/idcnnZGEPfxuaSPBx/the-polarity-problem-draft,The Polarity Problem [Draft],"['Dan H', 'cdkg', 'Simon Goldstein']",2023-05-23T21:05:35Z,alignmentforum,, 55457,https://www.alignmentforum.org/posts/rt5X74Az3mXwTubRA/trace-goals-and-principles,Trace: Goals and Principles,['johnswentworth'],2020-02-28T23:50:13Z,alignmentforum,, 55474,https://www.alignmentforum.org/posts/mF8dkhZF9hAuLHXaD/reshaping-the-ai-industry,Reshaping the AI Industry,['Thane Ruthenis'],2022-05-29T22:54:32Z,alignmentforum,, 55502,https://www.alignmentforum.org/posts/ukZuzb8JpYiFLoord/why-you-should-minimax-in-two-player-zero-sum-games,Why you should minimax in two-player zero-sum games,['Nisan'],2020-05-17T20:48:04Z,alignmentforum,, 55511,https://www.alignmentforum.org/posts/YSFJosoHYFyXjoYWa/why-neural-networks-generalise-and-why-they-are-kind-of,"Why Neural Networks Generalise, and Why They Are (Kind of) Bayesian",['Joar Skalse'],2020-12-29T13:33:53Z,alignmentforum,, 55524,https://www.alignmentforum.org/posts/2eRgFFeeS7pR4R8nD/how-complex-are-myopic-imitators-1,How complex are myopic imitators?,['Vivek Hebbar'],2022-02-08T12:00:01Z,alignmentforum,, 55544,https://www.alignmentforum.org/posts/uhxpJyGYQ5FQRvdjY/abstracting-the-hardness-of-alignment-unbounded-atomic,Abstracting The Hardness of Alignment: Unbounded Atomic Optimization,['adamShimi'],2022-07-29T18:59:49Z,alignmentforum,, 55568,https://www.alignmentforum.org/posts/9ekP8FojvLa8Pr6P7/proofs-section-2-3-updates-decision-theory,"Proofs Section 2.3 (Updates, Decision Theory)",['Diffractor'],2020-08-27T07:49:05Z,alignmentforum,, 55591,https://www.alignmentforum.org/posts/Zd5Bsra7ar2pa3bwS/probability-theory-and-logical-induction-as-lenses,Probability theory and logical induction as lenses,['Alex Flint'],2021-04-23T02:41:25Z,alignmentforum,, 55607,https://www.alignmentforum.org/posts/u4mFjCPviXHAPjZK7/measurement-optimization-and-take-off-speed,"Measurement, Optimization, and Take-off Speed",['jsteinhardt'],2021-09-10T19:30:57Z,alignmentforum,, 55631,https://www.alignmentforum.org/posts/2KLz6RQWkCj4Rozrk/is-my-result-wrong-maths-vs-intuition-vs-evolution-in,Is my result wrong? 
Maths vs intuition vs evolution in learning human preferences,['Stuart_Armstrong'],2019-09-10T00:46:25Z,alignmentforum,, 55653,https://www.alignmentforum.org/posts/5MHxjWgwWEoMrPXj8/jobs-help-scale-up-lm-alignment-research-at-nyu,Jobs: Help scale up LM alignment research at NYU,['Sam Bowman'],2022-05-09T14:12:23Z,alignmentforum,, 55668,https://www.alignmentforum.org/posts/mvGNKQ6iSDf3d4gCi/importance-of-foresight-evaluations-within-elk,Importance of foresight evaluations within ELK,['Jonathan Uesato'],2022-01-06T15:34:01Z,alignmentforum,, 55689,https://www.alignmentforum.org/posts/REBFQF43nwcJgp8Ge/trends-in-the-dollar-training-cost-of-machine-learning-1,Trends in the dollar training cost of machine learning systems,['Ben Cottier'],2023-02-01T14:48:54Z,alignmentforum,, 55702,https://www.alignmentforum.org/posts/zbFGRRWPwxBwjwknY/how-to-do-theoretical-research-a-personal-perspective-1,"How to do theoretical research, a personal perspective",['Mark Xu'],2022-08-19T19:41:22Z,alignmentforum,, 55721,https://www.alignmentforum.org/posts/YKBbqMXSKetTQ2HBW/hypotheses-about-finding-knowledge-and-one-shot-causal,Hypotheses about Finding Knowledge and One-Shot Causal Entanglements,['Jemist'],2021-12-01T17:01:44Z,alignmentforum,, 55740,https://www.alignmentforum.org/posts/A8iGaZ3uHNNGgJeaD/an-orthodox-case-against-utility-functions,An Orthodox Case Against Utility Functions,['abramdemski'],2020-04-07T19:18:12Z,alignmentforum,, 55757,https://www.alignmentforum.org/posts/cYduioQNeHALQAMre/what-are-the-differences-between-all-the-iterative-recursive,What are the differences between all the iterative/recursive approaches to AI alignment?,['riceissa'],2019-09-21T02:09:13Z,alignmentforum,, 55781,https://www.alignmentforum.org/posts/7d2PsdHXrJnbofrvF/an-90-how-search-landscapes-can-contain-self-reinforcing,[AN #90]: How search landscapes can contain self-reinforcing feedback loops,['Rohin Shah'],2020-03-11T17:30:02Z,alignmentforum,, 55804,https://www.alignmentforum.org/posts/qyyo2efuwWkyR62fB/gpt-3-and-concept-extrapolation,GPT-3 and concept extrapolation,['Stuart_Armstrong'],2022-04-20T10:39:29Z,alignmentforum,, 55814,https://www.alignmentforum.org/posts/qE73pqxAZmeACsAdF/a-short-introduction-to-machine-learning,A short introduction to machine learning,['Richard_Ngo'],2021-08-30T14:31:45Z,alignmentforum,, 55832,https://www.alignmentforum.org/posts/dodEwdiphp5TJZjJj/some-notes-on-the-mathematics-of-toy-autoencoding-problems,Some Notes on the mathematics of Toy Autoencoding Problems,['Spencer Becker-Kahn'],2022-12-22T17:21:25Z,alignmentforum,, 55851,https://www.alignmentforum.org/posts/BeGCCmDtkdJD7j5Y5/the-slippery-slope-from-dalle-2-to-deepfake-anarchy,The Slippery Slope from DALLE-2 to Deepfake Anarchy,['scasper'],2022-11-05T14:53:55Z,alignmentforum,, 55878,https://www.alignmentforum.org/posts/eELny7y7JtCs6fQaA/ai-tracker-monitoring-current-and-near-future-risks-from,AI Tracker: monitoring current and near-future risks from superscale models,"['Edouard Harris', 'Jeremie Harris']",2021-11-23T19:16:06Z,alignmentforum,, 55891,https://www.alignmentforum.org/posts/6tikKda9LBzrkLfBJ/an-82-how-openai-five-distributed-their-training-computation,[AN #82]: How OpenAI Five distributed their training computation,['Rohin Shah'],2020-01-15T18:20:01Z,alignmentforum,, 55919,https://www.alignmentforum.org/posts/DtkA5jysFZGv7W4qP/training-process-transparency-through-gradient,Training Process Transparency through Gradient Interpretability: Early experiments on toy language models,"['robertzk', 
'evhub']",2023-07-21T14:52:09Z,alignmentforum,, 55941,https://www.alignmentforum.org/posts/ikYKHkffKNJvBygXG/updated-deference-is-not-a-strong-argument-against-the,Updated Deference is not a strong argument against the utility uncertainty approach to alignment,['Ivan Vendrov'],2022-06-24T19:32:18Z,alignmentforum,, 55965,https://www.alignmentforum.org/posts/CewHdaAjEvG3bpc6C/epistemic-artefacts-of-conceptual-ai-alignment-research,Epistemic Artefacts of (conceptual) AI alignment research,"['Nora_Ammann', 'particlemania']",2022-08-19T17:18:48Z,alignmentforum,, 55992,https://www.alignmentforum.org/posts/nekLYqbCEBDEfbLzF/artificial-sandwiching-when-can-we-test-scalable-alignment,Artificial Sandwiching: When can we test scalable alignment protocols without humans?,['Sam Bowman'],2022-07-13T21:14:08Z,alignmentforum,, 56006,https://www.alignmentforum.org/posts/vnvGhfikBbrjZHMuD/predicting-gpu-performance,Predicting GPU performance,"['Marius Hobbhahn', 'Tamay']",2022-12-14T16:27:24Z,alignmentforum,, 56016,https://www.alignmentforum.org/posts/ixZLTmFfnKRbaStA5/book-review-a-thousand-brains-by-jeff-hawkins,"Book review: ""A Thousand Brains"" by Jeff Hawkins",['Steven Byrnes'],2021-03-04T05:10:45Z,alignmentforum,, 56043,https://www.alignmentforum.org/posts/fovfuFdpuEwQzJu2w/neural-networks-generalize-because-of-this-one-weird-trick,Neural networks generalize because of this one weird trick,['Jesse Hoogland'],2023-01-18T00:10:37Z,alignmentforum,, 56062,https://www.alignmentforum.org/posts/6Ghvdb2iwLAyGT6A3/paper-replication-walkthrough-reverse-engineering-modular,Paper Replication Walkthrough: Reverse-Engineering Modular Addition,['Neel Nanda'],2023-03-12T13:25:46Z,alignmentforum,, 56077,https://www.alignmentforum.org/posts/3kwR2dufdJyJamHQq/mechanistic-transparency-for-machine-learning,Mechanistic Transparency for Machine Learning,['DanielFilan'],2018-07-11T00:34:47Z,alignmentforum,, 56098,https://www.alignmentforum.org/posts/HACcn8roty9KBAWzZ/evaluations-of-new-ai-safety-researchers-can-be-noisy,Evaluations (of new AI Safety researchers) can be noisy,['LawrenceC'],2023-02-05T04:15:02Z,alignmentforum,, 56122,https://www.alignmentforum.org/posts/p8xcZerxHWi4nLorx/an-164-how-well-can-language-models-write-code,[AN #164]: How well can language models write code?,['Rohin Shah'],2021-09-15T17:20:04Z,alignmentforum,, 56153,https://www.alignmentforum.org/posts/AqsjZwxHNqH64C2b6/let-s-see-you-write-that-corrigibility-tag,Let's See You Write That Corrigibility Tag,['Eliezer Yudkowsky'],2022-06-19T21:11:04Z,alignmentforum,, 56177,https://www.alignmentforum.org/posts/CLuCgA2Ab7sBfvEuW/what-is-a-vnm-stable-set-really,"What is a VNM stable set, really?",['Nisan'],2021-01-25T05:43:59Z,alignmentforum,, 56189,https://www.alignmentforum.org/posts/wfpdejMWog4vEDLDg/ai-and-compute-trend-isn-t-predictive-of-what-is-happening,"""AI and Compute"" trend isn't predictive of what is happening",['alexlyzhov'],2021-04-02T00:44:47Z,alignmentforum,, 56198,https://www.alignmentforum.org/posts/6WbLRLdmTL4JxxvCq/analysing-dangerous-messages-from-future-ufai-via-oracles,Analysing: Dangerous messages from future UFAI via Oracles,['Stuart_Armstrong'],2019-11-22T14:17:43Z,alignmentforum,, 56233,https://www.alignmentforum.org/posts/HhBcoRwnyJhQRJnxr/model-driven-feedback-could-amplify-alignment-failures,Model-driven feedback could amplify alignment failures,['aogara'],2023-01-30T00:00:29Z,alignmentforum,, 56254,https://www.alignmentforum.org/posts/JKwrDwsaRiSxTv9ur/categorizing-failures-as-outer-or-inner-misalignment-is,Categorizing 
failures as “outer” or “inner” misalignment is often confused,['Rohin Shah'],2023-01-06T15:48:52Z,alignmentforum,, 56271,https://www.alignmentforum.org/posts/k8KJqXyctf4a342QA/aggregating-utilities-for-corrigible-ai-feedback-draft,Aggregating Utilities for Corrigible AI [Feedback Draft],"['Dan H', 'Simon Goldstein']",2023-05-12T20:57:04Z,alignmentforum,, 56299,https://www.alignmentforum.org/posts/H32NbFcqjTxy2pvaq/optimizing-arbitrary-expressions-with-a-linear-number-of,Optimizing arbitrary expressions with a linear number of queries to a Logical Induction Oracle (Cartoon Guide),['Donald Hobson'],2020-07-23T21:37:39Z,alignmentforum,, 56312,https://www.alignmentforum.org/posts/SN8wFZsZiyBygc27k/a-self-embedded-probabilistic-model,A Self-Embedded Probabilistic Model,['johnswentworth'],2020-11-13T20:36:24Z,alignmentforum,, 56331,https://www.alignmentforum.org/posts/kj37Hzb2MsALwLqWt/alignment-research-exercises,Alignment research exercises,['Richard_Ngo'],2022-02-21T20:24:16Z,alignmentforum,, 56354,https://www.alignmentforum.org/posts/TJT2oBMGaZTE7f2z2/when-is-cdt-dutch-bookable,When is CDT Dutch-Bookable?,['abramdemski'],2019-01-13T18:54:12Z,alignmentforum,, 56364,https://www.alignmentforum.org/posts/KRDo2afKJtD7bzSM8/barriers-to-mechanistic-interpretability-for-agi-safety,Barriers to Mechanistic Interpretability for AGI Safety,['Connor Leahy'],2023-08-29T10:56:46Z,alignmentforum,, 56380,https://www.alignmentforum.org/posts/8tLPYYQJM8SwL2xn9/proofs-section-2-2-isomorphism-to-expectations,Proofs Section 2.2 (Isomorphism to Expectations),['Diffractor'],2020-08-27T07:52:08Z,alignmentforum,, 56396,https://www.alignmentforum.org/posts/wJ3AqNPM7W4nfY5Bk/self-confirming-prophecies-and-simplified-oracle-designs,"Self-confirming prophecies, and simplified Oracle designs",['Stuart_Armstrong'],2019-06-28T09:57:36Z,alignmentforum,, 56412,https://www.alignmentforum.org/posts/ZkgqsyWgyDx4ZssqJ/implications-of-quantum-computing-for-artificial,Implications of Quantum Computing for Artificial Intelligence Alignment Research,"['Jsevillamol', 'PabloAMC']",2019-08-22T10:33:28Z,alignmentforum,, 56439,https://www.alignmentforum.org/posts/ARGXciEGtuhMKtYb8/the-best-predictor-is-malicious-optimiser-problem,"The ""best predictor is malicious optimiser"" problem",['Donald Hobson'],2020-07-29T11:49:20Z,alignmentforum,, 56460,https://www.alignmentforum.org/posts/moi3cFY2wpeKGu9TT/clarifying-the-agent-like-structure-problem,Clarifying the Agent-Like Structure Problem,['johnswentworth'],2022-09-29T21:28:09Z,alignmentforum,, 56470,https://www.alignmentforum.org/posts/j7kyt6sHEjukRND8B/which-counterfactuals-should-an-ai-follow,Which counterfactuals should an AI follow?,['Stuart_Armstrong'],2021-04-07T16:47:43Z,alignmentforum,, 56489,https://www.alignmentforum.org/posts/wyYubb3eC5FS365nk/how-do-we-prepare-for-final-crunch-time,How do we prepare for final crunch time?,['Eli Tyre'],2021-03-30T05:47:55Z,alignmentforum,, 56517,https://www.alignmentforum.org/posts/TmhrC93mj2Pgsox9t/provide-feedback-on-open-philanthropy-s-ai-alignment-rfp,Provide feedback on Open Philanthropy’s AI alignment RFP,"['abergal', 'Nick_Beckstead']",2021-08-20T19:52:55Z,alignmentforum,, 56531,https://www.alignmentforum.org/posts/D3PnBxkj5jkKPm6jr/yampolskiy-on-ai-risk-skepticism,Yampolskiy on AI Risk Skepticism,['Gordon Seidoh Worley'],2021-05-11T14:50:39Z,alignmentforum,, 56540,https://www.alignmentforum.org/posts/cwpKagyTvqSyAJB7q/clarifying-power-seeking-and-instrumental-convergence,Clarifying Power-Seeking and Instrumental 
Convergence,['TurnTrout'],2019-12-20T19:59:33Z,alignmentforum,, 56552,https://www.alignmentforum.org/posts/hTfyX4823wKqnoFnS/alignment-versus-ai-alignment,Alignment versus AI Alignment,['Alex Flint'],2022-02-04T22:59:10Z,alignmentforum,, 56577,https://www.alignmentforum.org/posts/Hn65bRY3BoXEW3bxL/the-unexpected-clanging,The Unexpected Clanging,['Chris_Leong'],2023-05-18T14:47:02Z,alignmentforum,, 56589,https://www.alignmentforum.org/posts/cHJxSJ4jBmBRGtbaE/continuity-assumptions,Continuity Assumptions,['Jan_Kulveit'],2022-06-13T21:31:30Z,alignmentforum,, 56607,https://www.alignmentforum.org/posts/QppXf4yfcG8JAKhnw/more-gpt-3-and-symbol-grounding,More GPT-3 and symbol grounding,['Stuart_Armstrong'],2022-02-23T18:30:02Z,alignmentforum,, 56631,https://www.alignmentforum.org/posts/CuDYhLLXq6FuHvGZc/axrp-episode-7-5-forecasting-transformative-ai-from,AXRP Episode 7.5 - Forecasting Transformative AI from Biological Anchors with Ajeya Cotra,['DanielFilan'],2021-05-28T00:20:11Z,alignmentforum,, 56662,https://www.alignmentforum.org/posts/Foqiq3TGmfYmQwrnH/the-alignment-newsletter-10-06-11-18,The Alignment Newsletter #10: 06/11/18,['Rohin Shah'],2018-06-11T16:00:28Z,alignmentforum,, 56685,https://www.alignmentforum.org/posts/5bd75cc58225bf067037529f/predicting-hch-using-expert-advice,Predicting HCH using expert advice,['jessicata'],2016-11-28T03:38:05Z,alignmentforum,, 56705,https://www.alignmentforum.org/posts/Y8ezvHWZXW232CkKr/new-deepmind-ai-safety-research-blog,New DeepMind AI Safety Research Blog,['Vika'],2018-09-27T16:28:59Z,alignmentforum,, 56714,https://www.alignmentforum.org/posts/7CJBiHYxebTmMfGs3/infinite-data-compute-arguments-in-alignment,Infinite Data/Compute Arguments in Alignment,['johnswentworth'],2020-08-04T20:21:37Z,alignmentforum,, 56725,https://www.alignmentforum.org/posts/pf48kg9xCxJAcHmQc/understanding-recent-impact-measures,Understanding Recent Impact Measures,['Matthew Barnett'],2019-08-07T04:57:04Z,alignmentforum,, 56740,https://www.alignmentforum.org/posts/vJ7ggyjuP4u2yHNcP/threat-resistant-bargaining-megapost-introducing-the-rose,Threat-Resistant Bargaining Megapost: Introducing the ROSE Value,['Diffractor'],2022-09-28T01:20:12Z,alignmentforum,, 56758,https://www.alignmentforum.org/posts/Fbk9H6ipfybHyqjrp/a-playbook-for-ai-risk-reduction-focused-on-misaligned-ai,A Playbook for AI Risk Reduction (focused on misaligned AI),['HoldenKarnofsky'],2023-06-06T18:05:55Z,alignmentforum,, 56784,https://www.alignmentforum.org/posts/ZyWyAJbedvEgRT2uF/inaccessible-information,Inaccessible information,['paulfchristiano'],2020-06-03T05:10:03Z,alignmentforum,, 56815,https://www.alignmentforum.org/posts/LqRD7sNcpkA9cmXLv/open-problems-and-fundamental-limitations-of-rlhf,Open Problems and Fundamental Limitations of RLHF,['scasper'],2023-07-31T15:31:29Z,alignmentforum,, 56833,https://www.alignmentforum.org/posts/D7epkkJb3CqDTYgX9/refine-an-incubator-for-conceptual-alignment-research-bets,Refine: An Incubator for Conceptual Alignment Research Bets,['adamShimi'],2022-04-15T08:57:36Z,alignmentforum,, 56848,https://www.alignmentforum.org/posts/iznohbCPFkeB9kAJL/inverse-scaling-prize-round-1-winners,Inverse Scaling Prize: Round 1 Winners,"['Ethan Perez', 'Ian McKenzie']",2022-09-26T19:57:01Z,alignmentforum,, 56870,https://www.alignmentforum.org/posts/uyvnjaRaKdGXoKrv7/from-language-to-ethics-by-automated-reasoning,From language to ethics by automated reasoning,['Michele Campolo'],2021-11-21T15:16:20Z,alignmentforum,, 
56884,https://www.alignmentforum.org/posts/X26ksz4p3wSyycKNB/gears-level-mental-models-of-transformer-interpretability,Gears-Level Mental Models of Transformer Interpretability,['KevinRoWang'],2022-03-29T20:09:53Z,alignmentforum,, 56910,https://www.alignmentforum.org/posts/D4gEDdqWrgDPMtasc/thoughts-on-process-based-supervision-1,Thoughts on “Process-Based Supervision”,['Steven Byrnes'],2023-07-17T14:08:57Z,alignmentforum,, 56945,https://www.alignmentforum.org/posts/QirLfXhDPYWCP8PK5/transparency-and-agi-safety,Transparency and AGI safety,['jylin04'],2021-01-11T18:51:50Z,alignmentforum,, 56977,https://www.alignmentforum.org/posts/Br4xDbYu4Frwrb64a/writeup-progress-on-ai-safety-via-debate-1,Writeup: Progress on AI Safety via Debate,"['Beth Barnes', 'paulfchristiano']",2020-02-05T21:04:05Z,alignmentforum,, 56992,https://www.alignmentforum.org/posts/Ca3sCRGfWvXvYC5YC/what-are-some-non-purely-sampling-ways-to-do-deep-rl,What are some non-purely-sampling ways to do deep RL?,['evhub'],2019-12-05T00:09:55Z,alignmentforum,, 57017,https://arbital.com/p/examination_through_isomorphism,Examination through isomorphism,"['Eric Rogstad', 'Kevin Clancy', 'Luke Sciarappa']",2016-11-30T23:22:14Z,arbital,, 57027,https://arbital.com/p/psychologizing,Psychologizing,['Eliezer Yudkowsky'],2016-06-08T18:02:45Z,arbital,, 57038,https://arbital.com/p/odds_intro,Odds: Introduction,"['Nate Soares', 'Eliezer Yudkowsky', 'Alexei Andreev']",2016-10-11T17:27:48Z,arbital,, 57059,https://arbital.com/p/paperclip_maximizer,Paperclip maximizer,"['Eric Rogstad', 'Eric Bruylant', 'Eliezer Yudkowsky', 'Alexei Andreev']",2017-03-03T18:24:22Z,arbital,, 57072,https://arbital.com/p/hack,Ad-hoc hack (alignment theory),"['Eric Bruylant', 'Eliezer Yudkowsky']",2016-05-18T04:28:24Z,arbital,, 57092,https://arbital.com/p/prime_element_ring_theory,Prime element of a ring,"['Eric Bruylant', 'Patrick Stevens']",2016-08-21T05:33:25Z,arbital,, 57101,https://arbital.com/p/turing_machine_external_resources,Turing machine: External resources,['Eric Bruylant'],2016-08-20T21:22:35Z,arbital,, 57121,https://arbital.com/p/proof_by_contradiction,Proof by contradiction,"['Eric Rogstad', 'Eric Bruylant', 'Mark Chimes', 'Jaime Sevilla Molina', 'Joe Zeng']",2016-08-20T11:27:57Z,arbital,, 57130,https://arbital.com/p/simplicity_of_alternating_group_five_simpler_proof,The alternating group on five elements is simple: Simpler proof,"['Team Arbital', 'Patrick Stevens']",2016-06-17T19:38:57Z,arbital,, 57139,https://arbital.com/p/P_vs_NP,P vs NP,"['Mark Chimes', 'Jaime Sevilla Molina', 'Alexei Andreev']",2016-06-15T14:20:07Z,arbital,, 57148,https://arbital.com/p/soft_optimizer,Mild optimization,"['Eric Bruylant', 'Patrick LaVictoire', 'Eliezer Yudkowsky']",2016-06-20T19:06:02Z,arbital,, 57168,https://arbital.com/p/group_examples,Group: Examples,"['Daniel Satanove', 'Patrick Stevens', 'Qiaochu Yuan', 'Eric Bruylant', 'Mark Chimes']",2016-10-21T15:25:45Z,arbital,, 57189,https://arbital.com/p/epistemology,Epistemology,"['Eric Rogstad', 'Eric Bruylant', 'Eliezer Yudkowsky', 'Alexei Andreev']",2016-08-08T15:01:53Z,arbital,, 57198,https://arbital.com/p/efficiency,Epistemic and instrumental efficiency,"['Eric Bruylant', 'Jessica Taylor', 'Eliezer Yudkowsky', 'Alexei Andreev']",2016-06-16T18:21:25Z,arbital,, 57216,https://arbital.com/p/general_intelligence,General intelligence,"['Eric Bruylant', 'Eliezer Yudkowsky']",2017-03-24T07:42:00Z,arbital,, 57232,https://arbital.com/p/5b,Linguistic conventions in value alignment,"['Eric Bruylant', 'Eliezer Yudkowsky', 'Alexei 
Andreev']",2015-12-17T21:42:06Z,arbital,, 57246,https://arbital.com/p/lambda_calculus,Lambda calculus,['Dylan Hendrickson'],2016-12-06T03:33:25Z,arbital,, 57269,https://arbital.com/p/attainable_optimum,Attainable optimum,"['Eric Bruylant', 'Eliezer Yudkowsky']",2017-02-13T16:23:44Z,arbital,, 57278,https://arbital.com/p/aligning_adds_time,Aligning an AGI adds significant development time,['Eliezer Yudkowsky'],2017-02-22T19:51:33Z,arbital,, 57297,https://arbital.com/p/reliable_prediction,Reliable prediction,"['Eric Bruylant', 'Jessica Taylor']",2016-03-24T00:29:32Z,arbital,, 57314,https://arbital.com/p/mind_design_space_wide,Mind design space is wide,"['Eric Bruylant', 'Eliezer Yudkowsky']",2016-06-19T18:28:17Z,arbital,, 57324,https://arbital.com/p/uncomputability,Uncomputability,"['Eric Rogstad', 'Eric Bruylant', 'Jaime Sevilla Molina', 'Alexei Andreev']",2016-06-14T20:59:38Z,arbital,, 57333,https://arbital.com/p/3v0,GalCom: Rules,['Nate Soares'],2016-05-27T12:02:07Z,arbital,, 57342,https://arbital.com/p/2w1,Uncountability: Intro (Math 1),"['Eric Bruylant', 'Mark Chimes', 'Patrick Stevens', 'Jason Gross']",2016-10-26T19:27:53Z,arbital,, 57356,https://arbital.com/p/paperclip,Paperclip,"['Eric Bruylant', 'Eliezer Yudkowsky']",2017-01-11T19:55:01Z,arbital,, 57370,https://arbital.com/p/bayes_examples_realistic_math1,Realistic (Math 1),"['Eric Bruylant', 'Eliezer Yudkowsky', 'Alexei Andreev']",2016-02-27T21:45:47Z,arbital,, 57379,https://arbital.com/p/n_digit,n-digit,['Nate Soares'],2016-06-24T02:58:44Z,arbital,, 57396,https://arbital.com/p/needs_clickbait_meta_tag,Needs clickbait,"['Eric Bruylant', 'Jaime Sevilla Molina', 'Patrick Stevens']",2016-08-19T21:01:16Z,arbital,, 57405,https://arbital.com/p/task_identification,Task identification problem,"['Eric Bruylant', 'Eliezer Yudkowsky']",2016-03-23T21:13:27Z,arbital,, 57414,https://arbital.com/p/standard_provability_predicate,Standard provability predicate,"['Eric Rogstad', 'Eric Bruylant', 'Jaime Sevilla Molina']",2016-07-23T16:04:16Z,arbital,, 57426,https://arbital.com/p/566,The n-th root of m is either an integer or irrational,"['Eric Bruylant', 'Sharma Kunapalli', 'Joe Zeng']",2016-07-06T21:33:03Z,arbital,, 57435,https://arbital.com/p/convex_function,Convex function,"['Eric Bruylant', 'Jessica Taylor']",2016-07-13T03:23:54Z,arbital,, 57445,https://arbital.com/p/quotient_by_subgroup_is_well_defined_iff_normal,Quotient by subgroup is well defined if and only if subgroup is normal,['Patrick Stevens'],2016-06-20T06:58:24Z,arbital,, 57454,https://arbital.com/p/not_more_paperclips,You can't get more paperclips that way,"['Eric Bruylant', 'Eliezer Yudkowsky']",2016-05-25T20:33:44Z,arbital,, 57470,https://arbital.com/p/associative_operation,Associative operation,"['Eric Rogstad', 'Eric Bruylant', 'Dylan Hendrickson', 'Nate Soares']",2016-07-08T20:17:26Z,arbital,, 57479,https://arbital.com/p/rich_domain,Rich domain,"['Eliezer Yudkowsky', 'Alexei Andreev']",2017-02-18T02:30:49Z,arbital,, 57492,https://arbital.com/p/stabiliser_is_a_subgroup,Stabiliser is a subgroup,"['Dylan Hendrickson', 'Mark Chimes', 'Patrick Stevens']",2016-07-07T15:26:16Z,arbital,, 57501,https://arbital.com/p/6hk,Primer on Infinite Series,"['Eric Bruylant', 'Chris Holden']",2016-11-02T21:04:22Z,arbital,, 57512,https://arbital.com/p/meta_unsolved,Meta-rules for (narrow) value learning are still unsolved,"['Eric Bruylant', 'Eliezer Yudkowsky']",2017-02-21T23:37:07Z,arbital,, 57530,https://arbital.com/p/random_utility_function,Random utility function,['Eliezer 
Yudkowsky'],2017-02-08T17:04:13Z,arbital,, 57539,https://arbital.com/p/bayes_waterfall_diseasitis,Waterfall diagrams and relative odds,"['Robert Eidschun', 'Eric Bruylant', 'Alexei Andreev', 'Malo Bourgon', 'Nate Soares', 'Khana Santamaria', 'Adom Hartell', 'Eliezer Yudkowsky']",2017-01-30T07:27:09Z,arbital,, 57549,https://arbital.com/p/consequentialist,Consequentialist cognition,"['Eric Bruylant', 'Olivia Schaefer', 'Eliezer Yudkowsky', 'Alexei Andreev']",2016-06-11T03:04:41Z,arbital,, 57562,https://arbital.com/p/cognitive_steganography,Cognitive steganography,"['Eric Bruylant', 'Eliezer Yudkowsky']",2016-04-28T21:15:34Z,arbital,, 57574,https://arbital.com/p/cartesian_agent,Cartesian agent,"['Eric Bruylant', 'Eliezer Yudkowsky']",2016-01-19T23:37:58Z,arbital,, 57584,https://arbital.com/p/alignment_principle,Principles in AI alignment,"['Eric Bruylant', 'Eliezer Yudkowsky']",2017-02-16T17:54:18Z,arbital,, 57611,https://arbital.com/p/advanced_agent_theory,Theory of (advanced) agents,['Eliezer Yudkowsky'],2017-02-17T20:22:50Z,arbital,, 57625,https://arbital.com/p/bayesian_likelihood,Likelihood,"['Eric Bruylant', 'Nate Soares', 'Eliezer Yudkowsky', 'Alexei Andreev']",2016-10-07T23:58:35Z,arbital,, 57640,https://arbital.com/p/context_disaster,Context disaster,"['Eric Bruylant', 'Eliezer Yudkowsky', 'Alexei Andreev']",2017-03-01T03:10:17Z,arbital,, 57661,https://arbital.com/p/advanced_agent,Advanced agent properties,"['Eric Bruylant', 'Alexei Andreev', 'Eliezer Yudkowsky', 'Matthew Graves']",2017-03-25T04:59:44Z,arbital,, 57683,https://arbital.com/p/avert_instrumental_pressure,Averting instrumental pressures,"['Eric Bruylant', 'Eliezer Yudkowsky']",2016-03-27T23:57:57Z,arbital,, 57697,https://arbital.com/p/least_common_multiple,Least common multiple,"['Kevin Clancy', 'Johannes Schmitt', 'Patrick Stevens']",2016-09-25T19:50:36Z,arbital,, 57711,https://arbital.com/p/abortable,Abortable plans,"['Eric Bruylant', 'Eliezer Yudkowsky']",2016-03-22T02:03:46Z,arbital,, 57721,https://arbital.com/p/godels_first_incompleteness_theorem,Gödel's first incompleteness theorem,"['Eric Rogstad', 'Eric Bruylant', 'Jaime Sevilla Molina']",2016-10-16T15:28:44Z,arbital,, 57734,https://arbital.com/p/moral_hazard,Moral hazards in AGI development,"['Eric Bruylant', 'Eliezer Yudkowsky']",2016-03-23T21:51:40Z,arbital,, 57752,https://arbital.com/p/reflective_degree_of_freedom,Reflectively consistent degree of freedom,"['Eric Bruylant', 'Eliezer Yudkowsky']",2016-03-09T02:19:20Z,arbital,, 57776,https://arbital.com/p/exponential,Exponential,"['Eric Bruylant', 'Nate Soares', 'Joe Zeng']",2016-07-04T05:34:11Z,arbital,, 57785,https://arbital.com/p/conditional_probability_refresher,Conditional probability: Refresher,"['Adom Hartell', 'Eric Bruylant', 'Nate Soares', 'Cameron McFee']",2016-10-10T21:53:50Z,arbital,, 57794,https://arbital.com/p/probability_distribution_motivated_definition,Probability distribution: Motivated definition,"['Eric Bruylant', 'Nate Soares', 'Alexei Andreev']",2016-07-06T22:49:26Z,arbital,, 57812,https://arbital.com/p/aixitl,AIXI-tl,"['Eric Bruylant', 'Eliezer Yudkowsky']",2016-01-17T00:36:39Z,arbital,, 57821,https://arbital.com/p/eliezer_fixes,List of Eliezer's current most desired fixes and features,"['Eliezer Yudkowsky', 'Alexei Andreev']",2017-03-03T22:00:26Z,arbital,, 57842,https://arbital.com/p/shutdown_problem,Shutdown problem,"['Eric Bruylant', 'Eliezer Yudkowsky']",2017-02-13T16:39:31Z,arbital,, 57855,https://arbital.com/p/bayes_ordinary_claims,Ordinary claims require ordinary evidence,"['Eric 
Bruylant', 'Nate Soares', 'Alexei Andreev']",2016-07-06T13:33:00Z,arbital,, 57864,https://arbital.com/p/colon_to_notation,Colon-to notation,"['Kevin Clancy', 'Dylan Hendrickson', 'Eric Rogstad', 'Izaak Meckler', 'Qiaochu Yuan']",2016-08-04T14:54:29Z,arbital,, 57873,https://arbital.com/p/cauchy_theorem_on_subgroup_existence_intuitive,Cauchy's theorem on subgroup existence: intuitive version,"['Eric Bruylant', 'Patrick Stevens']",2016-06-30T13:51:06Z,arbital,, 57882,https://arbital.com/p/strong_Church_Turing_thesis,Strong Church Turing thesis,"['Eric Rogstad', 'Eric Bruylant', 'Jaime Sevilla Molina', 'Patrick Stevens']",2016-06-16T10:46:01Z,arbital,, 57891,https://arbital.com/p/bayes_rule_multiple,Bayes' rule: Vector form,"['Eric Bruylant', 'Ryan White', 'Patrick LaVictoire', 'gia dang', 'Alexei Andreev', 'Eliezer Yudkowsky', 'Nate Soares', 'Nick Jordan', 'Francis Marineau', 'Erik Nash', 'Adom Hartell', 'Connor Flexman', 'Nate Windwood', 'Fedor Belolutskiy']",2017-05-22T22:31:28Z,arbital,, 57913,https://arbital.com/p/ordered_ring,Ordered ring,"['Dylan Hendrickson', 'Eric Bruylant', 'Joe Zeng']",2016-07-07T16:51:00Z,arbital,, 57922,https://arbital.com/p/daemons,Optimization daemons,"['Eric Bruylant', 'Eliezer Yudkowsky']",2016-03-27T22:54:00Z,arbital,, 57943,https://arbital.com/p/elementary_algebra,Elementary Algebra,"['Kevin Clancy', 'Alexei Andreev', 'Gabriel Sjöstrand', 'Adele Lopez', 'Eric Bruylant']",2016-08-06T13:10:55Z,arbital,, 57952,https://arbital.com/p/ea_intro,Introduction to Effective Altruism,"['Eric Rogstad', 'Aaron Gertler']",2017-01-04T01:09:15Z,arbital,, 57963,https://arbital.com/p/Arbital,Arbital,"['Eric Bruylant', 'Alexei Andreev', 'Eric Rogstad', 'Anna Salamon', 'Nate Soares', 'Tom Brown', 'mrkun', 'Alex Montel', 'Eliezer Yudkowsky']",2016-08-08T14:07:52Z,arbital,, 57981,https://arbital.com/p/nofreelunch_irrelevant,No-Free-Lunch theorems are often irrelevant,"['Eric Bruylant', 'Eliezer Yudkowsky']",2016-06-20T04:02:01Z,arbital,, 57992,https://arbital.com/p/fractional_digits,Fractional digits,"['Malcolm McCrimmon', 'Nate Soares', 'Eric Rogstad', 'Alexei Andreev']",2016-07-30T00:56:30Z,arbital,, 58001,https://arbital.com/p/bayes_rule_functional,Bayes' rule: Functional form,"['Alexei Andreev', 'Eric Rogstad', 'Nate Soares', 'Eric Bruylant', 'Eliezer Yudkowsky']",2016-10-10T23:21:34Z,arbital,, 58011,https://arbital.com/p/axiom_of_choice_introduction,Axiom of Choice: Introduction,['Mark Chimes'],2016-10-10T18:29:09Z,arbital,, 58020,https://arbital.com/p/5r7,0.999...=1,['Dylan Hendrickson'],2016-08-04T12:02:40Z,arbital,, 58032,https://arbital.com/p/bits_in_a_trit,How many bits to a trit?,"['Eric Rogstad', 'Nate Soares']",2016-06-02T23:04:18Z,arbital,, 58041,https://arbital.com/p/AIXI,AIXI,"['Alexei Andreev', 'Eric Rogstad', 'Eric Bruylant', 'Brian Muhia', 'Eliezer Yudkowsky']",2017-10-06T12:14:15Z,arbital,, 58057,https://arbital.com/p/strained_argument,Strained argument,"['Eliezer Yudkowsky', 'Alexei Andreev']",2015-12-16T16:17:44Z,arbital,, 58066,https://arbital.com/p/3x3,Intradependent encodings can be compressed,['Nate Soares'],2016-05-29T14:38:50Z,arbital,, 58075,https://arbital.com/p/4fh,Square visualization of probabilities on two events: (example) Diseasitis,"['Tsvi BT', 'Team Arbital']",2016-06-18T08:41:24Z,arbital,, 58085,https://arbital.com/p/bayes_science_virtues,Bayesian view of scientific virtues,"['Eric Bruylant', 'Patrick LaVictoire', 'Otto Mossberg', 'Alexei Andreev', 'Eric Rogstad', 'Dewi Morgan', 'Nate Soares', 'Adom Hartell', 'Eliezer 
Yudkowsky']",2017-02-16T17:22:52Z,arbital,, 58112,https://arbital.com/p/group_theory_examples,Group theory: Examples,"['Eric Rogstad', 'Qiaochu Yuan']",2016-05-25T20:34:31Z,arbital,, 58129,https://arbital.com/p/task_goal,Task (AI goal),"['Eric Bruylant', 'Eliezer Yudkowsky']",2017-01-26T01:55:21Z,arbital,, 58150,https://arbital.com/p/needs_brief_summary_meta_tag,Needs brief summary,"['Eric Bruylant', 'Mark Chimes', 'Nate Soares']",2016-06-22T15:11:40Z,arbital,, 58159,https://arbital.com/p/split_by_mastery_meta_tag,Needs splitting by mastery,['Eric Bruylant'],2016-08-08T16:17:52Z,arbital,, 58168,https://arbital.com/p/object_level_goal,Object-level vs. indirect goals,"['Eric Bruylant', 'Eliezer Yudkowsky', 'Alexei Andreev']",2015-12-23T02:33:03Z,arbital,, 58177,https://arbital.com/p/beneficial,'Beneficial',"['Eric Bruylant', 'Eliezer Yudkowsky']",2016-05-01T18:00:26Z,arbital,, 58187,https://arbital.com/p/a_class_meta_tag,A-Class,"['Eric Rogstad', 'Eric Bruylant', 'Eliezer Yudkowsky', 'Alexei Andreev']",2016-08-19T21:47:21Z,arbital,, 58200,https://arbital.com/p/function_range,Range (of a function),['Nate Soares'],2016-05-13T21:39:55Z,arbital,, 58209,https://arbital.com/p/4cl,Two independent events: Square visualization,"['Eric Rogstad', 'Jaime Sevilla Molina', 'Tsvi BT']",2016-06-16T14:55:54Z,arbital,, 58218,https://arbital.com/p/real_number_as_dedekind_cut,Real number (as Dedekind cut),"['Kevin Clancy', 'Dylan Hendrickson', 'Patrick Stevens', 'Eric Bruylant', 'Joe Zeng']",2016-07-23T17:41:20Z,arbital,, 58227,https://arbital.com/p/odds,Odds,"['Stephanie Zolayvar', 'Alexei Andreev', 'Eric Rogstad', 'Gregor Gerasev', 'Nate Soares', 'Eric Bruylant', 'Eliezer Yudkowsky', 'Emile Kroeger']",2016-10-24T19:03:35Z,arbital,, 58236,https://arbital.com/p/order_theory,Order theory,"['Kevin Clancy', 'Joe Zeng', 'Alexei Andreev']",2016-05-27T00:35:53Z,arbital,, 58245,https://arbital.com/p/sufficiently_advanced_ai,Sufficiently advanced Artificial Intelligence,['Eliezer Yudkowsky'],2017-01-16T16:58:38Z,arbital,, 58254,https://arbital.com/p/rescue_utility,Rescuing the utility function,"['Eric Rogstad', 'Eric Bruylant', 'Eliezer Yudkowsky']",2016-12-29T05:20:24Z,arbital,, 58270,https://arbital.com/p/minimality_principle,Minimality principle,"['Eric Bruylant', 'Eliezer Yudkowsky']",2017-10-19T20:48:12Z,arbital,, 58284,https://arbital.com/p/logspace,Life in logspace,"['Malcolm McCrimmon', 'Nate Soares', 'Eric Rogstad', 'Alexei Andreev']",2016-07-29T22:26:10Z,arbital,, 58294,https://arbital.com/p/trits_with_galcom_bits,Encoding trits with GalCom bits,['Nate Soares'],2016-06-02T20:31:13Z,arbital,, 58303,https://arbital.com/p/diseasitis,Diseasitis,"['Eric Bruylant', 'Nate Soares', 'Eliezer Yudkowsky', 'Alexei Andreev']",2016-10-01T04:01:55Z,arbital,, 58313,https://arbital.com/p/group_action_induces_homomorphism,Group action induces homomorphism to the symmetric group,['Patrick Stevens'],2016-06-14T15:05:26Z,arbital,, 58322,https://arbital.com/p/intradependent_encoding,Intradependent encoding,['Nate Soares'],2016-05-29T13:18:16Z,arbital,, 58332,https://arbital.com/p/order_monotone_exercises,Monotone function: exercises,['Kevin Clancy'],2016-12-03T22:49:22Z,arbital,, 58344,https://arbital.com/p/calibrated_probabilities,Well-calibrated probabilities,['Eliezer Yudkowsky'],2015-12-18T21:20:44Z,arbital,, 58353,https://arbital.com/p/injective_function,Injective function,"['Eric Bruylant', 'Patrick Stevens']",2016-06-29T06:44:12Z,arbital,, 58363,https://arbital.com/p/probability_interpretations_examples,Probability 
interpretations: Examples,"['Nate Soares', 'Alexei Andreev']",2016-06-30T22:13:46Z,arbital,, 58381,https://arbital.com/p/perfect_rolling_sphere,Perfect rolling sphere,"['Eric Rogstad', 'Eliezer Yudkowsky', 'Alexei Andreev']",2016-03-31T19:05:37Z,arbital,, 58392,https://arbital.com/p/probability_interpretations_correspondence,"Correspondence visualizations for different interpretations of ""probability""","['Eric Rogstad', 'Nate Soares']",2016-07-10T10:58:40Z,arbital,, 58407,https://arbital.com/p/needs_links_meta_tag,Needs links,['Eric Bruylant'],2016-08-08T13:15:48Z,arbital,, 58417,https://arbital.com/p/modal_logic,Modal logic,"['Eric Bruylant', 'Jaime Sevilla Molina', 'Patrick LaVictoire']",2016-07-27T19:13:12Z,arbital,, 58432,https://arbital.com/p/likelihood_notation,Likelihood notation,"['Nate Soares', 'Alexei Andreev']",2016-07-07T03:16:06Z,arbital,, 58441,https://arbital.com/p/ontology_identification_technical_tutorial,Ontology identification problem: Technical tutorial,"['Eric Bruylant', 'Eliezer Yudkowsky', 'Alexei Andreev']",2016-02-05T00:51:21Z,arbital,, 58459,https://arbital.com/p/mindcrime,Mindcrime,"['Alexei Andreev', 'Jeremy Perret', 'Nate Soares', 'Eric Bruylant', 'Eliezer Yudkowsky']",2016-12-29T05:36:44Z,arbital,, 58484,https://arbital.com/p/needs_exercises_meta_tag,Needs exercises,"['Eric Bruylant', 'Mark Chimes', 'Alexei Andreev']",2016-06-20T20:55:38Z,arbital,, 58494,https://arbital.com/p/quine,Quine,"['Nate Soares', 'Jaime Sevilla Molina', 'Patrick LaVictoire']",2016-05-08T18:27:17Z,arbital,, 58504,https://arbital.com/p/commutative_operation,Commutative operation,"['Dylan Hendrickson', 'Eric Bruylant', 'Nate Soares']",2016-07-17T19:33:35Z,arbital,, 58513,https://arbital.com/p/invisible_background,Invisible background fallacies,['Eliezer Yudkowsky'],2017-01-05T22:49:41Z,arbital,, 58525,https://arbital.com/p/galcom,GalCom,['Nate Soares'],2016-05-28T12:51:26Z,arbital,, 58538,https://arbital.com/p/tiling_agents,Tiling agents theory,"['Eric Bruylant', 'Eliezer Yudkowsky']",2016-01-17T05:03:53Z,arbital,, 58554,https://arbital.com/p/cev,Coherent extrapolated volition (alignment target),"['Eric Bruylant', 'Rob Bensinger', 'Eliezer Yudkowsky']",2019-07-26T16:54:02Z,arbital,, 58583,https://arbital.com/p/likelihoods_not_pvalues,"Report likelihoods, not p-values","['Eric Rogstad', 'Eric Bruylant', 'Nate Soares', 'austin stone']",2017-04-29T03:00:59Z,arbital,, 58609,https://arbital.com/p/most_complexity_incompressible,Most complex things are not very compressible,['Eliezer Yudkowsky'],2016-06-27T00:06:10Z,arbital,, 58618,https://arbital.com/p/derivative_calculus,Derivative,"['Eric Bruylant', 'Patrick Stevens', 'Michael Cohen', 'Alexei Andreev']",2016-10-24T17:47:09Z,arbital,, 58627,https://arbital.com/p/log_guide,Introductory guide to logarithms,"['Alexei Andreev', 'Eric Rogstad', 'Eric Bruylant', 'Daniel Satanove', 'Nate Soares']",2016-10-21T18:09:55Z,arbital,, 58636,https://arbital.com/p/bayes_probability_notation_math1,Probability notation for Bayes' rule: Intro (Math 1),"['Eric Bruylant', 'Nate Soares', 'Eliezer Yudkowsky', 'Alexei Andreev']",2016-08-03T14:30:21Z,arbital,, 58655,https://arbital.com/p/totally_ordered_set,Totally ordered set,"['Kevin Clancy', 'Eric Rogstad', 'Dylan Hendrickson', 'Eric Bruylant', 'Joe Zeng']",2016-07-21T16:07:16Z,arbital,, 58664,https://arbital.com/p/factual_question,Strictly factual question,"['Eric Rogstad', 'Eliezer Yudkowsky']",2016-05-25T19:12:12Z,arbital,, 58673,https://arbital.com/p/reals_as_classes_of_cauchy_sequences_form_a_field,The reals 
(constructed as classes of Cauchy sequences of rationals) form a field,"['Eric Bruylant', 'Patrick Stevens', 'Joe Zeng']",2016-07-05T20:06:56Z,arbital,, 58682,https://arbital.com/p/intended_goal,Intended goal,"['Eric Bruylant', 'Eliezer Yudkowsky', 'Alexei Andreev']",2015-12-15T23:46:42Z,arbital,, 58692,https://arbital.com/p/ordering_of_rational_numbers_math_0,Ordering of rational numbers (Math 0),"['Patrick Stevens', 'Joe Zeng']",2016-08-14T01:53:01Z,arbital,, 58701,https://arbital.com/p/guarded_definition,Guarded definition,"['Eliezer Yudkowsky', 'Alexei Andreev']",2015-12-16T16:16:33Z,arbital,, 58711,https://arbital.com/p/relevant_powerful_agent,Relevant powerful agent,"['Eric Bruylant', 'Eliezer Yudkowsky', 'Alexei Andreev']",2015-12-16T01:21:04Z,arbital,, 58725,https://arbital.com/p/digit_wheel,Digit wheel,"['Eric Rogstad', 'Eric Bruylant', 'Nate Soares']",2016-06-07T22:29:02Z,arbital,, 58740,https://arbital.com/p/show_broken,Show me what you've broken,"['Eric Bruylant', 'Eliezer Yudkowsky']",2016-05-16T06:15:45Z,arbital,, 58758,https://arbital.com/p/n_message,n-message,"['Eric Rogstad', 'Nate Soares']",2016-06-07T03:44:19Z,arbital,, 58767,https://arbital.com/p/rocket_alignment_metaphor,The rocket alignment problem,"['Alexei Andreev', 'Logan L', 'Rob Bensinger', 'Nate Soares', 'Eric Bruylant', 'Eliezer Yudkowsky']",2018-10-08T21:31:12Z,arbital,, 58782,https://arbital.com/p/group_homomorphism,Group homomorphism,"['Eric Bruylant', 'Patrick Stevens']",2016-06-22T16:47:46Z,arbital,, 58792,https://arbital.com/p/axiom_of_choice_definition_intuitive,Axiom of Choice Definition (Intuitive),['Mark Chimes'],2016-10-10T19:11:14Z,arbital,, 58801,https://arbital.com/p/log_tutorial_end,The End (of the basic log tutorial),"['Nate Soares', 'Alexei Andreev']",2016-09-20T23:27:16Z,arbital,, 58813,https://arbital.com/p/informed_oversight,Informed oversight,"['Eric Bruylant', 'Jessica Taylor']",2016-03-24T00:50:49Z,arbital,, 58828,https://arbital.com/p/AI_boxing,Boxed AI,"['Eric Bruylant', 'Eliezer Yudkowsky', 'Alexei Andreev']",2015-12-16T15:39:24Z,arbital,, 58839,https://arbital.com/p/function,Function,"['Eric Rogstad', 'Patrick Stevens', 'Nate Soares', 'Eric Bruylant', 'Mark Chimes']",2016-06-08T17:21:49Z,arbital,, 58848,https://arbital.com/p/ZF_provability_oracle,Zermelo-Fraenkel provability oracle,"['Rob Bensinger', 'Alexei Andreev', 'Eric Bruylant', 'Eliezer Yudkowsky', 'Gurkenglas']",2017-03-14T01:42:50Z,arbital,, 58864,https://arbital.com/p/moral_uncertainty,Moral uncertainty,"['Eric Bruylant', 'Eliezer Yudkowsky']",2017-02-08T03:40:43Z,arbital,, 58874,https://arbital.com/p/programmer_deception,Programmer deception,"['Eric Bruylant', 'Niplav Yushtun', 'Eliezer Yudkowsky', 'Alexei Andreev']",2015-12-16T03:53:36Z,arbital,, 58892,https://arbital.com/p/unique_factorisation_domain,Unique factorisation domain,['Patrick Stevens'],2016-08-14T12:08:00Z,arbital,, 58903,https://arbital.com/p/natural_number,Natural number,"['Eric Rogstad', 'Patrick Stevens', 'Nate Soares', 'Martin Epstein', 'Eric Bruylant', 'Jaime Sevilla Molina', 'Joe Zeng']",2016-12-22T05:39:50Z,arbital,, 58912,https://arbital.com/p/emulating_digits,Emulating digits,"['Eric Rogstad', 'Nate Soares']",2016-06-25T15:14:02Z,arbital,, 58923,https://arbital.com/p/value_cosmopolitan,Cosmopolitan value,"['Eric Bruylant', 'Eliezer Yudkowsky']",2017-01-11T19:29:05Z,arbital,, 58933,https://arbital.com/p/6bf,Rice's Theorem: Intro (Math 1),"['Dylan Hendrickson', 'Eric Rogstad']",2016-11-18T19:38:43Z,arbital,, 
58945,https://arbital.com/p/decision_problem,Decision problem,"['Eric Rogstad', 'Eric Bruylant', 'Jaime Sevilla Molina', 'Alex Appel']",2016-07-04T16:24:40Z,arbital,, 58959,https://arbital.com/p/sparking_widgets,Sparking widgets,['Nate Soares'],2016-07-06T12:58:00Z,arbital,, 58968,https://arbital.com/p/4l,Safe impact measure,"['Eric Bruylant', 'Eliezer Yudkowsky', 'Alexei Andreev']",2015-12-16T04:28:47Z,arbital,, 58983,https://arbital.com/p/reflective_consistency,Reflective consistency,"['Eric Bruylant', 'Eliezer Yudkowsky']",2016-03-22T00:25:48Z,arbital,, 58997,https://arbital.com/p/only_one_log,There is only one logarithm,"['Eric Rogstad', 'Nate Soares', 'Patrick Stevens']",2016-06-20T20:50:28Z,arbital,, 59017,https://arbital.com/p/executable_philosophy,Executable philosophy,"['Eric Bruylant', 'Eliezer Yudkowsky', 'Alexei Andreev']",2016-06-06T21:06:32Z,arbital,, 59037,https://arbital.com/p/rice_and_halt,Rice's theorem and the Halting problem,['Jaime Sevilla Molina'],2016-08-14T17:22:24Z,arbital,, 59051,https://arbital.com/p/identify_goal_concept,Goal-concept identification,"['Eric Bruylant', 'Eliezer Yudkowsky']",2016-04-14T19:43:11Z,arbital,, 59061,https://arbital.com/p/MIRI,Machine Intelligence Research Institute,"['Eliezer Yudkowsky', 'Alexei Andreev']",2015-12-23T20:16:07Z,arbital,, 59070,https://arbital.com/p/complexity_of_value,Complexity of value,"['Eric Bruylant', 'Eliezer Yudkowsky', 'Alexei Andreev']",2016-04-14T01:17:56Z,arbital,, 59092,https://arbital.com/p/decimal_notation,Decimal notation,"['Dylan Hendrickson', 'Eric Bruylant', 'Nate Soares', 'Michael Cohen']",2016-07-04T05:32:35Z,arbital,, 59101,https://arbital.com/p/bayes_rule_proof,Proof of Bayes' rule,"['Eric Bruylant', 'Nate Soares', 'Eliezer Yudkowsky', 'Alexei Andreev']",2016-07-10T19:24:01Z,arbital,, 59111,https://arbital.com/p/order_relation,Order relation,"['Patrick Stevens', 'Joe Zeng']",2016-07-07T14:32:44Z,arbital,, 59127,https://arbital.com/p/evidence_for_CT_thesis,Church-Turing thesis: Evidence for the Church-Turing thesis,"['Eric Bruylant', 'Jaime Sevilla Molina']",2016-06-20T17:59:25Z,arbital,, 59137,https://arbital.com/p/order_monotone_examples,Monotone function: examples,['Kevin Clancy'],2016-12-03T22:54:32Z,arbital,, 59149,https://arbital.com/p/poset_examples,Poset: Examples,"['Eric Rogstad', 'Kevin Clancy', 'Joe Zeng']",2016-12-06T20:34:09Z,arbital,, 59158,https://arbital.com/p/cromwells_rule,Empirical probabilities are not exactly 0 or 1,"['Eric Bruylant', 'Eliezer Yudkowsky', 'Alexei Andreev']",2016-07-06T16:01:13Z,arbital,, 59167,https://arbital.com/p/log_base_1,Logarithm base 1,['Nate Soares'],2016-09-14T23:27:41Z,arbital,, 59176,https://arbital.com/p/hard_corrigibility,Hard problem of corrigibility,"['Eric Bruylant', 'Eliezer Yudkowsky']",2017-10-11T02:52:38Z,arbital,, 59193,https://arbital.com/p/Vinge_principle,Vinge's Principle,"['Eric Bruylant', 'Eliezer Yudkowsky']",2016-06-25T16:48:17Z,arbital,, 59203,https://arbital.com/p/symmetric_group,Symmetric group,"['Eric Bruylant', 'Patrick Stevens']",2016-06-17T13:13:31Z,arbital,, 59212,https://arbital.com/p/otherizer,Other-izing (wanted: new optimization idiom),"['Eric Bruylant', 'Eliezer Yudkowsky']",2016-03-22T00:33:10Z,arbital,, 59231,https://arbital.com/p/5hr,Löb's theorem and computer programs,"['Malcolm McCrimmon', 'Jaime Sevilla Molina', 'Eric Rogstad', 'Patrick LaVictoire']",2016-07-29T21:32:55Z,arbital,, 59240,https://arbital.com/p/real_is_rich,Almost all real-world domains are rich,"['Eliezer Yudkowsky', 'Alexei 
Andreev']",2017-02-18T02:34:47Z,arbital,, 59254,https://arbital.com/p/likelihood_function,Likelihood function,"['Eric Bruylant', 'Nate Soares', 'Alexei Andreev']",2016-08-02T15:12:50Z,arbital,, 59264,https://arbital.com/p/actual_effectiveness,Actual effectiveness,['Eliezer Yudkowsky'],2017-02-22T03:43:25Z,arbital,, 59273,https://arbital.com/p/nonadversarial,Non-adversarial principle,"['Ananya Aloke', 'Eric Bruylant', 'Eric Rogstad', 'Eliezer Yudkowsky']",2017-01-22T06:06:13Z,arbital,, 59293,https://arbital.com/p/category_mathematics,Category (mathematics),"['Eric Bruylant', 'Mark Chimes', 'Patrick Stevens']",2016-06-18T05:53:09Z,arbital,, 59302,https://arbital.com/p/bayes_rule_elimination,Belief revision as probability elimination,"['Alexei Andreev', 'Logan L', 'Nate Soares', 'Eric Bruylant', 'Eliezer Yudkowsky']",2016-10-08T16:59:55Z,arbital,, 59313,https://arbital.com/p/prior_probability,Prior probability,"['Alexei Andreev', 'Cuyler Brehaut', 'Nate Soares', 'Eric Bruylant', 'Eliezer Yudkowsky']",2016-08-04T12:27:46Z,arbital,, 59322,https://arbital.com/p/humean_free_boundary,Humean degree of freedom,"['Eric Bruylant', 'Eliezer Yudkowsky']",2016-03-14T21:02:28Z,arbital,, 59331,https://arbital.com/p/well_ordered_set,Well-ordered set,"['Dylan Hendrickson', 'Joe Zeng']",2016-07-07T15:09:09Z,arbital,, 59340,https://arbital.com/p/value_alignment_programmer,Programmer,"['Eric Bruylant', 'Eliezer Yudkowsky', 'Alexei Andreev']",2015-12-15T23:10:38Z,arbital,, 59354,https://arbital.com/p/advanced_nonagent,Advanced nonagent,"['Eric Bruylant', 'Eliezer Yudkowsky', 'Alexei Andreev']",2016-06-07T23:36:11Z,arbital,, 59379,https://arbital.com/p/value_alignment_utility,Utility,"['Eric Bruylant', 'Eliezer Yudkowsky', 'Alexei Andreev']",2015-12-16T23:59:11Z,arbital,, 59396,https://arbital.com/p/emphemeral_premises,Emphemeral premises,['Eliezer Yudkowsky'],2016-06-19T21:30:37Z,arbital,, 59408,https://arbital.com/p/category_theory_equaliser,Equaliser (category theory),['Patrick Stevens'],2016-06-18T13:45:48Z,arbital,, 59417,https://arbital.com/p/two_independent_events,Two independent events,"['Tsvi BT', 'Eric Rogstad']",2016-06-16T15:39:52Z,arbital,, 59426,https://arbital.com/p/arguments_against_P_NP,P vs NP: Arguments against P=NP,"['Eric Rogstad', 'Eric Bruylant', 'Nate Soares', 'Jaime Sevilla Molina']",2016-07-08T06:15:00Z,arbital,, 59437,https://arbital.com/p/taskagi_open_problems,Open subproblems in aligning a Task-based AGI,"['Jessica Taylor', 'Eliezer Yudkowsky']",2016-04-14T20:15:06Z,arbital,, 59473,https://arbital.com/p/log_characteristic,The characteristic of the logarithm,"['Eric Rogstad', 'Adom Hartell', 'Nate Soares', 'Alexei Andreev']",2016-10-19T19:34:50Z,arbital,, 59488,https://arbital.com/p/dihedral_groups_are_non_abelian,Dihedral groups are non-abelian,['Patrick Stevens'],2016-06-15T13:06:13Z,arbital,, 59497,https://arbital.com/p/1hh,Solomonoff induction: Intro Dialogue (Math 2),"['Michael Keenan', 'Alexei Andreev', 'Eliezer Yudkowsky', 'Nate Soares', 'Robert Bell', 'Eric Bruylant', 'Travis Rivera', 'Gurkenglas']",2017-12-24T23:28:56Z,arbital,, 59518,https://arbital.com/p/normal_system_of_provability,Normal system of provability logic,"['Eric Bruylant', 'Jaime Sevilla Molina']",2017-04-22T14:13:21Z,arbital,, 59527,https://arbital.com/p/intro_modern_logic,An introductory guide to modern logic,"['Eric Bruylant', 'Mark Chimes', 'Jaime Sevilla Molina']",2016-10-21T15:27:08Z,arbital,, 59536,https://arbital.com/p/diagonal_lemma,Diagonal lemma,"['Eric Rogstad', 'Eric Bruylant', 'Jaime Sevilla 
Molina']",2016-07-22T06:11:27Z,arbital,, 59546,https://arbital.com/p/symmetric_group_is_generated_by_transpositions,Every member of a symmetric group on finitely many elements is a product of transpositions,['Patrick Stevens'],2016-06-15T08:03:48Z,arbital,, 59555,https://arbital.com/p/odds_technical,Odds: Technical explanation,"['Eliezer Yudkowsky', 'Alexei Andreev']",2016-10-12T22:40:11Z,arbital,, 59568,https://arbital.com/p/bounded_agent,Bounded agent,['Eliezer Yudkowsky'],2016-03-22T00:52:19Z,arbital,, 59577,https://arbital.com/p/proof_of_rice_theorem,Proof of Rice's theorem,"['Eric Bruylant', 'Patrick Stevens']",2016-08-08T12:35:54Z,arbital,, 59594,https://arbital.com/p/bayes_waterfall_diagram,Waterfall diagram,"['Alexei Andreev', 'Eric Rogstad', 'Nate Soares', 'Eric Bruylant', 'Salil Kalghatgi', 'Eliezer Yudkowsky']",2016-09-29T17:12:43Z,arbital,, 59604,https://arbital.com/p/godel_codes,Gödel encoding and self-reference,['Patrick LaVictoire'],2016-05-06T15:58:15Z,arbital,, 59613,https://arbital.com/p/transcendental_number,Transcendental number,"['Chris Barnett', 'Eric Bruylant', 'Patrick Stevens', 'Joe Zeng']",2016-08-20T19:59:36Z,arbital,, 59623,https://arbital.com/p/ontology_identification,Ontology identification problem,"['Alexei Andreev', 'Nate Soares', 'Tom Brown', 'Eric Bruylant', 'Eliezer Yudkowsky']",2016-10-14T15:32:34Z,arbital,, 59645,https://arbital.com/p/unforeseen_maximum,Unforeseen maximum,"['Eric Rogstad', 'Eric Bruylant', 'Eliezer Yudkowsky', 'Alexei Andreev']",2016-06-27T00:24:55Z,arbital,, 59669,https://arbital.com/p/detrimental,'Detrimental',['Eliezer Yudkowsky'],2016-06-09T21:15:55Z,arbital,, 59678,https://arbital.com/p/group_orbits_partition,Group orbits partition,['Patrick Stevens'],2016-06-20T06:55:28Z,arbital,, 59687,https://arbital.com/p/lobs_theorem,Löb's theorem,"['Patrick LaVictoire', 'Eric Rogstad', 'Malcolm McCrimmon', 'Eric Bruylant', 'Jaime Sevilla Molina']",2016-07-30T02:03:46Z,arbital,, 59700,https://arbital.com/p/safe_useless,Safe but useless,"['Eric Bruylant', 'Eliezer Yudkowsky']",2016-06-07T23:38:38Z,arbital,, 59709,https://arbital.com/p/axiom_of_choice,Axiom of Choice,"['Daniel Satanove', 'Dylan Hendrickson', 'Yoni Lavi', 'Eric Bruylant', 'Mark Chimes', 'Jaime Sevilla Molina']",2016-12-02T20:22:51Z,arbital,, 59725,https://arbital.com/p/operator_mathematics,Operator,"['Eric Bruylant', 'Nate Soares', 'Patrick Stevens']",2016-06-14T10:29:12Z,arbital,, 59734,https://arbital.com/p/explicit_bayes_counters_worry,Explicit Bayes as a counter for 'worrying',"['Eric Bruylant', 'Eliezer Yudkowsky', 'Alexei Andreev']",2016-02-08T00:13:04Z,arbital,, 59744,https://arbital.com/p/commutativity_intuition,Commutativity: Intuition,"['Eric Bruylant', 'Nate Soares']",2016-08-27T12:09:10Z,arbital,, 59753,https://arbital.com/p/real_number,Real number,"['Kevin Clancy', 'M Yass', 'Michael Cohen', 'Alexei Andreev', 'Eric Rogstad', 'Patrick Stevens', 'Nate Soares', 'Eric Bruylant', 'Joe Zeng']",2016-08-16T20:23:25Z,arbital,, 59763,https://arbital.com/p/496,Square visualization of probabilities on two events,"['Tsvi BT', 'Eric Rogstad', 'Team Arbital']",2016-06-19T08:37:06Z,arbital,, 59772,https://arbital.com/p/group_mathematics,Group,"['Daniel Satanove', 'm g', 'Dylan Hendrickson', 'Eric Rogstad', 'Patrick Stevens', 'Louis Paquin', 'Nate Soares', 'Qiaochu Yuan', 'Eric Bruylant', 'Mark Chimes', 'Joe Zeng']",2016-12-31T13:05:14Z,arbital,, 59787,https://arbital.com/p/corps_vs_si,Corporations vs. 
superintelligences,"['Eric Bruylant', 'Eliezer Yudkowsky']",2017-03-25T05:41:36Z,arbital,, 59803,https://arbital.com/p/nonperson_predicate,Nonperson predicate,"['Eric Bruylant', 'Eliezer Yudkowsky']",2015-12-28T18:49:00Z,arbital,, 59813,https://arbital.com/p/bit,Bit,"['Eric Bruylant', 'Nate Soares']",2016-06-24T01:39:49Z,arbital,, 59822,https://arbital.com/p/prog_dep_typ,Programming in Dependent Type Theory,['Jack Gallagher'],2016-05-26T03:21:54Z,arbital,, 59835,https://arbital.com/p/NickBostrom,Nick Bostrom,"['Eric Bruylant', 'Eliezer Yudkowsky']",2015-12-01T18:12:26Z,arbital,, 59844,https://arbital.com/p/boolean,Boolean,"['Malcolm McCrimmon', 'Eric Bruylant']",2016-07-29T22:11:31Z,arbital,, 59853,https://arbital.com/p/currying,Currying,"['Eric Bruylant', 'M Yass']",2016-07-17T22:22:13Z,arbital,, 59862,https://arbital.com/p/superintelligent,Superintelligent,"['Eric Rogstad', 'Eric Bruylant', 'Eliezer Yudkowsky']",2016-06-08T15:24:56Z,arbital,, 59877,https://arbital.com/p/abelian_group,Abelian group,"['Alexei Andreev', 'Eric Rogstad', 'Patrick Stevens', 'Nate Soares', 'Qiaochu Yuan']",2016-07-18T15:59:32Z,arbital,, 59893,https://arbital.com/p/probability_interpretations,"Interpretations of ""probability""","['Eric Rogstad', 'Nate Soares']",2016-07-01T05:22:14Z,arbital,, 59911,https://arbital.com/p/Sovereign,Autonomous AGI,"['Eric Bruylant', 'Eliezer Yudkowsky']",2016-06-06T20:39:50Z,arbital,, 59921,https://arbital.com/p/advanced_safety,Advanced safety,"['Eric Bruylant', 'Eliezer Yudkowsky', 'Alexei Andreev']",2015-12-16T05:05:43Z,arbital,, 59934,https://arbital.com/p/development_phase_unpredictable,Development phase unpredictable,"['Eric Bruylant', 'Eliezer Yudkowsky', 'Alexei Andreev']",2015-10-13T15:31:17Z,arbital,, 59943,https://arbital.com/p/extensionality_axiom,Extensionality Axiom,"['Eric Rogstad', 'Ilia Zaichuk']",2016-08-29T11:34:30Z,arbital,, 59952,https://arbital.com/p/inductive_prior,Inductive prior,"['Nate Soares', 'Eliezer Yudkowsky', 'Alexei Andreev']",2016-03-03T04:10:29Z,arbital,, 59967,https://arbital.com/p/instrumental_pressure,Instrumental pressure,"['Eric Bruylant', 'Eliezer Yudkowsky', 'Alexei Andreev']",2015-12-16T15:49:35Z,arbital,, 59977,https://arbital.com/p/updated_deference,Problem of fully updated deference,"['Rob Bensinger', 'Eric Bruylant', 'Eliezer Yudkowsky', 'Thomas Jones']",2017-03-08T20:52:03Z,arbital,, 59998,https://arbital.com/p/happiness_maximizer,Happiness maximizer,"['Eric Bruylant', 'Eliezer Yudkowsky', 'Alexei Andreev']",2015-12-16T16:36:19Z,arbital,, 60007,https://arbital.com/p/log_inverts_exp,Logarithms invert exponentials,"['Eric Rogstad', 'Eric Bruylant', 'Nate Soares', 'Joe Zeng']",2016-07-04T19:14:19Z,arbital,, 60016,https://arbital.com/p/task_agi,Task-directed AGI,"['Malcolm Ocean', 'Nate Soares', 'Eliezer Yudkowsky', 'Alexei Andreev']",2017-03-25T05:35:00Z,arbital,, 60036,https://arbital.com/p/order_lattice,Lattice (Order Theory),"['Eric Rogstad', 'Kevin Clancy', 'Alexei Andreev']",2016-12-06T14:07:55Z,arbital,, 60050,https://arbital.com/p/uncontainability,Cognitive uncontainability,"['Eric Bruylant', 'Eliezer Yudkowsky', 'Alexei Andreev']",2015-12-16T15:13:34Z,arbital,, 60068,https://arbital.com/p/turing_machine,Turing machine,"['Eric Leese', 'Alexei Andreev', 'Eric Rogstad', 'Patrick Stevens', 'Alex Appel', 'Eric Bruylant']",2016-10-03T16:45:38Z,arbital,, 60082,https://arbital.com/p/only_empty_set_satisfies_up_of_emptyset,The empty set is the only set which satisfies the universal property of the empty set,['Patrick 
Stevens'],2016-08-26T15:16:09Z,arbital,, 60091,https://arbital.com/p/AI_safety_mindset,AI safety mindset,"['Rob Bensinger', 'Eric Bruylant', 'Eliezer Yudkowsky', 'Steven Dee']",2017-08-03T15:38:42Z,arbital,, 60115,https://arbital.com/p/infinitely_many_primes,Proof that there are infinitely many primes,"['Eric Bruylant', 'Patrick Stevens', 'Joe Zeng']",2016-08-15T11:09:43Z,arbital,, 60131,https://arbital.com/p/associativity_examples,Associativity: Examples,"['Eric Bruylant', 'Nate Soares']",2016-08-19T21:28:56Z,arbital,, 60140,https://arbital.com/p/convergent_strategies,Convergent instrumental strategies,"['Eric Bruylant', 'Eliezer Yudkowsky', 'Raccoon Arnold']",2016-06-19T16:52:53Z,arbital,, 60161,https://arbital.com/p/pointing_finger,"Look where I'm pointing, not at my finger","['Eric Bruylant', 'Eliezer Yudkowsky', 'Alexei Andreev']",2016-09-09T20:40:33Z,arbital,, 60183,https://arbital.com/p/oracle,Oracle,"['Eric Bruylant', 'Eliezer Yudkowsky', 'Alexei Andreev']",2015-12-16T15:16:14Z,arbital,, 60192,https://arbital.com/p/fractional_bit,Fractional bits,"['Eric Rogstad', 'Nate Soares']",2016-06-24T02:18:43Z,arbital,, 60201,https://arbital.com/p/prove_too_much,Proving too much,['Eliezer Yudkowsky'],2016-05-25T19:32:02Z,arbital,, 60211,https://arbital.com/p/nonadversarial_safety,The AI must tolerate your safety measures,"['Eric Bruylant', 'Eliezer Yudkowsky']",2017-02-13T17:41:49Z,arbital,, 60231,https://arbital.com/p/bayes_rule_odds_intro,Introduction to Bayes' rule: Odds form,"['Eric Bruylant', 'Conor Duggan', 'M J', 'Simon Grimm', 'Tom Voltz', 'Miguel Lima Medín', 'Alexei Andreev', 'Mott Smith', 'Nate Soares', 'John Woodgate', 'yassine chaouche', 'Killian McGuinness', 'Elias Mannherz', 'Ryan Bush', 'Eliezer Yudkowsky']",2016-10-25T22:15:44Z,arbital,, 60243,https://arbital.com/p/poset,Partially ordered set,"['Kevin Clancy', 'Eric Rogstad', 'Patrick Stevens', 'Nate Soares', 'Eric Bruylant', 'Joe Zeng']",2017-01-29T22:34:08Z,arbital,, 60266,https://arbital.com/p/AGI_typology,Strategic AGI typology,"['Eric Bruylant', 'Eliezer Yudkowsky', 'Alexei Andreev']",2015-12-29T23:45:44Z,arbital,, 60279,https://arbital.com/p/normalize_probabilities,Normalization (probability),['Eliezer Yudkowsky'],2016-10-07T21:37:30Z,arbital,, 60288,https://arbital.com/p/1mj,Arithmetical hierarchy: If you don't read logic,"['Patrick LaVictoire', 'Alexei Andreev', 'Eric Bruylant', 'Noah Walton', 'Eliezer Yudkowsky']",2016-04-04T02:04:32Z,arbital,, 60299,https://arbital.com/p/uncountable_sample_spaces_are_too_large,Uncountable sample spaces are way too large,['Tsvi BT'],2016-06-16T07:05:14Z,arbital,, 60308,https://arbital.com/p/axiom,Axiom,"['Eric Bruylant', 'Jaime Sevilla Molina']",2016-10-12T14:28:53Z,arbital,, 60318,https://arbital.com/p/LaTeX,LaTeX,['Eric Bruylant'],2016-08-20T10:35:55Z,arbital,, 60333,https://arbital.com/p/reflective_stability,Reflective stability,"['Eric Bruylant', 'Eliezer Yudkowsky']",2016-05-21T10:50:56Z,arbital,, 60342,https://arbital.com/p/exercise_meta_tag,Exercise,['Eric Bruylant'],2016-06-20T20:29:27Z,arbital,, 60351,https://arbital.com/p/left_cosets_partition_parent_group,Left cosets partition the parent group,['Patrick Stevens'],2016-06-28T07:29:03Z,arbital,, 60367,https://arbital.com/p/principal_ideal_domain,Principal ideal domain,"['Eric Bruylant', 'Patrick Stevens']",2016-08-04T14:10:18Z,arbital,, 60376,https://arbital.com/p/value_alignment_problem,Value alignment problem,"['Eric Bruylant', 'Eliezer Yudkowsky', 'Alexei Andreev']",2017-02-01T23:46:41Z,arbital,, 
60391,https://arbital.com/p/uncountability_math_3,Uncountability (Math 3),"['Dylan Hendrickson', 'Eric Bruylant', 'Daniel Satanove', 'Patrick Stevens']",2016-10-26T19:09:44Z,arbital,, 60407,https://arbital.com/p/alignment_difficulty,Difficulty of AI alignment,"['Eric Bruylant', 'Eliezer Yudkowsky']",2017-05-25T15:09:06Z,arbital,, 60431,https://arbital.com/p/trit,Trit,['Nate Soares'],2016-06-07T03:56:27Z,arbital,, 60441,https://arbital.com/p/some_computations_are_people,Some computations are people,"['Eliezer Yudkowsky', 'Alexei Andreev']",2015-12-28T17:59:08Z,arbital,, 60450,https://arbital.com/p/product_category_theory,Product (Category Theory),"['Mark Chimes', 'Nate Soares', 'Patrick Stevens', 'Alexei Andreev']",2016-06-24T01:55:45Z,arbital,, 60459,https://arbital.com/p/poset_lattice_examples,Lattice: Examples,['Kevin Clancy'],2016-07-16T17:08:44Z,arbital,, 60470,https://arbital.com/p/associativity_intuition,Associativity: Intuition,"['Eric Bruylant', 'Nate Soares']",2016-08-19T21:22:18Z,arbital,, 60480,https://arbital.com/p/category_theory,Category theory,"['Daniel Satanove', 'Tsvi BT', 'Patrick Stevens', 'Nate Soares', 'Eric Bruylant', 'Mark Chimes', 'Jaime Sevilla Molina']",2016-10-20T22:30:34Z,arbital,, 60490,https://arbital.com/p/bayes_frequency_diagram_diseasitis,Frequency diagrams: A first look at Bayes,"['Alexei Andreev', 'Eric Rogstad', 'Sachin Krishnan', 'Anthony Mercuri', 'Akanksha rawat', 'Nate Soares', 'jj jj', 'David Salamon', 'Eric Bruylant', 'John Mahony', 'Eliezer Yudkowsky']",2016-10-25T23:46:18Z,arbital,, 60501,https://arbital.com/p/instrumental_goals_equally_tractable,Instrumental goals are almost-equally as tractable as terminal goals,"['mrkun', 'Eric Bruylant', 'Niplav Yushtun', 'Eliezer Yudkowsky']",2021-08-18T21:33:39Z,arbital,, 60512,https://arbital.com/p/poset_exercises,Poset: Exercises,"['Eric Bruylant', 'Kevin Clancy', 'Mark Chimes', 'Chris Pasek']",2016-08-22T16:24:37Z,arbital,, 60525,https://arbital.com/p/conjugacy_classes_alternating_five_simpler,Conjugacy classes of the alternating group on five elements: Simpler proof,['Patrick Stevens'],2016-06-18T13:07:31Z,arbital,, 60534,https://arbital.com/p/dihedral_group,Dihedral group,['Patrick Stevens'],2016-06-16T18:38:00Z,arbital,, 60543,https://arbital.com/p/Kolmogorov_complexity,Algorithmic complexity,"['Jaime Sevilla Molina', 'Eliezer Yudkowsky', 'Alexei Andreev']",2016-06-14T20:52:34Z,arbital,, 60553,https://arbital.com/p/information,Information,['Nate Soares'],2016-05-31T14:32:54Z,arbital,, 60562,https://arbital.com/p/free_group,Free group,"['Dylan Hendrickson', 'Eric Bruylant', 'Eric Rogstad', 'Patrick Stevens']",2016-10-23T16:22:16Z,arbital,, 60572,https://arbital.com/p/ai_alignment,AI alignment,"['Alexei Andreev', 'Nate Soares', 'Eric Bruylant', 'Eliezer Yudkowsky', 'Ruben Bloom']",2017-01-27T19:32:06Z,arbital,, 60600,https://arbital.com/p/codomain_vs_image,Codomain vs image,"['Eric Rogstad', 'Eric Bruylant', 'Nate Soares', 'Patrick Stevens']",2016-06-10T14:46:07Z,arbital,, 60610,https://arbital.com/p/image_of_group_under_homomorphism_is_subgroup,The image of a group under a homomorphism is a subgroup of the codomain,['Patrick Stevens'],2016-06-14T17:30:27Z,arbital,, 60619,https://arbital.com/p/probability_distribution_countable,Probability distribution (countable sample space),['Tsvi BT'],2016-06-11T04:40:48Z,arbital,, 60628,https://arbital.com/p/ignorance_prior,Ignorance prior,['Eliezer Yudkowsky'],2016-03-03T22:10:51Z,arbital,, 60637,https://arbital.com/p/5wv,Factorial,"['Eric Bruylant', 'Patrick 
Stevens']",2016-08-18T05:05:29Z,arbital,, 60646,https://arbital.com/p/arithmetical_adequacy_GL,Solovay's theorems of arithmetical adequacy for GL,['Jaime Sevilla Molina'],2016-07-29T10:53:04Z,arbital,, 60657,https://arbital.com/p/EliezerYudkowsky,Eliezer Yudkowsky,['Eliezer Yudkowsky'],2015-12-19T00:46:45Z,arbital,, 60679,https://arbital.com/p/orbit_stabiliser_theorem,Orbit-stabiliser theorem,"['Mark Chimes', 'Patrick Stevens', 'Alexei Andreev']",2016-07-01T02:46:57Z,arbital,, 60693,https://arbital.com/p/log_as_comm_cost,Log as the change in the cost of communicating,"['Eric Rogstad', 'Nate Soares', 'Alexei Andreev']",2016-06-12T14:40:16Z,arbital,, 60709,https://arbital.com/p/fixed_point_theorem_provability_logic,Fixed point theorem of provability logic,"['mrkun', 'Jaime Sevilla Molina']",2017-03-02T15:00:20Z,arbital,, 60722,https://arbital.com/p/hyperexistential_separation,Separation from hyperexistential risk,"['Eric Bruylant', 'Eliezer Yudkowsky']",2017-12-04T20:38:46Z,arbital,, 60749,https://arbital.com/p/powerful_agent_highly_optimized,Relevant powerful agents will be highly optimized,"['Eliezer Yudkowsky', 'Alexei Andreev']",2015-12-16T16:42:26Z,arbital,, 60762,https://arbital.com/p/countability,Countability,"['Eric Bruylant', 'Alexei Andreev']",2016-10-20T20:56:47Z,arbital,, 60771,https://arbital.com/p/number_sets_intro,Intro to Number Sets,"['Eric Bruylant', 'Joe Zeng']",2016-08-20T14:27:36Z,arbital,, 60781,https://arbital.com/p/group_stabiliser,Stabiliser (of a group action),"['Mark Chimes', 'Patrick Stevens', 'Alexei Andreev']",2016-06-28T20:08:10Z,arbital,, 60791,https://arbital.com/p/function_domain,Domain (of a function),"['Eric Bruylant', 'Nate Soares']",2016-05-14T03:12:53Z,arbital,, 60800,https://arbital.com/p/bayes_rule_proportional,Bayes' rule: Proportional form,"['Viktor Riabtsev', 'Alexei Andreev', 'Eric Rogstad', 'Himanshu Chaturvedi', 'Nate Soares', 'Eric Bruylant', 'Eliezer Yudkowsky', 'V N']",2016-10-10T22:24:58Z,arbital,, 60809,https://arbital.com/p/relative_likelihood,Relative likelihood,"['Alexei Andreev', 'Eric Rogstad', 'Al Prihodko', 'Nate Soares', 'Eric Bruylant', 'Eliezer Yudkowsky']",2016-08-04T12:00:09Z,arbital,, 60818,https://arbital.com/p/cayley_theorem_symmetric_groups,Cayley's Theorem on symmetric groups,['Patrick Stevens'],2016-06-15T07:29:23Z,arbital,, 60827,https://arbital.com/p/bit_examples,Bit (of data): Examples,['Nate Soares'],2016-05-31T04:19:18Z,arbital,, 60836,https://arbital.com/p/fundamental_theorem_of_arithmetic,Fundamental Theorem of Arithmetic,['Patrick Stevens'],2016-08-07T08:18:06Z,arbital,, 60848,https://arbital.com/p/immediate_goods,Immediate goods,"['Eric Bruylant', 'Eliezer Yudkowsky', 'Alexei Andreev']",2015-12-16T16:39:47Z,arbital,, 60857,https://arbital.com/p/splitting_conjugacy_classes_in_alternating_group,Splitting conjugacy classes in alternating group,['Patrick Stevens'],2016-06-28T07:44:43Z,arbital,, 60866,https://arbital.com/p/greatest_common_divisor,Greatest common divisor,"['Eric Bruylant', 'Kendrea Beers', 'Patrick Stevens']",2016-08-04T18:04:44Z,arbital,, 60875,https://arbital.com/p/inductive_ambiguity,Identifying ambiguous inductions,"['Eric Bruylant', 'Eliezer Yudkowsky', 'Alexei Andreev']",2016-03-20T02:44:19Z,arbital,, 60890,https://arbital.com/p/proposed_b_class,Proposed B-Class,"['Eric Bruylant', 'Alexei Andreev']",2016-08-02T18:22:58Z,arbital,, 60899,https://arbital.com/p/rice_theorem,Rice's Theorem,"['Eric Rogstad', 'Dylan Hendrickson', 'Patrick Stevens', 'Eric Bruylant', 'Jaime Sevilla 
Molina']",2016-08-14T08:28:35Z,arbital,, 60912,https://arbital.com/p/ideal_equals_kernel_of_ring_homomorphism,Ideals are the same thing as kernels of ring homomorphisms,['Patrick Stevens'],2016-08-03T16:30:24Z,arbital,, 60921,https://arbital.com/p/Vinge_law,Vinge's Law,"['Eric Bruylant', 'Nate Soares', 'Eliezer Yudkowsky']",2016-06-25T16:59:03Z,arbital,, 60933,https://arbital.com/p/bayes_frequency_diagram,Frequency diagram,"['Eric Bruylant', 'Nate Soares', 'Alexei Andreev']",2016-07-06T21:04:25Z,arbital,, 60942,https://arbital.com/p/bayes_rule_fast_intro,High-speed intro to Bayes's rule,"['Joe Huang', 'Brie Hoffman', 'Eric Rogstad', 'Gijs van Dam', 'Rihn H', 'Mike Totman', 'Connor Flexman', 'Eliezer Yudkowsky']",2017-12-24T23:39:11Z,arbital,, 60960,https://arbital.com/p/flee_from_surprise,Shift towards the hypothesis of least surprise,"['Eric Bruylant', 'Nate Soares', 'Dony Christie', 'Alexei Andreev']",2016-11-03T01:10:16Z,arbital,, 60970,https://arbital.com/p/Vingean_reflection,Vingean reflection,['Eliezer Yudkowsky'],2016-06-20T23:55:03Z,arbital,, 60982,https://arbital.com/p/isomorphism,Isomorphism,"['Daniel Satanove', 'Eric Rogstad', 'Dylan Hendrickson', 'Patrick Stevens', 'Eric Bruylant', 'Mark Chimes']",2016-10-20T22:07:16Z,arbital,, 60991,https://arbital.com/p/conceivability,Conceivability,"['Eliezer Yudkowsky', 'Alexei Andreev']",2015-12-16T16:53:16Z,arbital,, 61000,https://arbital.com/p/timemachine_efficiency_metaphor,Time-machine metaphor for efficient agents,"['Eric Bruylant', 'Eliezer Yudkowsky']",2017-07-27T13:44:15Z,arbital,, 61009,https://arbital.com/p/5hs,Gödel II and Löb's theorem,['Jaime Sevilla Molina'],2016-07-25T06:03:35Z,arbital,, 61019,https://arbital.com/p/bayes_extraordinary_claims,Extraordinary claims require extraordinary evidence,"['Alexei Andreev', 'sdfsdf sdfsdf', 'Nate Soares', 'Grady Simon', 'Eric Bruylant', 'Eliezer Yudkowsky']",2016-10-08T02:52:18Z,arbital,, 61031,https://arbital.com/p/division_of_rational_numbers_math_0,Division of rational numbers (Math 0),"['Eric Bruylant', 'Patrick Stevens']",2016-08-01T06:09:45Z,arbital,, 61045,https://arbital.com/p/product_is_unique_up_to_isomorphism,Product is unique up to isomorphism,['Patrick Stevens'],2016-08-28T12:53:27Z,arbital,, 61054,https://arbital.com/p/ideal_target,Ideal target,"['Eric Bruylant', 'Eliezer Yudkowsky']",2017-02-08T03:56:27Z,arbital,, 61063,https://arbital.com/p/goodness_estimate_bias,Goodness estimate biaser,"['Eric Bruylant', 'Eliezer Yudkowsky']",2016-07-08T15:53:19Z,arbital,, 61085,https://arbital.com/p/odds_refresher,Odds: Refresher,"['Nate Soares', 'Eliezer Yudkowsky', 'Alexei Andreev']",2016-10-12T22:38:46Z,arbital,, 61094,https://arbital.com/p/bayes_odds_to_probability,Odds form to probability form,"['Eric Bruylant', 'Nate Soares']",2016-07-06T04:34:25Z,arbital,, 61103,https://arbital.com/p/hypercomputer,Hypercomputer,"['Eric Bruylant', 'Eliezer Yudkowsky']",2016-01-17T00:19:09Z,arbital,, 61113,https://arbital.com/p/interruptibility,Interruptibility,"['Eric Bruylant', 'Eliezer Yudkowsky']",2017-02-13T17:09:20Z,arbital,, 61128,https://arbital.com/p/multiple_compression,Compressing multiple messages,"['Eric Rogstad', 'Nate Soares']",2016-06-02T23:08:29Z,arbital,, 61140,https://arbital.com/p/nearest_unblocked,Nearest unblocked strategy,"['Eric Bruylant', 'Eliezer Yudkowsky', 'Alexei Andreev']",2016-05-01T20:22:53Z,arbital,, 61158,https://arbital.com/p/group_action,Group action,"['Eric Rogstad', 'Patrick Stevens', 'Qiaochu Yuan']",2016-06-14T15:04:49Z,arbital,, 
61167,https://arbital.com/p/image_of_identity_under_group_homomorphism,Image of the identity under a group homomorphism is the identity,['Patrick Stevens'],2016-06-14T17:21:36Z,arbital,, 61176,https://arbital.com/p/partial_function,Partial function,['Patrick Stevens'],2016-08-06T10:41:39Z,arbital,, 61186,https://arbital.com/p/square_root,Square root,['Travis Rivera'],2017-09-20T02:08:58Z,arbital,, 61195,https://arbital.com/p/extraordinary_claims,Extraordinary claims,"['Eric Bruylant', 'Eliezer Yudkowsky', 'Alexei Andreev']",2016-03-04T05:16:35Z,arbital,, 61211,https://arbital.com/p/strong_uncontainability,Strong cognitive uncontainability,"['Eric Bruylant', 'Eliezer Yudkowsky', 'Alexei Andreev']",2016-03-12T06:16:15Z,arbital,, 61228,https://arbital.com/p/KANSI,Known-algorithm non-self-improving agent,['Eliezer Yudkowsky'],2015-12-28T20:43:03Z,arbital,, 61242,https://arbital.com/p/no_coffee_if_dead,You can't get the coffee if you're dead,"['Eric Rogstad', 'Eliezer Yudkowsky']",2017-01-16T18:33:22Z,arbital,, 61251,https://arbital.com/p/preference_framework,Preference framework,"['Eric Bruylant', 'Eliezer Yudkowsky', 'Alexei Andreev']",2017-02-13T16:12:34Z,arbital,, 61260,https://arbital.com/p/axiom_of_choice_history_and_controversy,Axiom of Choice: History and Controversy,"['Tarn Somervell Fletcher', 'Mark Chimes']",2016-10-12T12:15:33Z,arbital,, 61270,https://arbital.com/p/large_computer,Unphysically large finite computer,"['Eric Bruylant', 'Eliezer Yudkowsky']",2016-01-19T23:51:25Z,arbital,, 61280,https://arbital.com/p/modal_combat,Modal combat,"['Jaime Sevilla Molina', 'Eric Bruylant', 'Patrick LaVictoire']",2016-09-26T22:01:57Z,arbital,, 61296,https://arbital.com/p/sign_of_permutation_is_well_defined,The sign of a permutation is well-defined,"['Patrick Stevens', 'AM AM']",2016-06-28T12:14:29Z,arbital,, 61305,https://arbital.com/p/bulverism,Bulverism,['Eliezer Yudkowsky'],2016-06-08T18:26:32Z,arbital,, 61314,https://arbital.com/p/math_order_complete_lattice,Complete lattice,['Kevin Clancy'],2017-02-10T20:53:34Z,arbital,, 61330,https://arbital.com/p/59h,Proof of Gödel's first incompleteness theorem,['Jaime Sevilla Molina'],2016-10-11T18:24:50Z,arbital,, 61340,https://arbital.com/p/mechanical_turk,Mechanical Turk (example),"['Eric Bruylant', 'Eliezer Yudkowsky']",2016-04-17T19:12:38Z,arbital,, 61350,https://arbital.com/p/bayes_rule_examples,Bayes' rule examples,"['Eric Bruylant', 'Nate Soares', 'Eliezer Yudkowsky']",2016-07-10T19:53:43Z,arbital,, 61359,https://arbital.com/p/logical_game,Logical game,"['Alexei Andreev', 'Eric Rogstad', 'Rob Bensinger', 'Eric Bruylant', 'Eliezer Yudkowsky']",2021-03-04T00:31:01Z,arbital,, 61368,https://arbital.com/p/bayes_log_odds,Bayes' rule: Log-odds form,"['Szymon Slawinski', 'Joe Zeng', 'Alexei Andreev', 'Eric Rogstad', 'Viktor Gregor', 'Pierre Thierry', 'Nate Soares', 'John Schmitt', 'Lars Øvlisen', 'Eric Bruylant', 'Eliezer Yudkowsky']",2017-07-09T22:25:14Z,arbital,, 61378,https://arbital.com/p/bayes_rule_probability_proof,Proof of Bayes' rule: Probability form,"['Eric Bruylant', 'Nate Soares', 'Eliezer Yudkowsky', 'Alexei Andreev']",2016-10-08T18:26:54Z,arbital,, 61387,https://arbital.com/p/gotcha_button,Gotcha button,['Eliezer Yudkowsky'],2017-01-29T20:51:43Z,arbital,, 61397,https://arbital.com/p/relative_ability,"Infrahuman, par-human, superhuman, efficient, optimal","['Eric Bruylant', 'Eliezer Yudkowsky']",2017-03-08T06:04:22Z,arbital,, 61419,https://arbital.com/p/modular_arithmetic,Modular arithmetic,"['Eric Rogstad', 'Malcolm McCrimmon', 'Patrick 
Stevens', 'Eric Bruylant', 'Mark Chimes']",2016-08-02T15:29:56Z,arbital,, 61428,https://arbital.com/p/4s,"Natural language understanding of ""right"" will yield normativity","['Eliezer Yudkowsky', 'Alexei Andreev']",2015-12-16T03:33:27Z,arbital,, 61448,https://arbital.com/p/6gb,Empty set,"['Eric Bruylant', 'Patrick Stevens']",2016-10-25T04:12:29Z,arbital,, 61458,https://arbital.com/p/solomonoff_induction,Solomonoff induction,"['Eric Bruylant', 'Eliezer Yudkowsky', 'Alexei Andreev']",2015-12-30T03:05:19Z,arbital,, 61467,https://arbital.com/p/bijective_function_intro_math_0,Bijective Function: Intro (Math 0),"['Eric Bruylant', 'Mark Chimes']",2016-07-12T17:57:03Z,arbital,, 61476,https://arbital.com/p/edge_instantiation,Edge instantiation,"['Eric Bruylant', 'Nate Soares', 'Eliezer Yudkowsky', 'Alexei Andreev']",2016-03-12T06:20:39Z,arbital,, 61496,https://arbital.com/p/lagrange_theorem_on_subgroup_size_intuitive,Lagrange theorem on subgroup size: Intuitive version,"['Eric Bruylant', 'Patrick Stevens']",2016-08-27T12:11:30Z,arbital,, 61505,https://arbital.com/p/selective_similarity_metric,Selective similarity metrics for imitation,"['Eric Bruylant', 'Jessica Taylor']",2016-03-24T03:21:45Z,arbital,, 61524,https://arbital.com/p/bayes_rule_odds,Bayes' rule: Odds form,"['Eric Bruylant', 'Nate Soares', 'Eliezer Yudkowsky', 'Alexei Andreev']",2016-10-12T22:56:37Z,arbital,, 61536,https://arbital.com/p/shutdown_utility_function,Shutdown utility function,"['Eric Bruylant', 'Eliezer Yudkowsky']",2016-03-22T01:52:27Z,arbital,, 61554,https://arbital.com/p/order_of_operations,Order of operations,"['Eric Bruylant', 'Patrick Stevens', 'Joe Zeng']",2016-07-06T21:01:11Z,arbital,, 61567,https://arbital.com/p/Vingean_uncertainty,Vingean uncertainty,"['Eric Bruylant', 'Nate Soares', 'Eliezer Yudkowsky', 'Alexei Andreev']",2016-06-20T23:55:06Z,arbital,, 61589,https://arbital.com/p/definition_meta_tag,Definition,"['Kevin Clancy', 'Alexei Andreev', 'Eric Rogstad', 'Patrick Stevens', 'Nate Soares', 'Eric Bruylant', 'Jaime Sevilla Molina', 'Eliezer Yudkowsky']",2015-10-21T14:01:49Z,arbital,, 61599,https://arbital.com/p/c_class_meta_tag,C-Class,"['Alexei Andreev', 'Eric Rogstad', 'Eric Bruylant', 'Mark Chimes', 'Eliezer Yudkowsky']",2016-08-27T15:22:59Z,arbital,, 61608,https://arbital.com/p/low_impact,Low impact,"['Eric Bruylant', 'Niplav Yushtun', 'Eliezer Yudkowsky']",2016-04-19T02:08:27Z,arbital,, 61633,https://arbital.com/p/standard_agent,Standard agent properties,"['Nate Soares', 'Eliezer Yudkowsky', 'Alexei Andreev']",2015-12-18T00:37:50Z,arbital,, 61649,https://arbital.com/p/bayes_reasoning,Bayesian reasoning,"['Nate Soares', 'Eliezer Yudkowsky', 'Alexei Andreev']",2016-07-26T14:28:17Z,arbital,, 61658,https://arbital.com/p/normal_subgroup,Normal subgroup,['Patrick Stevens'],2016-06-18T13:43:43Z,arbital,, 61667,https://arbital.com/p/ideological_turing_test,Ideological Turing test,"['Sean Cummins', 'Eliezer Yudkowsky']",2017-01-10T20:55:25Z,arbital,, 61676,https://arbital.com/p/axiom_of_choice_definition_mathematical,Axiom of Choice: Definition (Formal),['Mark Chimes'],2016-10-10T19:04:45Z,arbital,, 61685,https://arbital.com/p/meta_utility,Meta-utility function,"['Eric Bruylant', 'Eliezer Yudkowsky']",2017-02-13T16:14:46Z,arbital,, 61695,https://arbital.com/p/likelihood_ratio,Likelihood ratio,"['Nate Soares', 'Alexei Andreev']",2016-07-07T12:39:05Z,arbital,, 61704,https://arbital.com/p/identity_element,Identity element,"['Eric Bruylant', 'Joe Zeng']",2016-08-20T11:19:57Z,arbital,, 
61713,https://arbital.com/p/bayes_probability_notation,Probability notation for Bayes' rule,"['Alonzo Church', 'Nate Soares', 'Eliezer Yudkowsky', 'Alexei Andreev']",2016-07-10T19:10:01Z,arbital,, 61722,https://arbital.com/p/distinguish_advancement,Distinguish which advanced-agent properties lead to the foreseeable difficulty,"['Eric Bruylant', 'Eliezer Yudkowsky']",2016-06-20T20:12:33Z,arbital,, 61731,https://arbital.com/p/subgroup_normal_iff_union_of_conjugacy_classes,Subgroup is normal if and only if it is a union of conjugacy classes,['Patrick Stevens'],2016-06-17T19:37:34Z,arbital,, 61740,https://arbital.com/p/provability_logic,Provability logic,"['Dylan Hendrickson', 'Jaime Sevilla Molina']",2016-07-26T08:44:04Z,arbital,, 61762,https://arbital.com/p/irreducibles_are_prime_in_integers,Euclid's Lemma on prime numbers,"['Brayden Beathe-Gateley', 'Patrick Stevens']",2016-08-07T12:53:46Z,arbital,, 61776,https://arbital.com/p/distant_SIs,Modeling distant superintelligences,"['Eric Bruylant', 'Eliezer Yudkowsky']",2015-12-29T23:22:36Z,arbital,, 61788,https://arbital.com/p/digit_exchange_rates,Exchange rates between digits,"['Eric Rogstad', 'Nate Soares', 'Alexei Andreev']",2016-06-24T03:02:48Z,arbital,, 61797,https://arbital.com/p/function_physical_metaphor,Function: Physical metaphor,"['Eric Bruylant', 'Nate Soares']",2016-05-15T05:55:52Z,arbital,, 61806,https://arbital.com/p/bayesian_prior,Prior,"['Eliezer Yudkowsky', 'Alexei Andreev']",2016-03-04T05:11:28Z,arbital,, 61817,https://arbital.com/p/intelligence_explosion,Intelligence explosion,['Eliezer Yudkowsky'],2016-06-07T18:27:47Z,arbital,, 61829,https://arbital.com/p/missing_weird,Missing the weird alternative,"['Eric Bruylant', 'Eliezer Yudkowsky']",2016-06-26T23:16:55Z,arbital,, 61852,https://arbital.com/p/disjoint_union_universal_property,Universal property of the disjoint union,"['Patrick Stevens', 'Eric Rogstad', 'Eric Bruylant']",2016-10-23T18:43:27Z,arbital,, 61867,https://arbital.com/p/log_tutorial_overview,Logarithm tutorial overview,"['Nate Soares', 'Patrick Stevens']",2016-07-01T14:06:07Z,arbital,, 61885,https://arbital.com/p/value_alignment_value,Value,"['Eric Bruylant', 'Eliezer Yudkowsky', 'Alexei Andreev']",2016-06-01T17:56:26Z,arbital,, 61904,https://arbital.com/p/kernel_of_ring_homomorphism,Kernel of ring homomorphism,['Patrick Stevens'],2016-08-04T17:38:29Z,arbital,, 61919,https://arbital.com/p/strictly_confused,Strictly confused,"['Alexei Andreev', 'Roland G', 'Nate Soares', 'Eric Bruylant', 'Eliezer Yudkowsky']",2016-07-04T02:08:40Z,arbital,, 61934,https://arbital.com/p/patch_resistant,Patch resistance,"['Eric Rogstad', 'Eric Bruylant', 'Eliezer Yudkowsky', 'Alexei Andreev']",2016-06-27T00:35:02Z,arbital,, 61955,https://arbital.com/p/data_bit,Bit (of data),"['Eric Rogstad', 'Eric Bruylant', 'Nate Soares']",2016-06-02T19:07:48Z,arbital,, 61964,https://arbital.com/p/integers_intro_math0,Integers: Intro (Math 0),"['Eric Bruylant', 'Joe Zeng']",2016-07-07T15:30:38Z,arbital,, 61979,https://arbital.com/p/probability,Probability,"['Alan Liddell', 'Alexei Andreev', 'Eric Rogstad', 'Nate Soares', 'Eric Bruylant', 'Eliezer Yudkowsky']",2016-08-26T09:14:18Z,arbital,, 61990,https://arbital.com/p/big_picture_awareness,Big-picture strategic awareness,"['Eric Rogstad', 'Eric Bruylant', 'Eliezer Yudkowsky']",2016-06-09T16:53:48Z,arbital,, 62011,https://arbital.com/p/avert_self_improvement,Averting the convergent instrumental strategy of self-improvement,"['Eric Bruylant', 'Eliezer Yudkowsky']",2016-03-27T23:51:31Z,arbital,, 
62020,https://arbital.com/p/cyclic_group,Cyclic group,"['Eric Bruylant', 'Mark Chimes', 'Patrick Stevens']",2016-07-10T06:04:34Z,arbital,, 62029,https://arbital.com/p/modalized_modal_sentence,Modalized modal sentence,['Jaime Sevilla Molina'],2016-07-27T20:30:17Z,arbital,, 62038,https://arbital.com/p/math_join,Join and meet,"['Eric Bruylant', 'Kevin Clancy']",2016-12-21T04:42:35Z,arbital,, 62049,https://arbital.com/p/set_absolute_complement,Absolute Complement,['M Yass'],2016-08-05T23:32:07Z,arbital,, 62058,https://arbital.com/p/niceness_defense,Niceness is the first line of defense,"['Eric Bruylant', 'Eliezer Yudkowsky']",2017-01-18T04:27:42Z,arbital,, 62071,https://arbital.com/p/poset_monotone_function,Monotone function,['Kevin Clancy'],2016-12-03T01:42:37Z,arbital,, 62080,https://arbital.com/p/math0,Math 0,"['Joe Zeng', 'Alexei Andreev', 'Mark Chimes', 'Jaime Sevilla Molina', 'Eliezer Yudkowsky']",2016-07-09T16:01:44Z,arbital,, 62089,https://arbital.com/p/factorial,Factorial,"['Michael Cohen', 'Patrick Stevens', 'Douglas Weathers', 'Eric Bruylant', 'Henri Lemoine']",2023-06-20T17:25:34Z,arbital,, 62098,https://arbital.com/p/posterior_probability,Posterior probability,"['Alexei Andreev', 'Eric Rogstad', 'Nate Soares', 'Eric Bruylant', 'Eliezer Yudkowsky']",2016-07-10T05:08:40Z,arbital,, 62107,https://arbital.com/p/environmental_goals,Environmental goals,"['Eric Bruylant', 'Eliezer Yudkowsky']",2017-02-19T20:10:10Z,arbital,, 62124,https://arbital.com/p/ai_arms_race,AI arms races,"['Nate Soares', 'Eliezer Yudkowsky']",2016-05-25T17:42:03Z,arbital,, 62133,https://arbital.com/p/epistemic_exclusion,Epistemic exclusion,"['Eric Bruylant', 'Eliezer Yudkowsky']",2015-12-28T21:20:39Z,arbital,, 62143,https://arbital.com/p/poset_join_exercises,Join and meet: Exercises,"['Eric Bruylant', 'Kevin Clancy']",2016-07-29T19:24:27Z,arbital,, 62158,https://arbital.com/p/multiple_stage_fallacy,Multiple stage fallacy,['Eliezer Yudkowsky'],2016-06-19T21:54:17Z,arbital,, 62177,https://arbital.com/p/uncountability_intuitive,Uncountability: Intuitive Intro,"['Malcolm McCrimmon', 'Eric Rogstad', 'Jason Gross', 'Eric Bruylant', 'Joe Zeng']",2016-11-26T17:32:51Z,arbital,, 62192,https://arbital.com/p/alternating_group_is_simple,The alternating groups on more than four letters are simple,['Patrick Stevens'],2016-06-17T17:19:24Z,arbital,, 62201,https://arbital.com/p/featured_math,Featured math content,['Eric Bruylant'],2016-10-24T16:34:13Z,arbital,, 62210,https://arbital.com/p/bayes_strength_of_evidence,Strength of Bayesian evidence,"['Nate Soares', 'Eliezer Yudkowsky']",2016-07-08T13:27:37Z,arbital,, 62219,https://arbital.com/p/metric,Metric,"['Eric Rogstad', 'Adam Buchbinder', 'Bryce Woodworth', 'Kevin Clancy']",2017-03-23T22:15:16Z,arbital,, 62228,https://arbital.com/p/user_manipulation,User manipulation,"['Eric Bruylant', 'Eliezer Yudkowsky']",2016-03-31T16:26:09Z,arbital,, 62241,https://arbital.com/p/bezout_theorem,Bézout's theorem,"['Eric Bruylant', 'Soyoko U.', 'Patrick Stevens']",2016-09-22T04:26:22Z,arbital,, 62250,https://arbital.com/p/understandability_principle,Understandability principle,"['Rob Bensinger', 'Eric Bruylant', 'Eliezer Yudkowsky']",2017-03-07T19:48:46Z,arbital,, 62262,https://arbital.com/p/irreducible_element_ring_theory,Irreducible element (ring theory),"['Eric Bruylant', 'Patrick Stevens']",2016-08-01T20:04:03Z,arbital,, 62271,https://arbital.com/p/associativity_vs_commutativity,Associativity vs commutativity,"['Eric Bruylant', 'Nate Soares']",2016-05-15T11:13:51Z,arbital,, 
62284,https://arbital.com/p/correlated_coverage,Correlated coverage,"['Eric Bruylant', 'Eliezer Yudkowsky']",2016-06-26T23:37:41Z,arbital,, 62302,https://arbital.com/p/value_alignment_open_problem,AI alignment open problem,"['Eric Bruylant', 'Eliezer Yudkowsky', 'Alexei Andreev']",2017-02-06T02:05:36Z,arbital,, 62311,https://arbital.com/p/domain_distance,Distances between cognitive domains,"['Eric Bruylant', 'Eliezer Yudkowsky']",2017-02-18T02:18:38Z,arbital,, 62323,https://arbital.com/p/bayes_rule_definition,Bayes' rule: Definition,"['Eric Bruylant', 'Nate Soares', 'Eliezer Yudkowsky', 'Alexei Andreev']",2016-10-04T04:14:07Z,arbital,, 62337,https://arbital.com/p/preference_stability,Consequentialist preferences are reflectively stable by default,"['Eric Bruylant', 'Eliezer Yudkowsky']",2016-05-22T13:15:11Z,arbital,, 62348,https://arbital.com/p/mathematical_induction,Mathematical induction,"['Kevin Clancy', 'Patrick LaVictoire', 'Dylan Hendrickson', 'Douglas Weathers', 'Eric Bruylant']",2016-08-09T10:17:38Z,arbital,, 62363,https://arbital.com/p/behaviorist,Behaviorist genie,"['Eric Bruylant', 'Nate Soares', 'Eliezer Yudkowsky', 'Alexei Andreev']",2016-03-31T16:34:00Z,arbital,, 62387,https://arbital.com/p/sample_space_probability,Sample space,"['Tsvi BT', 'Stephanie Zolayvar']",2016-05-25T20:31:39Z,arbital,, 62396,https://arbital.com/p/central_example,Central examples,"['Eliezer Yudkowsky', 'Alexei Andreev']",2015-12-16T16:22:15Z,arbital,, 62405,https://arbital.com/p/30b,User maximization,"['Eric Bruylant', 'Eliezer Yudkowsky']",2016-03-31T16:31:03Z,arbital,, 62414,https://arbital.com/p/kernel_of_group_homomorphism,Kernel of group homomorphism,"['Eric Bruylant', 'Patrick Stevens']",2016-06-17T13:06:36Z,arbital,, 62423,https://arbital.com/p/load_bearing_premises,Flag the load-bearing premises,"['Eric Bruylant', 'Eliezer Yudkowsky']",2016-06-19T21:25:11Z,arbital,, 62435,https://arbital.com/p/logistic_function,Logistic function,"['Eric Bruylant', 'Joe Zeng']",2016-07-06T23:33:37Z,arbital,, 62458,https://arbital.com/p/harmless_supernova,Harmless supernova fallacy,['Eliezer Yudkowsky'],2018-02-16T17:57:02Z,arbital,, 62472,https://arbital.com/p/cartesian_boundary,Cartesian agent-environment boundary,"['Eric Bruylant', 'Eliezer Yudkowsky']",2016-01-19T04:24:18Z,arbital,, 62482,https://arbital.com/p/real_number_as_cauchy_sequence,Real number (as Cauchy sequence),"['Eric Bruylant', 'Patrick Stevens', 'Joe Zeng']",2016-07-05T20:06:38Z,arbital,, 62491,https://arbital.com/p/alternating_group_five_conjugacy_classes,Conjugacy classes of the alternating group on five elements,['Patrick Stevens'],2016-06-18T13:41:33Z,arbital,, 62500,https://arbital.com/p/complacency_valley,Valley of Dangerous Complacency,"['Eric Bruylant', 'Eliezer Yudkowsky']",2016-03-23T21:33:57Z,arbital,, 62511,https://arbital.com/p/ackermann_function,Ackermann function,"['Alexei Andreev', 'Eric Rogstad', 'Chris Barnett', 'Alex Appel', 'Nate Soares', 'Eric Bruylant']",2016-06-10T14:56:04Z,arbital,, 62520,https://arbital.com/p/log_as_length,Log as generalized length,"['Nate Soares', 'Michael Keenan', 'Alexei Andreev']",2016-09-14T23:38:09Z,arbital,, 62529,https://arbital.com/p/exclusive_exhaustive,Mutually exclusive and exhaustive,"['Eric Bruylant', 'Eliezer Yudkowsky', 'Fedor Belolutskiy']",2016-04-26T21:40:08Z,arbital,, 62538,https://arbital.com/p/CT_thesis,Church-Turing thesis,"['Alexei Andreev', 'Eric Rogstad', 'Patrick Stevens', 'Eric Bruylant', 'Connor Flexman', 'Jaime Sevilla Molina']",2016-07-21T21:12:21Z,arbital,, 
62547,https://arbital.com/p/bayes_rule_probability,Bayes' rule: Probability form,"['Eric Bruylant', 'Alexei Andreev', 'Nadeem Mohsin', 'Eric Rogstad', 'Nate Soares', 'Adom Hartell', 'Eliezer Yudkowsky']",2017-08-13T04:21:13Z,arbital,, 62562,https://arbital.com/p/bayes_rule_guide,Bayes' rule: Guide,"['Joe Zeng', 'Alexei Andreev', 'Eric Rogstad', 'Tim Huegerich', 'Nate Soares', 'John John', 'Eric Bruylant', 'Kira Kutscher', 'Chris Cooper', 'Eliezer Yudkowsky']",2016-10-25T21:43:14Z,arbital,, 62571,https://arbital.com/p/4j,Coordinative AI development hypothetical,"['Eric Bruylant', 'Eliezer Yudkowsky', 'Janos Kramar', 'Alexei Andreev']",2015-12-16T01:25:36Z,arbital,, 62581,https://arbital.com/p/kripke_model,Kripke model,"['Dylan Hendrickson', 'Jaime Sevilla Molina']",2016-07-26T14:42:16Z,arbital,, 62603,https://arbital.com/p/conservative_concept,Conservative concept boundary,"['Eric Bruylant', 'Eliezer Yudkowsky']",2016-03-22T18:26:05Z,arbital,, 62624,https://arbital.com/p/proof_lobs_thorem,Proof of Löb's theorem,"['M Yass', 'Dylan Hendrickson', 'Malcolm McCrimmon', 'Eric Bruylant', 'Jaime Sevilla Molina']",2016-10-14T08:10:23Z,arbital,, 62633,https://arbital.com/p/user_querying,Querying the AGI user,"['Eric Bruylant', 'Eliezer Yudkowsky']",2016-03-20T02:35:33Z,arbital,, 62654,https://arbital.com/p/orbit_stabiliser_theorem_external_resources,Orbit-Stabiliser theorem: External Resources,"['Eric Bruylant', 'Mark Chimes', 'Patrick Stevens']",2016-07-02T21:58:00Z,arbital,, 62663,https://arbital.com/p/universal_property,Universal property,"['Eric Rogstad', 'Patrick Stevens']",2016-12-31T13:33:03Z,arbital,, 62680,https://arbital.com/p/addition_of_rational_numbers_math_0,Addition of rational numbers (Math 0),"['Eric Bruylant', 'Patrick Stevens']",2016-08-13T12:14:24Z,arbital,, 62689,https://arbital.com/p/safe_training_for_imitators,Safe training procedures for human-imitators,"['Eric Bruylant', 'Jessica Taylor']",2016-03-24T03:31:26Z,arbital,, 62712,https://arbital.com/p/free_group_is_torsion_free,Free groups are torsion-free,"['Dylan Hendrickson', 'Patrick Stevens']",2016-07-25T17:38:13Z,arbital,, 62721,https://arbital.com/p/effability,Effability principle,"['zohar jackson', 'Eric Bruylant', 'Eliezer Yudkowsky']",2017-02-16T19:07:39Z,arbital,, 62741,https://arbital.com/p/direct_limit_oppose,"Directing, vs. limiting, vs. 
opposing","['Eric Rogstad', 'Eric Bruylant', 'Eliezer Yudkowsky']",2017-05-22T22:39:26Z,arbital,, 62767,https://arbital.com/p/value_identification,Value identification problem,"['Eric Bruylant', 'Eliezer Yudkowsky', 'Alexei Andreev']",2015-12-15T05:23:49Z,arbital,, 62779,https://arbital.com/p/fallacy,Fallacies,['Eliezer Yudkowsky'],2016-05-25T19:30:15Z,arbital,, 62788,https://arbital.com/p/mindcrime_introduction,Mindcrime: Introduction,"['Alexei Andreev', 'Eric Rogstad', 'Alex Pear', 'Eric Bruylant', 'Niplav Yushtun', 'Eliezer Yudkowsky']",2016-12-21T00:49:46Z,arbital,, 62802,https://arbital.com/p/dwim,Do-What-I-Mean hierarchy,"['Eric Bruylant', 'Eliezer Yudkowsky', 'Damon Pourtahmaseb-Sasi']",2016-06-06T17:38:45Z,arbital,, 62841,https://arbital.com/p/antisymmetric_relation,Antisymmetric relation,"['Eric Rogstad', 'Kevin Clancy', 'M Yass']",2016-08-05T22:42:35Z,arbital,, 62850,https://arbital.com/p/convergent_self_modification,Convergent strategies of self-modification,"['Eric Rogstad', 'Eric Bruylant', 'Eliezer Yudkowsky']",2016-05-18T04:34:21Z,arbital,, 62869,https://arbital.com/p/group_orbit,Group orbit,"['Eric Bruylant', 'Patrick Stevens', 'Adele Lopez']",2016-06-28T21:01:45Z,arbital,, 62878,https://arbital.com/p/start_meta_tag,Start,"['Kevin Clancy', 'Joe Zeng', 'Alexei Andreev', 'Eric Rogstad', 'Stephanie Zolayvar', 'Nate Soares', 'Eric Bruylant', 'Jaime Sevilla Molina', 'Eliezer Yudkowsky']",2016-08-03T20:19:29Z,arbital,, 62888,https://arbital.com/p/sample_spaces_are_too_large,Sample spaces are too large,['Tsvi BT'],2016-05-25T20:08:44Z,arbital,, 62897,https://arbital.com/p/subjective_probability,Subjective probability,"['Eric Bruylant', 'Nate Soares', 'Eliezer Yudkowsky', 'Alexei Andreev']",2017-02-08T17:36:49Z,arbital,, 62910,https://arbital.com/p/value_alignment_subject_list,List: value-alignment subjects,"['Alexei Andreev', 'Matthew Graves', 'Nate Soares', 'Eric Bruylant', 'Eliezer Yudkowsky']",2017-01-27T19:30:51Z,arbital,, 62943,https://arbital.com/p/underestimate_value_complexity_perceputal_property,Underestimating complexity of value because goodness feels like a simple property,"['Eric Bruylant', 'Eliezer Yudkowsky']",2016-06-26T23:23:07Z,arbital,, 62952,https://arbital.com/p/intension_extension,Intension vs. 
extension,"['Eliezer Yudkowsky', 'Alexei Andreev']",2015-12-16T16:20:57Z,arbital,, 62962,https://arbital.com/p/likelihood_not_pvalue_faq,Report likelihoods not p-values: FAQ,"['[ ]', 'Nate Soares', 'Diane Ritter', 'Eric Bruylant', 'Travis Rivera']",2017-09-09T14:57:58Z,arbital,, 62984,https://arbital.com/p/diamond_maximizer,Diamond maximizer,"['Eliezer Yudkowsky', 'Nate Soares', 'Eric Bruylant']",2015-12-17T21:58:47Z,arbital,, 63005,https://arbital.com/p/rationals_form_a_field_math_0,Rational arithmetic all works together,"['Patrick Stevens', 'Alexei Andreev']",2016-08-14T06:39:06Z,arbital,, 63014,https://arbital.com/p/data_capacity,Data capacity,['Nate Soares'],2016-06-24T02:11:04Z,arbital,, 63030,https://arbital.com/p/real_world,Real-world domain,"['Eric Bruylant', 'Eliezer Yudkowsky']",2017-01-05T15:12:13Z,arbital,, 63041,https://arbital.com/p/orthogonality,Orthogonality Thesis,"['Alexei Andreev', 'Eric Rogstad', 'Rob Bensinger', 'Eric Bruylant', 'Eliezer Yudkowsky']",2022-06-08T04:35:51Z,arbital,, 63056,https://arbital.com/p/limited_agi,Limited AGI,"['Eric Bruylant', 'Eliezer Yudkowsky']",2017-02-22T00:12:59Z,arbital,, 63078,https://arbital.com/p/relation_mathematics,Relation,"['Kevin Clancy', 'Alexei Andreev', 'Dylan Hendrickson', 'Nate Soares', 'Eric Bruylant', 'Joe Zeng']",2016-07-07T15:11:14Z,arbital,, 63087,https://arbital.com/p/total_alignment,Total alignment,"['Eric Bruylant', 'Eliezer Yudkowsky']",2016-06-06T17:52:05Z,arbital,, 63102,https://arbital.com/p/real_numbers_uncountable,Real numbers are uncountable,['Eric Bruylant'],2016-10-21T09:16:58Z,arbital,, 63111,https://arbital.com/p/identify_causal_goals,Identifying causal goal concepts from sensory data,"['Eric Bruylant', 'Eliezer Yudkowsky']",2016-04-14T19:18:05Z,arbital,, 63120,https://arbital.com/p/generalized_associative_law,Generalized associative law,"['Eric Rogstad', 'Eric Bruylant', 'Nate Soares']",2016-05-16T23:38:37Z,arbital,, 63130,https://arbital.com/p/free_group_formal_definition,Formal definition of the free group,['Patrick Stevens'],2016-08-05T06:44:17Z,arbital,, 63139,https://arbital.com/p/omni_test,Omnipotence test for AI safety,"['Eric Bruylant', 'Olivia Schaefer', 'Eliezer Yudkowsky', 'Alexei Andreev']",2016-03-31T15:04:12Z,arbital,, 63157,https://arbital.com/p/faithful_simulation,Faithful simulation,"['Eric Bruylant', 'Eliezer Yudkowsky']",2016-04-14T01:17:00Z,arbital,, 63175,https://arbital.com/p/pivotal,Pivotal act,"['Alexei Andreev', 'Rob Bensinger', 'Nate Soares', 'Eric Bruylant', 'Eliezer Yudkowsky']",2021-11-16T04:22:19Z,arbital,, 63194,https://arbital.com/p/complexity_theory,Complexity theory,"['Eric Bruylant', 'Adom Hartell', 'Eric Rogstad', 'Jaime Sevilla Molina']",2016-10-08T15:56:04Z,arbital,, 63210,https://arbital.com/p/free_group_universal_property,Free group universal property,['Patrick Stevens'],2016-10-23T16:25:51Z,arbital,, 63220,https://arbital.com/p/logarithm,Logarithm,"['Eric Rogstad', 'Nate Soares']",2016-06-20T21:56:48Z,arbital,, 63237,https://arbital.com/p/vector_space_direct_sum,Direct sum of vector spaces,['Nate Soares'],2016-05-27T17:58:14Z,arbital,, 63246,https://arbital.com/p/gen_elt,Generalized element,"['Dylan Hendrickson', 'Eric Bruylant', 'Luke Sciarappa']",2016-10-07T15:59:07Z,arbital,, 63255,https://arbital.com/p/fractional_bits_as_expected_cost,Fractional bits: Expected cost interpretation,['Nate Soares'],2016-06-24T02:29:24Z,arbital,, 63273,https://arbital.com/p/FAI,Friendly AI,"['Eric Bruylant', 'Eliezer Yudkowsky']",2015-12-28T21:06:04Z,arbital,, 
63282,https://arbital.com/p/closure_mathematics,Closure,"['Eric Bruylant', 'Nate Soares']",2016-06-07T03:57:29Z,arbital,, 63291,https://arbital.com/p/group_conjugate,Group conjugate,"['Eric Bruylant', 'Patrick Stevens']",2016-06-20T07:05:54Z,arbital,, 63306,https://arbital.com/p/dont_solve_whole_problem,Don't try to solve the entire alignment problem,"['Eric Bruylant', 'Eliezer Yudkowsky']",2016-06-29T02:48:31Z,arbital,, 63318,https://arbital.com/p/likelihood_vs_pvalue,"Likelihood functions, p-values, and the replication crisis","['Eric Bruylant', 'Grigoriy Beziuk', 'Eric Rogstad', 'Patrick Stevens', 'Nate Soares', 'Drake Thomas', 'Adnll', 'Eliezer Yudkowsky']",2018-07-01T23:10:21Z,arbital,, 63340,https://arbital.com/p/cosmic_endowment,Cosmic endowment,"['Eric Bruylant', 'Nate Soares', 'Rob Bensinger', 'Eliezer Yudkowsky']",2018-10-20T13:32:21Z,arbital,, 63349,https://arbital.com/p/magician_message,Communication: magician example,['Nate Soares'],2016-06-02T19:42:25Z,arbital,, 63360,https://arbital.com/p/value_laden,Value-laden,"['Eric Bruylant', 'Eliezer Yudkowsky', 'Alexei Andreev']",2016-04-18T16:35:38Z,arbital,, 63375,https://arbital.com/p/goodharts_curse,Goodhart's Curse,"['Rob Bensinger', 'Eric Bruylant', 'Eliezer Yudkowsky']",2017-02-21T22:09:10Z,arbital,, 63398,https://arbital.com/p/why_is_log_like_length,Why is log like length?,"['Eric Rogstad', 'Nate Soares']",2016-06-08T17:05:04Z,arbital,, 63407,https://arbital.com/p/relevant_limited_AI,Relevant limited AI,"['Rob Bensinger', 'Eric Bruylant', 'Eliezer Yudkowsky', 'Alexei Andreev']",2016-09-24T22:04:01Z,arbital,, 63422,https://arbital.com/p/value_achievement_dilemma,Value achievement dilemma,"['Eric Bruylant', 'Eliezer Yudkowsky', 'Alexei Andreev']",2017-02-01T23:41:23Z,arbital,, 63443,https://arbital.com/p/sockdresser_search,Sock-dresser search,['Nate Soares'],2016-07-06T12:11:50Z,arbital,, 63452,https://arbital.com/p/blue_oysters,Blue oysters,"['Nate Soares', 'Alexei Andreev']",2016-07-15T21:32:59Z,arbital,, 63461,https://arbital.com/p/euclidean_domain_is_pid,Euclidean domains are principal ideal domains,['Patrick Stevens'],2016-08-14T15:43:26Z,arbital,, 63475,https://arbital.com/p/cognitive_alignment,Generalized principle of cognitive alignment,"['Eric Bruylant', 'Eliezer Yudkowsky']",2017-02-13T17:55:58Z,arbital,, 63487,https://arbital.com/p/normative_extrapolated_volition,Extrapolated volition (normative moral theory),"['Eric Bruylant', 'Eliezer Yudkowsky']",2017-01-07T07:06:33Z,arbital,, 63506,https://arbital.com/p/consistency,Consistency,"['Eric Rogstad', 'Jaime Sevilla Molina']",2016-07-24T06:48:18Z,arbital,, 63517,https://arbital.com/p/equivalence_relation,Equivalence relation,"['Dylan Hendrickson', 'Eric Bruylant', 'Joe Zeng']",2016-07-07T16:52:44Z,arbital,, 63526,https://arbital.com/p/instrumental_convergence,Instrumental convergence,"['Eric Bruylant', 'Eliezer Yudkowsky', 'spxtr spxtr', 'Alexei Andreev']",2017-04-10T04:12:52Z,arbital,, 63548,https://arbital.com/p/complexity_zoo,Complexity theory: Complexity zoo,"['Eric Rogstad', 'Jaime Sevilla Molina', 'Duncan Wilson']",2017-01-25T13:45:55Z,arbital,, 63564,https://arbital.com/p/foreseeable_difficulties,Methodology of foreseeable difficulties,"['Eric Bruylant', 'Alexei Andreev', 'Eliezer Yudkowsky', 'Matthew Graves']",2016-11-22T23:34:12Z,arbital,, 63574,https://arbital.com/p/unbounded_analysis,Methodology of unbounded analysis,"['Eric Bruylant', 'Nate Soares', 'Eliezer Yudkowsky', 'Alexei Andreev']",2016-01-20T00:05:11Z,arbital,, 
63598,https://arbital.com/p/iff,Iff,"['Eric Bruylant', 'M Yass', 'Alexei Andreev']",2016-10-04T19:23:12Z,arbital,, 63607,https://arbital.com/p/mind_projection,Mind projection fallacy,"['Nate Soares', 'Eliezer Yudkowsky']",2016-06-30T02:51:58Z,arbital,, 63620,https://arbital.com/p/encoding_dependent_messages,Dependent messages can be encoded cheaply,['Nate Soares'],2016-05-29T16:56:20Z,arbital,, 63629,https://arbital.com/p/arithmetical_hierarchy,Arithmetical hierarchy,"['Eric Bruylant', 'Eliezer Yudkowsky']",2016-01-16T23:51:28Z,arbital,, 63642,https://arbital.com/p/commutativity_examples,Commutativity: Examples,"['Eric Bruylant', 'Nate Soares']",2016-05-15T11:23:17Z,arbital,, 63656,https://arbital.com/p/bayes_update,Bayesian update,"['Eric Bruylant', 'Nate Soares', 'Eliezer Yudkowsky', 'Alexei Andreev']",2017-02-08T17:36:41Z,arbital,, 63665,https://arbital.com/p/safe_plan_identification,Safe plan identification and verification,"['Eric Bruylant', 'Eliezer Yudkowsky']",2016-03-23T21:17:13Z,arbital,, 63678,https://arbital.com/p/empty_set_universal_property,Universal property of the empty set,"['Eric Bruylant', 'Patrick Stevens', 'Alexei Andreev']",2016-08-27T16:29:32Z,arbital,, 63694,https://arbital.com/p/string_concatenation,concat (function),"['Eric Bruylant', 'Nate Soares']",2016-05-12T21:16:32Z,arbital,, 63703,https://arxiv.org/abs/2205.12749,A Human-Centric Assessment Framework for AI,"['Sascha Saralajew', 'Ammar Shaker', 'Zhao Xu', 'Kiril Gashteovski', 'Bhushan Kotnis', 'Wiem Ben Rim', 'Jürgen Quittek', 'Carolin Lawrence']",2022-05-25T00:00:00Z,arxiv,, 63724,https://arxiv.org/abs/2002.11708,Generalized Hindsight for Reinforcement Learning,"['Alexander C. Li', 'Lerrel Pinto', 'Pieter Abbeel']",2020-02-26T00:00:00Z,arxiv,, 63739,https://arxiv.org/abs/2011.12863,European Strategy on AI: Are we truly fostering social good?,"['Francesca Foffano', 'Teresa Scantamburlo', 'Atia Cortés', 'Chiara Bissolo']",2020-11-25T00:00:00Z,arxiv,, 63765,https://arxiv.org/abs/1811.03493,"Integrative Biological Simulation, Neuropsychology, and AI Safety","['Gopal P. Sarma', 'Adam Safron', 'Nick J. 
Hay']",2018-11-07T00:00:00Z,arxiv,, 63793,https://arxiv.org/abs/2102.07152,On the Equilibrium Elicitation of Markov Games Through Information Design,"['Tao Zhang', 'Quanyan Zhu']",2021-02-14T00:00:00Z,arxiv,, 63810,https://arxiv.org/abs/1808.00508,Neural Arithmetic Logic Units,"['Andrew Trask', 'Felix Hill', 'Scott Reed', 'Jack Rae', 'Chris Dyer', 'Phil Blunsom']",2018-08-01T00:00:00Z,arxiv,, 63832,https://arxiv.org/abs/1807.10299,Variational Option Discovery Algorithms,"['Joshua Achiam', 'Harrison Edwards', 'Dario Amodei', 'Pieter Abbeel']",2018-07-26T00:00:00Z,arxiv,, 63862,https://arxiv.org/abs/2103.05746,Analyzing Human Models that Adapt Online.,"['Andrea Bajcsy', 'Anand Siththaranjan', 'Claire J', 'Tomlin', 'Anca D', 'Dragan']",2021-08-14T00:00:00Z,arxiv,, 63883,https://arxiv.org/abs/1712.04172,A Low-Cost Ethics Shaping Approach for Designing Reinforcement Learning Agents,"['Yueh-Hua Wu', 'Shou-De Lin']",2017-12-12T00:00:00Z,arxiv,, 63904,https://arxiv.org/abs/1807.04742,Visual Reinforcement Learning with Imagined Goals,"['Ashvin Nair', 'Vitchyr Pong', 'Murtaza Dalal', 'Shikhar Bahl', 'Steven Lin', 'Sergey Levine']",2018-07-12T00:00:00Z,arxiv,, 63928,https://arxiv.org/abs/1907.02908,On Inductive Biases in Deep Reinforcement Learning,"['Matteo Hessel', 'Hado van Hasselt', 'Joseph Modayil', 'David Silver']",2019-07-05T00:00:00Z,arxiv,, 63952,https://arxiv.org/abs/1801.03583,Graphical Models for Processing Missing Data.,"['Karthika Mohan', 'Judea Pearl']",2019-08-14T00:00:00Z,arxiv,, 63970,https://arxiv.org/abs/1304.2759,Reasoning About Beliefs and Actions Under Computational Resource Constraints,['Eric J. Horvitz'],2013-03-27T00:00:00Z,arxiv,, 63992,https://arxiv.org/abs/2004.09984,BERT-ATTACK: Adversarial Attack Against BERT Using BERT,"['Linyang Li', 'Ruotian Ma', 'Qipeng Guo', 'Xiangyang Xue', 'Xipeng Qiu']",2020-04-21T00:00:00Z,arxiv,, 64009,https://arxiv.org/abs/1906.02845,Likelihood Ratios for Out-of-Distribution Detection,"['Jie Ren', 'Peter J. Liu', 'Emily Fertig', 'Jasper Snoek', 'Ryan Poplin', 'Mark A. DePristo', 'Joshua V. Dillon', 'Balaji Lakshminarayanan']",2019-06-07T00:00:00Z,arxiv,, 64022,https://arxiv.org/abs/1701.08317,Plan Explanations as Model Reconciliation: Moving Beyond Explanation as Soliloquy,"['Tathagata Chakraborti', 'Sarath Sreedharan', 'Yu Zhang', 'Subbarao Kambhampati']",2017-01-28T00:00:00Z,arxiv,, 64039,https://arxiv.org/abs/1304.5159,Interactive POMDP Lite: Towards Practical Planning to Predict and Exploit Intentions for Interacting with Self-Interested Agents,"['Trong Nghia Hoang', 'Kian Hsiang Low']",2013-04-18T00:00:00Z,arxiv,, 64056,https://arxiv.org/abs/1907.12392,A Unified Bellman Optimality Principle Combining Reward Maximization and Empowerment,"['Felix Leibfried', 'Sergio Pascual-Diaz', 'Jordi Grau-Moya']",2019-07-26T00:00:00Z,arxiv,, 64076,https://arxiv.org/abs/1611.09321,Improving Policy Gradient by Exploring Under-appreciated Rewards,"['Ofir Nachum', 'Mohammad Norouzi', 'Dale Schuurmans']",2016-11-28T00:00:00Z,arxiv,, 64087,https://arxiv.org/abs/2011.05064,What Did You Think Would Happen? Explaining Agent Behaviour Through Intended Outcomes,"['Herman Yau', 'Chris Russell', 'Simon Hadfield']",2020-11-10T00:00:00Z,arxiv,, 64100,https://arxiv.org/abs/1901.06085,Theory of Minds: Understanding Behavior in Groups Through Inverse Planning,"['Michael Shum', 'Max Kleiman-Weiner', 'Michael L. Littman', 'Joshua B. 
Tenenbaum']",2019-01-18T00:00:00Z,arxiv,, 64115,https://arxiv.org/abs/2108.00366,Agent-aware state estimation for autonomous vehicles.,"['Shane Parr', 'Ishan Khatri', 'Justin Svegliato', 'Shlomo Zilberstein']",2021-08-14T00:00:00Z,arxiv,, 64131,https://arxiv.org/abs/1609.03765,Graph Aggregation,"['Ulle Endriss', 'Umberto Grandi']",2016-09-13T00:00:00Z,arxiv,, 64150,https://arxiv.org/abs/2109.01903,Robust fine-tuning of zero-shot models,"['Mitchell Wortsman', 'Gabriel Ilharco', 'Jong Wook Kim', 'Mike Li', 'Simon Kornblith', 'Rebecca Roelofs', 'Raphael Gontijo-Lopes', 'Hannaneh Hajishirzi', 'Ali Farhadi', 'Hongseok Namkoong', 'Ludwig Schmidt']",2021-09-04T00:00:00Z,arxiv,, 64164,https://arxiv.org/abs/2003.04297,Improved Baselines with Momentum Contrastive Learning,"['Xinlei Chen', 'Haoqi Fan', 'Ross Girshick', 'Kaiming He']",2020-03-09T00:00:00Z,arxiv,, 64178,https://arxiv.org/abs/2202.05262,Locating and Editing Factual Associations in GPT,"['Kevin Meng', 'David Bau', 'Alex Andonian', 'Yonatan Belinkov']",2022-02-10T00:00:00Z,arxiv,, 64195,https://arxiv.org/abs/2103.03938,Causal Analysis of Agent Behavior for AI Safety,"['Grégoire Déletang', 'Jordi Grau-Moya', 'Miljan Martic', 'Tim Genewein', 'Tom McGrath', 'Vladimir Mikulik', 'Markus Kunesch', 'Shane Legg', 'Pedro A. Ortega']",2021-03-05T00:00:00Z,arxiv,, 64214,https://arxiv.org/abs/1906.01983,The Computational Structure of Unintentional Meaning.,"['Mark K', 'Ho', 'Joanna Korman', 'Thomas L', 'Griffiths']",2019-08-14T00:00:00Z,arxiv,, 64225,https://arxiv.org/abs/1310.6438,Game Theory with Translucent Players,"['Joseph Y. Halpern', 'Rafael Pass']",2013-10-23T23:54:32Z,arxiv,, 64240,https://arxiv.org/abs/1909.04630,Meta-Learning with Implicit Gradients,"['Aravind Rajeswaran', 'Chelsea Finn', 'Sham Kakade', 'Sergey Levine']",2019-09-10T00:00:00Z,arxiv,, 64250,https://arxiv.org/abs/2102.12092,Zero-Shot Text-to-Image Generation,"['Aditya Ramesh', 'Mikhail Pavlov', 'Gabriel Goh', 'Scott Gray', 'Chelsea Voss', 'Alec Radford', 'Mark Chen', 'Ilya Sutskever']",2021-02-24T00:00:00Z,arxiv,, 64280,https://arxiv.org/abs/2107.13668,Discovering User-Interpretable Capabilities of Black-Box Planning Agents,"['Pulkit Verma', 'Shashank Rao Marpally', 'Siddharth Srivastava']",2021-07-28T00:00:00Z,arxiv,, 64291,https://arxiv.org/abs/1904.12004,Knowing When to Stop: Evaluation and Verification of Conformity to Output-size Specifications,"['Chenglong Wang', 'Rudy Bunel', 'Krishnamurthy Dvijotham', 'Po-Sen Huang', 'Edward Grefenstette', 'Pushmeet Kohli']",2019-04-26T00:00:00Z,arxiv,, 64310,https://arxiv.org/abs/1705.10557,Universal Reinforcement Learning Algorithms: Survey and Experiments,"['John Aslanides', 'Jan Leike', 'Marcus Hutter']",2017-05-30T00:00:00Z,arxiv,, 64351,https://arxiv.org/abs/2010.07738,Do's and Don'ts for Human and Digital Worker Integration,"['Vinod Muthusamy', 'Merve Unuvar', 'Hagen Völzer', 'Justin D. Weisz']",2020-10-15T00:00:00Z,arxiv,, 64378,https://arxiv.org/abs/2202.03188,Knowledge-Integrated Informed AI for National Security,"['Anu K. Myne', 'Kevin J. Leahy', 'Ryan J. 
Soklaski']",2022-02-04T00:00:00Z,arxiv,, 64406,https://arxiv.org/abs/2212.03175,Learning Representations that Enable Generalization in Assistive Tasks.,"['J', 'Z', 'Y', 'He', 'A', 'Raghunathan', 'D', 'S', 'Brown', 'Z', 'Erickson', 'and A', 'D', 'Dragan']",2022-08-14T00:00:00Z,arxiv,, 64425,https://arxiv.org/abs/1909.09314,Meta-Inverse Reinforcement Learning with Probabilistic Context Variables,"['Lantao Yu', 'Tianhe Yu', 'Chelsea Finn', 'Stefano Ermon']",2019-09-20T00:00:00Z,arxiv,, 64437,https://arxiv.org/abs/2104.11353,Optimal Cost Design for Model Predictive Control.,"['Avik Jain', 'Lawrence Chan', 'Daniel S', 'Brown', 'Anca D', 'Dragan']",2021-08-14T00:00:00Z,arxiv,, 64452,https://arxiv.org/abs/1707.08747,A Formal Approach to the Problem of Logical Non-Omniscience,"['Scott Garrabrant', 'Tsvi Benson-Tilsen', 'Andrew Critch', 'Nate Soares', 'Jessica Taylor']",2017-07-27T00:00:00Z,arxiv,, 64469,https://arxiv.org/abs/2004.06496,Certifiable Robustness to Adversarial State Uncertainty in Deep Reinforcement Learning,"['Michael Everett', 'Bjorn Lutjens', 'Jonathan P. How']",2020-04-11T00:00:00Z,arxiv,, 64479,https://arxiv.org/abs/1512.05832,"Learning the Preferences of Ignorant, Inconsistent Agents","['Owain Evans', 'Andreas Stuhlmueller', 'Noah D. Goodman']",2015-12-18T00:00:00Z,arxiv,, 64495,https://arxiv.org/abs/2005.11295,From ImageNet to Image Classification: Contextualizing Progress on Benchmarks,"['Dimitris Tsipras', 'Shibani Santurkar', 'Logan Engstrom', 'Andrew Ilyas', 'Aleksander Madry']",2020-05-22T00:00:00Z,arxiv,, 64522,https://arxiv.org/abs/1907.02610,Adversarial Robustness through Local Linearization,"['Chongli Qin', 'James Martens', 'Sven Gowal', 'Dilip Krishnan', 'Krishnamurthy Dvijotham', 'Alhussein Fawzi', 'Soham De', 'Robert Stanforth', 'Pushmeet Kohli']",2019-07-04T00:00:00Z,arxiv,, 64536,https://arxiv.org/abs/2107.14093,A Decision Model for Decentralized Autonomous Organization Platform Selection: Three Industry Case Studies,"['Elena Baninemeh', 'Siamak Farshidi', 'Slinger Jansen']",2021-07-07T00:00:00Z,arxiv,, 64552,https://arxiv.org/abs/1806.01946,Learning to Understand Goal Specifications by Modelling Reward,"['Dzmitry Bahdanau', 'Felix Hill', 'Jan Leike', 'Edward Hughes', 'Arian Hosseini', 'Pushmeet Kohli', 'Edward Grefenstette']",2018-06-05T00:00:00Z,arxiv,, 64579,https://arxiv.org/abs/1709.04447,A Learning and Masking Approach to Secure Learning,"['Linh Nguyen', 'Sky Wang', 'Arunesh Sinha']",2017-09-13T00:00:00Z,arxiv,, 64610,https://arxiv.org/abs/1912.02757,Deep Ensembles: A Loss Landscape Perspective,"['Stanislav Fort', 'Huiyi Hu', 'Balaji Lakshminarayanan']",2019-12-05T00:00:00Z,arxiv,, 64630,https://arxiv.org/abs/2006.03357,Curiosity Killed or Incapacitated the Cat and the Asymptotically Optimal Agent,"['Michael K. 
Cohen', 'Elliot Catt', 'Marcus Hutter']",2020-06-05T00:00:00Z,arxiv,, 64646,https://arxiv.org/abs/2104.06556,Situational Confidence Assistance for Lifelong Shared Autonomy.,"['Matthew Zurek', 'Andreea Bobu', 'Daniel S. Brown', 'Anca D. Dragan']",2021-08-14T00:00:00Z,arxiv,, 64662,https://arxiv.org/abs/1911.04266,(When) Is Truth-telling Favored in AI Debate?,"['Vojtěch Kovařík', 'Ryan Carey']",2019-11-11T00:00:00Z,arxiv,, 64694,https://arxiv.org/abs/1903.08894,Towards Characterizing Divergence in Deep Q-Learning,"['Joshua Achiam', 'Ethan Knight', 'Pieter Abbeel']",2019-03-21T00:00:00Z,arxiv,, 64714,https://arxiv.org/abs/cs/0605024,A Formal Measure of Machine Intelligence,"['Shane Legg', 'Marcus Hutter']",2006-05-06T00:00:00Z,arxiv,, 64731,https://arxiv.org/abs/1810.00619,SmartChoices: Hybridizing Programming and Machine Learning,"['Victor Carbune', 'Thierry Coppey', 'Alexander Daryin', 'Thomas Deselaers', 'Nikhil Sarda', 'Jay Yagnik']",2018-10-01T00:00:00Z,arxiv,, 64751,https://arxiv.org/abs/1707.00183,Teacher-Student Curriculum Learning,"['Tambet Matiisen', 'Avital Oliver', 'Taco Cohen', 'John Schulman']",2017-07-01T00:00:00Z,arxiv,, 64769,https://arxiv.org/abs/1709.06275,Incorrigibility in the CIRL Framework,['Ryan Carey'],2017-09-19T00:00:00Z,arxiv,, 64790,https://arxiv.org/abs/2107.05363,Towards solving the 7-in-a-row game,"['Domonkos Czifra', 'Endre Csóka', 'Zsolt Zombori', 'Géza Makay']",2021-07-05T00:00:00Z,arxiv,, 64812,https://arxiv.org/abs/1806.01214,Implementing Mediators with Asynchronous Cheap Talk.,"['Ittai Abraham', 'Danny Dolev', 'Ivan Geffner', 'Joseph Y. Halpern']",2019-08-14T00:00:00Z,arxiv,, 64836,https://arxiv.org/abs/1911.08265,"Mastering Atari, Go, Chess and Shogi by Planning with a Learned Model","['Julian Schrittwieser', 'Ioannis Antonoglou', 'Thomas Hubert', 'Karen Simonyan', 'Laurent Sifre', 'Simon Schmitt', 'Arthur Guez', 'Edward Lockhart', 'Demis Hassabis', 'Thore Graepel', 'Timothy Lillicrap', 'David Silver']",2019-11-19T00:00:00Z,arxiv,, 64862,https://arxiv.org/abs/1312.6114,Auto-Encoding Variational Bayes,"['Diederik P Kingma', 'Max Welling']",2013-12-20T00:00:00Z,arxiv,, 64882,https://arxiv.org/abs/2109.09672,Actionable Approaches to Promote Ethical AI in Libraries,"['Helen Bubinger', 'Jesse David Dinneen']",2021-09-20T00:00:00Z,arxiv,, 64899,https://arxiv.org/abs/2103.03874,Measuring mathematical problem solving with the MATH dataset.,"['Dan Hendrycks', 'Collin Burns', 'Saurav Kadavath', 'Akul Arora', 'Steven Basart', 'Eric Tang', 'Dawn Song', 'Jacob Steinhardt']",2021-08-14T00:00:00Z,arxiv,, 64930,https://arxiv.org/abs/2006.13208,Feature Expansive Reward Learning: Rethinking Human Input.,"['Andreea Bobu', 'Marius Wiggert', 'Claire Tomlin', 'Anca D. Dragan']",2021-08-15T00:00:00Z,arxiv,, 64946,https://arxiv.org/abs/2110.07574,Can Machines Learn Morality? The Delphi Experiment,"['Liwei Jiang', 'Jena D. 
Hwang', 'Chandra Bhagavatula', 'Ronan Le Bras', 'Jenny Liang', 'Jesse Dodge', 'Keisuke Sakaguchi', 'Maxwell Forbes', 'Jon Borchardt', 'Saadia Gabriel', 'Yulia Tsvetkov', 'Oren Etzioni', 'Maarten Sap', 'Regina Rini', 'Yejin Choi']",2021-10-14T00:00:00Z,arxiv,, 64968,https://arxiv.org/abs/2002.05709,A Simple Framework for Contrastive Learning of Visual Representations,"['Ting Chen', 'Simon Kornblith', 'Mohammad Norouzi', 'Geoffrey Hinton']",2020-02-13T00:00:00Z,arxiv,, 64993,https://arxiv.org/abs/2010.02229,Learning to Generalize for Sequential Decision Making,"['Xusen Yin', 'Ralph Weischedel', 'Jonathan May']",2020-10-05T00:00:00Z,arxiv,, 65014,https://arxiv.org/abs/1805.06826,The Blessings of Multiple Causes,"['Yixin Wang', 'David M. Blei']",2018-05-17T00:00:00Z,arxiv,, 65032,https://arxiv.org/abs/1906.08663,Modeling AGI Safety Frameworks with Causal Influence Diagrams,"['Tom Everitt', 'Ramana Kumar', 'Victoria Krakovna', 'Shane Legg']",2019-06-20T00:00:00Z,arxiv,, 65062,https://arxiv.org/abs/1806.09055,DARTS: Differentiable Architecture Search,"['Hanxiao Liu', 'Karen Simonyan', 'Yiming Yang']",2018-06-24T00:00:00Z,arxiv,, 65083,https://arxiv.org/abs/1705.09349,Together We Know How to Achieve: An Epistemic Logic of Know-How,"['Pavel Naumov', 'Jia Tao']",2017-05-25T00:00:00Z,arxiv,, 65106,https://arxiv.org/abs/1605.03142,Self-Modification of Policy and Utility Function in Rational Agents,"['Tom Everitt', 'Daniel Filan', 'Mayank Daswani', 'Marcus Hutter']",2016-05-10T00:00:00Z,arxiv,, 65119,https://arxiv.org/abs/1809.06404,Adversarial Imitation via Variational Inverse Reinforcement Learning,"['Ahmed H. Qureshi', 'Byron Boots', 'Michael C. Yip']",2018-09-17T00:00:00Z,arxiv,, 65135,https://arxiv.org/abs/1912.09571,Measuring the intelligence of an idealized mechanical knowing agent,['Samuel Allen Alexander'],2019-12-03T00:00:00Z,arxiv,, 65148,https://arxiv.org/abs/1912.06680,Dota 2 with Large Scale Deep Reinforcement Learning,"['OpenAI', 'Christopher Berner', 'Greg Brockman', 'Brooke Chan', 'Vicki Cheung', 'Przemysław Dębiak', 'Christy Dennison', 'David Farhi', 'Quirin Fischer', 'Shariq Hashme', 'Chris Hesse', 'Rafal Józefowicz', 'Scott Gray', 'Catherine Olsson', 'Jakub Pachocki', 'Michael Petrov', 'Henrique P. d. O. 
Pinto', 'Jonathan Raiman', 'Tim Salimans', 'Jeremy Schlatter', 'Jonas Schneider', 'Szymon Sidor', 'Ilya Sutskever', 'Jie Tang', 'Filip Wolski', 'Susan Zhang']",2019-12-13T00:00:00Z,arxiv,, 65175,https://arxiv.org/abs/2103.15551,Toward Building Science Discovery Machines,"['Abdullah Khalili', 'Abdelhamid Bouchachia']",2021-03-24T00:00:00Z,arxiv,, 65212,https://arxiv.org/abs/2101.02500,Bridging In- and Out-of-distribution Samples for Their Better Discriminability,"['Engkarat Techapanurak', 'Anh-Chuong Dang', 'Takayuki Okatani']",2021-01-07T00:00:00Z,arxiv,, 65227,https://arxiv.org/abs/1902.10250,Diagnosing Bottlenecks in Deep Q-learning Algorithms,"['Justin Fu', 'Aviral Kumar', 'Matthew Soh', 'Sergey Levine']",2019-02-26T00:00:00Z,arxiv,, 65268,https://arxiv.org/abs/2202.04943,Interpretable pipelines with evolutionarily optimized modules for RL tasks with visual inputs,"['Leonardo Lucio Custode', 'Giovanni Iacca']",2022-02-10T00:00:00Z,arxiv,, 65280,https://arxiv.org/abs/2006.13258,Adversarial Soft Advantage Fitting: Imitation Learning without Policy Optimization,"['Paul Barde', 'Julien Roy', 'Wonseok Jeon', 'Joelle Pineau', 'Christopher Pal', 'Derek Nowrouzezahrai']",2020-06-23T00:00:00Z,arxiv,, 65302,https://arxiv.org/abs/1810.06758,Discriminator Rejection Sampling,"['Samaneh Azadi', 'Catherine Olsson', 'Trevor Darrell', 'Ian Goodfellow', 'Augustus Odena']",2018-10-16T00:00:00Z,arxiv,, 65315,https://arxiv.org/abs/1901.05761,Attentive Neural Processes,"['Hyunjik Kim', 'Andriy Mnih', 'Jonathan Schwarz', 'Marta Garnelo', 'Ali Eslami', 'Dan Rosenbaum', 'Oriol Vinyals', 'Yee Whye Teh']",2019-01-17T00:00:00Z,arxiv,, 65333,https://arxiv.org/abs/1312.5602,Playing Atari with Deep Reinforcement Learning,"['Volodymyr Mnih', 'Koray Kavukcuoglu', 'David Silver', 'Alex Graves', 'Ioannis Antonoglou', 'Daan Wierstra', 'Martin Riedmiller']",2013-12-19T00:00:00Z,arxiv,, 65364,https://arxiv.org/abs/2001.03246,The Logic of Strategic Assets: From Oil to Artificial Intelligence,"['Jeffrey Ding', 'Allan Dafoe']",2020-01-09T00:00:00Z,arxiv,, 65394,https://arxiv.org/abs/2109.14502,Untangling Braids with Multi-agent Q-Learning,"['Abdullah Khan', 'Alexei Vernitski', 'Alexei Lisitsa']",2021-09-29T00:00:00Z,arxiv,, 65410,https://arxiv.org/abs/1907.11932,Is BERT Really Robust? A Strong Baseline for Natural Language Attack on Text Classification and Entailment,"['Di Jin', 'Zhijing Jin', 'Joey Tianyi Zhou', 'Peter Szolovits']",2019-07-27T00:00:00Z,arxiv,, 65432,https://arxiv.org/abs/1803.02852,"Value Alignment, Fair Play, and the Rights of Service Robots",['Daniel Estrada'],2018-03-07T00:00:00Z,arxiv,, 65449,https://arxiv.org/abs/1806.05695,Evolving simple programs for playing Atari games,"['Dennis G Wilson', 'Sylvain Cussat-Blanc', 'Hervé Luga', 'Julian F Miller']",2018-06-14T00:00:00Z,arxiv,, 65470,https://arxiv.org/abs/1705.10528,Constrained Policy Optimization,"['Joshua Achiam', 'David Held', 'Aviv Tamar', 'Pieter Abbeel']",2017-05-30T00:00:00Z,arxiv,, 65488,https://arxiv.org/abs/1809.02206,Challenges of Context and Time in Reinforcement Learning: Introducing Space Fortress as a Benchmark,"['Akshat Agarwal', 'Ryan Hope', 'Katia Sycara']",2018-09-06T00:00:00Z,arxiv,, 65513,https://arxiv.org/abs/1905.11979,Causal Confusion in Imitation Learning,"['Pim de Haan', 'Dinesh Jayaraman', 'Sergey Levine']",2019-05-28T00:00:00Z,arxiv,, 65528,https://arxiv.org/abs/2103.07460,Towards Risk Modeling for Collaborative AI,"['Matteo Camilli', 'Michael Felderer', 'Andrea Giusti', 'Dominik T. 
Matt', 'Anna Perini', 'Barbara Russo', 'Angelo Susi']",2021-03-12T00:00:00Z,arxiv,, 65554,https://arxiv.org/abs/1902.06744,Using Machine Learning to Guide Cognitive Modeling: A Case Study in Moral Reasoning.,"['Mayank Agrawal', 'Joshua C. Peterson', 'Thomas L. Griffiths']",2019-08-14T00:00:00Z,arxiv,, 65572,https://arxiv.org/abs/1604.06963,Limits to Verification and Validation of Agentic Behavior,['David J. Jilk'],2016-04-23T00:00:00Z,arxiv,, 65591,https://arxiv.org/abs/1401.3426,Networks of Influence Diagrams: A Formalism for Representing Agents' Beliefs and Decision-Making Processes,"['Yaakov Gal', 'Avi Pfeffer']",2014-01-15T00:00:00Z,arxiv,, 65610,https://arxiv.org/abs/2208.14426,Correct-by-Construction Runtime Enforcement in AI -- A Survey,"['Bettina Könighofer', 'Roderick Bloem', 'Rüdiger Ehlers', 'Christian Pek']",2022-08-30T00:00:00Z,arxiv,, 65649,https://arxiv.org/abs/2203.08594,Towards a Roadmap on Software Engineering for Responsible AI,"['Qinghua Lu', 'Liming Zhu', 'Xiwei Xu', 'Jon Whittle', 'Zhenchang Xing']",2022-03-09T00:00:00Z,arxiv,, 65686,https://arxiv.org/abs/2110.08412,Evaluating the Faithfulness of Importance Measures in NLP by Recursively Masking Allegedly Important Tokens and Retraining,"['Andreas Madsen', 'Nicholas Meade', 'Vaibhav Adlakha', 'Siva Reddy']",2021-10-15T00:00:00Z,arxiv,, 65708,https://arxiv.org/abs/2005.01831,Evaluating Explainable AI: Which Algorithmic Explanations Help Users Predict Model Behavior?,"['Peter Hase', 'Mohit Bansal']",2020-05-04T00:00:00Z,arxiv,, 65740,https://arxiv.org/abs/2112.10190,Demanding and Designing Aligned Cognitive Architectures,['Koen Holtman'],2021-12-19T00:00:00Z,arxiv,, 65761,https://arxiv.org/abs/2110.13423,Towards more Generalizable One-shot Visual Imitation Learning.,"['Zhao Mandi', 'Fangchen Liu', 'Kimin Lee', 'Pieter Abbeel']",2022-08-14T00:00:00Z,arxiv,, 65778,https://arxiv.org/abs/2104.00677,Putting NeRF on a Diet: Semantically Consistent Few-Shot View Synthesis.,"['Ajay Jain', 'Matthew Tancik', 'Pieter Abbeel']",2021-08-14T00:00:00Z,arxiv,, 65801,https://arxiv.org/abs/1903.03877,Literal or Pedagogic Human? Analyzing Human Model Misspecification in Objective Learning.,"['Smitha Milli', 'Anca D. Dragan']",2019-08-15T00:00:00Z,arxiv,, 65822,https://arxiv.org/abs/2002.09815,Neuron Shapley: Discovering the Responsible Neurons,"['Amirata Ghorbani', 'James Zou']",2020-02-23T00:00:00Z,arxiv,, 65845,https://arxiv.org/abs/2105.02117,Ethics and Governance of Artificial Intelligence: Evidence from a Survey of Machine Learning Researchers,"['Baobao Zhang', 'Markus Anderljung', 'Lauren Kahn', 'Noemi Dreksler', 'Michael C. Horowitz', 'Allan Dafoe']",2021-05-05T00:00:00Z,arxiv,, 65877,https://arxiv.org/abs/1810.08167,Expressing Robot Incapability,"['Minae Kwon', 'Sandy H. Huang', 'Anca D. Dragan']",2018-10-18T00:00:00Z,arxiv,, 65888,https://arxiv.org/abs/1409.1556,Very Deep Convolutional Networks for Large-Scale Image Recognition,"['Karen Simonyan', 'Andrew Zisserman']",2014-09-04T00:00:00Z,arxiv,, 65915,https://arxiv.org/abs/2102.12962,Bias-reduced Multi-step Hindsight Experience Replay for Efficient Multi-goal Reinforcement Learning,"['Rui Yang', 'Jiafei Lyu', 'Yu Yang', 'Jiangpeng Yan', 'Feng Luo', 'Dijun Luo', 'Lanqing Li', 'Xiu Li']",2021-02-25T00:00:00Z,arxiv,, 65938,https://arxiv.org/abs/1805.05935,Feedback-Based Tree Search for Reinforcement Learning,"['Daniel R. 
Jiang', 'Emmanuel Ekwedike', 'Han Liu']",2018-05-15T00:00:00Z,arxiv,, 65960,https://arxiv.org/abs/1901.10031,Lyapunov-based Safe Policy Optimization for Continuous Control,"['Yinlam Chow', 'Ofir Nachum', 'Aleksandra Faust', 'Edgar Duenez-Guzman', 'Mohammad Ghavamzadeh']",2019-01-28T00:00:00Z,arxiv,, 65982,https://arxiv.org/abs/2201.08555,Identifying Adversarial Attacks on Text Classifiers,"['Zhouhang Xie', 'Jonathan Brophy', 'Adam Noack', 'Wencong You', 'Kalyani Asthana', 'Carter Perkins', 'Sabrina Reis', 'Sameer Singh', 'Daniel Lowd']",2022-01-21T00:00:00Z,arxiv,, 65993,https://arxiv.org/abs/2107.04409,An Orchestration Platform that Puts Radiologists in the Driver's Seat of AI Innovation: A Methodological Approach,"['Raphael Y. Cohen', 'Aaron D. Sodickson']",2021-07-06T00:00:00Z,arxiv,, 66033,https://arxiv.org/abs/1902.02918,Certified Adversarial Robustness via Randomized Smoothing,"['Jeremy M Cohen', 'Elan Rosenfeld', 'J. Zico Kolter']",2019-02-08T00:00:00Z,arxiv,, 66049,https://arxiv.org/abs/1304.2376,Generating Decision Structures and Causal Explanations for Decision Making,['Spencer Star'],2013-03-27T00:00:00Z,arxiv,, 66066,https://arxiv.org/abs/2210.03230,NAS-Bench-Suite-Zero: Accelerating Research on Zero Cost Proxies,"['Arjun Krishnakumar', 'Colin White', 'Arber Zela', 'Renbo Tu', 'Mahmoud Safari', 'Frank Hutter']",2022-10-06T00:00:00Z,arxiv,, 66089,https://arxiv.org/abs/1907.08461,Delegative Reinforcement Learning: learning to avoid traps with a little help,['Vanessa Kosoy'],2019-07-19T00:00:00Z,arxiv,, 66101,https://arxiv.org/abs/1904.07854,End-to-End Robotic Reinforcement Learning without Reward Engineering,"['Avi Singh', 'Larry Yang', 'Kristian Hartikainen', 'Chelsea Finn', 'Sergey Levine']",2019-04-16T00:00:00Z,arxiv,, 66132,https://arxiv.org/abs/2204.03361,Robust Event-Driven Interactions in Cooperative Multi-Agent Learning,"['Daniel Jarne Ornia', 'Manuel Mazo Jr']",2022-04-07T00:00:00Z,arxiv,, 66145,https://arxiv.org/abs/1902.04257,Deep Reinforcement Learning from Policy-Dependent Human Feedback,"['Dilip Arumugam', 'Jun Ki Lee', 'Sophie Saskin', 'Michael L. Littman']",2019-02-12T00:00:00Z,arxiv,, 66163,https://arxiv.org/abs/2202.01691,Solving Dynamic Principal-Agent Problems with a Rationally Inattentive Principal,"['Tong Mu', 'Stephan Zheng', 'Alexander Trott']",2022-01-18T00:00:00Z,arxiv,, 66181,https://arxiv.org/abs/2110.12588,QuantifyML: How Good is my Machine Learning Model?,"['Muhammad Usman', 'Divya Gopinath', 'Corina S. Păsăreanu']",2021-10-25T00:00:00Z,arxiv,, 66203,https://arxiv.org/abs/2109.06443,Designing a Combinatorial Financial Options Market,"['Xintong Wang', 'David M. Pennock', 'Nikhil R. Devanur', 'David M. Rothschild', 'Biaoshuai Tao', 'Michael P. Wellman']",2021-08-21T00:00:00Z,arxiv,, 66221,https://arxiv.org/abs/1912.04472,Deep Bayesian Reward Learning from Preferences,"['Daniel S. 
Brown', 'Scott Niekum']",2019-12-10T00:00:00Z,arxiv,, 66244,https://arxiv.org/abs/2206.14176,DayDreamer: World Models for Physical Robot Learning.,"['Philipp Wu', 'Alejandro Escontrela', 'Danijar Hafner', 'Ken Goldberg', 'Pieter Abbeel']",2022-08-14T00:00:00Z,arxiv,, 66267,https://arxiv.org/abs/1810.00869,Training Machine Learning Models by Regularizing their Explanations,['Andrew Slavin Ross'],2018-09-29T00:00:00Z,arxiv,, 66298,https://arxiv.org/abs/1704.05796,"Network Dissection: Quantifying Interpretability of Deep Visual Representations",['David Bau'],2017-04-19T16:10:38Z,arxiv,, 66325,https://arxiv.org/abs/2106.04696,Curriculum Design for Teaching via Demonstrations: Theory and Applications,"['Gaurav Yengera', 'Rati Devidze', 'Parameswaran Kamalaruban', 'Adish Singla']",2021-06-08T00:00:00Z,arxiv,, 66341,https://arxiv.org/abs/1711.05101,Fixing Weight Decay Regularization in Adam,"['Ilya Loshchilov', 'Frank Hutter']",2017-11-14T14:24:06Z,arxiv,, 66367,https://arxiv.org/abs/1812.03868,Toward the Engineering of Virtuous Machines,"['Naveen Sundar Govindarajulu', 'Selmer Bringsjord', 'Rikhiya Ghosh']",2018-12-07T00:00:00Z,arxiv,, 66379,https://arxiv.org/abs/2006.08140,The Social Contract for AI,"['Mirka Snyder Caron', 'Abhishek Gupta']",2020-06-15T00:00:00Z,arxiv,, 66406,https://arxiv.org/abs/1806.08479,Human-Interactive Subgoal Supervision for Efficient Inverse Reinforcement Learning,"['Xinlei Pan', 'Eshed Ohn-Bar', 'Nicholas Rhinehart', 'Yan Xu', 'Yilin Shen', 'Kris M. Kitani']",2018-06-22T00:00:00Z,arxiv,, 66429,https://arxiv.org/abs/2102.04999,Pairwise Weights for Temporal Credit Assignment.,"['Zeyu Zheng', 'Risto Vuorio', 'Richard Lewis', 'Satinder Singh']",2022-08-14T00:00:00Z,arxiv,, 66443,https://arxiv.org/abs/2208.01009,Few-shot Adaptation Works with UnpredicTable Data,['Jun Shern Chan'],2022-08-01T17:35:25Z,arxiv,, 66462,https://arxiv.org/abs/2004.10802,A Neural Scaling Law from the Dimension of the Data Manifold,"['Utkarsh Sharma', 'Jared Kaplan']",2020-04-22T00:00:00Z,arxiv,, 66481,https://arxiv.org/abs/1207.3874,Reasoning about Agent Programs using ATL-like Logics,"['Nitin Yadav', 'Sebastian Sardina']",2012-07-17T00:00:00Z,arxiv,, 66496,https://arxiv.org/abs/0803.2981,Idiotypic Immune Networks in Mobile Robot Control,"['Amanda Whitbrook', 'Uwe Aickelin', 'Jonathan Garibaldi']",2008-03-20T00:00:00Z,arxiv,, 66516,https://arxiv.org/abs/1905.13178,Better Future through AI: Avoiding Pitfalls and Guiding AI Towards its Full Potential,"['Risto Miikkulainen', 'Bret Greenstein', 'Babak Hodjat', 'Jerry Smith']",2019-05-30T00:00:00Z,arxiv,, 66549,https://arxiv.org/abs/1806.05502,Scrutinizing and De-Biasing Intuitive Physics with Neural Stethoscopes,"['Fabian B. Fuchs', 'Oliver Groth', 'Adam R. 
Kosiorek', 'Alex Bewley', 'Markus Wulfmeier', 'Andrea Vedaldi', 'Ingmar Posner']",2018-06-14T00:00:00Z,arxiv,, 66564,https://arxiv.org/abs/2209.13578,Learning When to Advise Human Decision Makers,"['Gali Noti', 'Yiling Chen']",2022-09-27T00:00:00Z,arxiv,, 66585,https://arxiv.org/abs/1908.03568,Behaviour Suite for Reinforcement Learning,"['Ian Osband', 'Yotam Doron', 'Matteo Hessel', 'John Aslanides', 'Eren Sezener', 'Andre Saraiva', 'Katrina McKinney', 'Tor Lattimore', 'Csaba Szepesvari', 'Satinder Singh', 'Benjamin Van Roy', 'Richard Sutton', 'David Silver', 'Hado Van Hasselt']",2019-08-09T00:00:00Z,arxiv,, 66605,https://arxiv.org/abs/2001.06782,Gradient Surgery for Multi-Task Learning,"['Tianhe Yu', 'Saurabh Kumar', 'Abhishek Gupta', 'Sergey Levine', 'Karol Hausman', 'Chelsea Finn']",2020-01-19T00:00:00Z,arxiv,, 66618,https://arxiv.org/abs/2210.10039,How Would The Viewer Feel? Estimating Wellbeing From Video Scenarios.,"['Mantas Mazeika', 'Eric Tang', 'Andy Zou', 'Steven Basart', 'Jun Shern Chan', 'Dawn Song', 'David Forsyth', 'Jacob Steinhardt', 'Dan Hendrycks']",2022-08-14T00:00:00Z,arxiv,, 66639,https://arxiv.org/abs/1811.01439,Explaining Explanations in AI,"['Brent Mittelstadt', 'Chris Russell', 'Sandra Wachter']",2018-11-04T00:00:00Z,arxiv,, 66661,https://arxiv.org/abs/1911.09005,Hard Choices in Artificial Intelligence: Addressing Normative Uncertainty through Sociotechnical Commitments.,"['Roel Dobbe', 'Thomas Krendl Gilbert', 'Yonatan Mintz']",2019-08-15T00:00:00Z,arxiv,, 66681,https://arxiv.org/abs/1905.10615,Adversarial Policies: Attacking Deep Reinforcement Learning.,"['Adam Gleave', 'Michael Dennis', 'Cody Wild', 'Neel Kant', 'Sergey Levine', 'Stuart Russell']",2020-08-15T00:00:00Z,arxiv,, 66700,https://arxiv.org/abs/2007.03244,Robust Learning with Frequency Domain Regularization,"['Weiyu Guo', 'Yidong Ouyang']",2020-07-07T00:00:00Z,arxiv,, 66715,https://arxiv.org/abs/2007.02092,Customized Handling of Unintended Interface Operation in Assistive Robots,"['Deepak Gopinath', 'Mahdieh Nejati Javaremi', 'Brenna D. Argall']",2020-07-04T00:00:00Z,arxiv,, 66730,https://arxiv.org/abs/1906.04161,Self-Supervised Exploration via Disagreement,"['Deepak Pathak', 'Dhiraj Gandhi', 'Abhinav Gupta']",2019-06-10T00:00:00Z,arxiv,, 66747,https://arxiv.org/abs/1806.05234,Understanding the Meaning of Understanding,['Daniele Funaro'],2018-06-13T00:00:00Z,arxiv,, 66769,https://arxiv.org/abs/1612.07896,A Base Camp for Scaling AI,"['C. J. C. Burges', 'T. Hart', 'Z. Yang', 'S. Cucerzan', 'R. W. White', 'A. Pastusiak', 'J. Lewis']",2016-12-23T00:00:00Z,arxiv,, 66797,https://arxiv.org/abs/1303.1494,Two Procedures for Compiling Influence Diagrams,"['Paul E. Lehner', 'Azar Sadigh']",2013-03-06T00:00:00Z,arxiv,, 66814,https://arxiv.org/abs/2004.00470,Counterfactual Multi-Agent Reinforcement Learning with Graph Convolution Communication,"['Jianyu Su', 'Stephen Adams', 'Peter A. Beling']",2020-04-01T00:00:00Z,arxiv,, 66830,https://arxiv.org/abs/1807.03571,A Game-Based Approximate Verification of Deep Neural Networks with Provable Guarantees,"['Min Wu', 'Matthew Wicker', 'Wenjie Ruan', 'Xiaowei Huang', 'Marta Kwiatkowska']",2018-07-10T00:00:00Z,arxiv,, 66850,https://arxiv.org/abs/2007.09297,Modulation of viability signals for self-regulatory control,"['Alvaro Ovalle', 'Simon M. 
Lucas']",2020-07-18T00:00:00Z,arxiv,, 66863,https://arxiv.org/abs/1911.00226,Generating Justifications for Norm-Related Agent Decisions,"['Daniel Kasenberg', 'Antonio Roque', 'Ravenna Thielstrom', 'Meia Chita-Tegmark', 'Matthias Scheutz']",2019-11-01T00:00:00Z,arxiv,, 66879,https://arxiv.org/abs/1901.02918,Making AI meaningful again,"['Jobst Landgrebe', 'Barry Smith']",2019-01-09T00:00:00Z,arxiv,, 66900,https://arxiv.org/abs/1411.2842,Logical Limitations to Machine Ethics with Consequences to Lethal Autonomous Weapons,"['Matthias Englert', 'Sandra Siebert', 'Martin Ziegler']",2014-11-11T00:00:00Z,arxiv,, 66917,https://arxiv.org/abs/1904.01318,Finding and Visualizing Weaknesses of Deep Reinforcement Learning Agents,"['Christian Rupprecht', 'Cyril Ibrahim', 'Christopher J. Pal']",2019-04-02T00:00:00Z,arxiv,, 66934,https://arxiv.org/abs/2001.08361,Scaling Laws for Neural Language Models,"['Jared Kaplan', 'Sam McCandlish', 'Tom Henighan', 'Tom B. Brown', 'Benjamin Chess', 'Rewon Child', 'Scott Gray', 'Alec Radford', 'Jeffrey Wu', 'Dario Amodei']",2020-01-23T00:00:00Z,arxiv,, 66962,https://arxiv.org/abs/1802.07810,Manipulating and Measuring Model Interpretability,"['Forough Poursabzi-Sangdeh', 'Daniel G. Goldstein', 'Jake M. Hofman', 'Jennifer Wortman Vaughan', 'Hanna Wallach']",2018-02-21T00:00:00Z,arxiv,, 66986,https://arxiv.org/abs/2010.02846,Safety Aware Reinforcement Learning (SARL),"['Santiago Miret', 'Somdeb Majumdar', 'Carroll Wainwright']",2020-10-06T00:00:00Z,arxiv,, 67001,https://arxiv.org/abs/2111.05328,Data Augmentation Can Improve Robustness,"['Sylvestre-Alvise Rebuffi', 'Sven Gowal', 'Dan A. Calian', 'Florian Stimberg', 'Olivia Wiles', 'Timothy Mann']",2021-11-09T00:00:00Z,arxiv,, 67015,https://arxiv.org/abs/1706.02513,Responsible Autonomy,['Virginia Dignum'],2017-06-08T00:00:00Z,arxiv,, 67043,https://arxiv.org/abs/2202.07789,Safe Reinforcement Learning by Imagining the Near Future,"['Garrett Thomas', 'Yuping Luo', 'Tengyu Ma']",2022-02-15T00:00:00Z,arxiv,, 67057,https://arxiv.org/abs/1905.01019,Adversarial Training with Voronoi Constraints.,"['Marc Khoury', 'Dylan Hadfield-Menell']",2019-08-14T00:00:00Z,arxiv,, 67072,https://arxiv.org/abs/1807.00553,A Broader View on Bias in Automated Decision-Making: Reflecting on Epistemology and Dynamics.,"['Roel Dobbe', 'Sarah Dean', 'Thomas Gilbert', 'Nitin Kohli']",2018-08-14T00:00:00Z,arxiv,, 67100,https://arxiv.org/abs/1805.08882,Multi-task Maximum Entropy Inverse Reinforcement Learning.,"['Adam Gleave', 'Oliver Habryka']",2018-08-15T00:00:00Z,arxiv,, 67115,https://arxiv.org/abs/1606.01540,OpenAI Gym,"['Greg Brockman', 'Vicki Cheung', 'Ludwig Pettersson', 'Jonas Schneider', 'John Schulman', 'Jie Tang', 'Wojciech Zaremba']",2016-06-05T00:00:00Z,arxiv,, 67138,https://arxiv.org/abs/1811.01134,A Marauder's Map of Security and Privacy in Machine Learning,['Nicolas Papernot'],2018-11-03T00:00:00Z,arxiv,, 67172,https://arxiv.org/abs/2003.03384,AutoML-Zero: Evolving Machine Learning Algorithms From Scratch,"['Esteban Real', 'Chen Liang', 'David R. So', 'Quoc V. Le']",2020-03-06T00:00:00Z,arxiv,, 67199,https://arxiv.org/abs/1906.02641,An Extensible Interactive Interface for Agent Design,"['Matthew Rahtz', 'James Fang', 'Anca D. 
Dragan', 'Dylan Hadfield-Menell']",2019-06-06T00:00:00Z,arxiv,, 67217,https://arxiv.org/abs/1512.02595,Deep Speech 2: End-to-End Speech Recognition in English and Mandarin,"['Dario Amodei', 'Rishita Anubhai', 'Eric Battenberg', 'Carl Case', 'Jared Casper', 'Bryan Catanzaro', 'Jingdong Chen', 'Mike Chrzanowski', 'Adam Coates', 'Greg Diamos', 'Erich Elsen', 'Jesse Engel', 'Linxi Fan', 'Christopher Fougner', 'Tony Han', 'Awni Hannun', 'Billy Jun', 'Patrick LeGresley', 'Libby Lin', 'Sharan Narang', 'Andrew Ng', 'Sherjil Ozair', 'Ryan Prenger', 'Jonathan Raiman', 'Sanjeev Satheesh', 'David Seetapun', 'Shubho Sengupta', 'Yi Wang', 'Zhiqian Wang', 'Chong Wang', 'Bo Xiao', 'Dani Yogatama', 'Jun Zhan', 'Zhenyao Zhu']",2015-12-08T00:00:00Z,arxiv,, 67247,https://arxiv.org/abs/1910.04365,Asking Easy Questions: A User-Friendly Approach to Active Reward Learning,"['Erdem Bıyık', 'Malayandi Palan', 'Nicholas C. Landolfi', 'Dylan P. Losey', 'Dorsa Sadigh']",2019-10-10T00:00:00Z,arxiv,, 67276,https://arxiv.org/abs/2310.13548,Towards Understanding Sycophancy in Language Models,['Mrinank Sharma'],2023-10-20T14:46:48Z,arxiv,, 67304,https://arxiv.org/abs/1604.04315,Moving Beyond the Turing Test with the Allen AI Science Challenge,"['Carissa Schoenick', 'Peter Clark', 'Oyvind Tafjord', 'Peter Turney', 'Oren Etzioni']",2016-04-14T00:00:00Z,arxiv,, 67321,https://arxiv.org/abs/2012.13016,Antitrust and Artificial Intelligence (AAI): Antitrust Vigilance Lifecycle and AI Legal Reasoning Autonomy,['Lance Eliot'],2020-12-23T00:00:00Z,arxiv,, 67353,https://arxiv.org/abs/1807.03341,Troubling Trends in Machine Learning Scholarship,"['Zachary C. Lipton', 'Jacob Steinhardt']",2018-07-09T00:00:00Z,arxiv,, 67379,https://arxiv.org/abs/2002.00941,Quantifying Hypothesis Space Misspecification in Learning from Human-Robot Demonstrations and Physical Corrections.,"['Andreea Bobu', 'Andrea Bajcsy', 'Jaime F. Fisac', 'Sampada Deglurkar', 'Anca D. Dragan']",2019-08-14T00:00:00Z,arxiv,, 67396,https://arxiv.org/abs/1304.1515,When Should a Decision Maker Ignore the Advice of a Decision Aid?,"['Paul E. Lehner', 'Theresa M. Mullin', 'Marvin S. Cohen']",2013-03-27T00:00:00Z,arxiv,, 67417,https://arxiv.org/abs/2103.15746,Towards An Ethics-Audit Bot,"['Siani Pearson', 'Martin Lloyd', 'Vivek Nallur']",2021-03-29T00:00:00Z,arxiv,, 67436,https://arxiv.org/abs/1304.2751,Integrating Logical and Probabilistic Reasoning for Decision Making,"['John S. Breese', 'Edison Tse']",2013-03-27T00:00:00Z,arxiv,, 67451,https://arxiv.org/abs/1410.8233,Do Artificial Reinforcement-Learning Agents Matter Morally?,['Brian Tomasik'],2014-10-30T00:00:00Z,arxiv,, 67477,https://arxiv.org/abs/1807.06583,Interpretable Latent Spaces for Learning from Demonstration,"['Yordan Hristov', 'Alex Lascarides', 'Subramanian Ramamoorthy']",2018-07-17T00:00:00Z,arxiv,, 67493,https://arxiv.org/abs/1904.09024,When is a Prediction Knowledge?,"['Alex Kearney', 'Patrick M. Pilarski']",2019-04-18T00:00:00Z,arxiv,, 67507,https://arxiv.org/abs/1806.00109,Probabilistically Safe Robot Planning with Confidence-Based Human Predictions,"['Jaime F. Fisac', 'Andrea Bajcsy', 'Sylvia L. Herbert', 'David Fridovich-Keil', 'Steven Wang', 'Claire J. Tomlin', 'Anca D. 
Dragan']",2018-05-31T00:00:00Z,arxiv,, 67523,https://arxiv.org/abs/2203.13880,Reinforcement Learning with Action-Free Pre-Training from Videos.,"['Younggyo Seo', 'Kimin Lee', 'Stephen James', 'Pieter Abbeel']",2022-08-14T00:00:00Z,arxiv,, 67547,https://arxiv.org/abs/2107.12806,Towards Industrial Private AI: A two-tier framework for data and model security,"['Sunder Ali Khowaja', 'Kapal Dev', 'Nawab Muhammad Faseeh Qureshi', 'Parus Khuwaja', 'Luca Foschini']",2021-07-27T00:00:00Z,arxiv,, 67570,https://arxiv.org/abs/1903.03096,Meta-Dataset: A Dataset of Datasets for Learning to Learn from Few Examples,"['Eleni Triantafillou', 'Tyler Zhu', 'Vincent Dumoulin', 'Pascal Lamblin', 'Utku Evci', 'Kelvin Xu', 'Ross Goroshin', 'Carles Gelada', 'Kevin Swersky', 'Pierre-Antoine Manzagol', 'Hugo Larochelle']",2019-03-07T00:00:00Z,arxiv,, 67588,https://arxiv.org/abs/1607.06759,Predicting Enemy's Actions Improves Commander Decision-Making,"['Michael Ownby', 'Alexander Kott']",2016-07-22T00:00:00Z,arxiv,, 67607,https://arxiv.org/abs/2108.00106,Soft Calibration Objectives for Neural Networks,"['Archit Karandikar', 'Nicholas Cain', 'Dustin Tran', 'Balaji Lakshminarayanan', 'Jonathon Shlens', 'Michael C. Mozer', 'Becca Roelofs']",2021-07-30T00:00:00Z,arxiv,, 67626,https://arxiv.org/abs/2101.12509,Challenges for Using Impact Regularizers to Avoid Negative Side Effects,"['David Lindner', 'Kyle Matoba', 'Alexander Meulemans']",2021-01-29T00:00:00Z,arxiv,, 67666,https://arxiv.org/abs/1809.07193,TStarBots: Defeating the Cheating Level Builtin AI in StarCraft II in the Full Game,"['Peng Sun', 'Xinghai Sun', 'Lei Han', 'Jiechao Xiong', 'Qing Wang', 'Bo Li', 'Yang Zheng', 'Ji Liu', 'Yongsheng Liu', 'Han Liu', 'Tong Zhang']",2018-09-19T00:00:00Z,arxiv,, 67694,https://arxiv.org/abs/2201.04632,The Concept of Criticality in AI Safety,"['Yitzhak Spielberg', 'Amos Azaria']",2022-01-12T00:00:00Z,arxiv,, 67711,https://arxiv.org/abs/2006.14536,Smooth Adversarial Training,['Cihang Xie'],2020-06-25T16:34:39Z,arxiv,, 67726,https://arxiv.org/abs/2104.04893,The Atari Data Scraper,"['Brittany Davis Pierson', 'Justine Ventura', 'Matthew E. Taylor']",2021-04-11T00:00:00Z,arxiv,, 67745,https://arxiv.org/abs/2011.03395,Underspecification Presents Challenges for Credibility in Modern Machine Learning,"[""Alexander D'Amour"", 'Katherine Heller', 'Dan Moldovan', 'Ben Adlam', 'Babak Alipanahi', 'Alex Beutel', 'Christina Chen', 'Jonathan Deaton', 'Jacob Eisenstein', 'Matthew D. Hoffman', 'Farhad Hormozdiari', 'Neil Houlsby', 'Shaobo Hou', 'Ghassen Jerfel', 'Alan Karthikesalingam', 'Mario Lucic', 'Yian Ma', 'Cory McLean', 'Diana Mincu', 'Akinori Mitani', 'Andrea Montanari', 'Zachary Nado', 'Vivek Natarajan', 'Christopher Nielson', 'Thomas F. Osborne', 'Rajiv Raman', 'Kim Ramasamy', 'Rory Sayres', 'Jessica Schrouff', 'Martin Seneviratne', 'Shannon Sequeira', 'Harini Suresh', 'Victor Veitch', 'Max Vladymyrov', 'Xuezhi Wang', 'Kellie Webster', 'Steve Yadlowsky', 'Taedong Yun', 'Xiaohua Zhai', 'D. Sculley']",2020-11-06T00:00:00Z,arxiv,, 67778,https://arxiv.org/abs/1211.2290,Dating Texts without Explicit Temporal Cues,"['Abhimanu Kumar', 'Jason Baldridge', 'Matthew Lease', 'Joydeep Ghosh']",2012-11-10T00:00:00Z,arxiv,, 67789,https://arxiv.org/abs/1807.03888,A Simple Unified Framework for Detecting Out-of-Distribution Samples and Adversarial Attacks,['Kimin Lee'],2018-07-10T22:14:04Z,arxiv,, 67811,https://arxiv.org/abs/1812.00190,Deep Learning Application in Security and Privacy -- Theory and Practice: A Position Paper,"['Julia A. 
Meister', 'Raja Naeem Akram', 'Konstantinos Markantonakis']",2018-12-01T00:00:00Z,arxiv,, 67843,https://arxiv.org/abs/1110.6437,Anthropic decision theory,['Stuart Armstrong'],2011-10-28T00:00:00Z,arxiv,, 67861,https://arxiv.org/abs/2110.06674,Truthful AI: Developing and governing AI that does not lie,"['Owain Evans', 'Owen Cotton-Barratt', 'Lukas Finnveden', 'Adam Bales', 'Avital Balwit', 'Peter Wills', 'Luca Righetti', 'William Saunders']",2021-01-01T00:00:00Z,arxiv,, 67898,https://arxiv.org/abs/2007.02382,Decentralized Reinforcement Learning: Global Decision-Making via Local Economic Transactions,"['Michael Chang', 'Sidhant Kaushik', 'S. Matthew Weinberg', 'Thomas L. Griffiths', 'Sergey Levine']",2020-07-05T00:00:00Z,arxiv,, 67921,https://arxiv.org/abs/2103.02354,Evaluating Robustness of Counterfactual Explanations,"['André Artelt', 'Valerie Vaquet', 'Riza Velioglu', 'Fabian Hinder', 'Johannes Brinkrolf', 'Malte Schilling', 'Barbara Hammer']",2021-03-03T00:00:00Z,arxiv,, 67937,https://arxiv.org/abs/2203.02155,Training language models to follow instructions with human feedback,"['Long Ouyang', 'Jeff Wu']",2022-03-04T07:04:42Z,arxiv,, 67963,https://arxiv.org/abs/1909.09906,Leveraging Human Guidance for Deep Reinforcement Learning Tasks,"['Ruohan Zhang', 'Faraz Torabi', 'Lin Guan', 'Dana H. Ballard', 'Peter Stone']",2019-09-21T00:00:00Z,arxiv,, 67999,https://arxiv.org/abs/1806.00069,Explaining Explanations: An Overview of Interpretability of Machine Learning,"['Leilani H. Gilpin', 'David Bau', 'Ben Z. Yuan', 'Ayesha Bajwa', 'Michael Specter', 'Lalana Kagal']",2018-05-31T00:00:00Z,arxiv,, 68028,https://arxiv.org/abs/1805.08328,Verifiable Reinforcement Learning via Policy Extraction,"['Osbert Bastani', 'Yewen Pu', 'Armando Solar-Lezama']",2018-05-22T00:00:00Z,arxiv,, 68048,https://arxiv.org/abs/1804.00092,Iterative Learning with Open-set Noisy Labels,"['Yisen Wang', 'Weiyang Liu', 'Xingjun Ma', 'James Bailey', 'Hongyuan Zha', 'Le Song', 'Shu-Tao Xia']",2018-03-31T00:00:00Z,arxiv,, 68063,https://arxiv.org/abs/1811.09722,Explicability? Legibility? Predictability? Transparency? Privacy? Security? The Emerging Landscape of Interpretable Agent Behavior,"['Tathagata Chakraborti', 'Anagha Kulkarni', 'Sarath Sreedharan', 'David E. Smith', 'Subbarao Kambhampati']",2018-11-23T00:00:00Z,arxiv,, 68086,https://arxiv.org/abs/1809.01560,Reinforcement Learning under Threats,"['Victor Gallego', 'Roi Naveiro', 'David Rios Insua']",2018-09-05T00:00:00Z,arxiv,, 68102,https://arxiv.org/abs/1910.04098,Improving Generalization in Meta Reinforcement Learning using Learned Objectives,"['Louis Kirsch', 'Sjoerd van Steenkiste', 'Jürgen Schmidhuber']",2019-10-09T00:00:00Z,arxiv,, 68118,https://arxiv.org/abs/1810.09591,Applying Deep Learning To Airbnb Search,"['Malay Haldar', 'Mustafa Abdool', 'Prashant Ramanathan', 'Tao Xu', 'Shulin Yang', 'Huizhong Duan', 'Qing Zhang', 'Nick Barrow-Williams', 'Bradley C. Turnbull', 'Brendan M. Collins', 'Thomas Legrand']",2018-10-22T00:00:00Z,arxiv,, 68161,https://arxiv.org/abs/2005.04305,Measuring the Algorithmic Efficiency of Neural Networks,"['Danny Hernandez', 'Tom B. Brown']",2020-05-08T00:00:00Z,arxiv,, 68183,https://arxiv.org/abs/1812.10352,Learning Not to Learn: Training Deep Neural Networks with Biased Data,"['Byungju Kim', 'Hyunwoo Kim', 'Kyungsu Kim', 'Sungjin Kim', 'Junmo Kim']",2018-12-26T00:00:00Z,arxiv,, 68200,https://arxiv.org/abs/1606.08514,Towards Verified Artificial Intelligence,"['Sanjit A. Seshia', 'Dorsa Sadigh', 'S. 
Shankar Sastry']",2016-06-27T23:51:04Z,arxiv,, 68233,https://arxiv.org/abs/2111.00585,JEDAI: A System for Skill-Aligned Explainable Robot Planning.,"['Naman Shah', 'Pulkit Verma', 'Trevor Angle', 'Siddharth Srivastava']",2022-08-14T00:00:00Z,arxiv,, 68252,https://arxiv.org/abs/2106.13249,Modeling the Mistakes of Boundedly Rational Agents Within a Bayesian Theory of Mind,"['Arwa Alanqary', 'Gloria Z. Lin', 'Joie Le', 'Tan Zhi-Xuan', 'Vikash K. Mansinghka', 'Joshua B. Tenenbaum']",2021-06-24T00:00:00Z,arxiv,, 68265,https://arxiv.org/abs/2106.04235,Definitions of intent suitable for algorithms,['Hal Ashton'],2021-06-08T00:00:00Z,arxiv,, 68283,https://arxiv.org/abs/2203.15103,Adversarial Motion Priors Make Good Substitutes for Complex Reward Functions.,"['Alejandro Escontrela', 'Xue Bin Peng', 'Wenhao Yu', 'Tingnan Zhang', 'Atil Iscen', 'Ken Goldberg', 'Pieter Abbeel']",2022-08-14T00:00:00Z,arxiv,, 68303,https://arxiv.org/abs/1607.07730,A Model of Pathways to Artificial Superintelligence Catastrophe for Risk and Decision Analysis,"['Anthony M. Barrett', 'Seth D. Baum']",2016-07-25T13:04:22Z,arxiv,, 68336,https://arxiv.org/abs/2001.09768,"Artificial Intelligence, Values and Alignment",['Iason Gabriel'],2020-01-13T00:00:00Z,arxiv,, 68355,https://arxiv.org/abs/1902.07379,Meta-Weight-Net: Learning an Explicit Mapping For Sample Weighting,"['Jun Shu', 'Qi Xie', 'Lixuan Yi', 'Qian Zhao', 'Sanping Zhou', 'Zongben Xu', 'Deyu Meng']",2019-02-20T00:00:00Z,arxiv,, 68370,https://arxiv.org/abs/1803.03453,The Surprising Creativity of Digital Evolution: A Collection of Anecdotes from the Evolutionary Computation and Artificial Life Research Communities,"['Joel Lehman', 'Jeff Clune', 'Dusan Misevic', 'Christoph Adami', 'Lee Altenberg', 'Julie Beaulieu', 'Peter J. Bentley', 'Samuel Bernard', 'Guillaume Beslon', 'David M. Bryson', 'Patryk Chrabaszcz', 'Nick Cheney', 'Antoine Cully', 'Stephane Doncieux', 'Fred C. Dyer', 'Kai Olav Ellefsen', 'Robert Feldt', 'Stephan Fischer', 'Stephanie Forrest', 'Antoine Frénoy', 'Christian Gagné', 'Leni Le Goff', 'Laura M. Grabowski', 'Babak Hodjat', 'Frank Hutter', 'Laurent Keller', 'Carole Knibbe', 'Peter Krcah', 'Richard E. Lenski', 'Hod Lipson', 'Robert MacCurdy', 'Carlos Maestre', 'Risto Miikkulainen', 'Sara Mitri', 'David E. Moriarty', 'Jean-Baptiste Mouret', 'Anh Nguyen', 'Charles Ofria', 'Marc Parizeau', 'David Parsons', 'Robert T. Pennock', 'William F. Punch', 'Thomas S. Ray', 'Marc Schoenauer', 'Eric Shulte', 'Karl Sims', 'Kenneth O. 
Stanley', 'François Taddei', 'Danesh Tarapore', 'Simon Thibault', 'Westley Weimer', 'Richard Watson', 'Jason Yosinski']",2018-03-09T00:00:00Z,arxiv,, 68393,https://arxiv.org/abs/1903.03088,Self-Tuning Networks: Bilevel Optimization of Hyperparameters using Structured Best-Response Functions,"['Matthew MacKay', 'Paul Vicol', 'Jon Lorraine', 'David Duvenaud', 'Roger Grosse']",2019-03-07T00:00:00Z,arxiv,, 68413,https://arxiv.org/abs/1711.00363,Servant of Many Masters: Shifting priorities in Pareto-optimal sequential decision-making,"['Andrew Critch', 'Stuart Russell']",2017-10-31T00:00:00Z,arxiv,, 68430,https://arxiv.org/abs/1811.06284,Guiding the One-to-one Mapping in CycleGAN via Optimal Transport,"['Guansong Lu', 'Zhiming Zhou', 'Yuxuan Song', 'Kan Ren', 'Yong Yu']",2018-11-15T00:00:00Z,arxiv,, 68442,https://arxiv.org/abs/1910.14436,How can AI Automate End-to-End Data Science?,"['Charu Aggarwal', 'Djallel Bouneffouf', 'Horst Samulowitz', 'Beat Buesser', 'Thanh Hoang', 'Udayan Khurana', 'Sijia Liu', 'Tejaswini Pedapati', 'Parikshit Ram', 'Ambrish Rawat', 'Martin Wistuba', 'Alexander Gray']",2019-10-22T00:00:00Z,arxiv,, 68467,https://arxiv.org/abs/1911.00459,Positive-Unlabeled Reward Learning,"['Danfei Xu', 'Misha Denil']",2019-11-01T00:00:00Z,arxiv,, 68482,https://arxiv.org/abs/1809.03008,Training for Faster Adversarial Robustness Verification via Inducing ReLU Stability,"['Kai Y. Xiao', 'Vincent Tjeng', 'Nur Muhammad Shafiullah', 'Aleksander Madry']",2018-09-09T00:00:00Z,arxiv,, 68498,https://arxiv.org/abs/2205.10816,Chain of Thought Imitation with Procedure Cloning.,"['Mengjiao (Sherry) Yang', 'Dale Schuurmans', 'Pieter Abbeel', 'Ofir Nachum']",2022-08-14T00:00:00Z,arxiv,, 68521,https://arxiv.org/abs/2109.07958,TruthfulQA: Measuring How Models Mimic Human Falsehoods,['Stephanie Lin'],2021-01-01T00:00:00Z,arxiv,, 68548,https://arxiv.org/abs/1512.03385,Deep Residual Learning for Image Recognition,"['Kaiming He', 'Xiangyu Zhang', 'Shaoqing Ren', 'Jian Sun']",2015-12-10T19:51:55Z,arxiv,, 68566,https://arxiv.org/abs/1203.0699,Ambiguous Language and Differences in Beliefs,"['Joseph Y. Halpern', 'Willemien Kets']",2012-03-04T00:00:00Z,arxiv,, 68586,https://arxiv.org/abs/2201.02759,Modeling Human-AI Team Decision Making,"['Wei Ye', 'Francesco Bullo', 'Noah Friedkin', 'Ambuj K Singh']",2022-01-08T00:00:00Z,arxiv,, 68613,https://arxiv.org/abs/1803.05049,Fractal AI: A fragile theory of intelligence,"['Sergio Hernandez Cerezo', 'Guillem Duran Ballester']",2018-03-13T00:00:00Z,arxiv,, 68636,https://arxiv.org/abs/2203.02481,AutoDIME: Automatic Design of Interesting Multi-Agent Environments,"['Ingmar Kanitscheider', 'Harri Edwards']",2022-03-04T00:00:00Z,arxiv,, 68659,https://arxiv.org/abs/2112.10925,"DB-BERT: a Database Tuning Tool that ""Reads the Manual""",['Immanuel Trummer'],2021-12-21T00:00:00Z,arxiv,, 68673,https://arxiv.org/abs/1410.5787,"The Precautionary Principle (with Application to the Genetic Modification of Organisms)","['Nassim Nicholas Taleb', 'Rupert Read', 'Raphael Douady', 'Joseph Norman', 'Yaneer Bar-Yam']",2014-10-17T17:30:43Z,arxiv,, 68707,https://arxiv.org/abs/2107.06857,Scalable Evaluation of Multi-Agent Reinforcement Learning with Melting Pot,"['Joel Z. Leibo', 'Edgar Duéñez-Guzmán', 'Alexander Sasha Vezhnevets', 'John P. 
Agapiou', 'Peter Sunehag', 'Raphael Koster', 'Jayd Matyas', 'Charles Beattie', 'Igor Mordatch', 'Thore Graepel']",2021-07-14T00:00:00Z,arxiv,, 68725,https://arxiv.org/abs/2209.10341,LCRL: Certified Policy Synthesis via Logically-Constrained Reinforcement Learning,"['Hosein Hasanbeig', 'Daniel Kroening', 'Alessandro Abate']",2022-09-21T00:00:00Z,arxiv,, 68741,https://arxiv.org/abs/1307.2191,A Knowledge-based Treatment of Human-Automation Systems,"['Yoram Moses', 'Marcia K. Shamo']",2013-07-08T00:00:00Z,arxiv,, 68759,https://arxiv.org/abs/2302.08582,Pretraining Language Models with Human Preferences,"['Tomasz Korbak', 'Kejian Shi', 'Angelica Chen', 'Rasika Bhalerao', 'Christopher L. Buckley', 'Jason Phang', 'Samuel R. Bowman', 'Ethan Perez']",2023-02-16T21:03:33Z,arxiv,, 68781,https://arxiv.org/abs/1709.06166,DropoutDAgger: A Bayesian Approach to Safe Imitation Learning,"['Kunal Menda', 'Katherine Driggs-Campbell', 'Mykel J. Kochenderfer']",2017-09-18T00:00:00Z,arxiv,, 68793,https://arxiv.org/abs/2202.02776,"Human rights, democracy, and the rule of law assurance framework for AI systems: A proposal","['David Leslie', 'Christopher Burr', 'Mhairi Aitken', 'Michael Katell', 'Morgan Briggs', 'Cami Rincon']",2022-02-06T00:00:00Z,arxiv,, 68808,https://arxiv.org/abs/1903.02020,Using Natural Language for Reward Shaping in Reinforcement Learning,"['Prasoon Goyal', 'Scott Niekum', 'Raymond J. Mooney']",2019-03-05T00:00:00Z,arxiv,, 68826,https://arxiv.org/abs/2006.15191,"Is SGD a Bayesian sampler? Well, almost","['Chris Mingard', 'Guillermo Valle-Pérez', 'Joar Skalse', 'Ard A. Louis']",2020-06-26T00:00:00Z,arxiv,, 68845,https://arxiv.org/abs/1511.07543,Convergent Learning: Do different neural networks learn the same representations?,['Yixuan Li'],2015-11-24T02:31:46Z,arxiv,, 68871,https://arxiv.org/abs/2107.04953,Designing Recommender Systems to Depolarize.,['Jonathan Stray'],2021-08-15T00:00:00Z,arxiv,, 68902,https://arxiv.org/abs/1807.04723,The Bottleneck Simulator: A Model-based Deep Reinforcement Learning Approach,"['Iulian Vlad Serban', 'Chinnadhurai Sankar', 'Michael Pieper', 'Joelle Pineau', 'Yoshua Bengio']",2018-07-12T00:00:00Z,arxiv,, 68918,https://arxiv.org/abs/2001.04465,LESS is More: Rethinking Probabilistic Models of Human Behavior.,"['Andreea Bobu', 'Dexter R. R. Scobee', 'Jaime F. Fisac', 'S. Shankar Sastry', 'Anca D. Dragan']",2020-08-15T00:00:00Z,arxiv,, 68936,https://arxiv.org/abs/2005.01643,"Offline Reinforcement Learning: Tutorial, Review, and Perspectives on Open Problems","['Sergey Levine', 'Aviral Kumar', 'George Tucker', 'Justin Fu']",2020-05-04T00:00:00Z,arxiv,, 68970,https://arxiv.org/abs/2211.01817,Liability regimes in the age of AI: a use-case driven analysis of the burden of proof,"['David Fernández Llorca', 'Vicky Charisi', 'Ronan Hamon', 'Ignacio Sánchez', 'Emilia Gómez']",2022-11-03T00:00:00Z,arxiv,, 68991,https://arxiv.org/abs/2009.08302,Learnable Strategies for Bilateral Agent Negotiation over Multiple Issues,"['Pallavi Bagga', 'Nicola Paoletti', 'Kostas Stathis']",2020-09-17T00:00:00Z,arxiv,, 69017,https://arxiv.org/abs/2102.02503,"Understanding the Capabilities, Limitations, and Societal Impact of Large Language Models","['Alex Tamkin', 'Miles Brundage', 'Jack Clark', 'Deep Ganguli']",2021-02-04T00:00:00Z,arxiv,, 69045,https://arxiv.org/abs/2009.12612,Neurosymbolic Reinforcement Learning with Formally Verified Exploration,"['Greg Anderson', 'Abhinav Verma', 'Isil Dillig', 'Swarat Chaudhuri']",2020-09-26T00:00:00Z,arxiv,, 
69064,https://arxiv.org/abs/2004.06100,Pretrained Transformers Improve Out-of-Distribution Robustness.,"['Dan Hendrycks', 'Xiaoyuan Liu', 'Eric Wallace', 'Adam Dziedzic', 'Rishabh Krishnan', 'Dawn Song']",2020-08-14T00:00:00Z,arxiv,, 69088,https://arxiv.org/abs/1807.08060,Safe Option-Critic: Learning Safety in the Option-Critic Architecture,"['Arushi Jain', 'Khimya Khetarpal', 'Doina Precup']",2018-07-21T00:00:00Z,arxiv,, 69104,https://arxiv.org/abs/2101.06060,The Challenge of Value Alignment: from Fairer Algorithms to AI Safety,"['Iason Gabriel', 'Vafa Ghazavi']",2021-01-15T00:00:00Z,arxiv,, 69152,https://arxiv.org/abs/1902.03245,"Ask Not What AI Can Do, But What AI Should Do: Towards a Framework of Task Delegability","['Brian Lubars', 'Chenhao Tan']",2019-02-08T00:00:00Z,arxiv,, 69171,https://arxiv.org/abs/1705.09990,Should Robots be Obedient?,"['Smitha Milli', 'Dylan Hadfield-Menell', 'Anca Dragan', 'Stuart Russell']",2017-05-28T00:00:00Z,arxiv,, 69200,https://arxiv.org/abs/2107.09045,"On the Veracity of Local, Model-agnostic Explanations in Audio Classification: Targeted Investigations with Adversarial Examples","['Verena Praher', 'Katharina Prinz', 'Arthur Flexer', 'Gerhard Widmer']",2021-07-19T00:00:00Z,arxiv,, 69219,https://arxiv.org/abs/2105.09938,Measuring Coding Challenge Competence With APPS.,"['Dan Hendrycks', 'Steven Basart', 'Saurav Kadavath', 'Mantas Mazeika', 'Akul Arora', 'Ethan Guo', 'Collin Burns', 'Samir Puranik', 'Horace He', 'Dawn Song', 'Jacob Steinhardt']",2021-08-15T00:00:00Z,arxiv,, 69232,https://arxiv.org/abs/1711.06782,Leave no Trace: Learning to Reset for Safe and Autonomous Reinforcement Learning,"['Benjamin Eysenbach', 'Shixiang Gu', 'Julian Ibarz', 'Sergey Levine']",2017-11-18T00:00:00Z,arxiv,, 69256,https://arxiv.org/abs/1804.00097,Adversarial Attacks and Defences Competition,"['Alexey Kurakin', 'Ian Goodfellow', 'Samy Bengio', 'Yinpeng Dong', 'Fangzhou Liao', 'Ming Liang', 'Tianyu Pang', 'Jun Zhu', 'Xiaolin Hu', 'Cihang Xie', 'Jianyu Wang', 'Zhishuai Zhang', 'Zhou Ren', 'Alan Yuille', 'Sangxia Huang', 'Yao Zhao', 'Yuzhe Zhao', 'Zhonglin Han', 'Junjiajia Long', 'Yerkebulan Berdibekov', 'Takuya Akiba', 'Seiya Tokui', 'Motoki Abe']",2018-03-31T00:00:00Z,arxiv,, 69281,https://arxiv.org/abs/2012.01365,DERAIL: Diagnostic Environments for Reward And Imitation Learning.,"['Pedro Freire', 'Adam Gleave', 'Sam Toyer', 'Stuart Russell']",2020-08-14T00:00:00Z,arxiv,, 69317,https://arxiv.org/abs/1802.05300,Using Trusted Data to Train Deep Networks on Labels Corrupted by Severe Noise.,"['Dan Hendrycks', 'Mantas Mazeika', 'Duncan Wilson', 'Kevin Gimpel']",2018-08-14T00:00:00Z,arxiv,, 69334,https://arxiv.org/abs/1512.05849,Modeling Progress in AI,['Miles Brundage'],2015-12-18T00:00:00Z,arxiv,, 69355,https://arxiv.org/abs/1810.11181,Neural Modular Control for Embodied Question Answering,"['Abhishek Das', 'Georgia Gkioxari', 'Stefan Lee', 'Devi Parikh', 'Dhruv Batra']",2018-10-26T00:00:00Z,arxiv,, 69371,https://arxiv.org/abs/1912.05907,Anti-Alignments -- Measuring The Precision of Process Models and Event Logs,"['Thomas Chatain', 'Mathilde Boltenhagen', 'Josep Carmona']",2019-11-28T00:00:00Z,arxiv,, 69390,https://arxiv.org/abs/2201.05818,"Measuring Non-Probabilistic Uncertainty: A cognitive, logical and computational assessment of known and unknown unknowns","['Florian Ellsaesser', 'Guido Fioretti', 'Gail E. James']",2022-01-15T00:00:00Z,arxiv,, 69411,https://arxiv.org/abs/1604.04721,An artificial intelligence tool for heterogeneous team formation in the classroom,"['Juan M. 
Alberola', 'Elena Del Val', 'Victor Sanchez-Anguix', 'Alberto Palomares', 'Maria Dolores Teruel']",2016-04-16T00:00:00Z,arxiv,, 69428,https://arxiv.org/abs/1807.10875,TensorFuzz: Debugging Neural Networks with Coverage-Guided Fuzzing,"['Augustus Odena', 'Ian Goodfellow']",2018-07-28T00:00:00Z,arxiv,, 69446,https://arxiv.org/abs/2310.12921,Vision-Language Models are Zero-Shot Reward Models for Reinforcement Learning,['Juan Rocamonde'],2023-10-19T17:17:06Z,arxiv,, 69473,https://arxiv.org/abs/2111.12797,ReAct: Out-of-distribution Detection With Rectified Activations,"['Yiyou Sun', 'Chuan Guo', 'Yixuan Li']",2021-11-24T00:00:00Z,arxiv,, 69486,https://arxiv.org/abs/1906.10842,"Universal Litmus Patterns: Revealing Backdoor Attacks in CNNs","['Soheil Kolouri', 'Aniruddha Saha', 'Hamed Pirsiavash', 'Heiko Hoffmann']",2019-06-26T04:30:55Z,arxiv,, 69503,https://arxiv.org/abs/1702.08222,Synergistic Team Composition,"['Ewa Andrejczuk', 'Juan A. Rodriguez-Aguilar', 'Carme Roig', 'Carles Sierra']",2017-02-27T00:00:00Z,arxiv,, 69518,https://arxiv.org/abs/1901.08573,Theoretically Principled Trade-off between Robustness and Accuracy,"['Hongyang Zhang', 'Yaodong Yu', 'Jiantao Jiao', 'Eric P. Xing', 'Laurent El Ghaoui', 'Michael I. Jordan']",2019-01-24T00:00:00Z,arxiv,, 69534,https://arxiv.org/abs/2206.02790,Improving Model Understanding and Trust with Counterfactual Explanations of Model Confidence,"['Thao Le', 'Tim Miller', 'Ronal Singh', 'Liz Sonenberg']",2022-06-06T00:00:00Z,arxiv,, 69546,https://arxiv.org/abs/1807.06158,Generative Adversarial Imitation from Observation,"['Faraz Torabi', 'Garrett Warnell', 'Peter Stone']",2018-07-17T00:00:00Z,arxiv,, 69560,https://arxiv.org/abs/2011.06275,Performance of Bounded-Rational Agents With the Ability to Self-Modify,"['Jakub Tětek', 'Marek Sklenka', 'Tomáš Gavenčiak']",2020-11-12T00:00:00Z,arxiv,, 69581,https://arxiv.org/abs/1303.5720,An Approximate Nonmyopic Computation for Value of Information,"['David Heckerman', 'Eric J. Horvitz', 'Blackford Middleton']",2013-03-20T00:00:00Z,arxiv,, 69595,https://arxiv.org/abs/1711.02827,Inverse Reward Design.,"['Dylan Hadfield-Menell', 'Smitha Milli', 'Pieter Abbeel', 'Stuart Russell', 'Anca Dragan']",2017-08-15T00:00:00Z,arxiv,, 69613,https://arxiv.org/abs/1910.04417,Imitation Learning from Observations by Minimizing Inverse Dynamics Disagreement,"['Chao Yang', 'Xiaojian Ma', 'Wenbing Huang', 'Fuchun Sun', 'Huaping Liu', 'Junzhou Huang', 'Chuang Gan']",2019-10-10T00:00:00Z,arxiv,, 69630,https://arxiv.org/abs/1710.11248,Learning Robust Rewards with Adversarial Inverse Reinforcement Learning,"['Justin Fu', 'Katie Luo', 'Sergey Levine']",2017-10-30T00:00:00Z,arxiv,, 69647,https://arxiv.org/abs/2109.13916,Unsolved Problems in ML Safety.,"['Dan Hendrycks', 'Nicholas Carlini', 'John Schulman', 'Jacob Steinhardt']",2021-08-15T00:00:00Z,arxiv,, 69680,https://arxiv.org/abs/1901.11084,A Comparative Analysis of Expected and Distributional Reinforcement Learning,"['Clare Lyle', 'Pablo Samuel Castro', 'Marc G. Bellemare']",2019-01-30T00:00:00Z,arxiv,, 69697,https://arxiv.org/abs/1806.09795,Multi-agent Inverse Reinforcement Learning for Certain General-sum Stochastic Games,"['Xiaomin Lin', 'Stephen C. Adams', 'Peter A. 
Beling']",2018-06-26T00:00:00Z,arxiv,, 69715,https://arxiv.org/abs/1806.10019,Adversarial Active Exploration for Inverse Dynamics Model Learning,"['Zhang-Wei Hong', 'Tsu-Jui Fu', 'Tzu-Yun Shann', 'Yi-Hsiang Chang', 'Chun-Yi Lee']",2018-06-26T00:00:00Z,arxiv,, 69731,https://arxiv.org/abs/1910.04527,The Quest for Interpretable and Responsible Artificial Intelligence,['Vaishak Belle'],2019-10-10T00:00:00Z,arxiv,, 69767,https://arxiv.org/abs/2109.08904,Towards Resilient Artificial Intelligence: Survey and Research Issues,"['Oliver Eigner', 'Sebastian Eresheim', 'Peter Kieseberg', 'Lukas Daniel Klausner', 'Martin Pirker', 'Torsten Priebe', 'Simon Tjoa', 'Fiammetta Marulli', 'Francesco Mercaldo']",2021-09-18T00:00:00Z,arxiv,, 69811,https://arxiv.org/abs/2209.14876,Repairing Bugs in Python Assignments Using Large Language Models,"['Jialu Zhang', 'José Cambronero', 'Sumit Gulwani', 'Vu Le', 'Ruzica Piskac', 'Gustavo Soares', 'Gust Verbruggen']",2022-09-29T00:00:00Z,arxiv,, 69837,https://arxiv.org/abs/1905.12186,Asymptotically Unambitious Artificial General Intelligence,"['Michael K Cohen', 'Badri Vellambi', 'Marcus Hutter']",2019-05-29T00:00:00Z,arxiv,, 69858,https://arxiv.org/abs/1912.00747,The Transformative Potential of Artificial Intelligence,"['Ross Gruetzemacher', 'Jess Whittlestone']",2019-11-27T00:00:00Z,arxiv,, 69883,https://arxiv.org/abs/2102.07024,Interactive Learning from Activity Description,"['Khanh Nguyen', 'Dipendra Misra', 'Robert Schapire', 'Miro Dudík', 'Patrick Shafto']",2021-02-13T00:00:00Z,arxiv,, 69896,https://arxiv.org/abs/1909.06965,Better AI through Logical Scaffolding,"['Nikos Arechiga', 'Jonathan DeCastro', 'Soonho Kong', 'Karen Leung']",2019-09-12T00:00:00Z,arxiv,, 69915,https://arxiv.org/abs/2106.12142,IQ-Learn: Inverse soft-Q Learning for Imitation,"['Divyansh Garg', 'Shuvam Chakraborty', 'Chris Cundy', 'Jiaming Song', 'Matthieu Geist', 'Stefano Ermon']",2021-06-23T00:00:00Z,arxiv,, 69939,https://arxiv.org/abs/2308.03296,Studying Large Language Model Generalization with Influence Functions,"['Roger Grosse', 'Juhan Bae', 'Cem Anil', 'Nelson Elhage', 'Alex Tamkin', 'Amirhossein Tajdini', 'Benoit Steiner', 'Dustin Li', 'Esin Durmus', 'Ethan Perez', 'Evan Hubinger', 'Kamilė Lukošiūtė', 'Karina Nguyen', 'Nicholas Joseph', 'Sam McCandlish', 'Jared Kaplan', 'Samuel R. 
Bowman']",2023-08-07T04:47:42Z,arxiv,, 69960,https://arxiv.org/abs/2002.11328,Rethinking Bias-Variance Trade-off for Generalization of Neural Networks,"['Zitong Yang', 'Yaodong Yu', 'Chong You', 'Jacob Steinhardt', 'Yi Ma']",2020-02-26T00:00:00Z,arxiv,, 69986,https://arxiv.org/abs/1802.03493,More Robust Doubly Robust Off-policy Evaluation,"['Mehrdad Farajtabar', 'Yinlam Chow', 'Mohammad Ghavamzadeh']",2018-02-10T00:00:00Z,arxiv,, 69999,https://arxiv.org/abs/1512.07942,Multi-Level Cause-Effect Systems,"['Krzysztof Chalupka', 'Pietro Perona', 'Frederick Eberhardt']",2015-12-25T00:00:00Z,arxiv,, 70015,https://arxiv.org/abs/1912.05743,Exploratory Not Explanatory: Counterfactual Analysis of Saliency Maps for Deep Reinforcement Learning,"['Akanksha Atrey', 'Kaleigh Clary', 'David Jensen']",2019-12-09T00:00:00Z,arxiv,, 70025,https://arxiv.org/abs/2102.09343,"AI Can Stop Mass Shootings, and More","['Selmer Bringsjord', 'Naveen Sundar Govindarajulu', 'Michael Giancola']",2021-02-05T00:00:00Z,arxiv,, 70041,https://arxiv.org/abs/1907.02544,Large Scale Adversarial Representation Learning,"['Jeff Donahue', 'Karen Simonyan']",2019-07-04T00:00:00Z,arxiv,, 70075,https://arxiv.org/abs/1912.01172,Value-laden Disciplinary Shifts in Machine Learning.,"['Ravit Dotan', 'Smitha Milli']",2020-08-14T00:00:00Z,arxiv,, 70099,https://arxiv.org/abs/2008.09043,"Considerations, Good Practices, Risks and Pitfalls in Developing AI Solutions Against COVID-19","['Alexandra Luccioni', 'Joseph Bullock', 'Katherine Hoffmann Pham', 'Cynthia Sin Nga Lam', 'Miguel Luengo-Oroz']",2020-08-13T00:00:00Z,arxiv,, 70132,https://arxiv.org/abs/1807.06096,Safe Reinforcement Learning via Probabilistic Shields,"['Nils Jansen', 'Bettina Könighofer', 'Sebastian Junges', 'Alexandru C. Serban', 'Roderick Bloem']",2018-07-16T00:00:00Z,arxiv,, 70148,https://arxiv.org/abs/2209.07096,Multi-Objective Policy Gradients with Topological Constraints.,"['Kyle Wray*', 'Stas Tiomkin*', 'Mykel Korchenderfer', 'Pieter Abbeel']",2022-08-14T00:00:00Z,arxiv,, 70166,https://arxiv.org/abs/2201.02177,Grokking: Generalization Beyond Overfitting on Small Algorithmic Datasets,"['Alethea Power', 'Yuri Burda', 'Harri Edwards', 'Igor Babuschkin', 'Vedant Misra']",2022-01-06T00:00:00Z,arxiv,, 70190,https://arxiv.org/abs/1206.5538,Representation Learning: A Review and New Perspectives,['Yoshua Bengio'],2012-06-24T20:51:38Z,arxiv,, 70208,https://arxiv.org/abs/2106.10316,Proper Value Equivalence.,"['Christopher Grimm', 'Andre Barreto', 'Gregory Farquhar', 'David Silver', 'and Satinder Singh']",2021-08-14T00:00:00Z,arxiv,, 70222,https://arxiv.org/abs/1607.02533,Adversarial examples in the physical world,['Alexey Kurakin'],2016-07-08T21:12:11Z,arxiv,, 70237,https://arxiv.org/abs/2110.09240,Value alignment: a formal approach,"['Carles Sierra', 'Nardine Osman', 'Pablo Noriega', 'Jordi Sabater-Mir', 'Antoni Perelló']",2021-10-18T00:00:00Z,arxiv,, 70250,https://arxiv.org/abs/1804.03235,Large scale distributed neural network training through online distillation,"['Rohan Anil', 'Gabriel Pereyra', 'Alexandre Passos', 'Robert Ormandi', 'George E. Dahl', 'Geoffrey E. 
Hinton']",2018-04-09T00:00:00Z,arxiv,, 70270,https://arxiv.org/abs/1802.06306,Learning Data-Driven Objectives to Optimize Interactive Systems,"['Ziming Li', 'Julia Kiseleva', 'Alekh Agarwal', 'Maarten de Rijke']",2018-02-17T00:00:00Z,arxiv,, 70288,https://arxiv.org/abs/2108.13956,APS: Active Pretraining with Successor Features.,"['Hao Liu', 'Pieter Abbeel']",2021-08-14T00:00:00Z,arxiv,, 70305,https://arxiv.org/abs/2003.01709,Hierarchically Decoupled Imitation for Morphological Transfer.,"['Donald J', 'Hejna III', 'Pieter Abbeel', 'Lerrel Pinto']",2019-08-14T00:00:00Z,arxiv,, 70326,https://arxiv.org/abs/2006.09882,Unsupervised Learning of Visual Features by Contrasting Cluster Assignments,"['Mathilde Caron', 'Ishan Misra', 'Julien Mairal', 'Priya Goyal', 'Piotr Bojanowski', 'Armand Joulin']",2020-06-17T00:00:00Z,arxiv,, 70358,https://arxiv.org/abs/1806.10071,Learning Existing Social Conventions via Observationally Augmented Self-Play,"['Adam Lerer', 'Alexander Peysakhovich']",2018-06-26T00:00:00Z,arxiv,, 70378,https://arxiv.org/abs/2002.09758,Unsupervised Question Decomposition for Question Answering,"['Ethan Perez', 'Patrick Lewis', 'Wen-tau Yih', 'Kyunghyun Cho', 'Douwe Kiela']",2020-02-22T00:00:00Z,arxiv,, 70398,https://arxiv.org/abs/1706.04599,On Calibration of Modern Neural Networks,"['Chuan Guo', 'Geoff Pleiss', 'Yu Sun', 'Kilian Q. Weinberger', 'Chuan Guo', 'Geoff Pleiss', 'Yu Sun', 'Kilian Q. Weinberger']",2018-01-01T00:00:00Z,arxiv,, 70421,https://arxiv.org/abs/1810.02541,PPO-CMA: Proximal Policy Optimization with Covariance Matrix Adaptation,"['Perttu Hämäläinen', 'Amin Babadi', 'Xiaoxiao Ma', 'Jaakko Lehtinen']",2018-10-05T00:00:00Z,arxiv,, 70441,https://arxiv.org/abs/1606.06565,Concrete Problems in AI Safety,"['Dario Amodei', 'Chris Olah', 'Jacob Steinhardt', 'Paul Christiano', 'John Schulman', 'Dan Mané']",2016-06-21T00:00:00Z,arxiv,, 70479,https://arxiv.org/abs/2010.04112,Information-Driven Adaptive Sensing Based on Deep Reinforcement Learning,"['Abdulmajid Murad', 'Frank Alexander Kraemer', 'Kerstin Bach', 'Gavin Taylor']",2020-10-08T00:00:00Z,arxiv,, 70495,https://arxiv.org/abs/1503.07619,Shared Autonomy via Hindsight Optimization,"['Shervin Javdani', 'Siddhartha S. Srinivasa', 'J. 
Andrew Bagnell']",2015-03-26T00:00:00Z,arxiv,, 70520,https://arxiv.org/abs/1807.00403,Towards Mixed Optimization for Reinforcement Learning with Program Synthesis,"['Surya Bhupatiraju', 'Kumar Krishna Agrawal', 'Rishabh Singh']",2018-07-01T00:00:00Z,arxiv,, 70542,https://arxiv.org/abs/2205.13728,GALOIS: Boosting Deep Reinforcement Learning via Generalizable Logic Synthesis,"['Yushi Cao', 'Zhiming Li', 'Tianpei Yang', 'Hao Zhang', 'Yan Zheng', 'Yi Li', 'Jianye Hao', 'Yang Liu']",2022-05-27T00:00:00Z,arxiv,, 70560,https://arxiv.org/abs/1906.01820,Risks from Learned Optimization in Advanced Machine Learning Systems,"['Evan Hubinger', 'Chris van Merwijk', 'Vladimir Mikulik', 'Joar Skalse', 'Scott Garrabrant']",2019-06-05T04:43:25Z,arxiv,, 70593,https://arxiv.org/abs/2211.11972,imitation: Clean Imitation Learning Implementations,['Adam Gleave'],2022-11-22T03:11:29Z,arxiv,, 70610,https://arxiv.org/abs/1902.06531,STRIP: A Defence Against Trojan Attacks on Deep Neural Networks,"['Yansong Gao', 'Chang Xu', 'Derui Wang', 'Shiping Chen', 'Damith C. Ranasinghe', 'Surya Nepal']",2019-02-18T11:49:33Z,arxiv,, 70626,https://arxiv.org/abs/1903.01959,Learning Exploration Policies for Navigation,"['Tao Chen', 'Saurabh Gupta', 'Abhinav Gupta']",2019-03-05T00:00:00Z,arxiv,, 70646,https://arxiv.org/abs/1811.05590,Emergence of Addictive Behaviors in Reinforcement Learning Agents,"['Vahid Behzadan', 'Roman V. Yampolskiy', 'Arslan Munir']",2018-11-14T00:00:00Z,arxiv,, 70656,https://arxiv.org/abs/2203.01855,Reasoning about Counterfactuals to Improve Human Inverse Reinforcement Learning,"['Michael S. Lee', 'Henny Admoni', 'Reid Simmons']",2022-03-03T00:00:00Z,arxiv,, 70677,https://arxiv.org/abs/1907.03046,Learning a Behavioral Repertoire from Demonstrations,"['Niels Justesen', 'Miguel Gonzalez Duque', 'Daniel Cabarcas Jaramillo', 'Jean-Baptiste Mouret', 'Sebastian Risi']",2019-07-05T00:00:00Z,arxiv,, 70705,https://arxiv.org/abs/1209.4838,Formal Definition of AI,['Dimiter Dobrev'],2012-09-21T00:00:00Z,arxiv,, 70728,https://arxiv.org/abs/1611.08219,The Off-Switch Game,"['Dylan Hadfield-Menell', 'Anca Dragan', 'Pieter Abbeel', 'Stuart Russell']",2016-11-24T15:23:48Z,arxiv,, 70746,https://arxiv.org/abs/2209.15157,Rethinking and Recomputing the Value of ML Models,"['Burcu Sayin', 'Fabio Casati', 'Andrea Passerini', 'Jie Yang', 'Xinyue Chen']",2022-09-30T00:00:00Z,arxiv,, 70777,https://arxiv.org/abs/1901.10513,Adversarial Examples Are a Natural Consequence of Test Error in Noise,"['Nicolas Ford', 'Justin Gilmer', 'Nicholas Carlini', 'Ekin D. Cubuk']",2018-01-01T00:00:00Z,arxiv,, 70797,https://arxiv.org/abs/2107.06692,Deep Adaptive Multi-Intention Inverse Reinforcement Learning,"['Ariyan Bighashdel', 'Panagiotis Meletis', 'Pavol Jancura', 'Gijs Dubbelman']",2021-07-14T00:00:00Z,arxiv,, 70813,https://arxiv.org/abs/2202.13985,The dangers in algorithms learning humans' values and irrationalities,"['Rebecca Gorman', 'Stuart Armstrong']",2022-02-28T00:00:00Z,arxiv,, 70838,https://arxiv.org/abs/2104.13733,Gradient-based Adversarial Attacks against Text Transformers,"['Chuan Guo', 'Alexandre Sablayrolles', 'Hervé Jégou', 'Douwe Kiela']",2021-04-15T00:00:00Z,arxiv,, 70851,https://arxiv.org/abs/1308.3778,Game Theory with Translucent Players,"['Joseph Y. 
Halpern', 'Rafael Pass']",2013-08-17T00:00:00Z,arxiv,, 70868,https://arxiv.org/abs/2206.09360,Modeling Transformative AI Risks (MTAIR) Project -- Summary Report,"['Sam Clarke', 'Ben Cottier', 'Aryeh Englander', 'Daniel Eth', 'David Manheim', 'Samuel Dylan Martin', 'Issa Rice']",2022-06-19T00:00:00Z,arxiv,, 70889,https://arxiv.org/abs/1909.08068,From the Internet of Information to the Internet of Intelligence,['F. Richard Yu'],2019-08-30T00:00:00Z,arxiv,, 70916,https://arxiv.org/abs/1905.09730,On modelling the emergence of logical thinking,"['Cristian Ivan', 'Bipin Indurkhya']",2019-05-23T00:00:00Z,arxiv,, 70938,https://arxiv.org/abs/1810.11545,Efficiently Combining Human Demonstrations and Interventions for Safe Training of Autonomous Systems in Real-Time,"['Vinicius G. Goecks', 'Gregory M. Gremillion', 'Vernon J. Lawhern', 'John Valasek', 'Nicholas R. Waytowich']",2018-10-26T00:00:00Z,arxiv,, 70954,https://arxiv.org/abs/1906.08237,XLNet: Generalized Autoregressive Pretraining for Language Understanding,"['Zhilin Yang', 'Zihang Dai', 'Yiming Yang', 'Jaime Carbonell', 'Ruslan Salakhutdinov', 'Quoc V. Le']",2019-06-19T00:00:00Z,arxiv,, 70972,https://arxiv.org/abs/2202.11798,Drawing Inductor Layout with a Reinforcement Learning Agent: Method and Application for VCO Inductors,"['Cameron Haigh', 'Zichen Zhang', 'Negar Hassanpour', 'Khurram Javed', 'Yingying Fu', 'Shayan Shahramian', 'Shawn Zhang', 'Jun Luo']",2022-02-23T00:00:00Z,arxiv,, 71004,https://arxiv.org/abs/1811.07871,Scalable agent alignment via reward modeling: a research direction,"['Jan Leike', 'David Krueger', 'Tom Everitt', 'Miljan Martic', 'Vishal Maini', 'Shane Legg']",2018-11-19T00:00:00Z,arxiv,, 71047,https://arxiv.org/abs/1406.2661,Generative Adversarial Networks,"['Ian J. Goodfellow', 'Jean Pouget-Abadie', 'Mehdi Mirza', 'Bing Xu', 'David Warde-Farley', 'Sherjil Ozair', 'Aaron Courville', 'Yoshua Bengio']",2014-06-10T00:00:00Z,arxiv,, 71068,https://arxiv.org/abs/2205.07722,How Different Groups Prioritize Ethical Values for Responsible AI,"['Maurice Jakesch', 'Zana Buçinca', 'Saleema Amershi', 'Alexandra Olteanu']",2022-05-16T00:00:00Z,arxiv,, 71089,https://arxiv.org/abs/2001.05068,Social and Governance Implications of Improved Data Efficiency,"['Aaron D. Tucker', 'Markus Anderljung', 'Allan Dafoe']",2020-01-14T00:00:00Z,arxiv,, 71113,https://arxiv.org/abs/2001.09773,Algorithmic Fairness from a Non-ideal Perspective,"['Sina Fazelpour', 'Zachary C. Lipton']",2020-01-08T00:00:00Z,arxiv,, 71129,https://arxiv.org/abs/2005.09382,Human Instruction-Following with Deep Reinforcement Learning via Transfer-Learning from Text,"['Felix Hill', 'Sona Mokra', 'Nathaniel Wong', 'Tim Harley']",2020-05-19T00:00:00Z,arxiv,, 71149,https://arxiv.org/abs/1805.07871,A Framework and Method for Online Inverse Reinforcement Learning,"['Saurabh Arora', 'Prashant Doshi', 'Bikramjit Banerjee']",2018-05-21T00:00:00Z,arxiv,, 71165,https://arxiv.org/abs/2102.01293,Scaling Laws for Transfer,"['Danny Hernandez', 'Jared Kaplan', 'Tom Henighan', 'Sam McCandlish']",2021-02-02T00:00:00Z,arxiv,, 71187,https://arxiv.org/abs/1906.05838,Goal-conditioned Imitation Learning,"['Yiming Ding', 'Carlos Florensa', 'Mariano Phielipp', 'Pieter Abbeel']",2019-06-13T00:00:00Z,arxiv,, 71211,https://arxiv.org/abs/1804.09170,Realistic Evaluation of Deep Semi-Supervised Learning Algorithms,"['Avital Oliver', 'Augustus Odena', 'Colin Raffel', 'Ekin D. Cubuk', 'Ian J. 
Goodfellow']",2018-04-24T00:00:00Z,arxiv,, 71229,https://arxiv.org/abs/cs/0701125,Universal Algorithmic Intelligence: A mathematical top->down approach,['Marcus Hutter'],2007-01-20T00:00:00Z,arxiv,, 71252,https://arxiv.org/abs/2102.07716,How RL Agents Behave When Their Actions Are Modified,"['Eric D. Langlois', 'Tom Everitt']",2021-02-15T00:00:00Z,arxiv,, 71266,https://arxiv.org/abs/1711.07356,Evaluating Robustness of Neural Networks with Mixed Integer Programming,"['Vincent Tjeng', 'Kai Xiao', 'Russ Tedrake']",2017-11-20T00:00:00Z,arxiv,, 71286,https://arxiv.org/abs/1902.09508,Improving Robustness of Machine Translation with Synthetic Noise,"['Vaibhav Vaibhav', 'Sumeet Singh', 'Craig Stewart', 'Graham Neubig']",2019-02-25T00:00:00Z,arxiv,, 71305,https://arxiv.org/abs/1912.01603,Dream to Control: Learning Behaviors by Latent Imagination,"['Danijar Hafner', 'Timothy Lillicrap', 'Jimmy Ba', 'Mohammad Norouzi']",2019-12-03T00:00:00Z,arxiv,, 71320,https://arxiv.org/abs/1905.07861,Perceptual Values from Observation,"['Ashley D. Edwards', 'Charles L. Isbell']",2019-05-20T00:00:00Z,arxiv,, 71338,https://arxiv.org/abs/2009.09153,Hidden Incentives for Auto-Induced Distributional Shift,"['David Krueger', 'Tegan Maharaj', 'Jan Leike']",2020-09-19T00:00:00Z,arxiv,, 71363,https://arxiv.org/abs/2103.08022,Success Weighted by Completion Time: A Dynamics-Aware Evaluation Criteria for Embodied Navigation,"['Naoki Yokoyama', 'Sehoon Ha', 'Dhruv Batra']",2021-03-14T00:00:00Z,arxiv,, 71377,https://arxiv.org/abs/1911.08453,Planning with Goal-Conditioned Policies,"['Soroush Nasiriany', 'Vitchyr H. Pong', 'Steven Lin', 'Sergey Levine']",2019-11-19T00:00:00Z,arxiv,, 71398,https://arxiv.org/abs/1209.2355,Counterfactual Reasoning and Learning Systems,"['Léon Bottou', 'Jonas Peters', 'Joaquin Quiñonero-Candela', 'Denis X. Charles', 'D. Max Chickering', 'Elon Portugaly', 'Dipankar Ray', 'Patrice Simard', 'Ed Snelson']",2012-09-11T00:00:00Z,arxiv,, 71424,https://arxiv.org/abs/2107.07013,Passive Attention in Artificial Neural Networks Predicts Human Visual Selectivity.,"['Thomas A', 'Langlois', 'H', 'Charles Zhao', 'Erin Grantd', 'Ishita Dasguptae', 'Thomas L', 'Griffiths', 'and Nori Jacoby']",2021-08-14T00:00:00Z,arxiv,, 71441,https://arxiv.org/abs/1709.10163,Deep TAMER: Interactive Agent Shaping in High-Dimensional State Spaces,"['Garrett Warnell', 'Nicholas Waytowich', 'Vernon Lawhern', 'Peter Stone']",2017-09-28T00:00:00Z,arxiv,, 71455,https://arxiv.org/abs/2111.04838,Efficient estimates of optimal transport via low-dimensional embeddings,"['Patric M. Fulop', 'Vincent Danos']",2021-11-08T00:00:00Z,arxiv,, 71470,https://arxiv.org/abs/1806.01186,Penalizing side effects using stepwise relative reachability,"['Victoria Krakovna', 'Laurent Orseau', 'Ramana Kumar', 'Miljan Martic', 'Shane Legg']",2018-06-04T00:00:00Z,arxiv,, 71491,https://arxiv.org/abs/2202.00343,Interactive configurator with FO(.) and IDP-Z3,"['Pierre Carbonnelle', 'Simon Vandevelde', 'Joost Vennekens', 'Marc Denecker']",2022-02-01T00:00:00Z,arxiv,, 71509,https://arxiv.org/abs/1812.09376,Human-AI Learning Performance in Multi-Armed Bandits,"['Ravi Pandya', 'Sandy H. Huang', 'Dylan Hadfield-Menell', 'Anca D. 
Dragan']",2018-12-21T00:00:00Z,arxiv,, 71529,https://arxiv.org/abs/1803.03407,Institutional Metaphors for Designing Large-Scale Distributed AI versus AI Techniques for Running Institutions,"['Alexander Boer', 'Giovanni Sileno']",2018-03-09T00:00:00Z,arxiv,, 71561,https://arxiv.org/abs/1803.04585,Categorizing Variants of Goodhart's Law,"['David Manheim', 'Scott Garrabrant']",2018-03-13T00:00:00Z,arxiv,, 71587,https://arxiv.org/abs/2009.14180,Learning to Play Against Any Mixture of Opponents,"['Max Olan Smith', 'Thomas Anthony', 'Yongzhao Wang', 'Michael P. Wellman']",2020-09-29T17:48:10Z,arxiv,, 71608,https://arxiv.org/abs/2110.13880,Understanding Interlocking Dynamics of Cooperative Rationalization,"['Mo Yu', 'Yang Zhang', 'Shiyu Chang', 'Tommi S. Jaakkola']",2021-10-26T00:00:00Z,arxiv,, 71619,https://arxiv.org/abs/1806.01203,Relational inductive bias for physical construction in humans and machines,"['Jessica B. Hamrick', 'Kelsey R. Allen', 'Victor Bapst', 'Tina Zhu', 'Kevin R. McKee', 'Joshua B. Tenenbaum', 'Peter W. Battaglia']",2018-06-04T00:00:00Z,arxiv,, 71637,https://arxiv.org/abs/2004.09044,"Dark, Beyond Deep: A Paradigm Shift to Cognitive AI with Humanlike Common Sense","['Yixin Zhu', 'Tao Gao', 'Lifeng Fan', 'Siyuan Huang', 'Mark Edmonds', 'Hangxin Liu', 'Feng Gao', 'Chi Zhang', 'Siyuan Qi', 'Ying Nian Wu', 'Joshua B. Tenenbaum', 'Song-Chun Zhu']",2020-04-20T00:00:00Z,arxiv,, 71659,https://arxiv.org/abs/2206.15474,"Forecasting Future World Events with Neural Networks",['Andy Zou'],2022-06-30T17:59:14Z,arxiv,, 71684,https://arxiv.org/abs/2106.15764,The Threat of Offensive AI to Organizations,"['Yisroel Mirsky', 'Ambra Demontis', 'Jaidip Kotak', 'Ram Shankar', 'Deng Gelei', 'Liu Yang', 'Xiangyu Zhang', 'Wenke Lee', 'Yuval Elovici', 'Battista Biggio']",2021-06-30T00:00:00Z,arxiv,, 71707,https://arxiv.org/abs/2201.08102,Safe Deep RL in 3D Environments using Human Feedback,"['Matthew Rahtz', 'Vikrant Varma', 'Ramana Kumar', 'Zachary Kenton', 'Shane Legg', 'Jan Leike']",2022-01-20T00:00:00Z,arxiv,, 71727,https://arxiv.org/abs/1806.08340,Interpretable Discovery in Large Image Data Sets,"['Kiri L. Wagstaff', 'Jake Lee']",2018-06-21T00:00:00Z,arxiv,, 71744,https://arxiv.org/abs/1605.03143,Avoiding Wireheading with Value Reinforcement Learning,"['Tom Everitt', 'Marcus Hutter']",2016-05-10T00:00:00Z,arxiv,, 71761,https://arxiv.org/abs/2211.09961,Path Independent Equilibrium Models Can Better Exploit Test-Time Computation.,"['Cem Anil*', 'Ashwini Pokle*', 'Kaiqu Liang*', 'Johannes Treutlein', 'Yuhuai Wu', 'Shaojie Bai', 'J', 'Zico Kolter', 'and Roger Grosse']",2022-08-14T00:00:00Z,arxiv,, 71784,https://arxiv.org/abs/1701.06049,Interactive Learning from Policy-Dependent Human Feedback,"['James MacGlashan', 'Mark K Ho', 'Robert Loftin', 'Bei Peng', 'Guan Wang', 'David Roberts', 'Matthew E. Taylor', 'Michael L. 
Littman']",2017-01-21T00:00:00Z,arxiv,, 71805,https://arxiv.org/abs/2005.00582,Learning to Complement Humans,"['Bryan Wilder', 'Eric Horvitz', 'Ece Kamar']",2020-05-01T00:00:00Z,arxiv,, 71822,https://arxiv.org/abs/1811.06521,Reward learning from human preferences and demonstrations in Atari,"['Borja Ibarz', 'Jan Leike', 'Tobias Pohlen', 'Geoffrey Irving', 'Shane Legg', 'Dario Amodei']",2018-11-15T00:00:00Z,arxiv,, 71846,https://arxiv.org/abs/2205.10785,Responsible Artificial Intelligence -- from Principles to Practice,['Virginia Dignum'],2022-05-22T00:00:00Z,arxiv,, 71868,https://arxiv.org/abs/2202.03286,Red Teaming Language Models with Language Models,"['Ethan Perez', 'Saffron Huang', 'Francis Song', 'Trevor Cai', 'Roman Ring', 'John Aslanides', 'Amelia Glaese', 'Nat McAleese', 'Geoffrey Irving']",2022-02-07T00:00:00Z,arxiv,, 71908,https://arxiv.org/abs/2101.06133,Teaming up with information agents,"['Jurriaan van Diggelen', 'Wiard Jorritsma', 'Bob van der Vecht']",2021-01-15T00:00:00Z,arxiv,, 71925,https://arxiv.org/abs/2006.07532,Online Bayesian Goal Inference for Boundedly-Rational Planning Agents,"['Tan Zhi-Xuan', 'Jordyn L. Mann', 'Tom Silver', 'Joshua B. Tenenbaum', 'Vikash K. Mansinghka']",2020-06-13T00:00:00Z,arxiv,, 71938,https://arxiv.org/abs/2205.15241,Multi-Game Decision Transformers,"['Kuang-Huei Lee', 'Ofir Nachum', 'Mengjiao Yang', 'Lisa Lee', 'Daniel Freeman', 'Winnie Xu', 'Sergio Guadarrama', 'Ian Fischer', 'Eric Jang', 'Henryk Michalewski', 'Igor Mordatch']",2022-05-30T00:00:00Z,arxiv,, 71957,https://arxiv.org/abs/2109.00177,Problem Learning: Towards the Free Will of Machines,['Yongfeng Zhang'],2021-09-01T00:00:00Z,arxiv,, 71982,https://arxiv.org/abs/1601.03411,Analysis of Algorithms and Partial Algorithms,['Andrew MacFie'],2016-01-13T00:00:00Z,arxiv,, 71999,https://arxiv.org/abs/1811.04017,A generic framework for privacy preserving deep learning,"['Theo Ryffel', 'Andrew Trask', 'Morten Dahl', 'Bobby Wagner', 'Jason Mancuso', 'Daniel Rueckert', 'Jonathan Passerat-Palmbach']",2018-11-09T00:00:00Z,arxiv,, 72021,https://arxiv.org/abs/1201.6583,Empowerment for Continuous Agent-Environment Systems,"['Tobias Jung', 'Daniel Polani', 'Peter Stone']",2012-01-31T00:00:00Z,arxiv,, 72045,https://arxiv.org/abs/1912.10305,Questions to Guide the Future of Artificial Intelligence Research,['Jordan Ott'],2019-12-21T00:00:00Z,arxiv,, 72068,https://arxiv.org/abs/2004.13649,Image Augmentation Is All You Need: Regularizing Deep Reinforcement Learning from Pixels,"['Ilya Kostrikov', 'Denis Yarats', 'Rob Fergus']",2020-04-28T00:00:00Z,arxiv,, 72083,https://arxiv.org/abs/2102.01685,Agent Incentives: A Causal Perspective,"['Tom Everitt', 'Ryan Carey', 'Eric Langlois', 'Pedro A Ortega', 'Shane Legg']",2021-02-02T00:00:00Z,arxiv,, 72102,https://arxiv.org/abs/1809.06995,Interpretable Reinforcement Learning with Ensemble Methods,"['Alexander Brown', 'Marek Petrik']",2018-09-19T00:00:00Z,arxiv,, 72120,https://arxiv.org/abs/2101.06704,Adversarial Interaction Attack: Fooling AI to Misinterpret Human Intentions,"['Nodens Koren', 'Qiuhong Ke', 'Yisen Wang', 'James Bailey', 'Xingjun Ma']",2021-01-17T00:00:00Z,arxiv,, 72142,https://arxiv.org/abs/2009.09071,Measurement in AI Policy: Opportunities and Challenges,"['Saurabh Mishra', 'Jack Clark', 'C. 
Raymond Perrault']",2020-09-10T00:00:00Z,arxiv,, 72169,https://arxiv.org/abs/2103.07815,Dynamically Switching Human Prediction Models for Efficient Planning.,"['Arjun Sripathy', 'Andreea Bobu', 'Daniel S. Brown', 'Anca D. Dragan']",2021-08-14T00:00:00Z,arxiv,, 72190,https://arxiv.org/abs/1211.4957,An Experiment on the Connection between the DLs' Family DL and the Real World,"['Antonio Pisasale', 'Domenico Cantone']",2012-11-21T00:00:00Z,arxiv,, 72204,https://arxiv.org/abs/1210.1785,Relative Expressiveness of Defeasible Logics,['Michael Maher'],2012-10-05T00:00:00Z,arxiv,, 72227,https://arxiv.org/abs/2111.09259,Acquisition of Chess Knowledge in AlphaZero,"['Thomas McGrath', 'Andrei Kapishnikov', 'Nenad Tomašev', 'Adam Pearce', 'Demis Hassabis', 'Been Kim', 'Ulrich Paquet', 'Vladimir Kramnik']",2021-11-17T17:46:19Z,arxiv,, 72251,https://arxiv.org/abs/2003.04960,Curriculum Learning for Reinforcement Learning Domains: A Framework and Survey,"['Sanmit Narvekar', 'Bei Peng', 'Matteo Leonetti', 'Jivko Sinapov', 'Matthew E. Taylor', 'Peter Stone']",2020-03-10T00:00:00Z,arxiv,, 72269,https://arxiv.org/abs/2205.12401,Reward Uncertainty for Exploration in Preference-based Reinforcement Learning.,"['Xinran Liang', 'Katherine Shu', 'Kimin Lee', 'Pieter Abbeel']",2022-08-14T00:00:00Z,arxiv,, 72280,https://arxiv.org/abs/1806.00610,Between Progress and Potential Impact of AI: the Neglected Dimensions,"['Fernando Martínez-Plumed', 'Shahar Avin', 'Miles Brundage', 'Allan Dafoe', 'Sean Ó hÉigeartaigh', 'José Hernández-Orallo']",2018-06-02T00:00:00Z,arxiv,, 72303,https://arxiv.org/abs/1910.01074,Formal Language Constraints for Markov Decision Processes,"['Eleanor Quint', 'Dong Xu', 'Samuel Flint', 'Stephen Scott', 'Matthew Dwyer']",2019-10-02T00:00:00Z,arxiv,, 72318,https://arxiv.org/abs/2205.09201,Mimicking Behaviors in Separated Domains,"['Giuseppe De Giacomo', 'Dror Fried', 'Fabio Patrizi', 'Shufang Zhu']",2022-05-18T00:00:00Z,arxiv,, 72335,https://arxiv.org/abs/1810.08647,Social Influence as Intrinsic Motivation for Multi-Agent Deep Reinforcement Learning,"['Natasha Jaques', 'Angeliki Lazaridou', 'Edward Hughes', 'Caglar Gulcehre', 'Pedro A. Ortega', 'DJ Strouse', 'Joel Z. 
Leibo', 'Nando de Freitas']",2018-10-19T00:00:00Z,arxiv,, 72358,https://arxiv.org/abs/1312.0144,Knowing Whether,"['Jie Fan', 'Yanjing Wang', 'Hans van Ditmarsch']",2013-11-30T00:00:00Z,arxiv,, 72374,https://arxiv.org/abs/2107.08995,Causal Inference Struggles with Agency on Online Platforms.,"['Smitha Milli', 'Luca Belli', 'Moritz Hardt']",2021-08-14T00:00:00Z,arxiv,, 72385,https://arxiv.org/abs/2207.00868,"The Linguistic Blind Spot of Value-Aligned Agency, Natural and Artificial",['Travis LaCroix'],2022-07-02T00:00:00Z,arxiv,, 72404,https://arxiv.org/abs/1810.12282,Assessing Generalization in Deep Reinforcement Learning,"['Charles Packer', 'Katelyn Gao', 'Jernej Kos', 'Philipp Krähenbühl', 'Vladlen Koltun', 'Dawn Song']",2018-10-29T00:00:00Z,arxiv,, 72425,https://arxiv.org/abs/2007.12173,Bridging the Imitation Gap by Adaptive Insubordination,"['Luca Weihs', 'Unnat Jain', 'Iou-Jen Liu', 'Jordi Salvador', 'Svetlana Lazebnik', 'Aniruddha Kembhavi', 'Alexander Schwing']",2020-07-23T00:00:00Z,arxiv,, 72436,https://arxiv.org/abs/1810.02274,Episodic Curiosity through Reachability,"['Nikolay Savinov', 'Anton Raichuk', 'Raphaël Marinier', 'Damien Vincent', 'Marc Pollefeys', 'Timothy Lillicrap', 'Sylvain Gelly']",2018-10-04T00:00:00Z,arxiv,, 72445,https://arxiv.org/abs/1811.00525,On the Geometry of Adversarial Examples.,"['Marc Khoury', 'Dylan Hadfield-Menell']",2020-08-15T00:00:00Z,arxiv,, 72473,https://arxiv.org/abs/2106.04338,"Engines of Power: Electricity, AI, and General-Purpose Military Transformations","['Jeffrey Ding', 'Allan Dafoe']",2021-06-08T00:00:00Z,arxiv,, 72497,https://arxiv.org/abs/1703.08922,On Automating the Doctrine of Double Effect,"['Naveen Sundar Govindarajulu', 'Selmer Bringsjord']",2017-03-27T00:00:00Z,arxiv,, 72516,https://arxiv.org/abs/1408.1485,A Logic for Reasoning about Upper Probabilities,"['Joseph Y. 
Halpern', 'Riccardo Pucella']",2014-08-07T00:00:00Z,arxiv,, 72533,https://arxiv.org/abs/2103.14659,Alignment of Language Agents,"['Zachary Kenton', 'Tom Everitt', 'Laura Weidinger', 'Iason Gabriel', 'Vladimir Mikulik', 'Geoffrey Irving']",2021-03-26T00:00:00Z,arxiv,, 72578,https://arxiv.org/abs/1907.01475,Generalizing from a few environments in safety-critical reinforcement learning,"['Zachary Kenton', 'Angelos Filos', 'Owain Evans', 'Yarin Gal']",2019-07-02T00:00:00Z,arxiv,, 72603,https://arxiv.org/abs/1707.05858,Logic Programming approaches for routing fault-free and maximally-parallel Wavelength Routed Optical Networks on Chip (Application paper),"['Marco Gavanelli', 'Maddalena Nonato', 'Andrea Peano', 'Davide Bertozzi']",2017-07-18T00:00:00Z,arxiv,, 72623,https://arxiv.org/abs/1901.00064,Impossibility and Uncertainty Theorems in AI Value Alignment (or why your AGI should not have a utility function),['Peter Eckersley'],2018-12-31T00:00:00Z,arxiv,, 72642,https://arxiv.org/abs/1804.06459,On Learning Intrinsic Rewards for Policy Gradient Methods.,"['Zeyu Zheng', 'Junhyuk Oh', 'Satinder Singh']",2018-08-14T00:00:00Z,arxiv,, 72659,https://arxiv.org/abs/2204.10759,The Boltzmann Policy Distribution: Accounting for Systematic Suboptimality in Human Models.,"['Cassidy Laidlaw', 'Anca Dragan']",2022-08-14T00:00:00Z,arxiv,, 72680,https://arxiv.org/abs/2202.07364,Zero-Shot Assistance in Sequential Decision Problems,"['Sebastiaan De Peuter', 'Samuel Kaski']",2022-02-15T00:00:00Z,arxiv,, 72699,https://arxiv.org/abs/2201.07719,Improving Behavioural Cloning with Human-Driven Dynamic Dataset Augmentation,"['Federico Malato', 'Joona Jehkonen', 'Ville Hautamäki']",2022-01-19T00:00:00Z,arxiv,, 72711,https://arxiv.org/abs/1507.05895,Decision Maker based on Atomic Switches,"['Song-Ju Kim', 'Tohru Tsuruoka', 'Tsuyoshi Hasegawa', 'Masakazu Aono']",2015-07-21T00:00:00Z,arxiv,, 72727,https://arxiv.org/abs/1206.4613,Near-Optimal BRL using Optimistic Local Transitions,"['Mauricio Araya', 'Olivier Buffet', 'Vincent Thomas']",2012-06-18T00:00:00Z,arxiv,, 72745,https://arxiv.org/abs/2202.12566,Composing Complex and Hybrid AI Solutions,"['Peter Schüller', 'João Paolo Costeira', 'James Crowley', 'Jasmin Grosinger', 'Félix Ingrand', 'Uwe Köckemann', 'Alessandro Saffiotti', 'Martin Welss']",2022-02-25T00:00:00Z,arxiv,, 72770,https://arxiv.org/abs/1907.04543,An Optimistic Perspective on Offline Reinforcement Learning,"['Rishabh Agarwal', 'Dale Schuurmans', 'Mohammad Norouzi']",2019-07-10T00:00:00Z,arxiv,, 72789,https://arxiv.org/abs/1807.10272,Evaluating and Understanding the Robustness of Adversarial Logit Pairing,"['Logan Engstrom', 'Andrew Ilyas', 'Anish Athalye']",2018-07-26T00:00:00Z,arxiv,, 72804,https://arxiv.org/abs/1807.07991,Knowledge Integration for Disease Characterization: A Breast Cancer Example,"['Oshani Seneviratne', 'Sabbir M. Rashid', 'Shruthi Chari', 'James P. McCusker', 'Kristin P. Bennett', 'James A. Hendler', 'Deborah L. 
McGuinness']",2018-07-20T00:00:00Z,arxiv,, 72828,https://arxiv.org/abs/2003.13350,Agent57: Outperforming the Atari Human Benchmark,"['Adrià Puigdomènech Badia', 'Bilal Piot', 'Steven Kapturowski', 'Pablo Sprechmann', 'Alex Vitvitskyi', 'Daniel Guo', 'Charles Blundell']",2020-03-30T00:00:00Z,arxiv,, 72848,https://arxiv.org/abs/2202.09931,Deconstructing Distributions: A Pointwise Framework of Learning,"['Gal Kaplun', 'Nikhil Ghosh', 'Saurabh Garg', 'Boaz Barak', 'Preetum Nakkiran']",2022-02-20T00:00:00Z,arxiv,, 72868,https://arxiv.org/abs/2206.13498,Auditing Visualizations: Transparency Methods Struggle to Detect Anomalous Behavior,"['Jean-Stanislas Denain', 'Jacob Steinhardt']",2022-06-27T00:00:00Z,arxiv,, 72884,https://arxiv.org/abs/1912.03820,Meta-Learning without Memorization,"['Mingzhang Yin', 'George Tucker', 'Mingyuan Zhou', 'Sergey Levine', 'Chelsea Finn']",2019-12-09T00:00:00Z,arxiv,, 72906,https://arxiv.org/abs/1712.05812,Occam's razor is insufficient to infer the preferences of irrational agents,"['Stuart Armstrong', 'Sören Mindermann']",2017-12-15T00:00:00Z,arxiv,, 72927,https://arxiv.org/abs/2011.04328,Risk Assessment for Machine Learning Models,"['Paul Schwerdtner', 'Florens Greßner', 'Nikhil Kapoor', 'Felix Assion', 'René Sass', 'Wiebke Günther', 'Fabian Hüger', 'Peter Schlicht']",2020-11-09T00:00:00Z,arxiv,, 72963,https://arxiv.org/abs/2203.02927,Enabling Automated Machine Learning for Model-Driven AI Engineering,"['Armin Moin', 'Ukrit Wattanavaekin', 'Alexandra Lungu', 'Moharram Challenger', 'Atta Badii', 'Stephan Günnemann']",2022-03-06T00:00:00Z,arxiv,, 72978,https://arxiv.org/abs/2011.04864,Natural Language Inference in Context -- Investigating Contextual Reasoning over Long Texts,"['Hanmeng Liu', 'Leyang Cui', 'Jian Liu', 'Yue Zhang']",2020-11-10T00:00:00Z,arxiv,, 72990,https://arxiv.org/abs/2008.08076,Deploying Lifelong Open-Domain Dialogue Learning,"['Kurt Shuster', 'Jack Urbanek', 'Emily Dinan', 'Arthur Szlam', 'Jason Weston']",2020-08-18T00:00:00Z,arxiv,, 73022,https://arxiv.org/abs/2203.06760,CMKD: CNN/Transformer-Based Cross-Model Knowledge Distillation for Audio Classification,"['Yuan Gong', 'Sameer Khurana', 'Andrew Rouditchenko', 'James Glass']",2022-03-13T00:00:00Z,arxiv,, 73045,https://arxiv.org/abs/2208.12878,DETERRENT: Detecting Trojans using Reinforcement Learning,"['Vasudev Gohil', 'Satwik Patnaik', 'Hao Guo', 'Dileep Kalathil', 'Jeyavijayan', 'Rajendran']",2022-08-26T00:00:00Z,arxiv,, 73059,https://arxiv.org/abs/1610.02136,"A Baseline for Detecting Misclassified and Out-of-Distribution Examples in Neural Networks",['Dan Hendrycks'],2016-10-07T04:06:01Z,arxiv,, 73076,https://arxiv.org/abs/1806.02027,Discrete-Continuous Mixtures in Probabilistic Programming: Generalized Semantics and Inference Algorithms.,"['Yi Wu', 'Siddharth Srivastava', 'Nicholas Hay', 'Simon Du', 'Stuart Russell']",2018-08-14T00:00:00Z,arxiv,, 73094,https://arxiv.org/abs/1604.00545,The AGI Containment Problem,"['James Babcock', 'János Kramár', 'Roman Yampolskiy']",2016-04-02T19:26:05Z,arxiv,, 73116,https://arxiv.org/abs/2006.09519,Aligning with Heterogeneous Preferences for Kidney Exchange.,['Rachel Freedman'],2020-08-14T00:00:00Z,arxiv,, 73131,https://arxiv.org/abs/1810.08575,Supervising strong learners by amplifying weak experts,"['Paul Christiano', 'Buck Shlegeris', 'Dario Amodei']",2018-10-19T00:00:00Z,arxiv,, 73144,https://arxiv.org/abs/1906.02299,Teaching AI to Explain its Decisions Using Embeddings and Multi-Task Learning,"['Noel C. F. 
Codella', 'Michael Hind', 'Karthikeyan Natesan Ramamurthy', 'Murray Campbell', 'Amit Dhurandhar', 'Kush R. Varshney', 'Dennis Wei', 'Aleksandra Mojsilović']",2019-06-05T00:00:00Z,arxiv,, 73162,https://arxiv.org/abs/2204.02515,Inferring Rewards from Language in Context.,"['Daniel Fried', 'Dan Klein', 'Anca Dragan']",2022-08-14T00:00:00Z,arxiv,, 73177,https://arxiv.org/abs/1909.04694,Efficient Iterative Linear-Quadratic Approximations for Nonlinear Multi-Player General-Sum Differential Games.,"['David Fridovich-Keil', 'Ellis Ratner', 'Lasse Peters', 'Anca D. Dragan', 'Claire J. Tomlin']",2020-08-14T00:00:00Z,arxiv,, 73192,https://arxiv.org/abs/2205.10232,"Exploring the Trade-off between Plausibility, Change Intensity and Adversarial Power in Counterfactual Explanations using Multi-objective Optimization","['Javier Del Ser', 'Alejandro Barredo-Arrieta', 'Natalia Díaz-Rodríguez', 'Francisco Herrera', 'Andreas Holzinger']",2022-05-20T00:00:00Z,arxiv,, 73209,https://arxiv.org/abs/1902.09469,Embedded Agency,"['Abram Demski', 'Scott Garrabrant']",2019-02-25T00:00:00Z,arxiv,, 73245,https://arxiv.org/abs/1003.1343,What does Newcomb's paradox teach us?,"['David H. Wolpert', 'Gregory Benford']",2010-03-06T00:00:00Z,arxiv,, 73257,https://arxiv.org/abs/1701.01487,Designing a Safe Autonomous Artificial Intelligence Agent based on Human Self-Regulation,['Mark Muraven'],2017-01-05T00:00:00Z,arxiv,, 73283,https://arxiv.org/abs/2109.01652,Finetuned Language Models Are Zero-Shot Learners,"['Jason Wei', 'Maarten Bosma', 'Vincent Y. Zhao', 'Kelvin Guu', 'Adams Wei Yu', 'Brian Lester', 'Nan Du', 'Andrew M. Dai', 'Quoc V. Le']",2021-09-03T00:00:00Z,arxiv,, 73306,https://arxiv.org/abs/1606.08415,Bridging Nonlinearities and Stochastic Regularizers with Gaussian Error Linear Units,"['Dan Hendrycks', 'Kevin Gimpel']",2016-06-27T19:20:40Z,arxiv,, 73326,https://arxiv.org/abs/1811.07882,Guiding Policies with Language via Meta-Learning,"['John D. Co-Reyes', 'Abhishek Gupta', 'Suvansh Sanjeev', 'Nick Altieri', 'Jacob Andreas', 'John DeNero', 'Pieter Abbeel', 'Sergey Levine']",2018-11-19T00:00:00Z,arxiv,, 73342,https://arxiv.org/abs/1304.2357,An Empirical Comparison of Three Inference Methods,['David Heckerman'],2013-03-27T00:00:00Z,arxiv,, 73366,https://arxiv.org/abs/2001.11038,A high-precision abundance analysis of the nuclear benchmark star HD 20,"['Michael Hanke', 'Camilla Juul Hansen', 'Hans-Günter Ludwig', 'Sergio Cristallo', 'Andrew McWilliam', 'Eva K. Grebel', 'Luciano Piersanti']",2020-01-29T00:00:00Z,arxiv,, 73381,https://arxiv.org/abs/1810.08700,Safe Reinforcement Learning with Model Uncertainty Estimates,"['Björn Lütjens', 'Michael Everett', 'Jonathan P. How']",2018-10-19T00:00:00Z,arxiv,, 73396,https://arxiv.org/abs/2108.03360,DySR: A Dynamic Representation Learning and Aligning based Model for Service Bundle Recommendation,"['Mingyi Liu', 'Zhiying Tu', 'Xiaofei Xu', 'Zhongjie Wang']",2021-08-07T00:00:00Z,arxiv,, 73419,https://arxiv.org/abs/1812.06510,The limit of artificial intelligence: Can machines be rational?,['Tshilidzi Marwala'],2018-12-16T00:00:00Z,arxiv,, 73440,https://arxiv.org/abs/1607.08289,Mammalian Value Systems,"['Gopal P. Sarma', 'Nick J. 
Hay']",2016-07-28T00:00:00Z,arxiv,, 73461,https://arxiv.org/abs/1012.5705,Looking for plausibility,['Wan Ahmad Tajuddin Wan Abdullah'],2010-12-28T00:00:00Z,arxiv,, 73475,https://arxiv.org/abs/2003.05370,Dividing the Ontology Alignment Task with Semantic Embeddings and Logic-based Modules,"['Ernesto Jiménez-Ruiz', 'Asan Agibetov', 'Jiaoyan Chen', 'Matthias Samwald', 'Valerie Cross']",2020-02-25T00:00:00Z,arxiv,, 73495,https://arxiv.org/abs/2206.03044,CAISAR: A platform for Characterizing Artificial Intelligence Safety and Robustness,"['Julien Girard-Satabin', 'Michele Alberti', 'François Bobot', 'Zakaria Chihani', 'Augustin Lemesle']",2022-06-07T00:00:00Z,arxiv,, 73517,https://arxiv.org/abs/1312.6199,Intriguing properties of neural networks,['Christian Szegedy'],2013-12-21T03:36:08Z,arxiv,, 73537,https://arxiv.org/abs/1606.03490,The Mythos of Model Interpretability,['Zachary C. Lipton'],2016-06-10T21:28:47Z,arxiv,, 73555,https://arxiv.org/abs/1903.01003,Hacking Google reCAPTCHA v3 using Reinforcement Learning,"['Ismail Akrout', 'Amal Feriani', 'Mohamed Akrout']",2019-03-03T00:00:00Z,arxiv,, 73576,https://arxiv.org/abs/1905.02175,"Adversarial Examples Are Not Bugs, They Are Features","['Andrew Ilyas', 'Shibani Santurkar', 'Dimitris Tsipras', 'Logan Engstrom', 'Brandon Tran', 'Aleksander Madry']",2019-05-06T00:00:00Z,arxiv,, 73596,https://arxiv.org/abs/2007.04973,Contrastive Code Representation Learning.,"['Paras Jain', 'Ajay Jain', 'Tianjun Zhang', 'Pieter Abbeel', 'Joseph E', 'Gonzalez', 'Ion Stoica']",2021-08-14T00:00:00Z,arxiv,, 73608,https://arxiv.org/abs/2303.08112,Eliciting Latent Predictions from Transformers with the Tuned Lens,"['Nora Belrose', 'Zach Furman', 'Logan Smith', 'Danny Halawi', 'Igor Ostrovsky', 'Lev McKinney', 'Stella Biderman', 'Jacob Steinhardt']",2023-03-14T17:47:09Z,arxiv,, 73631,https://arxiv.org/abs/1810.00184,Stakeholders in Explainable AI,"['Alun Preece', 'Dan Harborne', 'Dave Braines', 'Richard Tomsett', 'Supriyo Chakraborty']",2018-09-29T00:00:00Z,arxiv,, 73664,https://arxiv.org/abs/1905.12888,Imitation Learning as $f$-Divergence Minimization,"['Liyiming Ke', 'Sanjiban Choudhury', 'Matt Barnes', 'Wen Sun', 'Gilwoo Lee', 'Siddhartha Srinivasa']",2019-05-30T00:00:00Z,arxiv,, 73683,https://arxiv.org/abs/1906.03218,Planning With Uncertain Specifications (PUnS),"['Ankit Shah', 'Shen Li', 'Julie Shah']",2019-06-07T00:00:00Z,arxiv,, 73698,https://arxiv.org/abs/1805.08462,Meta-Learning with Hessian-Free Approach in Deep Neural Nets Training,"['Boyu Chen', 'Wenlian Lu', 'Ernest Fokoue']",2018-05-22T00:00:00Z,arxiv,, 73715,https://arxiv.org/abs/2006.05133,Contestable Black Boxes,"['Andrea Aler Tubella', 'Andreas Theodorou', 'Virginia Dignum', 'Loizos Michael']",2020-06-09T00:00:00Z,arxiv,, 73738,https://arxiv.org/abs/1805.08915,A Psychopathological Approach to Safety Engineering in AI and AGI,"['Vahid Behzadan', 'Arslan Munir', 'Roman V. 
Yampolskiy']",2018-05-23T00:00:00Z,arxiv,, 73761,https://arxiv.org/abs/2209.00711,A Technique to Create Weaker Abstract Board Game Agents via Reinforcement Learning,"['Peter Jamieson', 'Indrima Upadhyay']",2022-09-01T00:00:00Z,arxiv,, 73784,https://arxiv.org/abs/1811.09656,Hierarchical visuomotor control of humanoids,"['Josh Merel', 'Arun Ahuja', 'Vu Pham', 'Saran Tunyasuvunakool', 'Siqi Liu', 'Dhruva Tirumala', 'Nicolas Heess', 'Greg Wayne']",2018-11-23T00:00:00Z,arxiv,, 73801,https://arxiv.org/abs/2203.06690,Algebraic Learning: Towards Interpretable Information Modeling,['Tong Owen Yang'],2022-03-13T00:00:00Z,arxiv,, 73823,https://arxiv.org/abs/2306.12001,An Overview of Catastrophic AI Risks,"['Dan Hendrycks', 'Mantas Mazeika', 'Thomas Woodside']",2023-06-21T00:00:00Z,arxiv,, 73853,https://arxiv.org/abs/1811.03516,Learning from Demonstration in the Wild,"['Feryal Behbahani', 'Kyriacos Shiarlis', 'Xi Chen', 'Vitaly Kurin', 'Sudhanshu Kasewa', 'Ciprian Stirbu', 'João Gomes', 'Supratik Paul', 'Frans A. Oliehoek', 'João Messias', 'Shimon Whiteson']",2018-11-08T00:00:00Z,arxiv,, 73876,https://arxiv.org/abs/1910.04281,Integrating Behavior Cloning and Reinforcement Learning for Improved Performance in Dense and Sparse Reward Environments,"['Vinicius G. Goecks', 'Gregory M. Gremillion', 'Vernon J. Lawhern', 'John Valasek', 'Nicholas R. Waytowich']",2019-10-09T00:00:00Z,arxiv,, 73899,https://arxiv.org/abs/2201.12462,Explaining Reinforcement Learning Policies through Counterfactual Trajectories,"['Julius Frost', 'Olivia Watkins', 'Eric Weiner', 'Pieter Abbeel', 'Trevor Darrell', 'Bryan Plummer', 'Kate Saenko']",2022-01-29T00:00:00Z,arxiv,, 73912,https://arxiv.org/abs/2303.13506,The Quantization Model of Neural Scaling,"['Eric J. Michaud', 'Ziming Liu', 'Uzay Girit', 'Max Tegmark']",2023-03-23T17:58:43Z,arxiv,, 73924,https://arxiv.org/abs/2210.07424,Autoregressive Uncertainty Modeling for 3D Bounding Box Prediction.,"['YuXuan (Andrew) Liu', 'Nikhil Mishra', 'Maximilian Sieb', 'Fred Shentu', 'Pieter Abbeel', 'Peter Chen']",2022-08-14T00:00:00Z,arxiv,, 73941,https://arxiv.org/abs/1909.05863,Finding Generalizable Evidence by Learning to Convince Q&A Models,"['Ethan Perez', 'Siddharth Karamcheti', 'Rob Fergus', 'Jason Weston', 'Douwe Kiela', 'Kyunghyun Cho']",2019-09-12T00:00:00Z,arxiv,, 73967,https://arxiv.org/abs/2011.08820,REALab: An Embedded Perspective on Tampering,"['Ramana Kumar', 'Jonathan Uesato', 'Richard Ngo', 'Tom Everitt', 'Victoria Krakovna', 'Shane Legg']",2020-11-17T00:00:00Z,arxiv,, 73985,https://arxiv.org/abs/2206.06091,Towards Autonomous Grading In The Real World,"['Yakov Miron', 'Chana Ross', 'Yuval Goldfracht', 'Chen Tessler', 'Dotan Di Castro']",2022-06-13T00:00:00Z,arxiv,, 74003,https://arxiv.org/abs/2206.08966,Actionable Guidance for High-Consequence AI Risk Management: Towards Standards Addressing AI Catastrophic Risks.,"['Anthony M', 'Barrett', 'Dan Hendrycks', 'Jessica Newman', 'Brandie Nonnecke']",2022-08-14T00:00:00Z,arxiv,, 74038,https://arxiv.org/abs/2210.01892,Polysemanticity and Capacity in Neural Networks,"['Authors: Adam Scherlis', 'Kshitij Sachan', 'Adam S. Jermyn', 'Joe Benton', 'Buck Shlegeris']",2022-10-04T00:00:00Z,arxiv,, 74055,https://arxiv.org/abs/1204.2601,Detecting lateral genetic material transfer,"['C. Calderón', 'L. Delaye', 'V. Mireles', 'P. 
Miramontes']",2012-04-12T00:00:00Z,arxiv,, 74071,https://arxiv.org/abs/2108.05382,Skill Preferences: Learning to Extract and Execute Robotic Skills from Human Feedback.,"['Xiaofei Wang', 'Kimin Lee', 'Kourosh Hakhamaneshi', 'Pieter Abbeel', 'Michael Laskin']",2021-08-14T00:00:00Z,arxiv,, 74093,https://arxiv.org/abs/1706.04972,Device Placement Optimization with Reinforcement Learning,"['Azalia Mirhoseini', 'Hieu Pham', 'Quoc V. Le', 'Benoit Steiner', 'Rasmus Larsen', 'Yuefeng Zhou', 'Naveen Kumar', 'Mohammad Norouzi', 'Samy Bengio', 'Jeff Dean']",2017-06-13T00:00:00Z,arxiv,, 74107,https://arxiv.org/abs/1812.01647,Rigorous Agent Evaluation: An Adversarial Approach to Uncover Catastrophic Failures,"['Jonathan Uesato', 'Ananya Kumar', 'Csaba Szepesvari', 'Tom Erez', 'Avraham Ruderman', 'Keith Anderson', 'Krishmamurthy', 'Dvijotham', 'Nicolas Heess', 'Pushmeet Kohli']",2018-12-04T00:00:00Z,arxiv,, 74129,https://arxiv.org/abs/2012.11705,Taking Principles Seriously: A Hybrid Approach to Value Alignment,"['Tae Wan Kim', 'John Hooker', 'Thomas Donaldson']",2020-12-21T00:00:00Z,arxiv,, 74153,https://arxiv.org/abs/2101.01857,Reinforcement Learning with Latent Flow.,"['Wenling Shang', 'Xiaofei Wang', 'Aravind Srinivas', 'Aravind Rajeswaran', 'Yang Gao', 'Pieter Abbeel', 'Michael Laskin']",2021-08-14T00:00:00Z,arxiv,, 74166,https://arxiv.org/abs/1508.05154,Posterior calibration and exploratory analysis for natural language processing models,['Khanh Nguyen'],2015-08-21T00:25:51Z,arxiv,, 74195,https://arxiv.org/abs/1507.01986,Toward Idealized Decision Theory,"['Nate Soares', 'Benja Fallenstein']",2015-07-07T00:00:00Z,arxiv,, 74217,https://arxiv.org/abs/1901.09960,Using Pre-Training Can Improve Model Robustness and Uncertainty,"['Dan Hendrycks', 'Kimin Lee', 'Mantas Mazeika']",2019-01-28T19:37:07Z,arxiv,, 74246,https://arxiv.org/abs/1706.06906,Expert and Non-Expert Opinion about Technological Unemployment,['Toby Walsh'],2017-06-21T00:00:00Z,arxiv,, 74267,https://arxiv.org/abs/2211.00241,Adversarial Policies Beat Professional-Level Go AIs,['Tony Tong Wang'],2022-11-01T03:13:20Z,arxiv,, 74295,https://arxiv.org/abs/2206.10553,Uncertainty Quantification for Competency Assessment of Autonomous Agents,"['Aastha Acharya', 'Rebecca Russell', 'Nisar R. Ahmed']",2022-06-21T00:00:00Z,arxiv,, 74308,https://arxiv.org/abs/2206.00364,Elucidating the Design Space of Diffusion-Based Generative Models,"['Tero Karras', 'Miika Aittala', 'Timo Aila', 'Samuli Laine']",2022-06-01T00:00:00Z,arxiv,, 74336,https://arxiv.org/abs/2008.09293,A Composable Specification Language for Reinforcement Learning Tasks,"['Kishor Jothimurugan', 'Rajeev Alur', 'Osbert Bastani']",2020-08-21T00:00:00Z,arxiv,, 74355,https://arxiv.org/abs/1906.03926,A Survey of Reinforcement Learning Informed by Natural Language,"['Jelena Luketina', 'Nantas Nardelli', 'Gregory Farquhar', 'Jakob Foerster', 'Jacob Andreas', 'Edward Grefenstette', 'Shimon Whiteson', 'Tim Rocktäschel']",2019-06-10T00:00:00Z,arxiv,, 74382,https://arxiv.org/abs/2006.04635,Learning to Play No-Press Diplomacy with Best Response Policy Iteration,"['Thomas Anthony', 'Tom Eccles', 'Andrea Tacchetti', 'János Kramár', 'Ian Gemp', 'Thomas C. 
Hudson', 'Nicolas Porcel', 'Marc Lanctot', 'Julien Pérolat', 'Richard Everett', 'Roman Werpachowski', 'Satinder Singh', 'Thore Graepel', 'Yoram Bachrach']",2020-06-08T00:00:00Z,arxiv,, 74402,https://arxiv.org/abs/1903.06151,Deep Reinforcement Learning with Feedback-based Exploration,"['Jan Scholten', 'Daan Wout', 'Carlos Celemin', 'Jens Kober']",2019-03-14T00:00:00Z,arxiv,, 74422,https://arxiv.org/abs/2204.10018,Path-Specific Objectives for Safer Agent Incentives,"['Sebastian Farquhar', 'Ryan Carey', 'Tom Everitt']",2022-04-21T00:00:00Z,arxiv,, 74441,https://arxiv.org/abs/1906.09453,Image Synthesis with a Single (Robust) Classifier,"['Shibani Santurkar', 'Dimitris Tsipras', 'Brandon Tran', 'Andrew Ilyas', 'Logan Engstrom', 'Aleksander Madry']",2019-06-06T00:00:00Z,arxiv,, 74465,https://arxiv.org/abs/2010.15920,Recovery RL: Safe Reinforcement Learning with Learned Recovery Zones,"['Brijen Thananjeyan', 'Ashwin Balakrishna', 'Suraj Nair', 'Michael Luo', 'Krishnan Srinivasan', 'Minho Hwang', 'Joseph E. Gonzalez', 'Julian Ibarz', 'Chelsea Finn', 'Ken Goldberg']",2020-10-29T00:00:00Z,arxiv,, 74489,https://arxiv.org/abs/1304.1516,Inference Policies,['Paul E. Lehner'],2013-03-27T00:00:00Z,arxiv,, 74515,https://arxiv.org/abs/1809.05630,Towards Better Interpretability in Deep Q-Networks,"['Raghuram Mandyam Annasamy', 'Katia Sycara']",2018-09-15T00:00:00Z,arxiv,, 74539,https://arxiv.org/abs/2107.09046,Playful Interactions for Representation Learning.,"['Sarah Young', 'Jyothish Pari', 'Pieter Abbeel', 'Lerrel Pinto']",2022-08-14T00:00:00Z,arxiv,, 74556,https://arxiv.org/abs/2010.10181,Robust Imitation Learning from Noisy Demonstrations,"['Voot Tangkaratt', 'Nontawat Charoenphakdee', 'Masashi Sugiyama']",2020-10-20T00:00:00Z,arxiv,, 74572,https://arxiv.org/abs/2205.01663,Adversarial Training for High-Stakes Reliability,"['Daniel M. Ziegler', 'Seraphina Nix', 'Lawrence Chan', 'Tim Bauman', 'Peter Schmidt-Nielsen', 'Tao Lin', 'Adam Scherlis', 'Noa Nabeshima', 'Ben Weinstein-Raun', 'Daniel de Haas', 'Buck Shlegeris', 'Nate Thomas']",2022-05-03T00:00:00Z,arxiv,, 74592,https://arxiv.org/abs/2205.04279,Aligned with Whom? Direct and social goals for AI systems,"['Anton Korinek', 'Avital Balwit']",2022-05-09T00:00:00Z,arxiv,, 74630,https://arxiv.org/abs/1909.06769,VILD: Variational Imitation Learning with Diverse-quality Demonstrations,"['Voot Tangkaratt', 'Bo Han', 'Mohammad Emtiyaz Khan', 'Masashi Sugiyama']",2019-09-15T00:00:00Z,arxiv,, 74643,https://arxiv.org/abs/2204.07049,Sim-to-Real 6D Object Pose Estimation via Iterative Self-training for Robotic Bin-picking.,"['Kai Chen', 'Rui Cao', 'Stephen James', 'Yichuan Li', 'Yun-Hui Liu', 'Pieter Abbeel', 'Qi Dou']",2022-08-14T00:00:00Z,arxiv,, 74665,https://arxiv.org/abs/2001.06528,Activism by the AI Community: Analysing Recent Achievements and Future Prospects,['Haydn Belfield'],2020-01-17T00:00:00Z,arxiv,, 74684,https://arxiv.org/abs/2006.14779,Does the Whole Exceed its Parts? The Effect of AI Explanations on Complementary Team Performance,"['Gagan Bansal', 'Tongshuang Wu', 'Joyce Zhou', 'Raymond Fok', 'Besmira Nushi', 'Ece Kamar', 'Marco Tulio Ribeiro', 'Daniel S. Weld']",2020-06-26T00:00:00Z,arxiv,, 74704,https://arxiv.org/abs/2107.10939,What are you optimizing for? 
74734,https://arxiv.org/abs/1912.13465,Reward-Conditioned Policies,"['Aviral Kumar', 'Xue Bin Peng', 'Sergey Levine']",2019-12-31T00:00:00Z,arxiv,,
74752,https://arxiv.org/abs/2210.10999,Task Phasing: Automated Curriculum Learning from Demonstrations,"['Vaibhav Bajaj', 'Guni Sharon', 'Peter Stone']",2022-10-20T00:00:00Z,arxiv,,
74769,https://arxiv.org/abs/1912.08786,Why we need an AI-resilient society,['Thomas Bartz-Beielstein'],2019-12-18T00:00:00Z,arxiv,,
74795,https://arxiv.org/abs/2006.09181,Formal Verification of End-to-End Learning in Cyber-Physical Systems: Progress and Challenges,"['Nathan Fulton', 'Nathan Hunt', 'Nghia Hoang', 'Subhro Das']",2020-06-15T00:00:00Z,arxiv,,
74821,https://arxiv.org/abs/1902.06162,Self-supervised Visual Feature Learning with Deep Neural Networks: A Survey,"['Longlong Jing', 'Yingli Tian']",2019-02-16T00:00:00Z,arxiv,,
74842,https://arxiv.org/abs/1610.07997,Artificial Intelligence Safety and Cybersecurity: a Timeline of AI Failures,"['Roman V. Yampolskiy', 'M. S. Spellchecker']",2016-10-25T00:00:00Z,arxiv,,
74872,https://arxiv.org/abs/2201.03544,The Effects of Reward Misspecification: Mapping and Mitigating Misaligned Models,"['Alexander Pan', 'Kush Bhatia', 'Jacob Steinhardt']",2022-01-10T00:00:00Z,arxiv,,
74898,https://arxiv.org/abs/2212.00169,Time-Efficient Reward Learning via Visually Assisted Cluster Ranking.,"['David Zhang', 'Micah Carroll', 'Andreea Bobu', 'Anca Dragan']",2022-08-14T00:00:00Z,arxiv,,
74920,https://arxiv.org/abs/1810.03548,Meta-Learning: A Survey,['Joaquin Vanschoren'],2018-10-08T00:00:00Z,arxiv,,
74943,https://arxiv.org/abs/1910.06636,Solving Logic Grid Puzzles with an Algorithm that Imitates Human Behavior,"['Guillaume Escamocher', ""Barry O'Sullivan""]",2019-10-15T00:00:00Z,arxiv,,
74954,https://arxiv.org/abs/1806.07552,Interpretable to Whom? A Role-based Model for Analyzing Interpretable Machine Learning Systems,"['Richard Tomsett', 'Dave Braines', 'Dan Harborne', 'Alun Preece', 'Supriyo Chakraborty']",2018-06-20T00:00:00Z,arxiv,,
74976,https://arxiv.org/abs/1812.02217,Truly Autonomous Machines Are Ethical,['John Hooker'],2018-12-05T00:00:00Z,arxiv,,
74999,https://arxiv.org/abs/2306.16388,Towards Measuring the Representation of Subjective Global Opinions in Language Models,"['Esin Durmus', 'Karina Nguyen', 'Thomas I. Liao', 'Nicholas Schiefer']",2023-06-28T17:31:53Z,arxiv,,
75025,https://arxiv.org/abs/2002.05282,A Bounded Measure for Estimating the Benefit of Visualization,"['Min Chen', 'Mateu Sbert', 'Alfie Abdul-Rahman', 'Deborah Silver']",2020-02-12T00:00:00Z,arxiv,,
75038,https://arxiv.org/abs/1310.4546,Distributed Representations of Words and Phrases and their Compositionality,"['Tomas Mikolov', 'Ilya Sutskever', 'Kai Chen', 'Greg Corrado', 'Jeffrey Dean']",2013-10-16T00:00:00Z,arxiv,,
75062,https://arxiv.org/abs/2010.09670,RobustBench: a standardized adversarial robustness benchmark,"['Francesco Croce', 'Maksym Andriushchenko', 'Vikash Sehwag', 'Nicolas Flammarion', 'Mung Chiang', 'Prateek Mittal', 'Matthias Hein']",2020-10-19T17:06:18Z,arxiv,,
75098,https://arxiv.org/abs/1804.04268,Incomplete Contracting and AI Alignment.,"['Dylan Hadfield-Menell', 'Gillian K. Hadfield']",2020-08-15T00:00:00Z,arxiv,,
75125,https://arxiv.org/abs/1812.05285,IRLAS: Inverse Reinforcement Learning for Architecture Search,"['Minghao Guo', 'Zhao Zhong', 'Wei Wu', 'Dahua Lin', 'Junjie Yan']",2018-12-13T00:00:00Z,arxiv,,
75145,https://arxiv.org/abs/1608.04644,"Towards Evaluating the Robustness of Neural Networks","['Nicholas Carlini', 'David Wagner']",2016-08-16T15:59:35Z,arxiv,,
75166,https://arxiv.org/abs/1311.2901,Visualizing and Understanding Convolutional Networks,"['Matthew D Zeiler', 'Rob Fergus']",2013-11-12T00:00:00Z,arxiv,,
75180,https://arxiv.org/abs/1909.01387,Making Efficient Use of Demonstrations to Solve Hard Exploration Problems,"['Tom Le Paine', 'Caglar Gulcehre', 'Bobak Shahriari', 'Misha Denil', 'Matt Hoffman', 'Hubert Soyer', 'Richard Tanburn', 'Steven Kapturowski', 'Neil Rabinowitz', 'Duncan Williams', 'Gabriel Barth-Maron', 'Ziyu Wang', 'Nando de Freitas', 'Worlds Team']",2019-09-03T00:00:00Z,arxiv,,
75202,https://arxiv.org/abs/1805.08313,Learning Safe Policies with Expert Guidance,"['Jessie Huang', 'Fa Wu', 'Doina Precup', 'Yang Cai']",2018-05-21T00:00:00Z,arxiv,,
75219,https://arxiv.org/abs/1806.01822,Relational recurrent neural networks,"['Adam Santoro', 'Ryan Faulkner', 'David Raposo', 'Jack Rae', 'Mike Chrzanowski', 'Theophane Weber', 'Daan Wierstra', 'Oriol Vinyals', 'Razvan Pascanu', 'Timothy Lillicrap']",2018-06-05T00:00:00Z,arxiv,,
75231,https://arxiv.org/abs/1303.1516,Constructing Lower Probabilities,"['Carl G. Wagner', 'Bruce Tonn']",2013-03-06T00:00:00Z,arxiv,,
75246,https://arxiv.org/abs/1706.06083,Towards Deep Learning Models Resistant to Adversarial Attacks,"['Aleksander Madry', 'Aleksandar Makelov', 'Ludwig Schmidt', 'Dimitris Tsipras', 'Adrian Vladu']",2017-06-19T17:53:11Z,arxiv,,
75265,https://arxiv.org/abs/2202.02540,Science Facing Interoperability as a Necessary Condition of Success and Evil,['Remy Demichelis'],2022-02-05T00:00:00Z,arxiv,,
75285,https://arxiv.org/abs/2112.09332,WebGPT: Browser-assisted question-answering with human feedback,"['Reiichiro Nakano', 'Jacob Hilton', 'Suchir Balaji', 'Jeff Wu', 'Long Ouyang', 'Christina Kim', 'Christopher Hesse', 'Shantanu Jain', 'Vineet Kosaraju', 'William Saunders', 'Xu Jiang', 'Karl Cobbe', 'Tyna Eloundou', 'Gretchen Krueger', 'Kevin Button', 'Matthew Knight', 'Benjamin Chess', 'John Schulman']",2021-12-17T00:00:00Z,arxiv,,
75321,https://arxiv.org/abs/2111.01705,AI Ethics Statements -- Analysis and lessons learnt from NeurIPS Broader Impact Statements,"['Carolyn Ashurst', 'Emmie Hine', 'Paul Sedille', 'Alexis Carlier']",2021-11-02T00:00:00Z,arxiv,,
75338,https://arxiv.org/abs/2206.11812,Formalizing the Problem of Side Effect Regularization,"['Alexander Matt Turner', 'Aseem Saxena', 'Prasad Tadepalli']",2022-06-23T00:00:00Z,arxiv,,
75351,https://arxiv.org/abs/1409.0813,Friendly Artificial Intelligence: the Physics Challenge,['Max Tegmark'],2014-09-02T00:00:00Z,arxiv,,
75367,https://arxiv.org/abs/2010.07877,Avoiding Side Effects By Considering Future Tasks,"['Victoria Krakovna', 'Laurent Orseau', 'Richard Ngo', 'Miljan Martic', 'Shane Legg']",2020-10-15T00:00:00Z,arxiv,,
75388,https://arxiv.org/abs/1805.07470,Solving the Rubik's Cube Without Human Knowledge,"['Stephen McAleer', 'Forest Agostinelli', 'Alexander Shmakov', 'Pierre Baldi']",2018-05-18T00:00:00Z,arxiv,,
75412,https://arxiv.org/abs/1708.06733,BadNets: Identifying Vulnerabilities in the Machine Learning Model Supply Chain,"['Tianyu Gu', 'Brendan Dolan-Gavitt', 'Siddharth Garg']",2017-08-22T17:31:54Z,arxiv,,
75435,https://arxiv.org/abs/1601.06569,Towards Resolving Unidentifiability in Inverse Reinforcement Learning,['Kareem Amin'],2016-01-25T11:50:43Z,arxiv,,
75455,https://arxiv.org/abs/1506.02142,"Dropout as a Bayesian Approximation: Representing Model Uncertainty in Deep Learning","['Yarin Gal', 'Zoubin Ghahramani']",2015-06-06T12:30:43Z,arxiv,,
75475,https://arxiv.org/abs/1910.10897,Meta-World: A Benchmark and Evaluation for Multi-Task and Meta Reinforcement Learning,"['Tianhe Yu', 'Deirdre Quillen', 'Zhanpeng He', 'Ryan Julian', 'Avnish Narayan', 'Hayden Shively', 'Adithya Bellathur', 'Karol Hausman', 'Chelsea Finn', 'Sergey Levine']",2019-10-24T00:00:00Z,arxiv,,
75489,https://arxiv.org/abs/0906.4321,Reasoning About Knowledge of Unawareness Revisited,"['Joseph Y. Halpern', 'Leandro Rego']",2009-06-23T00:00:00Z,arxiv,,
75504,https://arxiv.org/abs/2209.13046,Understanding Hindsight Goal Relabeling from a Divergence Minimization Perspective,"['Lunjun Zhang', 'Bradly C. Stadie']",2022-09-26T00:00:00Z,arxiv,,
75521,https://arxiv.org/abs/1912.01217,SafeLife 1.0: Exploring Side Effects in Complex Environments,"['Carroll L. Wainwright', 'Peter Eckersley']",2019-12-03T00:00:00Z,arxiv,,
75538,https://arxiv.org/abs/2001.10208,Towards Learning Multi-agent Negotiations via Self-Play,['Yichuan Charlie Tang'],2020-01-28T00:00:00Z,arxiv,,
75547,https://arxiv.org/abs/1703.06856,Counterfactual Fairness,"['Matt J. Kusner', 'Joshua R. Loftus', 'Chris Russell', 'Ricardo Silva']",2017-03-20T00:00:00Z,arxiv,,
75569,https://arxiv.org/abs/1906.09136,Categorizing Wireheading in Partially Embedded Agents,"['Arushi Majha', 'Sayan Sarkar', 'Davide Zagami']",2019-06-21T00:00:00Z,arxiv,,
75590,https://arxiv.org/abs/2002.01059,Preventing Imitation Learning with Adversarial Policy Ensembles,"['Albert Zhan', 'Stas Tiomkin', 'Pieter Abbeel']",2020-01-31T00:00:00Z,arxiv,,
75603,https://arxiv.org/abs/1808.01976,Adversarial Vision Challenge,"['Wieland Brendel', 'Jonas Rauber', 'Alexey Kurakin', 'Nicolas Papernot', 'Behar Veliqi', 'Marcel Salathé', 'Sharada P. Mohanty', 'Matthias Bethge']",2018-08-06T00:00:00Z,arxiv,,
75629,https://arxiv.org/abs/1903.03171,Challenges for an Ontology of Artificial Intelligence,['Scott H. Hawley'],2019-02-25T00:00:00Z,arxiv,,
75656,https://arxiv.org/abs/1810.03292,Sanity Checks for Saliency Maps,"['Julius Adebayo', 'Justin Gilmer', 'Michael Muelly', 'Ian Goodfellow', 'Moritz Hardt', 'Been Kim']",2018-10-08T00:00:00Z,arxiv,,
75674,https://arxiv.org/abs/1811.08549,Reinforcement Learning and Inverse Reinforcement Learning with System 1 and System 2,['Alexander Peysakhovich'],2018-11-19T00:00:00Z,arxiv,,
75689,https://arxiv.org/abs/2103.06312,The AI Index 2021 Annual Report,"['Daniel Zhang', 'Saurabh Mishra', 'Erik Brynjolfsson', 'John Etchemendy', 'Deep Ganguli', 'Barbara Grosz', 'Terah Lyons', 'James Manyika', 'Juan Carlos Niebles', 'Michael Sellitto', 'Yoav Shoham', 'Jack Clark', 'Raymond Perrault']",2021-03-09T00:00:00Z,arxiv,,
75726,https://arxiv.org/abs/2009.03300,Measuring Massive Multitask Language Understanding.,"['Dan Hendrycks', 'Collin Burns', 'Steven Basart', 'Andy Zou', 'Mantas Mazeika', 'Dawn Song', 'Jacob Steinhardt']",2020-08-14T00:00:00Z,arxiv,,
75747,https://arxiv.org/abs/1802.01421,First-order Adversarial Vulnerability of Neural Networks and Input Dimension,"['Carl-Johann Simon-Gabriel', 'Yann Ollivier', 'Léon Bottou', 'Bernhard Schölkopf', 'David Lopez-Paz']",2018-02-05T00:00:00Z,arxiv,,
75780,https://arxiv.org/abs/1902.06787,Regularizing Black-box Models for Improved Interpretability,"['Gregory Plumb', 'Maruan Al-Shedivat', 'Angel Alexander Cabrera', 'Adam Perer', 'Eric Xing', 'Ameet Talwalkar']",2019-02-18T00:00:00Z,arxiv,,
75800,https://arxiv.org/abs/2106.01345,Decision Transformer: Reinforcement Learning via Sequence Modeling.,"['Lili Chen', 'Kevin Lu', 'Aravind Rajeswaran', 'Kimin Lee', 'Aditya Grover', 'Michael Laskin', 'Pieter Abbeel', 'Aravind Srinivas', 'Igor Mordatch']",2021-08-15T00:00:00Z,arxiv,,
75823,https://arxiv.org/abs/2206.05862,X-Risk Analysis for AI Research.,"['Dan Hendrycks', 'Mantas Mazeika']",2022-08-14T00:00:00Z,arxiv,,
75844,https://arxiv.org/abs/2002.11089,Rewriting History with Inverse RL: Hindsight Inference for Policy Improvement,"['Benjamin Eysenbach', 'Xinyang Geng', 'Sergey Levine', 'Ruslan Salakhutdinov']",2020-02-25T00:00:00Z,arxiv,,
75858,https://arxiv.org/abs/2010.11929,An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale,"['Alexey Dosovitskiy', 'Lucas Beyer', 'Alexander Kolesnikov', 'Dirk Weissenborn', 'Xiaohua Zhai', 'Thomas Unterthiner', 'Mostafa Dehghani', 'Matthias Minderer', 'Georg Heigold', 'Sylvain Gelly', 'Jakob Uszkoreit', 'Neil Houlsby']",2020-10-22T00:00:00Z,arxiv,,
75889,https://arxiv.org/abs/2004.12265,Causal Mediation Analysis for Interpreting Neural NLP: The Case of Gender Bias,['Jesse Vig'],2020-04-26T01:53:03Z,arxiv,,
75901,https://arxiv.org/abs/2210.14721,Sim-to-Real via Sim-to-Seg: End-to-end Off-road Autonomous Driving Without Real Data.,"['John So*', 'Amber Xie*', 'Jeffrey Edlund', 'Rohan Thakker', 'Sunggoo Jung', 'Ali-akbar Agha-mohammadi', 'Pieter Abbeel', 'Stephen James']",2022-08-14T00:00:00Z,arxiv,,
75924,https://arxiv.org/abs/2105.00884,RL-IoT: Reinforcement Learning to Interact with IoT Devices,"['Giulia Milan', 'Luca Vassio', 'Idilio Drago', 'Marco Mellia']",2021-05-03T00:00:00Z,arxiv,,
75936,https://arxiv.org/abs/1902.09592,Verification of Non-Linear Specifications for Neural Networks,"['Chongli Qin', 'Krishnamurthy Dvijotham', ""Brendan O'Donoghue"", 'Rudy Bunel', 'Robert Stanforth', 'Sven Gowal', 'Jonathan Uesato', 'Grzegorz Swirszcz', 'Pushmeet Kohli']",2019-02-25T00:00:00Z,arxiv,,
75951,https://arxiv.org/abs/2201.00762,Execute Order 66: Targeted Data Poisoning for Reinforcement Learning,"['Harrison Foley', 'Liam Fowl', 'Tom Goldstein', 'Gavin Taylor']",2022-01-03T00:00:00Z,arxiv,,
75961,https://arxiv.org/abs/1312.5713,Giving the AI definition a form suitable for the engineer,['Dimiter Dobrev'],2013-12-19T00:00:00Z,arxiv,,
75998,https://arxiv.org/abs/1804.04241,Capsules for Object Segmentation,"['Rodney LaLonde', 'Ulas Bagci']",2018-04-11T00:00:00Z,arxiv,,
76016,https://arxiv.org/abs/1906.04358,Weight Agnostic Neural Networks,"['Adam Gaier', 'David Ha']",2019-06-11T00:00:00Z,arxiv,,
76034,https://arxiv.org/abs/2111.04158,A Word on Machine Ethics: A Response to Jiang et al. (2021),"['Zeerak Talat', 'Hagen Blix', 'Josef Valvoda', 'Maya Indira Ganesh', 'Ryan Cotterell', 'Adina Williams']",2021-11-07T00:00:00Z,arxiv,,
76056,https://arxiv.org/abs/1805.08974,Do Better ImageNet Models Transfer Better?,"['Simon Kornblith', 'Jonathon Shlens', 'Quoc V. Le']",2018-05-23T00:00:00Z,arxiv,,
76078,https://arxiv.org/abs/2106.06009,Synthesising Reinforcement Learning Policies through Set-Valued Inductive Rule Learning,"['Youri Coppens', 'Denis Steckelmacher', 'Catholijn M. Jonker', 'Ann Nowé']",2021-06-10T00:00:00Z,arxiv,,
76092,https://arxiv.org/abs/2004.13102,Is the Most Accurate AI the Best Teammate? Optimizing AI for Teamwork,"['Gagan Bansal', 'Besmira Nushi', 'Ece Kamar', 'Eric Horvitz', 'Daniel S. Weld']",2020-04-27T00:00:00Z,arxiv,,
76111,https://arxiv.org/abs/1705.05254,Strategically knowing how,"['Raul Fervari', 'Andreas Herzig', 'Yanjun Li', 'Yanjing Wang']",2017-05-15T00:00:00Z,arxiv,,
76127,https://arxiv.org/abs/2106.10394,Uncertain Decisions Facilitate Better Preference Learning,"['Cassidy Laidlaw', 'Stuart Russell']",2021-06-19T00:00:00Z,arxiv,,
76147,https://arxiv.org/abs/2102.06911,Modelling Cooperation in Network Games with Spatio-Temporal Complexity,"['Michiel A. Bakker', 'Richard Everett', 'Laura Weidinger', 'Iason Gabriel', 'William S. Isaac', 'Joel Z. Leibo', 'Edward Hughes']",2021-02-13T00:00:00Z,arxiv,,
76165,https://arxiv.org/abs/2001.00496,Uncertainty-Based Out-of-Distribution Classification in Deep Reinforcement Learning,"['Andreas Sedlmeier', 'Thomas Gabor', 'Thomy Phan', 'Lenz Belzner', 'Claudia Linnhoff-Popien']",2019-12-31T00:00:00Z,arxiv,,
76179,https://arxiv.org/abs/1602.04290,Designing Intelligent Instruments,"['Kevin H. Knuth', 'Philip M. Erner', 'Scott Frasso']",2016-02-13T00:00:00Z,arxiv,,
76193,https://arxiv.org/abs/1105.3821,Ontological Crises in Artificial Agents' Value Systems,['Peter de Blanc'],2011-05-19T00:00:00Z,arxiv,,
76202,https://arxiv.org/abs/1811.04251,Formal Limitations on the Measurement of Mutual Information,"['David McAllester', 'Karl Stratos']",2018-11-10T00:00:00Z,arxiv,,
76216,https://arxiv.org/abs/2111.13872,Normative Disagreement as a Challenge for Cooperative AI,"['Julian Stastny', 'Maxime Riché', 'Alexander Lyzhov', 'Johannes Treutlein', 'Allan Dafoe', 'Jesse Clifton']",2021-11-27T00:00:00Z,arxiv,,
76230,https://arxiv.org/abs/2108.02818,Evaluating CLIP: Towards Characterization of Broader Capabilities and Downstream Implications,"['Sandhini Agarwal', 'Gretchen Krueger', 'Jack Clark', 'Alec Radford', 'Jong Wook Kim', 'Miles Brundage']",2021-08-05T00:00:00Z,arxiv,,
76259,https://arxiv.org/abs/1606.02447,Learning Language Games through Interaction,"['Sida I. Wang', 'Percy Liang', 'Christopher D. Manning']",2016-06-08T08:27:09Z,arxiv,,
76281,https://arxiv.org/abs/1806.06877,"A Survey of Inverse Reinforcement Learning: Challenges, Methods and Progress","['Saurabh Arora', 'Prashant Doshi']",2018-06-18T00:00:00Z,arxiv,,
76314,https://arxiv.org/abs/1911.05722,Momentum Contrast for Unsupervised Visual Representation Learning,"['Kaiming He', 'Haoqi Fan', 'Yuxin Wu', 'Saining Xie', 'Ross Girshick']",2019-11-13T00:00:00Z,arxiv,,
76332,https://arxiv.org/abs/1802.05250,Generating Plans that Predict Themselves.,"['Jaime F. Fisac', 'Chang Liu', 'Jessica B. Hamrick', 'S. Shankar Sastry', 'J. Karl Hedrick', 'Thomas L. Griffiths', 'Anca D. Dragan']",2016-08-14T00:00:00Z,arxiv,,
76349,https://arxiv.org/abs/2110.08176,Collaborating with Humans without Human Data,"['DJ Strouse', 'Kevin R. McKee', 'Matt Botvinick', 'Edward Hughes', 'Richard Everett']",2021-10-15T00:00:00Z,arxiv,,
76362,https://arxiv.org/abs/2206.04436,Towards Safe Reinforcement Learning via Constraining Conditional Value-at-Risk,"['Chengyang Ying', 'Xinning Zhou', 'Hang Su', 'Dong Yan', 'Ning Chen', 'Jun Zhu']",2022-06-09T00:00:00Z,arxiv,,
76384,https://arxiv.org/abs/1907.04534,The Role of Cooperation in Responsible AI Development,"['Amanda Askell', 'Miles Brundage', 'Gillian Hadfield']",2019-07-10T00:00:00Z,arxiv,,
76403,https://arxiv.org/abs/2003.04881,Pruned Neural Networks are Surprisingly Modular,"['Daniel Filan', 'Shlomi Hod', 'Cody Wild', 'Andrew Critch', 'Stuart Russell']",2020-03-10T00:00:00Z,arxiv,,
76419,https://arxiv.org/abs/1912.11095,Defining AI in Policy versus Practice,"['P. M. Krafft', 'Meg Young', 'Michael Katell', 'Karen Huang', 'Ghislain Bugingo']",2019-12-23T00:00:00Z,arxiv,,
76430,https://arxiv.org/abs/2204.08324,Hierarchical Optimal Transport for Comparing Histopathology Datasets,"['Anna Yeaton', 'Rahul G. Krishnan', 'Rebecca Mieloszyk', 'David Alvarez-Melis', 'Grace Huynh']",2022-04-18T00:00:00Z,arxiv,,
76444,https://arxiv.org/abs/1810.06284,CURIOUS: Intrinsically Motivated Modular Multi-Goal Reinforcement Learning,"['Cédric Colas', 'Pierre Fournier', 'Olivier Sigaud', 'Mohamed Chetouani', 'Pierre-Yves Oudeyer']",2018-10-15T00:00:00Z,arxiv,,
76465,https://arxiv.org/abs/2206.00259,IDANI: Inference-time Domain Adaptation via Neuron-level Interventions,"['Omer Antverg', 'Eyal Ben-David', 'Yonatan Belinkov']",2022-06-01T00:00:00Z,arxiv,,
76479,https://arxiv.org/abs/2110.15191,URLB: Unsupervised Reinforcement Learning Benchmark.,"['Michael Laskin', 'Denis Yarats', 'Hao Liu', 'Kimin Lee', 'Albert Zhan', 'Kevin Lu', 'Catherine Cang', 'Lerrel Pinto', 'Pieter Abbeel']",2021-08-14T00:00:00Z,arxiv,,
76501,https://arxiv.org/abs/1903.12261,Benchmarking Neural Network Robustness to Common Corruptions and Perturbations.,"['Dan Hendrycks', 'Thomas Dietterich']",2019-08-14T00:00:00Z,arxiv,,
76532,https://arxiv.org/abs/1809.00092,Cost Functions for Robot Motion Style.,"['Allan Zhou', 'Anca D. Dragan']",2018-08-14T00:00:00Z,arxiv,,
76547,https://arxiv.org/abs/1910.05789,On the Utility of Learning about Humans for Human-AI Coordination,"['Micah Carroll', 'Rohin Shah', 'Mark K. Ho', 'Thomas L. Griffiths', 'Sanjit A. Seshia', 'Pieter Abbeel', 'Anca Dragan']",2019-10-13T00:00:00Z,arxiv,,
76564,https://arxiv.org/abs/2306.09479,Inverse Scaling: When Bigger Isn’t Better,['Ian R. McKenzie'],2023-06-15T20:11:23Z,arxiv,,
76594,https://arxiv.org/abs/1603.04068,A Signaling Game Approach to Databases Querying and Interaction,"['Ben McCamish', 'Vahid Ghadakchi', 'Arash Termehchy', 'Behrouz Touri']",2016-03-13T00:00:00Z,arxiv,,
76612,https://arxiv.org/abs/2006.12136,Safe Reinforcement Learning via Curriculum Induction,"['Matteo Turchetta', 'Andrey Kolobov', 'Shital Shah', 'Andreas Krause', 'Alekh Agarwal']",2020-06-22T00:00:00Z,arxiv,,
76628,https://arxiv.org/abs/2304.03279,"Do the Rewards Justify the Means? Measuring Trade-Offs Between Rewards and Ethical Behavior in the Machiavelli Benchmark","['Alexander Pan', 'Chan Jun Shern', 'Andy Zou', 'Nathaniel Li', 'Steven Basart', 'Thomas Woodside', 'Jonathan Ng', 'Hanlin Zhang', 'Scott Emmons', 'Dan Hendrycks']",2023-04-06T17:59:03Z,arxiv,,
76647,https://arxiv.org/abs/2109.14004,Joint Communication and Motion Planning for Cobots.,"['Mehdi Dadvar', 'Keyvan Majd', 'Elena Oikonomou', 'Georgios Fainekos', 'Siddharth Srivastava']",2022-08-14T00:00:00Z,arxiv,,
76671,https://arxiv.org/abs/2010.05150,Safe Reinforcement Learning with Natural Language Constraints,"['Tsung-Yen Yang', 'Michael Hu', 'Yinlam Chow', 'Peter J. Ramadge', 'Karthik Narasimhan']",2020-10-11T00:00:00Z,arxiv,,
76687,https://arxiv.org/abs/1705.04226,Robot Planning with Mathematical Models of Human State and Action,['Anca D. Dragan'],2017-05-11T00:00:00Z,arxiv,,
76723,https://arxiv.org/abs/2109.11513,Temporal Inference with Finite Factored Sets,['Scott Garrabrant'],2021-09-23T00:00:00Z,arxiv,,
76737,https://arxiv.org/abs/1709.08071,Autonomous Agents Modelling Other Agents: A Comprehensive Survey and Open Problems,"['Stefano V. Albrecht', 'Peter Stone']",2017-09-23T00:00:00Z,arxiv,,
76788,https://arxiv.org/abs/2001.08016,Subjective Knowledge and Reasoning about Agents in Multi-Agent Systems,"['Shikha Singh', 'Deepak Khemani']",2020-01-22T00:00:00Z,arxiv,,
76804,https://arxiv.org/abs/2002.06177,The Next Decade in AI: Four Steps Towards Robust Artificial Intelligence,['Gary Marcus'],2020-02-14T00:00:00Z,arxiv,,
76845,https://arxiv.org/abs/2003.04080,Two Decades of AI4NETS-AI/ML for Data Networks: Challenges & Research Directions,['Pedro Casas'],2020-03-03T00:00:00Z,arxiv,,
76881,https://arxiv.org/abs/1903.01267,Using Causal Analysis to Learn Specifications from Task Demonstrations,"['Daniel Angelov', 'Yordan Hristov', 'Subramanian Ramamoorthy']",2019-03-04T00:00:00Z,arxiv,,
76900,https://arxiv.org/abs/1904.12690,Capturing human categorization of natural images at scale by combining deep networks and cognitive models.,"['Ruairidh M. Battleday', 'Joshua C. Peterson', 'Thomas L. Griffiths']",2019-08-14T00:00:00Z,arxiv,,
76920,https://arxiv.org/abs/2203.00904,Weakly Supervised Correspondence Learning.,"['Zihan Wang*', 'Zhangjie Cao*', 'Yilun Hao', 'Dorsa Sadigh']",2022-08-14T00:00:00Z,arxiv,,
76936,https://arxiv.org/abs/1804.10692,Reward Learning from Narrated Demonstrations,"['Hsiao-Yu Fish Tung', 'Adam W. Harley', 'Liang-Kang Huang', 'Katerina Fragkiadaki']",2018-04-27T00:00:00Z,arxiv,,
76961,https://arxiv.org/abs/2304.14997,"Towards Automated Circuit Discovery for Mechanistic Interpretability",['Arthur Conmy'],2023-04-28T17:36:53Z,arxiv,,
76980,https://arxiv.org/abs/1801.02612,Spatially Transformed Adversarial Examples,"['Chaowei Xiao', 'Jun-Yan Zhu', 'Bo Li', 'Warren He', 'Mingyan Liu', 'Dawn Song']",2018-01-08T00:00:00Z,arxiv,,
76993,https://arxiv.org/abs/2207.05058,Inferring and Conveying Intentionality: Beyond Numerical Rewards to Logical Intentions,"['Susmit Jha', 'John Rushby']",2022-07-06T00:00:00Z,arxiv,,
77011,https://arxiv.org/abs/2112.07013,PantheonRL: A MARL Library for Dynamic Training Interactions.,"['Bidipta Sarkar*', 'Aditi Talati*', 'Andy Shih*', 'Dorsa Sadigh']",2022-08-14T00:00:00Z,arxiv,,
77025,https://arxiv.org/abs/2009.00802,Estimating the Brittleness of AI: Safety Integrity Levels and the Need for Testing Out-Of-Distribution Performance,['Andrew J. Lohn'],2020-09-02T00:00:00Z,arxiv,,
77052,https://arxiv.org/abs/2111.11212,Finding Useful Predictions by Meta-gradient Descent to Improve Decision-making,"['Alex Kearney', 'Anna Koop', 'Johannes Günther', 'Patrick M. Pilarski']",2021-11-18T00:00:00Z,arxiv,,
Pilarski']",2021-11-18T00:00:00Z,arxiv,, 77062,https://arxiv.org/abs/2111.08156,Improving Learning from Demonstrations by Learning from Experience,"['Haofeng Liu', 'Yiwen Chen', 'Jiayi Tan', 'Marcelo H Ang Jr']",2021-11-16T00:00:00Z,arxiv,, 77083,https://arxiv.org/abs/1810.06721,Optimizing Agent Behavior over Long Time Scales by Transporting Value,"['Chia-Chun Hung', 'Timothy Lillicrap', 'Josh Abramson', 'Yan Wu', 'Mehdi Mirza', 'Federico Carnevale', 'Arun Ahuja', 'Greg Wayne']",2018-10-15T00:00:00Z,arxiv,, 77102,https://arxiv.org/abs/1901.08579,Forecasting Transformative AI: An Expert Survey,"['Ross Gruetzemacher', 'David Paradice', 'Kang Bok Lee']",2019-01-24T00:00:00Z,arxiv,, 77134,https://arxiv.org/abs/1907.07174,Natural Adversarial Examples.,"['Dan Hendrycks', 'Kevin Zhao', 'Steven Basart', 'Jacob Steinhardt', 'Dawn Song']",2019-08-14T00:00:00Z,arxiv,, 77160,https://arxiv.org/abs/2102.07017,Mitigating Negative Side Effects via Environment Shaping,"['Sandhya Saisubramanian', 'Shlomo Zilberstein']",2021-02-13T00:00:00Z,arxiv,, 77177,https://arxiv.org/abs/1811.03571,Intrinsic Geometric Vulnerability of High-Dimensional Artificial Intelligence,"['Luca Bortolussi', 'Guido Sanguinetti']",2018-11-08T00:00:00Z,arxiv,, 77199,https://arxiv.org/abs/2206.14244,Masked World Models for Visual Control.,"['Younggyo Seo', 'Danijar Hafner', 'Hao Liu', 'Fangchen Liu', 'Stephen James', 'Kimin Lee', 'Pieter Abbeel']",2022-08-14T00:00:00Z,arxiv,, 77214,https://arxiv.org/abs/1603.00448,Guided Cost Learning: Deep Inverse Optimal Control via Policy Optimization,"['Chelsea Finn', 'Sergey Levine', 'Pieter Abbeel']",2016-03-01T20:35:56Z,arxiv,, 77236,https://arxiv.org/abs/2202.00161,CIC: Contrastive Intrinsic Control for Unsupervised Skill Discovery,"['Michael Laskin', 'Hao Liu', 'Xue Bin Peng', 'Denis Yarats', 'Aravind Rajeswaran', 'Pieter Abbeel']",2022-02-01T00:00:00Z,arxiv,, 77251,https://arxiv.org/abs/2006.14032,Compositional Explanations of Neurons,"['Jesse Mu', 'Jacob Andreas']",2020-06-24T00:00:00Z,arxiv,, 77262,https://arxiv.org/abs/1802.07740,Machine Theory of Mind,"['Neil C. Rabinowitz', 'Frank Perbet', 'H. Francis Song', 'Chiyuan Zhang', 'S. M. Ali Eslami', 'Matthew Botvinick']",2018-02-21T00:00:00Z,arxiv,, 77284,https://arxiv.org/abs/2005.13601,"The Adversarial Resilience Learning Architecture for AI-based Modelling, Exploration, and Operation of Complex Cyber-Physical Systems","['Eric MSP Veith', 'Nils Wenninghoff', 'Emilie Frost']",2020-05-27T00:00:00Z,arxiv,, 77301,https://arxiv.org/abs/2102.10646,A Game-Theoretic Approach for Hierarchical Epidemic Control,"['Feiran Jia', 'Aditya Mate', 'Zun Li', 'Shahin Jabbari', 'Mithun Chakraborty', 'Milind Tambe', 'Michael Wellman', 'Yevgeniy Vorobeychik']",2021-02-21T00:00:00Z,arxiv,, 77327,https://arxiv.org/abs/1709.01547,Knowledge Transfer Between Artificial Intelligence Systems,"['Ivan Y. Tyukin', 'Alexander N. 
Gorban', 'Konstantin Sofeikov', 'Ilya Romanenko']",2017-09-05T00:00:00Z,arxiv,, 77346,https://arxiv.org/abs/2103.12021,Bridging Offline Reinforcement Learning and Imitation Learning: A Tale of Pessimism,"['Paria Rashidinejad', 'Banghua Zhu', 'Cong Ma', 'Jiantao Jiao', 'Stuart Russell']",2021-03-22T00:00:00Z,arxiv,, 77363,https://arxiv.org/abs/2108.04219,Pragmatic Image Compression for Human-in-the-Loop Decision-Making.,"['Siddharth Reddy', 'Anca D', 'Dragan', 'Sergey Levine']",2021-08-14T00:00:00Z,arxiv,, 77375,https://arxiv.org/abs/1409.0473,Neural Machine Translation by Jointly Learning to Align and Translate,"['Dzmitry Bahdanau', 'Kyunghyun Cho', 'Yoshua Bengio']",2014-09-01T00:00:00Z,arxiv,, 77394,https://arxiv.org/abs/1912.05671,Linear Mode Connectivity and the Lottery Ticket Hypothesis,"['Jonathan Frankle', 'Gintare Karolina Dziugaite', 'Daniel M. Roy', 'Michael Carbin']",2019-12-11T00:00:00Z,arxiv,, 77415,https://arxiv.org/abs/1806.03820,"An Efficient, Generalized Bellman Update For Cooperative Inverse Reinforcement Learning","['Dhruv Malik', 'Malayandi Palaniappan', 'Jaime F. Fisac', 'Dylan Hadfield-Menell', 'Stuart Russell', 'Anca D. Dragan']",2018-06-11T00:00:00Z,arxiv,, 77438,https://arxiv.org/abs/1903.01021,A Strongly Asymptotically Optimal Agent in General Environments,"['Michael K. Cohen', 'Elliot Catt', 'Marcus Hutter']",2019-03-04T00:00:00Z,arxiv,, 77462,https://arxiv.org/abs/1908.08016,Testing Robustness Against Unforeseen Adversaries,"['Daniel Kang', 'Yi Sun', 'Dan Hendrycks', 'Tom Brown', 'Jacob Steinhardt']",2019-08-21T00:00:00Z,arxiv,, 77479,https://arxiv.org/abs/1506.07359,Sequential Extensions of Causal and Evidential Decision Theory,"['Tom Everitt', 'Jan Leike', 'Marcus Hutter']",2015-06-24T00:00:00Z,arxiv,, 77499,https://arxiv.org/abs/2201.07207,Language Models as Zero-Shot Planners: Extracting Actionable Knowledge for Embodied Agents.,"['Wenlong Huang', 'Pieter Abbeel', 'Deepak Pathak', 'Igor Mordatch']",2022-08-14T00:00:00Z,arxiv,, 77521,https://arxiv.org/abs/2206.02628,HYCEDIS: HYbrid Confidence Engine for Deep Document Intelligence System,"['Bao-Sinh Nguyen', 'Quang-Bach Tran', 'Tuan-Anh Nguyen Dang', 'Duc Nguyen', 'Hung Le']",2022-06-01T00:00:00Z,arxiv,, 77538,https://arxiv.org/abs/2103.06076,"Designing Disaggregated Evaluations of AI Systems: Choices, Considerations, and Tradeoffs","['Solon Barocas', 'Anhong Guo', 'Ece Kamar', 'Jacquelyn Krones', 'Meredith Ringel Morris', 'Jennifer Wortman Vaughan', 'Duncan Wadsworth', 'Hanna Wallach']",2021-03-10T00:00:00Z,arxiv,, 77567,https://arxiv.org/abs/2101.11832,Making Responsible AI the Norm rather than the Exception,['Abhishek Gupta'],2021-01-28T00:00:00Z,arxiv,, 77607,https://arxiv.org/abs/1710.05060,Functional Decision Theory: A New Theory of Instrumental Rationality,"['Eliezer Yudkowsky', 'Nate Soares']",2017-10-13T00:00:00Z,arxiv,, 77617,https://arxiv.org/abs/2206.08364,Interaction-Grounded Learning with Action-inclusive Feedback,"['Tengyang Xie', 'Akanksha Saran', 'Dylan J. Foster', 'Lekan Molu', 'Ida Momennejad', 'Nan Jiang', 'Paul Mineiro', 'John Langford']",2022-06-16T00:00:00Z,arxiv,, 77632,https://arxiv.org/abs/1912.05500,What Can Learned Intrinsic Rewards Capture?,"['Zeyu Zheng', 'Junhyuk Oh', 'Matteo Hessel', 'Zhongwen Xu', 'Manuel Kroiss', 'Hado van Hasselt', 'David Silver', 'Satinder Singh']",2019-12-11T00:00:00Z,arxiv,, 77647,https://arxiv.org/abs/1903.00742,Autocurricula and the Emergence of Innovation from Social Interaction: A Manifesto for Multi-Agent Intelligence Research,"['Joel Z. 
77673,https://arxiv.org/abs/2112.00659,Certified Adversarial Defenses Meet Out-of-Distribution Corruptions: Benchmarking Robustness and Simple Baselines,"['Jiachen Sun', 'Akshay Mehra', 'Bhavya Kailkhura', 'Pin-Yu Chen', 'Dan Hendrycks', 'Jihun Hamm', 'Z. Morley Mao']",2021-12-01T00:00:00Z,arxiv,,
77693,https://arxiv.org/abs/2012.10800,Probabilistic Dependency Graphs,"['Oliver Richardson', 'Joseph Y Halpern']",2020-12-19T00:00:00Z,arxiv,,
77710,https://arxiv.org/abs/2102.04255,AI Development for the Public Interest: From Abstraction Traps to Sociotechnical Risks,"['McKane Andrus', 'Sarah Dean', 'Thomas Krendl Gilbert', 'Nathan Lambert', 'Tom Zick']",2021-02-04T00:00:00Z,arxiv,,
77731,https://arxiv.org/abs/1602.03924,Modeling Human Ad Hoc Coordination,"['Peter M. Krafft', 'Chris L. Baker', 'Alex Pentland', 'Joshua B. Tenenbaum']",2016-02-11T00:00:00Z,arxiv,,
77753,https://arxiv.org/abs/1812.02795,Verification of deep probabilistic models,"['Krishnamurthy Dvijotham', 'Marta Garnelo', 'Alhussein Fawzi', 'Pushmeet Kohli']",2018-12-06T00:00:00Z,arxiv,,
77768,https://arxiv.org/abs/2104.12278,Causal Learning for Socially Responsible AI,"['Lu Cheng', 'Ahmadreza Mosallanezhad', 'Paras Sheth', 'Huan Liu']",2021-04-25T00:00:00Z,arxiv,,
77809,https://arxiv.org/abs/2104.08441,Action Advising with Advice Imitation in Deep Reinforcement Learning,"['Ercument Ilhan', 'Jeremy Gow', 'Diego Perez-Liebana']",2021-04-17T00:00:00Z,arxiv,,
77820,https://arxiv.org/abs/2102.04527,Playing the Blame Game with Robots,"['Markus Kneer', 'Michael T. Stuart']",2021-02-08T00:00:00Z,arxiv,,
77835,https://arxiv.org/abs/2012.07975,Learning Visual Robotic Control Efficiently with Contrastive Pre-training and Data Augmentation.,"['Albert Zhan', 'Ruihan (Philip) Zhao', 'Lerrel Pinto', 'Pieter Abbeel', 'Misha Laskin']",2022-08-14T00:00:00Z,arxiv,,
77847,https://arxiv.org/abs/2009.05186,An Argumentation-based Approach for Identifying and Dealing with Incompatibilities among Procedural Goals,"['Mariela Morveli-Espinoza', 'Juan Carlos Nieves', 'Ayslan Possebom', 'Josep Puyol-Gruart', 'Cesar Augusto Tacla']",2020-09-11T00:00:00Z,arxiv,,
77868,https://arxiv.org/abs/2201.06619,Planning Not to Talk: Multiagent Systems that are Robust to Communication Loss,"['Mustafa O. Karabag', 'Cyrus Neary', 'Ufuk Topcu']",2022-01-17T00:00:00Z,arxiv,,
77887,https://arxiv.org/abs/2003.01690,Reliable evaluation of adversarial robustness with an ensemble of diverse parameter-free attacks,"['Francesco Croce', 'Matthias Hein']",2020-03-03T18:15:55Z,arxiv,,
77908,https://arxiv.org/abs/1512.07943,Toward a Research Agenda in Adversarial Reasoning: Computational Approaches to Anticipating the Opponent's Intent and Actions,"['Alexander Kott', 'Michael Ownby']",2015-12-25T00:00:00Z,arxiv,,
77926,https://arxiv.org/abs/1912.02781,AugMix: A Simple Data Processing Method to Improve Robustness and Uncertainty.,"['Dan Hendrycks', 'Norman Mu', 'Ekin D. Cubuk', 'Barret Zoph', 'Justin Gilmer', 'Balaji Lakshminarayanan']",2020-08-14T00:00:00Z,arxiv,,
77946,https://arxiv.org/abs/1808.08460,The Social Cost of Strategic Classification,"['Smitha Milli', 'John Miller', 'Anca D. Dragan', 'Moritz Hardt']",2018-08-25T00:00:00Z,arxiv,,
77970,https://arxiv.org/abs/1805.00899,AI safety via debate,"['Geoffrey Irving', 'Paul Christiano', 'Dario Amodei']",2018-05-02T00:00:00Z,arxiv,,
77994,https://arxiv.org/abs/1905.05675,The Algonauts Project: A Platform for Communication between the Sciences of Biological and Artificial Intelligence,"['Radoslaw Martin Cichy', 'Gemma Roig', 'Alex Andonian', 'Kshitij Dwivedi', 'Benjamin Lahner', 'Alex Lascelles', 'Yalda Mohsenzadeh', 'Kandan Ramakrishnan', 'Aude Oliva']",2019-05-14T00:00:00Z,arxiv,,
78012,https://arxiv.org/abs/1303.1488,A Synthesis of Logical and Probabilistic Reasoning for Program Understanding and Debugging,"['Lisa J. Burnell', 'Eric J. Horvitz']",2013-03-06T00:00:00Z,arxiv,,
78025,https://arxiv.org/abs/2005.14165,Language Models are Few-Shot Learners,"['Tom B. Brown', 'Benjamin Mann', 'Nick Ryder', 'Melanie Subbiah', 'Jared Kaplan', 'Prafulla Dhariwal', 'Arvind Neelakantan', 'Pranav Shyam', 'Girish Sastry', 'Amanda Askell', 'Sandhini Agarwal', 'Ariel Herbert-Voss', 'Gretchen Krueger', 'Tom Henighan', 'Rewon Child', 'Aditya Ramesh', 'Daniel M. Ziegler', 'Jeffrey Wu', 'Clemens Winter', 'Christopher Hesse', 'Mark Chen', 'Eric Sigler', 'Mateusz Litwin', 'Scott Gray', 'Benjamin Chess', 'Jack Clark', 'Christopher Berner', 'Sam McCandlish', 'Alec Radford', 'Ilya Sutskever', 'Dario Amodei']",2020-05-28T00:00:00Z,arxiv,,
78062,https://arxiv.org/abs/2104.12547,A Framework for Ethical AI at the United Nations,['Lambert Hogenhout'],2021-04-09T00:00:00Z,arxiv,,
78090,https://arxiv.org/abs/2012.12469,Augmenting Policy Learning with Routines Discovered from a Single Demonstration,"['Zelin Zhao', 'Chuang Gan', 'Jiajun Wu', 'Xiaoxiao Guo', 'Joshua B. Tenenbaum']",2020-12-23T00:00:00Z,arxiv,,
78115,https://arxiv.org/abs/2005.10141,Rational Consensus,"['Joseph Y. Halpern', 'Xavier Vilaca']",2020-05-20T00:00:00Z,arxiv,,
78129,https://arxiv.org/abs/2105.07852,Hard Choices and Hard Limits for Artificial Intelligence,['Bryce Goodman'],2021-05-04T00:00:00Z,arxiv,,
78151,https://arxiv.org/abs/2008.07667,Runtime-Safety-Guided Policy Repair,"['Weichao Zhou', 'Ruihan Gao', 'BaekGyu Kim', 'Eunsuk Kang', 'Wenchao Li']",2020-08-17T00:00:00Z,arxiv,,
78171,https://arxiv.org/abs/1906.12340,Using Self-Supervised Learning Can Improve Model Robustness and Uncertainty.,"['Dan Hendrycks', 'Mantas Mazeika', 'Saurav Kadavath', 'Dawn Song']",2019-08-14T00:00:00Z,arxiv,,
78196,https://arxiv.org/abs/1902.05542,Unsupervised Visuomotor Control through Distributional Planning Networks,"['Tianhe Yu', 'Gleb Shevchuk', 'Dorsa Sadigh', 'Chelsea Finn']",2019-02-14T00:00:00Z,arxiv,,
78211,https://arxiv.org/abs/1904.06387,Extrapolating Beyond Suboptimal Demonstrations via Inverse Reinforcement Learning from Observations,"['Daniel S. Brown', 'Wonjoon Goo', 'Prabhat Nagarajan', 'Scott Niekum']",2019-04-12T00:00:00Z,arxiv,,
78226,https://arxiv.org/abs/1703.03717,Right for the Right Reasons: Training Differentiable Models by Constraining their Explanations,"['Andrew Slavin Ross', 'Michael C. Hughes', 'Finale Doshi-Velez']",2017-03-10T00:00:00Z,arxiv,,
78244,https://arxiv.org/abs/1804.10817,A Logic of Agent Organizations,"['Virginia Dignum', 'Frank Dignum']",2018-04-28T00:00:00Z,arxiv,,
78262,https://arxiv.org/abs/1903.12394,Informed Machine Learning -- A Taxonomy and Survey of Integrating Knowledge into Learning Systems,"['Laura von Rueden', 'Sebastian Mayer', 'Katharina Beckh', 'Bogdan Georgiev', 'Sven Giesselbach', 'Raoul Heese', 'Birgit Kirsch', 'Julius Pfrommer', 'Annika Pick', 'Rajkumar Ramamurthy', 'Michal Walczak', 'Jochen Garcke', 'Christian Bauckhage', 'Jannis Schuecker']",2019-03-29T00:00:00Z,arxiv,,
78294,https://arxiv.org/abs/2102.07574,Machine Learning Model Development from a Software Engineering Perspective: A Systematic Literature Review,"['Giuliano Lorenzoni', 'Paulo Alencar', 'Nathalia Nascimento', 'Donald Cowan']",2021-02-15T00:00:00Z,arxiv,,
78320,https://arxiv.org/abs/1905.01034,Transfer of Adversarial Robustness Between Perturbation Types,"['Daniel Kang', 'Yi Sun', 'Tom Brown', 'Dan Hendrycks', 'Jacob Steinhardt']",2019-05-03T00:00:00Z,arxiv,,
78343,https://arxiv.org/abs/2111.06420,Explainable AI (XAI): A Systematic Meta-Survey of Current Challenges and Future Opportunities,"['Waddah Saeed', 'Christian Omlin']",2021-11-11T00:00:00Z,arxiv,,
78374,https://arxiv.org/abs/2202.05834,Predicting Out-of-Distribution Error with the Projection Norm,"['Yaodong Yu', 'Zitong Yang', 'Alexander Wei', 'Yi Ma', 'Jacob Steinhardt']",2022-02-11T00:00:00Z,arxiv,,
78395,https://arxiv.org/abs/2009.13676,"The Grey Hoodie Project: Big Tobacco, Big Tech, and the threat on academic integrity","['Mohamed Abdalla', 'Moustafa Abdalla']",2020-09-28T00:00:00Z,arxiv,,
78411,https://arxiv.org/abs/1610.02357,Xception: Deep Learning with Depthwise Separable Convolutions,['François Chollet'],2016-10-07T00:00:00Z,arxiv,,
78433,https://arxiv.org/abs/1705.04630,Forecasting using incomplete models,['Vanessa Kosoy'],2017-05-12T00:00:00Z,arxiv,,
78444,https://arxiv.org/abs/1712.06365,'Indifference' methods for managing agent rewards,"['Stuart Armstrong', ""Xavier O'Rourke""]",2017-12-18T00:00:00Z,arxiv,,
78477,https://arxiv.org/abs/1508.03032,OOASP: Connecting Object-oriented and Logic Programming,"['Andreas Falkner', 'Anna Ryabokon', 'Gottfried Schenner', 'Kostyantyn Shchekotykhin']",2015-08-12T00:00:00Z,arxiv,,
78492,https://arxiv.org/abs/1502.03167,Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift,"['Sergey Ioffe', 'Christian Szegedy']",2015-02-11T01:44:18Z,arxiv,,
78509,https://arxiv.org/abs/2001.09318,Silly rules improve the capacity of agents to learn stable enforcement and compliance behaviors,"['Raphael Köster', 'Dylan Hadfield-Menell', 'Gillian K. Hadfield', 'Joel Z. Leibo']",2020-01-25T00:00:00Z,arxiv,,
78530,https://arxiv.org/abs/2202.06264,A Simplified Variant of Gödel's Ontological Argument,['Christoph Benzmüller'],2022-02-13T00:00:00Z,arxiv,,
78545,https://arxiv.org/abs/2105.06791,Agree to Disagree: When Deep Learning Models With Identical Architectures Produce Distinct Explanations,"['Matthew Watson', 'Bashar Awwad Shiekh Hasan', 'Noura Al Moubayed']",2021-05-14T00:00:00Z,arxiv,,
78568,https://arxiv.org/abs/1810.04303,Batch Active Preference-Based Learning of Reward Functions,"['Erdem Bıyık', 'Dorsa Sadigh']",2018-10-10T00:00:00Z,arxiv,,
78587,https://arxiv.org/abs/1905.01296,PRECOG: PREdiction Conditioned On Goals in Visual Multi-Agent Settings,"['Nicholas Rhinehart', 'Rowan McAllister', 'Kris Kitani', 'Sergey Levine']",2019-05-03T00:00:00Z,arxiv,,
78610,https://arxiv.org/abs/1006.1563,ToLeRating UR-STD,"['Jan Feyereisl', 'Uwe Aickelin']",2010-06-08T00:00:00Z,arxiv,,
78629,https://arxiv.org/abs/1803.08971,Computational Power and the Social Impact of Artificial Intelligence,['Tim Hwang'],2018-03-23T00:00:00Z,arxiv,,
78659,https://arxiv.org/abs/2202.07785,Predictability and Surprise in Large Generative Models,"['Deep Ganguli', 'Danny Hernandez', 'Liane Lovitt', 'Nova DasSarma', 'Tom Henighan', 'Andy Jones', 'Nicholas Joseph', 'Jackson Kernion', 'Ben Mann', 'Amanda Askell', 'Yuntao Bai', 'Anna Chen', 'Tom Conerly', 'Dawn Drain', 'Nelson Elhage', 'Sheer El Showk', 'Stanislav Fort', 'Zac Hatfield-Dodds', 'Scott Johnston', 'Shauna Kravec', 'Neel Nanda', 'Kamal Ndousse', 'Catherine Olsson', 'Daniela Amodei', 'Dario Amodei', 'Tom Brown', 'Jared Kaplan', 'Sam McCandlish', 'Chris Olah', 'Jack Clark']",2022-02-15T00:00:00Z,arxiv,,
78688,https://arxiv.org/abs/2103.06602,Symbolic Reinforcement Learning for Safe RAN Control,"['Alexandros Nikou', 'Anusha Mujumdar', 'Marin Orlic', 'Aneta Vulgarakis Feljan']",2021-03-11T00:00:00Z,arxiv,,
78704,https://arxiv.org/abs/1809.05214,Model-Based Reinforcement Learning via Meta-Policy Optimization,"['Ignasi Clavera', 'Jonas Rothfuss', 'John Schulman', 'Yasuhiro Fujita', 'Tamim Asfour', 'Pieter Abbeel']",2018-09-14T00:00:00Z,arxiv,,
78722,https://arxiv.org/abs/2002.06100,Analyzing Differentiable Fuzzy Logic Operators,"['Emile van Krieken', 'Erman Acar', 'Frank van Harmelen']",2020-02-14T00:00:00Z,arxiv,,
78737,https://arxiv.org/abs/1809.03060,Active Inverse Reward Design.,"['Sören Mindermann', 'Rohin Shah', 'Adam Gleave', 'Dylan Hadfield-Menell']",2018-08-15T00:00:00Z,arxiv,,
78750,https://arxiv.org/abs/2105.04857,Leveraging Sparse Linear Layers for Debuggable Deep Networks,"['Eric Wong', 'Shibani Santurkar', 'Aleksander Mądry']",2021-05-11T00:00:00Z,arxiv,,
78772,https://arxiv.org/abs/1807.00196,Modeling Friends and Foes,"['Pedro A. Ortega', 'Shane Legg']",2018-06-30T00:00:00Z,arxiv,,
78789,https://arxiv.org/abs/2212.09251,Discovering Language Model Behaviors with Model-Written Evaluations,"['Ethan Perez', 'Sam Ringer', 'Kamilė Lukošiūtė']",2022-12-19T05:13:52Z,arxiv,,
78816,https://arxiv.org/abs/1912.01188,Adaptive Online Planning for Continual Lifelong Learning,"['Kevin Lu', 'Igor Mordatch', 'Pieter Abbeel']",2019-12-03T00:00:00Z,arxiv,,
78834,https://arxiv.org/abs/1809.03447,Expert-augmented actor-critic for ViZDoom and Montezumas Revenge,"['Michał Garmulewicz', 'Henryk Michalewski', 'Piotr Miłoś']",2018-09-10T00:00:00Z,arxiv,,
78849,https://arxiv.org/abs/1609.05058,A Formal Solution to the Grain of Truth Problem,"['Jan Leike', 'Jessica Taylor', 'Benya Fallenstein']",2016-09-16T00:00:00Z,arxiv,,
78865,https://arxiv.org/abs/1702.08608,Towards A Rigorous Science of Interpretable Machine Learning,"['Finale Doshi-Velez', 'Been Kim']",2017-02-28T00:00:00Z,arxiv,,
78884,https://arxiv.org/abs/1505.05424,Weight Uncertainty in Neural Networks,"['Charles Blundell', 'Julien Cornebise', 'Koray Kavukcuoglu', 'Daan Wierstra']",2015-05-20T15:39:48Z,arxiv,,
78902,https://arxiv.org/abs/1604.05288,Inductive Coherence,"['Scott Garrabrant', 'Benya Fallenstein', 'Abram Demski', 'Nate Soares']",2016-04-18T00:00:00Z,arxiv,,
78917,https://arxiv.org/abs/2206.12503,Multi-Modal and Multi-Factor Branching Time Active Inference,"['Théophile Champion', 'Marek Grześ', 'Howard Bowman']",2022-06-24T00:00:00Z,arxiv,,
78926,https://arxiv.org/abs/2204.14146,Training Language Models with Language Feedback,['Jérémy Scheurer'],2022-04-29T15:06:58Z,arxiv,,
78944,https://arxiv.org/abs/2108.07804,A Framework for Understanding AI-Induced Field Change: How AI Technologies are Legitimized and Institutionalized,['Benjamin Cedric Larsen'],2021-08-18T00:00:00Z,arxiv,,
78974,https://arxiv.org/abs/2001.08823,What's a Good Prediction? Challenges in evaluating an agent's knowledge,"['Alex Kearney', 'Anna Koop', 'Patrick M. Pilarski']",2020-01-23T00:00:00Z,arxiv,,
78991,https://arxiv.org/abs/1906.00429,Learner-aware Teaching: Inverse Reinforcement Learning with Preferences and Constraints,"['Sebastian Tschiatschek', 'Ahana Ghosh', 'Luis Haug', 'Rati Devidze', 'Adish Singla']",2019-06-02T00:00:00Z,arxiv,,
79011,https://arxiv.org/abs/1808.06508,Life-Long Disentangled Representation Learning with Cross-Domain Latent Homologies,"['Alessandro Achille', 'Tom Eccles', 'Loic Matthey', 'Christopher P. Burgess', 'Nick Watters', 'Alexander Lerchner', 'Irina Higgins']",2018-08-20T00:00:00Z,arxiv,,
79047,https://arxiv.org/abs/2111.04916,Building an AI-ready RSE Workforce,"['Ying Zhang', 'Matthew A. Gitzendanner', 'Dan S. Maxwell', 'Justin W. Richardson', 'Kaleb E. Smith', 'Eric A. Stubbs', 'Brian J. Stucky', 'Jingchao Zhang', 'Erik Deumens']",2021-11-09T00:00:00Z,arxiv,,
79071,https://arxiv.org/abs/1606.05313,Unsupervised Risk Estimation Using Only Conditional Independence Structure,['Jacob Steinhardt'],2016-06-16T18:48:51Z,arxiv,,
79091,https://arxiv.org/abs/2002.12500,Efficiently Guiding Imitation Learning Agents with Human Gaze,"['Akanksha Saran', 'Ruohan Zhang', 'Elaine Schaertl Short', 'Scott Niekum']",2020-02-28T00:00:00Z,arxiv,,
79106,https://arxiv.org/abs/1611.02315,Learning from Untrusted Data,"['Moses Charikar', 'Jacob Steinhardt', 'Gregory Valiant']",2016-11-07T00:00:00Z,arxiv,,
79135,https://arxiv.org/abs/2208.08345,Discovering Agents,"['Zachary Kenton', 'Ramana Kumar', 'Sebastian Farquhar', 'Jonathan Richens', 'Matt MacDermott', 'Tom Everitt']",2022-08-17T00:00:00Z,arxiv,,
79148,https://arxiv.org/abs/2205.13743,Personalized Algorithmic Recourse with Preference Elicitation,"['Giovanni De Toni', 'Paolo Viappiani', 'Stefano Teso', 'Bruno Lepri', 'Andrea Passerini']",2022-05-27T00:00:00Z,arxiv,,
79159,https://arxiv.org/abs/2110.08514,Analyzing Dynamic Adversarial Training Data in the Limit,"['Eric Wallace', 'Adina Williams', 'Robin Jia', 'Douwe Kiela']",2021-10-16T00:00:00Z,arxiv,,
79171,https://arxiv.org/abs/2102.09180,A maximum entropy model of bounded rational decision-making with prior beliefs and market feedback,"['Benjamin Patrick Evans', 'Mikhail Prokopenko']",2021-02-18T00:00:00Z,arxiv,,
79188,https://arxiv.org/abs/1909.12892,Automated curricula through setter-solver interactions,"['Sebastien Racaniere', 'Andrew K. Lampinen', 'Adam Santoro', 'David P. Reichert', 'Vlad Firoiu', 'Timothy P. Lillicrap']",2019-09-27T00:00:00Z,arxiv,,
79209,https://arxiv.org/abs/2006.14796,AvE: Assistance via Empowerment,"['Yuqing Du', 'Stas Tiomkin', 'Emre Kiciman', 'Daniel Polani', 'Pieter Abbeel', 'Anca Dragan']",2020-06-26T00:00:00Z,arxiv,,
79224,https://arxiv.org/abs/1907.01657,Dynamics-Aware Unsupervised Discovery of Skills,"['Archit Sharma', 'Shixiang Gu', 'Sergey Levine', 'Vikash Kumar', 'Karol Hausman']",2019-07-02T00:00:00Z,arxiv,,
79240,https://arxiv.org/abs/1903.01973,Learning Latent Plans from Play,"['Corey Lynch', 'Mohi Khansari', 'Ted Xiao', 'Vikash Kumar', 'Jonathan Tompson', 'Sergey Levine', 'Pierre Sermanet']",2019-03-05T00:00:00Z,arxiv,,
79262,https://arxiv.org/abs/2202.05302,"Trust in AI: Interpretability is not necessary or sufficient, while black-box interaction is necessary and sufficient",['Max W. Shen'],2022-02-10T00:00:00Z,arxiv,,
79288,https://arxiv.org/abs/1810.05162,Characterizing Adversarial Examples Based on Spatial Consistency Information for Semantic Segmentation,"['Chaowei Xiao', 'Ruizhi Deng', 'Bo Li', 'Fisher Yu', 'Mingyan Liu', 'Dawn Song']",2018-10-11T00:00:00Z,arxiv,,
79303,https://arxiv.org/abs/2002.01510,Generalizing meanings from partners to populations: Hierarchical inference supports convention formation on networks.,"['Robert D. Hawkins', 'Noah D. Goodman', 'Adele E. Goldberg', 'Thomas L. Griffiths']",2020-08-14T00:00:00Z,arxiv,,
79319,https://arxiv.org/abs/2106.03927,Improving Social Welfare While Preserving Autonomy via a Pareto Mediator,"['Stephen McAleer', 'John Lanier', 'Michael Dennis', 'Pierre Baldi', 'Roy Fox']",2021-06-07T00:00:00Z,arxiv,,
79336,https://arxiv.org/abs/1705.03394,That is not dead which can eternal lie: the aestivation hypothesis for resolving Fermi's paradox,"['Anders Sandberg', 'Stuart Armstrong', 'Milan M. Cirkovic']",2017-04-27T00:00:00Z,arxiv,,
Cirkovic']",2017-04-27T00:00:00Z,arxiv,, 79361,https://arxiv.org/abs/1905.04933,Lie on the Fly: Strategic Voting in an Iterative Preference Elicitation Process,"['Lihi Dery', 'Svetlana Obraztsova', 'Zinovi Rabinovich', 'Meir Kalech']",2019-05-13T00:00:00Z,arxiv,, 79375,https://arxiv.org/abs/2103.12558,Assured Learning-enabled Autonomy: A Metacognitive Reinforcement Learning Framework,"['Aquib Mustafa', 'Majid Mazouchi', 'Subramanya Nageshrao', 'Hamidreza Modares']",2021-03-23T00:00:00Z,arxiv,, 79393,https://arxiv.org/abs/2101.10305,Accumulating Risk Capital Through Investing in Cooperation,"['Charlotte Roman', 'Michael Dennis', 'Andrew Critch', 'Stuart Russell']",2021-01-25T00:00:00Z,arxiv,, 79409,https://arxiv.org/abs/1909.08593,Fine-Tuning Language Models from Human Preferences,"['Daniel M. Ziegler', 'Nisan Stiennon', 'Jeffrey Wu', 'Tom B. Brown', 'Alec Radford', 'Dario Amodei', 'Paul Christiano', 'Geoffrey Irving']",2019-09-18T00:00:00Z,arxiv,, 79434,https://arxiv.org/abs/2202.10608,It Takes Four to Tango: Multiagent Self Play for Automatic Curriculum Generation.,"['Yuqing Du', 'Pieter Abbeel', 'Aditya Grover']",2022-08-14T00:00:00Z,arxiv,, 79451,https://arxiv.org/abs/2011.00517,Ask Your Humans: Using Human Instructions to Improve Generalization in Reinforcement Learning,"['Valerie Chen', 'Abhinav Gupta', 'Kenneth Marino']",2020-11-01T00:00:00Z,arxiv,, 79471,https://arxiv.org/abs/1705.08807,When Will AI Exceed Human Performance? Evidence from AI Experts,"['Katja Grace', 'John Salvatier', 'Allan Dafoe', 'Baobao Zhang', 'Owain Evans']",2017-05-24T00:00:00Z,arxiv,, 79495,https://arxiv.org/abs/2009.08092,Distributional Generalization: A New Kind of Generalization,"['Preetum Nakkiran', 'Yamini Bansal']",2020-09-17T00:00:00Z,arxiv,, 79515,https://arxiv.org/abs/2209.09608,Graph Value Iteration.,"['D Feng', 'CP Gomes', 'B Selman']",2022-08-14T00:00:00Z,arxiv,, 79544,https://arxiv.org/abs/2110.01834,Thinking Fast and Slow in AI: the Role of Metacognition,"['Marianna Bergamaschi Ganapini', 'Murray Campbell', 'Francesco Fabiano', 'Lior Horesh', 'Jon Lenchner', 'Andrea Loreggia', 'Nicholas Mattei', 'Francesca Rossi', 'Biplav Srivastava', 'Kristen Brent Venable']",2021-10-05T00:00:00Z,arxiv,, 79555,https://arxiv.org/abs/2006.04948,AI Research Considerations for Human Existential Safety (ARCHES),"['Andrew Critch', 'David Krueger']",2020-05-30T00:00:00Z,arxiv,, 79585,https://arxiv.org/abs/1806.09605,Many-Goals Reinforcement Learning.,"['Vivek Veeriah', 'Junhyuk Oh', 'Satinder Singh']",2018-08-14T00:00:00Z,arxiv,, 79606,https://arxiv.org/abs/1804.02477,Programmatically Interpretable Reinforcement Learning,"['Abhinav Verma', 'Vijayaraghavan Murali', 'Rishabh Singh', 'Pushmeet Kohli', 'Swarat Chaudhuri']",2018-04-06T00:00:00Z,arxiv,, 79629,https://arxiv.org/abs/1911.09785,ReMixMatch: Semi-Supervised Learning with Distribution Alignment and Augmentation Anchoring,"['David Berthelot', 'Nicholas Carlini', 'Ekin D. 
Cubuk', 'Alex Kurakin', 'Kihyuk Sohn', 'Han Zhang', 'Colin Raffel']",2019-11-21T00:00:00Z,arxiv,, 79650,https://arxiv.org/abs/1811.02216,An Optimal Itinerary Generation in a Configuration Space of Large Intellectual Agent Groups with Linear Logic,['Dmitry Maximov'],2018-11-06T00:00:00Z,arxiv,, 79675,https://arxiv.org/abs/2108.06217,Beyond Fairness Metrics: Roadblocks and Challenges for Ethical AI in Practice,"['Jiahao Chen', 'Victor Storchan', 'Eren Kurshan']",2021-08-11T00:00:00Z,arxiv,, 79708,https://arxiv.org/abs/2208.13885,"Reinforcement Learning for Hardware Security: Opportunities, Developments, and Challenges","['Satwik Patnaik', 'Vasudev Gohil', 'Hao Guo', 'Jeyavijayan', 'Rajendran']",2022-08-29T00:00:00Z,arxiv,, 79732,https://arxiv.org/abs/1704.03296,Interpretable Explanations of Black Boxes by Meaningful Perturbation,"['Ruth C. Fong', 'Andrea Vedaldi']",2017-04-11T14:15:20Z,arxiv,, 79748,https://arxiv.org/abs/2009.13649,The EMPATHIC Framework for Task Learning from Implicit Human Feedback,"['Yuchen Cui', 'Qiping Zhang', 'Alessandro Allievi', 'Peter Stone', 'Scott Niekum', 'W. Bradley Knox']",2020-09-28T00:00:00Z,arxiv,, 79766,https://arxiv.org/abs/2002.09089,Safe Imitation Learning via Fast Bayesian Reward Inference from Preferences,"['Daniel S. Brown', 'Russell Coleman', 'Ravi Srinivasan', 'Scott Niekum']",2020-02-21T00:00:00Z,arxiv,, 79791,https://arxiv.org/abs/2103.12656,Replacing Rewards with Examples: Example-Based Policy Search via Recursive Classification,"['Benjamin Eysenbach', 'Sergey Levine', 'Ruslan Salakhutdinov']",2021-03-23T00:00:00Z,arxiv,, 79805,https://arxiv.org/abs/2010.00581,Emergent Social Learning via Multi-agent Reinforcement Learning,"['Kamal Ndousse', 'Douglas Eck', 'Sergey Levine', 'Natasha Jaques']",2020-10-01T00:00:00Z,arxiv,, 79826,https://arxiv.org/abs/1903.09516,Was ist eine Professur fuer Kuenstliche Intelligenz?,"['Kristian Kersting', 'Jan Peters', 'Constantin Rothkopf']",2019-02-17T00:00:00Z,arxiv,, 79849,https://arxiv.org/abs/1904.07451,Counterfactual Visual Explanations,"['Yash Goyal', 'Ziyan Wu', 'Jan Ernst', 'Dhruv Batra', 'Devi Parikh', 'Stefan Lee']",2019-04-16T00:00:00Z,arxiv,, 79863,https://arxiv.org/abs/2205.15948,Two-Dimensional Quantum Material Identification via Self-Attention and Soft-labeling in Deep Learning,"['Xuan Bac Nguyen', 'Apoorva Bisht', 'Hugh Churchill', 'Khoa Luu']",2022-05-31T00:00:00Z,arxiv,, 79875,https://arxiv.org/abs/1807.08706,Contrastive Explanations for Reinforcement Learning in terms of Expected Consequences,"['Jasper van der Waa', 'Jurriaan van Diggelen', 'Karel van den Bosch', 'Mark Neerincx']",2018-07-23T00:00:00Z,arxiv,, 79894,https://arxiv.org/abs/2012.06005,Learning to Resolve Conflicts for Multi-Agent Path Finding with Conflict-Based Search,"['Taoan Huang', 'Bistra Dilkina', 'Sven Koenig']",2020-12-10T00:00:00Z,arxiv,, 79904,https://arxiv.org/abs/1711.08378,"Building Machines that Learn and Think for Themselves: Commentary on Lake et al., Behavioral and Brain Sciences, 2017","['M. Botvinick', 'D. G. T. Barrett', 'P. Battaglia', 'N. de Freitas', 'D. Kumaran', 'J. Z Leibo', 'T. Lillicrap', 'J. Modayil', 'S. Mohamed', 'N. C. Rabinowitz', 'D. J. Rezende', 'A. Santoro', 'T. Schaul', 'C. Summerfield', 'G. Wayne', 'T. Weber', 'D. Wierstra', 'S. Legg', 'D. 
Hassabis']",2017-11-22T00:00:00Z,arxiv,, 79930,https://arxiv.org/abs/1802.01636,Do You Want Your Autonomous Car to Drive Like You?.,"['Chandrayee Basu', 'Qian Yang', 'David Hungerman', 'Mukesh Singhal', 'Anca Dragan']",2017-08-14T00:00:00Z,arxiv,, 79943,https://arxiv.org/abs/2002.10221,The Archimedean trap: Why traditional reinforcement learning will probably not yield AGI,['Samuel Allen Alexander'],2020-02-15T00:00:00Z,arxiv,, 79961,https://arxiv.org/abs/2002.05671,AI safety: state of the field through quantitative lens,"['Mislav Juric', 'Agneza Sandic', 'Mario Brcic']",2020-02-12T00:00:00Z,arxiv,, 80003,https://arxiv.org/abs/2002.05380,CEB Improves Model Robustness,"['Ian Fischer', 'Alexander A. Alemi']",2020-02-13T00:00:00Z,arxiv,, 80028,https://arxiv.org/abs/2004.08599,Three Modern Roles for Logic in AI,['Adnan Darwiche'],2020-04-18T00:00:00Z,arxiv,, 80055,https://arxiv.org/abs/2203.06555,Label-only Model Inversion Attack: The Attack that Requires the Least Information,"['Dayong Ye', 'Tianqing Zhu', 'Shuai Zhou', 'Bo Liu', 'Wanlei Zhou']",2022-03-13T00:00:00Z,arxiv,, 80064,https://arxiv.org/abs/2111.00876,On the Expressivity of Markov Reward.,"['David Abel', 'Will Dabney', 'Anna Harutyunyan', 'Mark K', 'Ho', 'Michael L', 'Littman', 'Doina Precup', 'and Satinder Singh']",2021-08-14T00:00:00Z,arxiv,, 80079,https://arxiv.org/abs/1609.04904,Long-Term Trends in the Public Perception of Artificial Intelligence,"['Ethan Fast', 'Eric Horvitz']",2016-09-16T00:00:00Z,arxiv,, 80093,https://arxiv.org/abs/2208.06590,Recognition of All Categories of Entities by AI,"['Hiroshi Yamakawa', 'Yutaka Matsuo']",2022-08-13T00:00:00Z,arxiv,, 80110,https://arxiv.org/abs/2208.04714,The History of AI Rights Research,['Jamie Harris'],2022-07-06T00:00:00Z,arxiv,, 80132,https://arxiv.org/abs/2006.13900,Quantifying Differences in Reward Functions,"['Adam Gleave', 'Michael Dennis', 'Shane Legg', 'Stuart Russell', 'Jan Leike']",2020-06-24T00:00:00Z,arxiv,, 80150,https://arxiv.org/abs/2107.12544,"Human-Level Reinforcement Learning through Theory-Based Modeling, Exploration, and Planning","['Pedro A. Tsividis', 'Joao Loula', 'Jake Burga', 'Nathan Foss', 'Andres Campero', 'Thomas Pouncy', 'Samuel J. Gershman', 'Joshua B. Tenenbaum']",2021-07-27T00:00:00Z,arxiv,, 80172,https://arxiv.org/abs/1806.07912,Resource-Efficient Neural Architect,"['Yanqi Zhou', 'Siavash Ebrahimi', 'Sercan Ö. Arık', 'Haonan Yu', 'Hairong Liu', 'Greg Diamos']",2018-06-12T00:00:00Z,arxiv,, 80191,https://arxiv.org/abs/1809.01036,A Roadmap for Robust End-to-End Alignment,['Lê Nguyên Hoang'],2018-09-04T00:00:00Z,arxiv,, 80243,https://arxiv.org/abs/1505.00399,Metareasoning for Planning Under Uncertainty,"['Christopher H. Lin', 'Andrey Kolobov', 'Ece Kamar', 'Eric Horvitz']",2015-05-03T00:00:00Z,arxiv,, 80259,https://arxiv.org/abs/1809.03956,Abstraction Learning,"['Fei Deng', 'Jinsheng Ren', 'Feng Chen']",2018-09-11T00:00:00Z,arxiv,, 80283,https://arxiv.org/abs/2009.06410,Beneficial and Harmful Explanatory Machine Learning,"['Lun Ai', 'Stephen H. 
Muggleton', 'Céline Hocquette', 'Mark Gromowski', 'Ute Schmid']",2020-09-09T00:00:00Z,arxiv,, 80301,https://arxiv.org/abs/1712.06440,Three IQs of AI Systems and their Testing Methods,"['Feng Liu', 'Yong Shi', 'Ying Liu']",2017-12-14T00:00:00Z,arxiv,, 80318,https://arxiv.org/abs/1806.10729,Illuminating Generalization in Deep Reinforcement Learning through Procedural Level Generation,"['Niels Justesen', 'Ruben Rodriguez Torrado', 'Philip Bontrager', 'Ahmed Khalifa', 'Julian Togelius', 'Sebastian Risi']",2018-06-28T00:00:00Z,arxiv,, 80336,https://arxiv.org/abs/2210.03427,Generating Quizzes to Support Training on Quality Management and Assurance in Space Science and Engineering,"['Andrés García-Silva', 'Cristian Berrío', 'José Manuel Gómez-Pérez']",2022-10-07T00:00:00Z,arxiv,, 80349,https://arxiv.org/abs/2109.14700,Safety Assurances for Human-Robot Interaction via Confidence-aware Game-theoretic Human Models.,"['R', 'Tian', 'L', 'Sun', 'A', 'Bajcsy', 'M', 'Tomizuka', 'and A', 'D', 'Dragan']",2022-08-14T00:00:00Z,arxiv,, 80368,https://arxiv.org/abs/1811.07834,Safely Probabilistically Complete Real-Time Planning and Exploration in Unknown Environments,"['David Fridovich-Keil', 'Jaime F. Fisac', 'Claire J. Tomlin']",2018-11-19T00:00:00Z,arxiv,, 80380,https://arxiv.org/abs/2106.05091,PEBBLE: Feedback-Efficient Interactive Reinforcement Learning via Relabeling Experience and Unsupervised Pre-training,"['Kimin Lee', 'Laura Smith', 'Pieter Abbeel']",2021-06-09T00:00:00Z,arxiv,, 80404,https://arxiv.org/abs/2204.09852,The Risks of Machine Learning Systems,"['Samson Tan', 'Araz Taeihagh', 'Kathy Baxter']",2022-04-21T00:00:00Z,arxiv,, 80421,https://arxiv.org/abs/2002.08484,Estimating Training Data Influence by Tracing Gradient Descent,"['Garima Pruthi', 'Frederick Liu', 'Mukund Sundararajan', 'Satyen Kale']",2020-02-19T00:00:00Z,arxiv,, 80439,https://arxiv.org/abs/1312.6034,Deep Inside Convolutional Networks: Visualising Image Classification Models and Saliency Maps,['Karen Simonyan'],2013-12-20T16:45:54Z,arxiv,, 80457,https://arxiv.org/abs/1807.00401,Machine learning 2.0 : Engineering Data Driven AI Products,"['James Max Kanter', 'Benjamin Schreck', 'Kalyan Veeramachaneni']",2018-07-01T00:00:00Z,arxiv,, 80478,https://arxiv.org/abs/1610.06918,"Learning to Protect Communications with Adversarial Neural Cryptography",['Martín Abadi and David G. Andersen'],2016-10-21T19:58:29Z,arxiv,, 80495,https://arxiv.org/abs/2106.08492,Developing a Fidelity Evaluation Approach for Interpretable Machine Learning,"['Mythreyi Velmurugan', 'Chun Ouyang', 'Catarina Moreira', 'Renuka Sindhgatta']",2021-06-16T00:00:00Z,arxiv,, 80510,https://arxiv.org/abs/1806.02501,Simplifying Reward Design through Divide-and-Conquer,"['Ellis Ratner', 'Dylan Hadfield-Menell', 'Anca D. Dragan']",2018-06-07T00:00:00Z,arxiv,, 80530,https://arxiv.org/abs/1811.04784,Improving Generalization for Abstract Reasoning Tasks Using Disentangled Feature Representations,"['Xander Steenbrugge', 'Sam Leroux', 'Tim Verbelen', 'Bart Dhoedt']",2018-11-12T00:00:00Z,arxiv,, 80545,https://arxiv.org/abs/1509.07582,Constructing Abstraction Hierarchies Using a Skill-Symbol Loop,['George Konidaris'],2015-09-25T00:00:00Z,arxiv,, 80556,https://arxiv.org/abs/1906.02530,Can You Trust Your Model's Uncertainty? Evaluating Predictive Uncertainty Under Dataset Shift,"['Yaniv Ovadia', 'Emily Fertig', 'Jie Ren', 'Zachary Nado', 'D Sculley', 'Sebastian Nowozin', 'Joshua V. 
Dillon', 'Balaji Lakshminarayanan', 'Jasper Snoek']",2019-06-06T00:00:00Z,arxiv,, 80579,https://arxiv.org/abs/1809.02925,Discriminator-Actor-Critic: Addressing Sample Inefficiency and Reward Bias in Adversarial Imitation Learning,"['Ilya Kostrikov', 'Kumar Krishna Agrawal', 'Debidatta Dwibedi', 'Sergey Levine', 'Jonathan Tompson']",2018-09-09T00:00:00Z,arxiv,, 80597,https://arxiv.org/abs/2010.11645,Enabling certification of verification-agnostic networks via memory-efficient semidefinite programming,"['Sumanth Dathathri', 'Krishnamurthy Dvijotham', 'Alexey Kurakin', 'Aditi Raghunathan', 'Jonathan Uesato', 'Rudy Bunel', 'Shreya Shankar', 'Jacob Steinhardt', 'Ian Goodfellow', 'Percy Liang', 'Pushmeet Kohli']",2020-10-22T00:00:00Z,arxiv,, 80619,https://arxiv.org/abs/1903.06256,Learning Robust Representations by Projecting Superficial Statistics Out,"['Haohan Wang', 'Zexue He', 'Zachary C. Lipton', 'Eric P. Xing']",2019-03-02T00:00:00Z,arxiv,, 80635,https://arxiv.org/abs/1811.02625,MixTrain: Scalable Training of Verifiably Robust Neural Networks,"['Shiqi Wang', 'Yizheng Chen', 'Ahmed Abdou', 'Suman Jana']",2018-11-06T00:00:00Z,arxiv,, 80648,https://arxiv.org/abs/2303.16749,Improving Code Generation by Training with Natural Language Feedback,"['Angelica Chen', 'Jérémy Scheurer', 'Tomasz Korbak', 'Jon Ander Campos', 'Jun Shern Chan', 'Samuel R. Bowman', 'Kyunghyun Cho', 'Ethan Perez']",2023-03-28T16:15:31Z,arxiv,, 80662,https://arxiv.org/abs/2001.00818,A Framework for Democratizing AI,"['Shakkeel Ahmed', 'Ravi S. Mula', 'Soma S. Dhavala']",2020-01-01T00:00:00Z,arxiv,, 80687,https://arxiv.org/abs/2203.08465,Building AI Innovation Labs together with Companies,"['Jens Heidrich', 'Andreas Jedlitschka', 'Adam Trendowicz', 'Anna Maria Vollmer']",2022-03-16T00:00:00Z,arxiv,, 80711,https://arxiv.org/abs/1707.08476,Guidelines for Artificial Intelligence Containment,"['James Babcock', 'Janos Kramar', 'Roman V. 
Yampolskiy']",2017-07-24T00:00:00Z,arxiv,, 80741,https://arxiv.org/abs/2011.08827,Avoiding Tampering Incentives in Deep RL via Decoupled Approval,"['Jonathan Uesato', 'Ramana Kumar', 'Victoria Krakovna', 'Tom Everitt', 'Richard Ngo', 'Shane Legg']",2020-11-17T00:00:00Z,arxiv,, 80752,https://arxiv.org/abs/1810.06530,Successor Uncertainties: Exploration and Uncertainty in Temporal Difference Learning,"['David Janz', 'Jiri Hron', 'Przemysław Mazur', 'Katja Hofmann', 'José Miguel Hernández-Lobato', 'Sebastian Tschiatschek']",2018-10-15T00:00:00Z,arxiv,, 80781,https://arxiv.org/abs/2004.14990,Reinforcement Learning with Augmented Data,"['Michael Laskin', 'Kimin Lee', 'Adam Stooke', 'Lerrel Pinto', 'Pieter Abbeel', 'Aravind Srinivas']",2020-04-30T00:00:00Z,arxiv,, 80801,https://arxiv.org/abs/2201.11441,Human-centered mechanism design with Democratic AI,"['Raphael Koster', 'Jan Balaguer', 'Andrea Tacchetti', 'Ari Weinstein', 'Tina Zhu', 'Oliver Hauser', 'Duncan Williams', 'Lucy Campbell-Gillingham', 'Phoebe Thacker', 'Matthew Botvinick', 'Christopher Summerfield']",2022-01-27T00:00:00Z,arxiv,, 80817,https://arxiv.org/abs/2204.12822,"A Survey on XAI for Beyond 5G Security: Technical Aspects, Use Cases, Challenges and Research Directions","['Thulitha Senevirathna', 'Vinh Hoa La', 'Samuel Marchal', 'Bartlomiej Siniarski', 'Madhusanka Liyanage', 'Shen Wang']",2022-04-27T00:00:00Z,arxiv,, 80836,https://arxiv.org/abs/1812.04606,Deep Anomaly Detection with Outlier Exposure.,"['Dan Hendrycks', 'Mantas Mazeika', 'Thomas Dietterich']",2019-08-14T00:00:00Z,arxiv,, 80859,https://arxiv.org/abs/2012.05862,Understanding Learned Reward Functions.,"['Eric J', 'Michaud', 'Adam Gleave', 'Stuart Russell']",2020-08-14T00:00:00Z,arxiv,, 80874,https://arxiv.org/abs/2105.11447,True Few-Shot Learning with Language Models,"['Ethan Perez', 'Douwe Kiela', 'Kyunghyun Cho']",2021-05-24T00:00:00Z,arxiv,, 80898,https://arxiv.org/abs/1708.08611,Safe Reinforcement Learning via Shielding,"['Mohammed Alshiekh', 'Roderick Bloem', 'Ruediger Ehlers', 'Bettina Könighofer', 'Scott Niekum', 'Ufuk Topcu']",2017-08-29T00:00:00Z,arxiv,, 80916,https://arxiv.org/abs/2010.05769,Parameterized Reinforcement Learning for Optical System Optimization,"['Heribert Wankerl', 'Maike L. Stern', 'Ali Mahdavi', 'Christoph Eichler', 'Elmar W. 
Lang']",2020-10-09T00:00:00Z,arxiv,, 80934,https://arxiv.org/abs/1810.11043,One-Shot Hierarchical Imitation Learning of Compound Visuomotor Tasks,"['Tianhe Yu', 'Pieter Abbeel', 'Sergey Levine', 'Chelsea Finn']",2018-10-25T00:00:00Z,arxiv,, 80950,https://arxiv.org/abs/2210.04964,Generating Executable Action Plans with Environmentally-Aware Language Models,"['Maitrey Gramopadhye', 'Daniel Szafir']",2022-10-10T00:00:00Z,arxiv,, 80968,https://arxiv.org/abs/2109.04083,User Tampering in Reinforcement Learning Recommender Systems,"['Charles Evans', 'Atoosa Kasirzadeh']",2021-09-09T00:00:00Z,arxiv,, 80984,https://arxiv.org/abs/1806.08874,The Foundations of Deep Learning with a Path Towards General Intelligence,['Eray Özkural'],2018-06-22T00:00:00Z,arxiv,, 81016,https://arxiv.org/abs/1606.05374,Avoiding Imposters and Delinquents: Adversarial Crowdsourcing and Peer Prediction,"['Jacob Steinhardt', 'Gregory Valiant', 'Moses Charikar']",2016-06-16T00:00:00Z,arxiv,, 81025,https://arxiv.org/abs/1903.10187,"Designing Normative Theories for Ethical and Legal Reasoning: LogiKEy Framework, Methodology, and Tool Support","['Christoph Benzmüller', 'Xavier Parent', 'Leendert van der Torre']",2019-03-25T00:00:00Z,arxiv,, 81055,https://arxiv.org/abs/1802.01604,Learning from Richer Human Guidance: Augmenting Comparison-Based Learning with Feature Queries,"['Chandrayee Basu', 'Mukesh Singhal', 'Anca D. Dragan']",2018-02-05T00:00:00Z,arxiv,, 81075,https://arxiv.org/abs/1106.2657,I Don't Want to Think About it Now:Decision Theory With Costly Computation,"['Joseph Y. Halpern', 'Rafael Pass']",2011-06-14T00:00:00Z,arxiv,, 81096,https://arxiv.org/abs/2112.05135,PixMix: Dreamlike Pictures Comprehensively Improve Safety Measures,"['Dan Hendrycks', 'Andy Zou', 'Mantas Mazeika', 'Leonard Tang', 'Bo Li', 'Dawn Song', 'Jacob Steinhardt']",2021-12-09T00:00:00Z,arxiv,, 81109,https://arxiv.org/abs/2101.11038,Muppet: Massive Multi-task Representations with Pre-Finetuning,"['Armen Aghajanyan', 'Anchit Gupta', 'Akshat Shrivastava', 'Xilun Chen', 'Luke Zettlemoyer', 'Sonal Gupta']",2021-01-26T00:00:00Z,arxiv,, 81135,https://arxiv.org/abs/1705.10998,The Atari Grand Challenge Dataset,"['Vitaly Kurin', 'Sebastian Nowozin', 'Katja Hofmann', 'Lucas Beyer', 'Bastian Leibe']",2017-05-31T00:00:00Z,arxiv,, 81154,https://arxiv.org/abs/2006.08753,Pessimism About Unknown Unknowns Inspires Conservatism,"['Michael K. Cohen', 'Marcus Hutter']",2020-06-15T00:00:00Z,arxiv,, 81173,https://arxiv.org/abs/1906.08226,Unsupervised State Representation Learning in Atari,"['Ankesh Anand', 'Evan Racah', 'Sherjil Ozair', 'Yoshua Bengio', 'Marc-Alexandre Côté', 'R Devon Hjelm']",2019-06-19T00:00:00Z,arxiv,, 81183,https://arxiv.org/abs/1901.11184,Human-Centered Artificial Intelligence and Machine Learning,['Mark O. 
Riedl'],2019-01-31T00:00:00Z,arxiv,, 81199,https://arxiv.org/abs/1708.00376,Using Program Induction to Interpret Transition System Dynamics,"['Svetlin Penkov', 'Subramanian Ramamoorthy']",2017-07-26T00:00:00Z,arxiv,, 81218,https://arxiv.org/abs/2210.05125,Human-AI Coordination via Human-Regularized Search and Learning,"['Hengyuan Hu', 'David J Wu', 'Adam Lerer', 'Jakob Foerster', 'Noam Brown']",2022-10-11T00:00:00Z,arxiv,, 81245,https://arxiv.org/abs/2207.14378,Latent Properties of Lifelong Learning Systems,"['Corban Rivera', 'Chace Ashcraft', 'Alexander New', 'James Schmidt', 'Gautam Vallabha']",2022-07-28T00:00:00Z,arxiv,, 81259,https://arxiv.org/abs/2201.12440,Certifying Model Accuracy under Distribution Shifts,"['Aounon Kumar', 'Alexander Levine', 'Tom Goldstein', 'Soheil Feizi']",2022-01-28T00:00:00Z,arxiv,, 81279,https://arxiv.org/abs/2202.12985,OCR-IDL: OCR Annotations for Industry Document Library Dataset,"['Ali Furkan Biten', 'Rubèn Tito', 'Lluis Gomez', 'Ernest Valveny', 'Dimosthenis Karatzas']",2022-02-25T00:00:00Z,arxiv,, 81298,https://arxiv.org/abs/1802.01780,Goal Inference Improves Objective and Perceived Performance in Human-Robot Collaboration,"['Chang Liu', 'Jessica B. Hamrick', 'Jaime F. Fisac', 'Anca D. Dragan', 'J. Karl Hedrick', 'S. Shankar Sastry', 'Thomas L. Griffiths']",2018-02-06T00:00:00Z,arxiv,, 81312,https://arxiv.org/abs/1810.07311,Finding Options that Minimize Planning Time,"['Yuu Jinnai', 'David Abel', 'D Ellis Hershkowitz', 'Michael Littman', 'George Konidaris']",2018-10-16T00:00:00Z,arxiv,, 81326,https://arxiv.org/abs/2008.12566,A Framework for Improving Scholarly Neural Network Diagrams,"['Guy Clarke Marshall', 'André Freitas', 'Caroline Jay']",2020-08-28T00:00:00Z,arxiv,, 81348,https://arxiv.org/abs/1810.01032,Reinforcement Learning with Perturbed Rewards,"['Jingkang Wang', 'Yang Liu', 'Bo Li']",2018-10-02T00:00:00Z,arxiv,, 81370,https://arxiv.org/abs/2203.08492,Resilient Neural Forecasting Systems,"['Michael Bohlke-Schneider', 'Shubham Kapoor', 'Tim Januschowski']",2022-03-16T00:00:00Z,arxiv,, 81400,https://arxiv.org/abs/2103.09230,Lyapunov Barrier Policy Optimization,"['Harshit Sikchi', 'Wenxuan Zhou', 'David Held']",2021-03-16T00:00:00Z,arxiv,, 81421,https://arxiv.org/abs/2003.06507,The Conflict Between People's Urge to Punish AI and Legal Systems,"['Gabriel Lima', 'Meeyoung Cha', 'Chihyung Jeon', 'Kyungsin Park']",2020-03-13T00:00:00Z,arxiv,, 81441,https://arxiv.org/abs/1912.12613,Asking the Right Questions: Learning Interpretable Action Models Through Query Answering,"['Pulkit Verma', 'Shashank Rao Marpally', 'Siddharth Srivastava']",2019-12-29T00:00:00Z,arxiv,, 81458,https://arxiv.org/abs/2209.05170,Resource Allocation to Agents with Restrictions: Maximizing Likelihood with Minimum Compromise,"['Yohai Trabelsi', 'Abhijin Adiga', 'Sarit Kraus', 'S. S. Ravi']",2022-09-12T00:00:00Z,arxiv,, 81481,https://arxiv.org/abs/2012.14536,Multi-Principal Assistance Games: Definition and Collegial Mechanisms,"['Arnaud Fickinger', 'Simon Zhuang', 'Andrew Critch', 'Dylan Hadfield-Menell', 'Stuart Russell']",2020-12-29T00:00:00Z,arxiv,, 81498,https://arxiv.org/abs/1910.06764,Stabilizing Transformers for Reinforcement Learning,"['Emilio Parisotto', 'H. Francis Song', 'Jack W. Rae', 'Razvan Pascanu', 'Caglar Gulcehre', 'Siddhant M. Jayakumar', 'Max Jaderberg', 'Raphael Lopez Kaufman', 'Aidan Clark', 'Seb Noury', 'Matthew M. 
Botvinick', 'Nicolas Heess', 'Raia Hadsell']",2019-10-13T00:00:00Z,arxiv,, 81520,https://arxiv.org/abs/1807.09936,Multi-Agent Generative Adversarial Imitation Learning,"['Jiaming Song', 'Hongyu Ren', 'Dorsa Sadigh', 'Stefano Ermon']",2018-07-26T00:00:00Z,arxiv,, 81544,https://arxiv.org/abs/1804.02485,Fortified Networks: Improving the Robustness of Deep Networks by Modeling the Manifold of Hidden Representations,"['Alex Lamb', 'Jonathan Binas', 'Anirudh Goyal', 'Dmitriy Serdyuk', 'Sandeep Subramanian', 'Ioannis Mitliagkas', 'Yoshua Bengio']",2018-04-07T00:00:00Z,arxiv,, 81567,https://arxiv.org/abs/1812.04814,Linking Artificial Intelligence Principles,"['Yi Zeng', 'Enmeng Lu', 'Cunqing Huangfu']",2018-12-12T00:00:00Z,arxiv,, 81600,https://arxiv.org/abs/2006.09436,SAMBA: Safe Model-Based & Active Reinforcement Learning,"['Alexander I. Cowen-Rivers', 'Daniel Palenicek', 'Vincent Moens', 'Mohammed Abdullah', 'Aivar Sootla', 'Jun Wang', 'Haitham Ammar']",2020-06-12T00:00:00Z,arxiv,, 81625,https://arxiv.org/abs/2012.07195,Efficient Querying for Cooperative Probabilistic Commitments,"['Qi Zhang', 'Edmund H. Durfee', 'Satinder Singh']",2020-12-14T00:00:00Z,arxiv,, 81641,https://arxiv.org/abs/2011.00401,The MAGICAL Benchmark for Robust Imitation,"['Sam Toyer', 'Rohin Shah', 'Andrew Critch', 'Stuart Russell']",2020-08-14T00:00:00Z,arxiv,, 81669,https://arxiv.org/abs/1806.04915,The IQ of Artificial Intelligence,['Dimiter Dobrev'],2018-06-13T00:00:00Z,arxiv,, 81689,https://arxiv.org/abs/cs/0203013,Representing and Aggregating Conflicting Beliefs,"['Pedrito Maynard-Reid II', 'Daniel Lehmann']",2002-03-11T00:00:00Z,arxiv,, 81712,https://arxiv.org/abs/1606.03137,Cooperative Inverse Reinforcement Learning,"['Dylan Hadfield-Menell', 'Anca Dragan', 'Pieter Abbeel', 'Stuart Russell']",2016-06-09T22:39:54Z,arxiv,, 81727,https://arxiv.org/abs/1807.11113,Reinforced Auto-Zoom Net: Towards Accurate and Fast Breast Cancer Segmentation in Whole-slide Images,"['Nanqing Dong', 'Michael Kampffmeyer', 'Xiaodan Liang', 'Zeya Wang', 'Wei Dai', 'Eric P. Xing']",2018-07-29T00:00:00Z,arxiv,, 81743,https://arxiv.org/abs/2003.06066,Sample Efficient Reinforcement Learning through Learning from Demonstrations in Minecraft,"['Christian Scheller', 'Yanick Schraner', 'Manfred Vogel']",2020-03-12T00:00:00Z,arxiv,, 81772,https://arxiv.org/abs/1904.09605,Generative Exploration and Exploitation,"['Jiechuan Jiang', 'Zongqing Lu']",2019-04-21T00:00:00Z,arxiv,, 81798,https://arxiv.org/abs/1910.06266,Using AI/ML to gain situational understanding from passive network observations,"['D. Verma', 'S. 
Calo']",2019-10-14T00:00:00Z,arxiv,, 81828,https://arxiv.org/abs/2107.07394,Explore and Control with Adversarial Surprise,"['Arnaud Fickinger', 'Natasha Jaques', 'Samyak Parajuli', 'Michael Chang', 'Nicholas Rhinehart', 'Glen Berseth', 'Stuart Russell', 'Sergey Levine']",2021-08-14T00:00:00Z,arxiv,, 81842,https://arxiv.org/abs/2106.06613,"A New Formalism, Method and Open Issues for Zero-Shot Coordination","['Johannes Treutlein', 'Michael Dennis', 'Caspar Oesterheld', 'Jakob Foerster']",2021-06-11T21:06:04Z,arxiv,, 81859,https://arxiv.org/abs/1906.11583,Approximate Causal Abstraction,"['Sander Beckers', 'Frederick Eberhardt', 'Joseph Y. Halpern']",2019-08-14T00:00:00Z,arxiv,, 81879,https://arxiv.org/abs/1907.10580,IR-VIC: Unsupervised Discovery of Sub-goals for Transfer in RL,"['Nirbhay Modhe', 'Prithvijit Chattopadhyay', 'Mohit Sharma', 'Abhishek Das', 'Devi Parikh', 'Dhruv Batra', 'Ramakrishna Vedantam']",2019-07-24T00:00:00Z,arxiv,, 81894,https://arxiv.org/abs/2007.08911,Technologies for Trustworthy Machine Learning: A Survey in a Socio-Technical Context,"['Ehsan Toreini', 'Mhairi Aitken', 'Kovila P. L. Coopamootoo', 'Karen Elliott', 'Vladimiro Gonzalez Zelaya', 'Paolo Missier', 'Magdalene Ng', 'Aad van Moorsel']",2020-07-17T00:00:00Z,arxiv,, 81912,https://arxiv.org/abs/1808.03845,Social Cohesion in Autonomous Driving,"['Nicholas C. Landolfi', 'Anca D. Dragan']",2018-08-14T00:00:00Z,arxiv,, 81931,https://arxiv.org/abs/2112.10751,RvS: What is Essential for Offline RL via Supervised Learning?,"['Scott Emmons', 'Benjamin Eysenbach', 'Ilya Kostrikov', 'Sergey Levine']",2022-08-14T00:00:00Z,arxiv,, 81956,https://arxiv.org/abs/1905.10985,"AI-GAs: AI-generating algorithms, an alternate paradigm for producing general artificial intelligence",['Jeff Clune'],2019-05-27T00:00:00Z,arxiv,, 81978,https://arxiv.org/abs/1707.07328,Adversarial Examples for Evaluating Reading Comprehension Systems,['Robin Jia'],2017-07-23T18:26:29Z,arxiv,, 82000,https://arxiv.org/abs/1810.04805,BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding,"['Jacob Devlin', 'Ming-Wei Chang', 'Kenton Lee', 'Kristina Toutanova']",2018-10-11T00:00:00Z,arxiv,, 82020,https://arxiv.org/abs/2001.04335,Beyond Near- and Long-Term: Towards a Clearer Account of Research Priorities in AI Ethics and Society,"['Carina Prunkl', 'Jess Whittlestone']",2020-01-13T00:00:00Z,arxiv,, 82039,https://arxiv.org/abs/2111.04885,Lymph Node Detection in T2 MRI with Transformers,"['Tejas Sudharshan Mathai', 'Sungwon Lee', 'Daniel C. Elton', 'Thomas C. Shen', 'Yifan Peng', 'Zhiyong Lu', 'Ronald M. Summers']",2021-11-09T00:00:00Z,arxiv,, 82056,https://arxiv.org/abs/2211.14468,Similarity-based Cooperation,"['Caspar Oesterheld', 'Johannes Treutlein', 'Roger Grosse', 'Vincent Conitzer', 'Jakob Foerster']",2022-08-14T00:00:00Z,arxiv,, 82078,https://arxiv.org/abs/2211.00593,Interpretability in the Wild: a Circuit for Indirect Object Identification in GPT-2 small,"['Kevin Wang', 'Alexandre Variengien', 'Arthur Conmy', 'Buck Shlegeris', 'Jacob Steinhardt']",2022-11-01T00:00:00Z,arxiv,, 82101,https://arxiv.org/abs/2006.06547,Avoiding Side Effects in Complex Environments,"['Alexander Matt Turner', 'Neale Ratzlaff', 'Prasad Tadepalli']",2020-01-01T00:00:00Z,arxiv,, 82122,https://arxiv.org/abs/1810.01014,Bayesian Policy Optimization for Model Uncertainty,"['Gilwoo Lee', 'Brian Hou', 'Aditya Mandalika', 'Jeongseok Lee', 'Sanjiban Choudhury', 'Siddhartha S. 
Srinivasa']",2018-10-01T00:00:00Z,arxiv,, 82144,https://arxiv.org/abs/2004.03607,TuringAdvice: A Generative and Dynamic Evaluation of Language Use,"['Rowan Zellers', 'Ari Holtzman', 'Elizabeth Clark', 'Lianhui Qin', 'Ali Farhadi', 'Yejin Choi']",2020-04-07T00:00:00Z,arxiv,, 82166,https://arxiv.org/abs/2104.08691,The Power of Scale for Parameter-Efficient Prompt Tuning,"['Brian Lester', 'Rami Al-Rfou', 'Noah Constant']",2021-04-18T00:00:00Z,arxiv,, 82188,https://arxiv.org/abs/2206.08932,Putting GPT-3's Creativity to the (Alternative Uses) Test,"['Claire Stevenson', 'Iris Smal', 'Matthijs Baas', 'Raoul Grasman', 'Han van der Maas']",2022-06-10T00:00:00Z,arxiv,, 82204,https://arxiv.org/abs/2106.00655,The Impact of Network Connectivity on Collective Learning,"['Michael Crosscombe', 'Jonathan Lawry']",2021-06-01T00:00:00Z,arxiv,, 82224,https://arxiv.org/abs/2104.00163,DEALIO: Data-Efficient Adversarial Learning for Imitation from Observation,"['Faraz Torabi', 'Garrett Warnell', 'Peter Stone']",2021-03-31T00:00:00Z,arxiv,, 82234,https://arxiv.org/abs/2204.00323,Graph-in-Graph (GiG): Learning interpretable latent graphs in non-Euclidean domain for biological and healthcare applications,"['Kamilia Mullakaeva', 'Luca Cosmo', 'Anees Kazi', 'Seyed-Ahmad Ahmadi', 'Nassir Navab', 'Michael M. Bronstein']",2022-04-01T00:00:00Z,arxiv,, 82252,https://arxiv.org/abs/1908.04734,Reward Tampering Problems and Solutions in Reinforcement Learning: A Causal Influence Diagram Perspective,"['Tom Everitt', 'Marcus Hutter', 'Ramana Kumar', 'Victoria Krakovna']",2019-08-13T00:00:00Z,arxiv,, 82283,https://arxiv.org/abs/2106.07998,Revisiting the Calibration of Modern Neural Networks,"['Matthias Minderer', 'Josip Djolonga', 'Rob Romijnders', 'Frances Hubis', 'Xiaohua Zhai', 'Neil Houlsby', 'Dustin Tran', 'Mario Lucic']",2021-06-15T00:00:00Z,arxiv,, 82309,https://arxiv.org/abs/1910.06428,Restoration of marker occluded hematoxylin and eosin stained whole slide histology images using generative adversarial networks,"['Bairavi Venkatesh', 'Tosha Shah', 'Antong Chen', 'Soheil Ghafurian']",2019-10-14T00:00:00Z,arxiv,, 82324,https://arxiv.org/abs/2012.08630,Open Problems in Cooperative AI,"['Allan Dafoe', 'Edward Hughes', 'Yoram Bachrach', 'Tantum Collins', 'Kevin R. McKee', 'Joel Z. Leibo', 'Kate Larson', 'Thore Graepel']",2020-01-01T00:00:00Z,arxiv,, 82369,https://arxiv.org/abs/2006.14804,Widening the Pipeline in Human-Guided Reinforcement Learning with Explanation and Context-Aware Data Augmentation,"['Lin Guan', 'Mudit Verma', 'Sihang Guo', 'Ruohan Zhang', 'Subbarao Kambhampati']",2020-06-26T00:00:00Z,arxiv,, 82389,https://arxiv.org/abs/1907.11274,Reducing malicious use of synthetic media research: Considerations and potential release practices for machine learning,"['Aviv Ovadya', 'Jess Whittlestone']",2019-07-25T00:00:00Z,arxiv,, 82415,https://arxiv.org/abs/2106.09667,Poisoning and Backdooring Contrastive Learning,['Nicholas Carlini'],2021-06-17T17:20:45Z,arxiv,, 82433,https://arxiv.org/abs/2202.03597,Local Explanations for Reinforcement Learning,"['Ronny Luss', 'Amit Dhurandhar', 'Miao Liu']",2022-02-08T00:00:00Z,arxiv,, 82449,https://arxiv.org/abs/1803.04263,The Challenge of Crafting Intelligible Intelligence,"['Daniel S. Weld', 'Gagan Bansal']",2018-03-09T00:00:00Z,arxiv,, 82477,https://arxiv.org/abs/1404.0854,Enabling Automatic Certification of Online Auctions,"['Wei Bai', 'Emmanuel M. 
Tadjouddine', 'Yu Guo']",2014-04-03T00:00:00Z,arxiv,, 82490,https://arxiv.org/abs/2202.03172,The 6-Ds of Creating AI-Enabled Systems,['John Piorkowski'],2022-02-04T00:00:00Z,arxiv,, 82527,https://arxiv.org/abs/2103.13107,W2WNet: a two-module probabilistic Convolutional Neural Network with embedded data cleansing functionality,"['Francesco Ponzio', 'Enrico Macii', 'Elisa Ficarra', 'Santa Di Cataldo']",2021-03-24T00:00:00Z,arxiv,, 82546,https://arxiv.org/abs/1911.01547,On the Measure of Intelligence,['François Chollet'],2019-11-05T00:00:00Z,arxiv,, 82562,https://arxiv.org/abs/2207.13243,"Toward Transparent AI: A Survey on Interpreting the Inner Structures of Deep Neural Networks","['Tilman Räuker', 'Anson Ho', 'Stephen Casper', 'Dylan Hadfield-Menell']",2022-07-27T01:59:13Z,arxiv,, 82593,https://arxiv.org/abs/2004.13654,Pitfalls of learning a reward function online,"['Stuart Armstrong', 'Jan Leike', 'Laurent Orseau', 'Shane Legg']",2020-04-28T00:00:00Z,arxiv,, 82608,https://arxiv.org/abs/2205.11275,RL with KL penalties is better viewed as Bayesian inference,['Tomasz Korbak'],2022-05-23T12:47:13Z,arxiv,, 82626,https://arxiv.org/abs/1810.12644,The Responsibility Quantification (ResQu) Model of Human Interaction with Automation,"['Nir Douer', 'Joachim Meyer']",2018-10-30T00:00:00Z,arxiv,, 82643,https://arxiv.org/abs/1811.09716,"Robustness via curvature regularization, and vice versa","['Seyed-Mohsen Moosavi-Dezfooli', 'Alhussein Fawzi', 'Jonathan Uesato', 'Pascal Frossard']",2018-11-23T00:00:00Z,arxiv,, 82661,https://arxiv.org/abs/1912.00782,The relationship between trust in AI and trustworthy machine learning technologies,"['Ehsan Toreini', 'Mhairi Aitken', 'Kovila Coopamootoo', 'Karen Elliott', 'Carlos Gonzalez Zelaya', 'Aad van Moorsel']",2019-11-27T00:00:00Z,arxiv,, 82679,https://arxiv.org/abs/cond-mat/0412004,"Power laws, Pareto distributions and Zipf’s law",['M. E. J. 
Newman'],2004-12-01T04:34:56Z,arxiv,, 82695,https://arxiv.org/abs/2003.07305,DisCor: Corrective Feedback in Reinforcement Learning via Distribution Correction,"['Aviral Kumar', 'Abhishek Gupta', 'Sergey Levine']",2020-03-16T00:00:00Z,arxiv,, 82712,https://arxiv.org/abs/1901.02527,Robust Change Captioning,"['Dong Huk Park', 'Trevor Darrell', 'Anna Rohrbach']",2019-01-08T00:00:00Z,arxiv,, 82733,https://arxiv.org/abs/1807.03819,Universal Transformers,"['Mostafa Dehghani', 'Stephan Gouws', 'Oriol Vinyals', 'Jakob Uszkoreit', 'Łukasz Kaiser']",2018-07-10T00:00:00Z,arxiv,, 82756,https://arxiv.org/abs/1801.09344,Certified Defenses against Adversarial Examples,"['Aditi Raghunathan', 'Jacob Steinhardt', 'Percy Liang']",2018-01-01T00:00:00Z,arxiv,, 82776,https://arxiv.org/abs/2012.07421,Wilds: A Benchmark of in-the-Wild Distribution Shifts,"['Pang Wei Koh', 'Shiori Sagawa']",2020-12-14T11:14:56Z,arxiv,, 82792,https://arxiv.org/abs/1810.11116,Mimetic vs Anchored Value Alignment in Artificial Intelligence,"['Tae Wan Kim', 'Thomas Donaldson', 'John Hooker']",2018-10-25T00:00:00Z,arxiv,, 82815,https://arxiv.org/abs/2206.08593,Automatic Correction of Human Translations,"['Jessy Lin', 'Geza Kovacs', 'Aditya Shastry', 'Joern Wuebker', 'John DeNero']",2022-08-14T00:00:00Z,arxiv,, 82831,https://arxiv.org/abs/1607.00656,A Hybrid POMDP-BDI Agent Architecture with Online Stochastic Planning and Plan Caching,"['Gavin Rens', 'Deshendran Moodley']",2016-07-03T00:00:00Z,arxiv,, 82865,https://arxiv.org/abs/2003.00260,On Safety Assessment of Artificial Intelligence,"['Jens Braband', 'Hendrik Schäbe']",2020-02-29T00:00:00Z,arxiv,, 82892,https://arxiv.org/abs/1810.01257,Near-Optimal Representation Learning for Hierarchical Reinforcement Learning,"['Ofir Nachum', 'Shixiang Gu', 'Honglak Lee', 'Sergey Levine']",2018-10-02T00:00:00Z,arxiv,, 82908,https://arxiv.org/abs/2103.00082,Secure Evaluation of Knowledge Graph Merging Gain,"['Leandro Eichenberger', 'Michael Cochez', 'Benjamin Heitmann', 'Stefan Decker']",2021-02-26T00:00:00Z,arxiv,, 82932,https://arxiv.org/abs/1610.00850,Comparing Human-Centric and Robot-Centric Sampling for Robot Deep Learning from Demonstrations,"['Michael Laskey', 'Caleb Chuck', 'Jonathan Lee', 'Jeffrey Mahler', 'Sanjay Krishnan', 'Kevin Jamieson', 'Anca Dragan', 'Ken Goldberg']",2017-08-14T00:00:00Z,arxiv,, 82951,https://arxiv.org/abs/2110.07719,Certified Patch Robustness via Smoothed Vision Transformers,"['Hadi Salman', 'Saachi Jain', 'Eric Wong', 'Aleksander Mądry']",2021-10-11T00:00:00Z,arxiv,, 82972,https://arxiv.org/abs/1810.09136,Do Deep Generative Models Know What They Don't Know?,"['Eric Nalisnick', 'Akihiro Matsukawa', 'Yee Whye Teh', 'Dilan Gorur', 'Balaji Lakshminarayanan']",2018-10-22T00:00:00Z,arxiv,, 82994,https://arxiv.org/abs/2201.08111,Safety-Aware Multi-Agent Apprenticeship Learning,['Junchen Zhao'],2022-01-20T00:00:00Z,arxiv,, 83013,https://arxiv.org/abs/2007.09540,Multi-Principal Assistance Games,"['Arnaud Fickinger', 'Simon Zhuang', 'Dylan Hadfield-Menell', 'Stuart Russell']",2020-08-14T00:00:00Z,arxiv,, 83037,https://arxiv.org/abs/2109.08273,ThriftyDAgger: Budget-Aware Novelty and Risk Gating for Interactive Imitation Learning,"['Ryan Hoque', 'Ashwin Balakrishna', 'Ellen Novoseller', 'Albert Wilcox', 'Daniel S. 
Brown', 'Ken Goldberg']",2021-09-17T00:00:00Z,arxiv,, 83055,https://arxiv.org/abs/2108.07732,Program Synthesis with Large Language Models,"['Jacob Austin', 'Augustus Odena', 'Maxwell Nye', 'Maarten Bosma', 'Henryk Michalewski', 'David Dohan', 'Ellen Jiang', 'Carrie Cai', 'Michael Terry', 'Quoc Le', 'Charles Sutton']",2021-08-16T00:00:00Z,arxiv,, 83091,https://arxiv.org/abs/1706.03762,Attention Is All You Need,"['Ashish Vaswani', 'Noam Shazeer', 'Niki Parmar', 'Jakob Uszkoreit', 'Llion Jones', 'Aidan N. Gomez', 'Lukasz Kaiser', 'Illia Polosukhin']",2017-06-12T00:00:00Z,arxiv,, 83102,https://arxiv.org/abs/1907.02140,Integration of Imitation Learning using GAIL and Reinforcement Learning using Task-achievement Rewards via Probabilistic Graphical Model,"['Akira Kinose', 'Tadahiro Taniguchi']",2019-07-03T00:00:00Z,arxiv,, 83119,https://arxiv.org/abs/1711.01927,A Foundry of Human Activities and Infrastructures,"['Robert B. Allen', 'Eunsang Yang', 'Tatsawan Timakum']",2017-10-31T00:00:00Z,arxiv,, 83145,https://arxiv.org/abs/2207.08651,Boolean Decision Rules for Reinforcement Learning Policy Summarisation,"['James McCarthy', 'Rahul Nair', 'Elizabeth Daly', 'Radu Marinescu', 'Ivana Dusparic']",2022-07-18T00:00:00Z,arxiv,, 83157,https://arxiv.org/abs/2011.05623,"Fooling the primate brain with minimal, targeted image manipulation","['Li Yuan', 'Will Xiao', 'Giorgia Dellaferrera', 'Gabriel Kreiman', 'Francis E. H. Tay', 'Jiashi Feng', 'Margaret S. Livingstone']",2020-11-11T00:00:00Z,arxiv,, 83175,https://arxiv.org/abs/1807.00366,Beyond Winning and Losing: Modeling Human Motivations and Behaviors Using Inverse Reinforcement Learning,"['Baoxiang Wang', 'Tongfang Sun', 'Xianjun Sam Zheng']",2018-07-01T00:00:00Z,arxiv,, 83190,https://arxiv.org/abs/1805.11783,To Trust Or Not To Trust A Classifier,"['Heinrich Jiang', 'Been Kim', 'Melody Y. Guan', 'Maya Gupta']",2018-05-30T00:00:00Z,arxiv,, 83205,https://arxiv.org/abs/2008.12146,Avoiding Negative Side Effects due to Incomplete Knowledge of AI Systems,"['Sandhya Saisubramanian', 'Shlomo Zilberstein', 'Ece Kamar']",2020-01-01T00:00:00Z,arxiv,, 83249,https://arxiv.org/abs/2203.10807,ViM: Out-Of-Distribution with Virtual-logit Matching,['Haoqi Wang'],2022-03-21T08:56:55Z,arxiv,, 83260,https://arxiv.org/abs/2110.13136,What Would Jiminy Cricket Do? 
Towards Agents That Behave Morally,"['Dan Hendrycks', 'Mantas Mazeika', 'Andy Zou', 'Sahil Patel', 'Christine Zhu', 'Jesus Navarro', 'Dawn Song', 'Bo Li', 'Jacob Steinhardt']",2021-10-25T00:00:00Z,arxiv,, 83276,https://arxiv.org/abs/1510.03370,Asymptotic Logical Uncertainty and The Benford Test,"['Scott Garrabrant', 'Siddharth Bhaskar', 'Abram Demski', 'Joanna Garrabrant', 'George Koleszarik', 'Evan Lloyd']",2015-10-12T00:00:00Z,arxiv,, 83290,https://arxiv.org/abs/2007.08124,LogiQA: A Challenge Dataset for Machine Reading Comprehension with Logical Reasoning,"['Jian Liu', 'Leyang Cui', 'Hanmeng Liu', 'Dandan Huang', 'Yile Wang', 'Yue Zhang']",2020-07-16T00:00:00Z,arxiv,, 83307,https://arxiv.org/abs/1808.02633,Courteous Autonomous Cars,"['Liting Sun', 'Wei Zhan', 'Masayoshi Tomizuka', 'Anca D. Dragan']",2018-08-14T00:00:00Z,arxiv,, 83325,https://arxiv.org/abs/1605.09304,Synthesizing the preferred inputs for neurons in neural networks via deep generator networks,['Anh Nguyen'],2016-05-30T16:22:54Z,arxiv,, 83343,https://arxiv.org/abs/1705.05427,Repeated Inverse Reinforcement Learning,"['Kareem Amin', 'Nan Jiang', 'Satinder Singh']",2017-08-14T00:00:00Z,arxiv,, 83378,https://arxiv.org/abs/1912.09729,Mastering Complex Control in MOBA Games with Deep Reinforcement Learning,"['Deheng Ye', 'Zhao Liu', 'Mingfei Sun', 'Bei Shi', 'Peilin Zhao', 'Hao Wu', 'Hongsheng Yu', 'Shaojie Yang', 'Xipeng Wu', 'Qingwei Guo', 'Qiaobo Chen', 'Yinyuting Yin', 'Hao Zhang', 'Tengfei Shi', 'Liang Wang', 'Qiang Fu', 'Wei Yang', 'Lanxiao Huang']",2019-12-20T00:00:00Z,arxiv,, 83415,https://arxiv.org/abs/2010.14496,Generative Temporal Difference Learning for Infinite-Horizon Prediction,"['Michael Janner', 'Igor Mordatch', 'Sergey Levine']",2020-10-27T00:00:00Z,arxiv,, 83434,https://arxiv.org/abs/2202.10122,HCMD-zero: Learning Value Aligned Mechanisms from Data,"['Jan Balaguer', 'Raphael Koster', 'Ari Weinstein', 'Lucy Campbell-Gillingham', 'Christopher Summerfield', 'Matthew Botvinick', 'Andrea Tacchetti']",2022-02-21T00:00:00Z,arxiv,, 83448,https://arxiv.org/abs/1807.06732,Motivating the Rules of the Game for Adversarial Example Research,['Justin Gilmer'],2018-01-01T00:00:00Z,arxiv,, 83478,https://arxiv.org/abs/2202.11960,All You Need Is Supervised Learning: From Imitation Learning to Meta-RL With Upside Down RL,"['Kai Arulkumaran', 'Dylan R. Ashley', 'Jürgen Schmidhuber', 'Rupesh K. Srivastava']",2022-02-24T00:00:00Z,arxiv,, 83497,https://arxiv.org/abs/1412.6572,"Explaining and Harnessing Adversarial Examples","['Ian J. Goodfellow', 'Jonathon Shlens', 'Christian Szegedy']",2014-12-20T01:17:12Z,arxiv,, 83517,https://arxiv.org/abs/2210.15767,"Gathering Strength, Gathering Storms: The One Hundred Year Study on Artificial Intelligence (AI100) 2021 Study Panel Report","['Michael L. Littman', 'Ifeoma Ajunwa', 'Guy Berger', 'Craig Boutilier', 'Morgan Currie', 'Finale Doshi-Velez', 'Gillian Hadfield', 'Michael C. 
Horowitz', 'Charles Isbell', 'Hiroaki Kitano', 'Karen Levy', 'Terah Lyons', 'Melanie Mitchell', 'Julie Shah', 'Steven Sloman', 'Shannon Vallor', 'Toby Walsh']",2022-10-27T00:00:00Z,arxiv,, 83575,https://arxiv.org/abs/1911.11132,Scaling Out-of-Distribution Detection for Real-World Settings,"['Dan Hendrycks', 'Steven Basart', 'Mantas Mazeika', 'Andy Zou', 'Joe Kwon', 'Mohammadreza Mostajabi', 'Jacob Steinhardt', 'Dawn Song']",2019-11-25T00:00:00Z,arxiv,, 83598,https://arxiv.org/abs/1805.01772,Dynamic Control Flow in Large-Scale Machine Learning,"['Yuan Yu', 'Martín Abadi', 'Paul Barham', 'Eugene Brevdo', 'Mike Burrows', 'Andy Davis', 'Jeff Dean', 'Sanjay Ghemawat', 'Tim Harley', 'Peter Hawkins', 'Michael Isard', 'Manjunath Kudlur', 'Rajat Monga', 'Derek Murray', 'Xiaoqiang Zheng']",2018-05-04T00:00:00Z,arxiv,, 83619,https://arxiv.org/abs/2303.16755,Training Language Models with Language Feedback at Scale,"['Jérémy Scheurer', 'Jon Ander Campos', 'Tomasz Korbak', 'Jun Shern Chan', 'Angelica Chen', 'Kyunghyun Cho', 'Ethan Perez']",2023-03-28T17:04:15Z,arxiv,, 83640,https://arxiv.org/abs/2006.12655,Perceptual Adversarial Robustness: Defense Against Unseen Threat Models,"['Cassidy Laidlaw', 'Sahil Singla', 'Soheil Feizi']",2021-08-14T00:00:00Z,arxiv,, 83653,https://arxiv.org/abs/1812.01225,Learning from Extrapolated Corrections,"['Jason Y. Zhang', 'Anca D. Dragan']",2018-12-04T00:00:00Z,arxiv,, 83667,https://arxiv.org/abs/2107.03374,Evaluating Large Language Models Trained on Code,"['Mark Chen', 'Jerry Tworek', 'Heewoo Jun', 'Qiming Yuan', 'Henrique Ponde de Oliveira Pinto', 'Jared Kaplan', 'Harri Edwards', 'Yuri Burda', 'Nicholas Joseph', 'Greg Brockman', 'Alex Ray', 'Raul Puri', 'Gretchen Krueger', 'Michael Petrov', 'Heidy Khlaaf', 'Girish Sastry', 'Pamela Mishkin', 'Brooke Chan', 'Scott Gray', 'Nick Ryder', 'Mikhail Pavlov', 'Alethea Power', 'Lukasz Kaiser', 'Mohammad Bavarian', 'Clemens Winter', 'Philippe Tillet', 'Felipe Petroski Such', 'Dave Cummings', 'Matthias Plappert', 'Fotios Chantzis', 'Elizabeth Barnes', 'Ariel Herbert-Voss', 'William Hebgen Guss', 'Alex Nichol', 'Alex Paino', 'Nikolas Tezak', 'Jie Tang', 'Igor Babuschkin', 'Suchir Balaji', 'Shantanu Jain', 'William Saunders', 'Christopher Hesse', 'Andrew N. Carr', 'Jan Leike', 'Josh Achiam', 'Vedant Misra', 'Evan Morikawa', 'Alec Radford', 'Matthew Knight', 'Miles Brundage', 'Mira Murati', 'Katie Mayer', 'Peter Welinder', 'Bob McGrew', 'Dario Amodei', 'Sam McCandlish', 'Ilya Sutskever', 'Wojciech Zaremba']",2021-07-07T00:00:00Z,arxiv,, 83697,https://arxiv.org/abs/2012.02096,Emergent Complexity and Zero-shot Transfer via Unsupervised Environment Design,"['Michael Dennis', 'Natasha Jaques', 'Eugene Vinitsky', 'Alexandre Bayen', 'Stuart Russell', 'Andrew Critch', 'Sergey Levine']",2020-08-14T00:00:00Z,arxiv,, 83712,https://arxiv.org/abs/cs/0606010,A Decision-Making Support System Based on Know-How,"['V. V. Kryssanov', 'V. A. Abramov', 'Y. Fukuda', 'K. 
Konishi']",2006-06-02T00:00:00Z,arxiv,, 83733,https://arxiv.org/abs/2005.10297,"Causality, Responsibility and Blame in Team Plans","['Natasha Alechina', 'Joseph Y. Halpern', 'Brian Logan']",2017-08-14T00:00:00Z,arxiv,, 83750,https://arxiv.org/abs/1806.09030,On Adversarial Examples for Character-Level Neural Machine Translation,"['Javid Ebrahimi', 'Daniel Lowd', 'Dejing Dou']",2018-06-23T00:00:00Z,arxiv,, 83769,https://arxiv.org/abs/2002.10657,Coherent Gradients: An Approach to Understanding Generalization in Gradient Descent-based Optimization,['Satrajit Chatterjee'],2020-02-25T00:00:00Z,arxiv,, 83786,https://arxiv.org/abs/2203.04098,COLA: Consistent Learning with Opponent-Learning Awareness,"['Timon Willi', 'Alistair Letcher', 'Johannes Treutlein', 'Jakob Foerster']",2022-08-14T00:00:00Z,arxiv,, 83805,https://arxiv.org/abs/1905.11108,SQIL: Imitation Learning via Regularized Behavioral Cloning,"['Siddharth Reddy', 'Anca D. Dragan', 'Sergey Levine']",2020-08-15T00:00:00Z,arxiv,, 83822,https://arxiv.org/abs/1805.00909,Reinforcement Learning and Control as Probabilistic Inference: Tutorial and Review,['Sergey Levine'],2018-05-02T00:00:00Z,arxiv,, 83850,https://arxiv.org/abs/2110.01311,Learning to Assist Agents by Observing Them,"['Antti Keurulainen', 'Isak Westerlund', 'Samuel Kaski', 'Alexander Ilin']",2021-10-04T00:00:00Z,arxiv,, 83870,https://arxiv.org/abs/2008.02790,Decoupling Exploration and Exploitation for Meta-Reinforcement Learning without Sacrifices,"['Evan Zheran Liu', 'Aditi Raghunathan', 'Percy Liang', 'Chelsea Finn']",2020-08-06T00:00:00Z,arxiv,, 83883,https://arxiv.org/abs/1805.08010,Where Do You Think You’re Going?: Inferring Beliefs about Dynamics from Behavior,"['Siddharth Reddy', 'Anca D. Dragan', 'Sergey Levine']",2018-08-15T00:00:00Z,arxiv,, 83907,https://arxiv.org/abs/1807.03748,Representation Learning with Contrastive Predictive Coding,"['Aaron van den Oord', 'Yazhe Li', 'Oriol Vinyals']",2018-07-10T00:00:00Z,arxiv,, 83926,https://arxiv.org/abs/2204.05151,Metaethical Perspectives on 'Benchmarking' AI Ethics,"['Travis LaCroix', 'Alexandra Sasha Luccioni']",2022-04-11T00:00:00Z,arxiv,, 83950,https://arxiv.org/abs/1904.06866,Predicting human decisions with behavioral theories and machine learning,"['Ori Plonsky', 'Reut Apel', 'Eyal Ert', 'Moshe Tennenholtz', 'David Bourgin', 'Joshua C. Peterson', 'Daniel Reichman', 'Thomas L. Griffiths', 'Stuart J. Russell', 'Evan C. Carter', 'James F. Cavanagh', 'Ido Erev']",2019-04-15T00:00:00Z,arxiv,, 83972,https://arxiv.org/abs/2107.04457,Aligning an optical interferometer with beam divergence control and continuous action space,"['Stepan Makarenko', 'Dmitry Sorokin', 'Alexander Ulanov', 'A. I. Lvovsky']",2021-07-09T00:00:00Z,arxiv,, 83999,https://arxiv.org/abs/2005.13635,AI Forensics: Did the Artificial Intelligence System Do It? 
Why?,"['Johannes Schneider', 'Frank Breitinger']",2020-05-27T00:00:00Z,arxiv,, 84023,https://arxiv.org/abs/2107.08981,Hierarchical Few-Shot Imitation with Skill Transition Models,"['Kourosh Hakhamaneshi', 'Ruihan Zhao', 'Albert Zhan', 'Pieter Abbeel', 'Michael Laskin']",2022-08-14T00:00:00Z,arxiv,, 84045,https://arxiv.org/abs/2203.11677,Robust Action Gap Increasing with Clipped Advantage Learning,"['Zhe Zhang', 'Yaozhong Gan', 'Xiaoyang Tan']",2022-03-20T00:00:00Z,arxiv,, 84056,https://arxiv.org/abs/2110.08731,Improving End-To-End Modeling for Mispronunciation Detection with Effective Augmentation Mechanisms,"['Tien-Hong Lo', 'Yao-Ting Sung', 'Berlin Chen']",2021-10-17T00:00:00Z,arxiv,, 84072,https://arxiv.org/abs/2207.10192,Building Human Values into Recommender Systems: An Interdisciplinary Synthesis,"['Jonathan Stray', 'Alon Halevy', 'Parisa Assar', 'Dylan Hadfield-Menell', 'Craig Boutilier', 'Amar Ashar', 'Lex Beattie', 'Michael Ekstrand', 'Claire Leibowicz', 'Connie Moon Sehat', 'Sara Johansen', 'Lianne Kerlin', 'David Vickrey', 'Spandana Singh', 'Sanne Vrijenhoek', 'Amy Zhang', 'McKane Andrus', 'Natali Helberger', 'Polina Proutskova', 'Tanushree Mitra', 'Nina Vasan']",2022-08-14T00:00:00Z,arxiv,, 84108,https://arxiv.org/abs/1805.12387,Agents and Devices: A Relative Definition of Agency,"['Laurent Orseau', 'Simon McGregor McGill', 'Shane Legg']",2018-05-31T00:00:00Z,arxiv,, 84130,https://arxiv.org/abs/2103.05247,Pretrained Transformers as Universal Computation Engines,"['Kevin Lu', 'Aditya Grover', 'Pieter Abbeel', 'Igor Mordatch']",2021-03-09T00:00:00Z,arxiv,, 84150,https://arxiv.org/abs/1810.04053,The 30-Year Cycle In The AI Debate,['Jean-Marie Chauvet'],2018-10-08T00:00:00Z,arxiv,, 84178,https://arxiv.org/abs/2008.07284,Forward and inverse reinforcement learning sharing network weights and hyperparameters,"['Eiji Uchibe', 'Kenji Doya']",2020-08-17T00:00:00Z,arxiv,, 84197,https://arxiv.org/abs/1907.04649,Quantifying the pathways to life using assembly spaces,"['Stuart M. Marshall', 'Douglas Moore', 'Alastair R. G. Murray', 'Sara I. Walker', 'Leroy Cronin']",2019-07-06T00:00:00Z,arxiv,, 84211,https://arxiv.org/abs/1912.05453,Value-of-Information based Arbitration between Model-based and Model-free Control,"['Krishn Bera', 'Yash Mandilwar', 'Bapi Raju']",2019-12-08T00:00:00Z,arxiv,, 84227,https://arxiv.org/abs/1901.01753,Paired Open-Ended Trailblazer (POET): Endlessly Generating Increasingly Complex and Diverse Learning Environments and Their Solutions,"['Rui Wang', 'Joel Lehman', 'Jeff Clune', 'Kenneth O. 
Stanley']",2019-01-07T00:00:00Z,arxiv,, 84248,https://arxiv.org/abs/2112.00861,A General Language Assistant as a Laboratory for Alignment,"['Amanda Askell', 'Yuntao Bai', 'Anna Chen', 'Dawn Drain', 'Deep Ganguli', 'Tom Henighan', 'Andy Jones', 'Nicholas Joseph', 'Ben Mann', 'Nova DasSarma', 'Nelson Elhage', 'Zac Hatfield-Dodds', 'Danny Hernandez', 'Jackson Kernion', 'Kamal Ndousse', 'Catherine Olsson', 'Dario Amodei', 'Tom Brown', 'Jack Clark', 'Sam McCandlish', 'Chris Olah', 'Jared Kaplan']",2021-12-01T00:00:00Z,arxiv,, 84271,https://arxiv.org/abs/2203.01441,3D Common Corruptions and Data Augmentation,"['Oğuzhan Fatih Kar', 'Teresa Yeo', 'Andrei Atanov', 'Amir Zamir']",2022-03-02T00:00:00Z,arxiv,, 84290,https://arxiv.org/abs/1811.04350,"Towards Governing Agent's Efficacy: Action-Conditional β-VAE for Deep Transparent Reinforcement Learning","['John Yang', 'Gyujeong Lee', 'Minsung Hyun', 'Simyung Chang', 'Nojun Kwak']",2018-11-11T00:00:00Z,arxiv,, 84310,https://arxiv.org/abs/2105.06551,Axes for Sociotechnical Inquiry in AI Research,"['Sarah Dean', 'Thomas Krendl Gilbert', 'Nathan Lambert', 'Tom Zick']",2021-04-26T00:00:00Z,arxiv,, 84333,https://arxiv.org/abs/2106.00661,Reward is Enough for Convex MDPs,"['Tom Zahavy', ""Brendan O'Donoghue"", 'Guillaume Desjardins', 'Satinder Singh']",2021-08-14T00:00:00Z,arxiv,, 84361,https://arxiv.org/abs/1903.06758,Algorithms for Verifying Deep Neural Networks,"['Changliu Liu', 'Tomer Arnon', 'Christopher Lazarus', 'Christopher Strong', 'Clark Barrett', 'Mykel J. Kochenderfer']",2019-03-15T00:00:00Z,arxiv,, 84383,https://arxiv.org/abs/2111.03026,B-Pref: Benchmarking Preference-Based Reinforcement Learning,"['Kimin Lee', 'Laura Smith', 'Anca Dragan', 'Pieter Abbeel']",2021-11-04T00:00:00Z,arxiv,, 84400,https://arxiv.org/abs/2007.00251,Unifying Model Explainability and Robustness via Machine-Checkable Concepts,"['Vedant Nanda', 'Till Speicher', 'John P. Dickerson', 'Krishna P. Gummadi', 'Muhammad Bilal Zafar']",2020-07-01T00:00:00Z,arxiv,, 84417,https://arxiv.org/abs/1908.07613,Implications of Quantum Computing for Artificial Intelligence alignment research,"['Jaime Sevilla', 'Pablo Moreno']",2019-08-19T00:00:00Z,arxiv,, 84446,https://arxiv.org/abs/1807.11655,Security and Privacy Issues in Deep Learning,"['Ho Bae', 'Jaehee Jang', 'Dahuin Jung', 'Hyemi Jang', 'Heonseok Ha', 'Hyungyu Lee', 'Sungroh Yoon']",2018-07-31T00:00:00Z,arxiv,, 84477,https://arxiv.org/abs/1806.00667,Sufficient Conditions for Idealised Models to Have No Adversarial Examples: a Theoretical and Empirical Study with Bayesian Neural Networks,"['Yarin Gal', 'Lewis Smith']",2018-06-02T00:00:00Z,arxiv,, 84499,https://arxiv.org/abs/1611.03372,A stochastically verifiable autonomous control architecture with reasoning,"['Paolo Izzo', 'Hongyang Qu', 'Sandor M. 
Veres']",2016-11-10T00:00:00Z,arxiv,, 84520,https://arxiv.org/abs/2204.04681,"Enhancing the Robustness, Efficiency, and Diversity of Differentiable Architecture Search","['Chao Li', 'Jia Ning', 'Han Hu', 'Kun He']",2022-04-10T00:00:00Z,arxiv,, 84540,https://arxiv.org/abs/1904.08166,Analysing Neural Network Topologies: a Game Theoretic Approach,"['Julian Stier', 'Gabriele Gianini', 'Michael Granitzer', 'Konstantin Ziegler']",2019-04-17T00:00:00Z,arxiv,, 84555,https://arxiv.org/abs/1906.10187,Learning to Interactively Learn and Assist,"['Mark Woodward', 'Chelsea Finn', 'Karol Hausman']",2019-06-24T00:00:00Z,arxiv,, 84572,https://arxiv.org/abs/1905.01067,"Deconstructing Lottery Tickets: Zeros, Signs, and the Supermask","['Hattie Zhou', 'Janice Lan', 'Rosanne Liu', 'Jason Yosinski']",2019-05-03T00:00:00Z,arxiv,, 84602,https://arxiv.org/abs/2209.07928,The BLue Amazon Brain (BLAB): A Modular Architecture of Services about the Brazilian Maritime Territory,"['Paulo Pirozelli', 'Ais B. R. Castro', 'Ana Luiza C. de Oliveira', 'André S. Oliveira', 'Flávio N. Cação', 'Igor C. Silveira', 'João G. M. Campos', 'Laura C. Motheo', 'Leticia F. Figueiredo', 'Lucas F. A. O. Pellicer', 'Marcelo A. José', 'Marcos M. José', 'Pedro de M. Ligabue', 'Ricardo S. Grava', 'Rodrigo M. Tavares', 'Vinícius B. Matos', 'Yan V. Sym', 'Anna H. R. Costa', 'Anarosa A. F. Brandão', 'Denis D. Mauá', 'Fabio G. Cozman', 'Sarajane M. Peres']",2022-09-06T00:00:00Z,arxiv,, 84625,https://arxiv.org/abs/1911.04870,Network Classifiers With Output Smoothing,"['Elsa Rizk', 'Roula Nassif', 'Ali H. Sayed']",2019-10-30T00:00:00Z,arxiv,, 84641,https://arxiv.org/abs/1712.04307,AI Safety and Reproducibility: Establishing Robust Foundations for the Neuropsychology of Human Values,"['Gopal P. Sarma', 'Nick J. Hay', 'Adam Safron']",2017-12-08T00:00:00Z,arxiv,, 84657,https://arxiv.org/abs/2105.03360,Finding the unicorn: Predicting early stage startup success through a hybrid intelligence method,"['Dominik Dellermann', 'Nikolaus Lipusch', 'Philipp Ebel', 'Karl Michael Popp', 'Jan Marco Leimeister']",2021-05-07T00:00:00Z,arxiv,, 84673,https://arxiv.org/abs/2008.03525,Non-Adversarial Imitation Learning and its Connections to Adversarial Methods,"['Oleg Arenz', 'Gerhard Neumann']",2020-08-08T00:00:00Z,arxiv,, 84698,https://arxiv.org/abs/2010.14701,Scaling Laws for Autoregressive Generative Modeling,"['Tom Henighan', 'Jared Kaplan', 'Mor Katz', 'Mark Chen', 'Christopher Hesse', 'Jacob Jackson', 'Heewoo Jun', 'Tom B. Brown', 'Prafulla Dhariwal', 'Scott Gray', 'Chris Hallacy', 'Benjamin Mann', 'Alec Radford', 'Aditya Ramesh', 'Nick Ryder', 'Daniel M. Ziegler', 'John Schulman', 'Dario Amodei', 'Sam McCandlish']",2020-10-28T00:00:00Z,arxiv,, 84721,https://arxiv.org/abs/1805.07914,Imitating Latent Policies from Observation,"['Ashley D. Edwards', 'Himanshu Sahni', 'Yannick Schroecker', 'Charles L. Isbell']",2018-05-21T00:00:00Z,arxiv,, 84736,https://arxiv.org/abs/1303.1458,Tradeoffs in Constructing and Evaluating Temporal Influence Diagrams,['Gregory M. Provan'],2013-03-06T00:00:00Z,arxiv,, 84756,https://arxiv.org/abs/2102.02454,Exploring Beyond-Demonstrator via Meta Learning-Based Reward Extrapolation,"['Mingqi Yuan', 'Mao-on Pun']",2021-02-04T00:00:00Z,arxiv,, 84782,https://arxiv.org/abs/1808.08946,Why Self-Attention? 
A Targeted Evaluation of Neural Machine Translation Architectures,"['Gongbo Tang', 'Mathias Müller', 'Annette Rios', 'Rico Sennrich']",2018-08-27T00:00:00Z,arxiv,, 84809,https://arxiv.org/abs/2211.01602,Optimal Behavior Prior: Improving Human-AI Collaboration Through Generalizable Human Models..,"['Mesut Yang', 'Micah Carroll', 'Anca Dragan']",2022-08-14T00:00:00Z,arxiv,, 84830,https://arxiv.org/abs/2201.09694,Scaling Up Knowledge Graph Creation to Large and Heterogeneous Data Sources,"['Enrique Iglesias', 'Samaneh Jozashoori', 'Maria-Esther Vidal']",2022-01-24T00:00:00Z,arxiv,, 84840,https://arxiv.org/abs/2006.07495,Open Questions in Creating Safe Open-ended AI: Tensions Between Control and Creativity,"['Adrien Ecoffet', 'Jeff Clune', 'Joel Lehman']",2020-06-12T00:00:00Z,arxiv,, 84875,https://arxiv.org/abs/1904.12901,Challenges of Real-World Reinforcement Learning,"['Gabriel Dulac-Arnold', 'Daniel Mankowitz', 'Todd Hester']",2019-04-29T00:00:00Z,arxiv,, 84903,https://arxiv.org/abs/2107.01969,The MineRL BASALT Competition on Learning from Human Feedback,"['Rohin Shah', 'Cody Wild', 'Steven H. Wang', 'Neel Alex', 'Brandon Houghton', 'William Guss', 'Sharada Mohanty', 'Anssi Kanervisto', 'Stephanie Milani', 'Nicholay Topin', 'Pieter Abbeel', 'Stuart Russell', 'Anca Dragan']",2021-07-05T00:00:00Z,arxiv,, 84921,https://arxiv.org/abs/1805.07805,Constrained Policy Improvement for Safe and Efficient Reinforcement Learning,"['Elad Sarafian', 'Aviv Tamar', 'Sarit Kraus']",2018-05-20T00:00:00Z,arxiv,, 84945,https://arxiv.org/abs/1206.5264,Apprenticeship Learning using Inverse Reinforcement Learning and Gradient Methods,"['Gergely Neu', 'Csaba Szepesvari']",2012-06-20T00:00:00Z,arxiv,, 84968,https://arxiv.org/abs/1711.05541,Good and safe uses of AI Oracles,"['Stuart Armstrong', ""Xavier O'Rorke""]",2017-11-15T00:00:00Z,arxiv,, 84992,https://arxiv.org/abs/2003.05012,Retrospective Analysis of the 2019 MineRL Competition on Sample Efficient Reinforcement Learning,"['Stephanie Milani', 'Nicholay Topin', 'Brandon Houghton', 'William H. Guss', 'Sharada P. Mohanty', 'Keisuke Nakata', 'Oriol Vinyals', 'Noboru Sean Kuno']",2020-03-10T00:00:00Z,arxiv,, 85021,https://arxiv.org/abs/2204.06117,AdaTest:Reinforcement Learning and Adaptive Sampling for On-chip Hardware Trojan Detection,"['Huili Chen', 'Xinqiao Zhang', 'Ke Huang', 'Farinaz Koushanfar']",2022-04-12T00:00:00Z,arxiv,, 85039,https://arxiv.org/abs/2001.07522,Engineering AI Systems: A Research Agenda,"['Jan Bosch', 'Ivica Crnkovic', 'Helena Holmström Olsson']",2020-01-16T00:00:00Z,arxiv,, 85064,https://arxiv.org/abs/1907.03843,Norms for Beneficial A.I.: A Computational Analysis of the Societal Value Alignment Problem,"['Pedro Fernandes', 'Francisco C. Santos', 'Manuel Lopes']",2019-06-26T00:00:00Z,arxiv,, 85087,https://arxiv.org/abs/2111.09478,Software Engineering for Responsible AI: An Empirical Study and Operationalised Patterns,"['Qinghua Lu', 'Liming Zhu', 'Xiwei Xu', 'Jon Whittle', 'David Douglas', 'Conrad Sanderson']",2021-11-18T00:00:00Z,arxiv,, 85121,https://arxiv.org/abs/1607.00061,Towards A Virtual Assistant That Can Be Taught New Tasks In Any Domain By Its End-Users,"['I. Dan Melamed', 'Nobal B. 
Niraula']",2016-06-30T00:00:00Z,arxiv,, 85137,https://arxiv.org/abs/1810.03642,Fast Context Adaptation via Meta-Learning,"['Luisa M Zintgraf', 'Kyriacos Shiarlis', 'Vitaly Kurin', 'Katja Hofmann', 'Shimon Whiteson']",2018-10-08T00:00:00Z,arxiv,, 85153,https://arxiv.org/abs/1909.01440,LCA: Loss Change Allocation for Neural Network Training,"['Janice Lan', 'Rosanne Liu', 'Hattie Zhou', 'Jason Yosinski']",2019-09-03T00:00:00Z,arxiv,, 85179,https://arxiv.org/abs/2204.11464,Towards Evaluating Adaptivity of Model-Based Reinforcement Learning Methods,"['Yi Wan', 'Ali Rahimi-Kalahroudi', 'Janarthanan Rajendran', 'Ida Momennejad', 'Sarath Chandar', 'Harm van Seijen']",2022-04-25T00:00:00Z,arxiv,, 85201,https://arxiv.org/abs/1904.03606,Extending planning knowledge using ontologies for goal opportunities,"['Mohannad Babli', 'Eva Onaindia', 'Eliseo Marzal']",2019-04-07T00:00:00Z,arxiv,, 85217,https://arxiv.org/abs/2102.09677,Training a Resilient Q-Network against Observational Interference,"['Chao-Han Huck Yang', 'I-Te Danny Hung', 'Yi Ouyang', 'Pin-Yu Chen']",2021-02-18T00:00:00Z,arxiv,, 85232,https://arxiv.org/abs/2204.07254,Methodical Advice Collection and Reuse in Deep Reinforcement Learning,"['Sahir', 'Ercüment İlhan', 'Srijita Das', 'Matthew E. Taylor']",2022-04-14T00:00:00Z,arxiv,, 85246,https://arxiv.org/abs/1705.08439,Thinking Fast and Slow with Deep Learning and Tree Search,"['Thomas Anthony', 'Zheng Tian', 'David Barber']",2017-05-23T00:00:00Z,arxiv,, 85261,https://arxiv.org/abs/1307.7127,"Man and Machine: Questions of Risk, Trust and Accountability in Today's AI Technology",['Piyush Ahuja'],2013-07-26T00:00:00Z,arxiv,, 85293,https://arxiv.org/abs/1803.07612,Generating Multi-Agent Trajectories using Programmatic Weak Supervision,"['Eric Zhan', 'Stephan Zheng', 'Yisong Yue', 'Long Sha', 'Patrick Lucey']",2018-03-20T00:00:00Z,arxiv,, 85306,https://arxiv.org/abs/2208.02246,AdaCat: Adaptive Categorical Discretization for Autoregressive Models.,"['Qiyang (Colin) Li', 'Ajay Jain', 'Pieter Abbeel']",2022-08-14T00:00:00Z,arxiv,, 85321,https://arxiv.org/abs/1908.01007,Improving Deep Reinforcement Learning in Minecraft with Action Advice,"['Spencer Frazier', 'Mark Riedl']",2019-08-02T00:00:00Z,arxiv,, 85338,https://arxiv.org/abs/2012.05876,Neurosymbolic AI: The 3rd Wave,"[""Artur d'Avila Garcez"", 'Luis C. Lamb']",2020-12-10T00:00:00Z,arxiv,, 85373,https://arxiv.org/abs/1902.08265,Quantifying Perceptual Distortion of Adversarial Examples,"['Matt Jordan', 'Naren Manoj', 'Surbhi Goel', 'Alexandros G. 
Dimakis']",2019-02-21T00:00:00Z,arxiv,, 85388,https://arxiv.org/abs/2106.06499,Policy Gradient Bayesian Robust Optimization for Imitation Learning.,"['Zaynah Javed', 'Daniel S', 'Brown', 'Satvik Sharma', 'Jerry Zhu', 'Ashwin Balakrishna', 'Marek Petrik', 'Anca D', 'Dragan', 'Ken Goldberg']",2021-08-14T00:00:00Z,arxiv,, 85406,https://arxiv.org/abs/2201.08115,"Priors, Hierarchy, and Information Asymmetry for Skill Transfer in Reinforcement Learning","['Sasha Salter', 'Kristian Hartikainen', 'Walter Goodwin', 'Ingmar Posner']",2022-01-20T00:00:00Z,arxiv,, 85426,https://arxiv.org/abs/2102.10985,Software Architecture for Next-Generation AI Planning Systems,"['Sebastian Graef', 'Ilche Georgievski']",2021-02-22T00:00:00Z,arxiv,, 85439,https://arxiv.org/abs/2302.03025,"A Toy Model of Universality: Reverse Engineering How Networks Learn Group Operations","['Bilal Chughtai', 'Lawrence Chan', 'Neel Nanda']",2023-02-06T18:59:20Z,arxiv,, 85454,https://arxiv.org/abs/1810.05157,Learning under Misspecified Objective Spaces,"['Andreea Bobu', 'Andrea Bajcsy', 'Jaime F. Fisac', 'Anca D. Dragan']",2018-10-11T00:00:00Z,arxiv,, 85474,https://arxiv.org/abs/1806.10758,A Benchmark for Interpretability Methods in Deep Neural Networks,"['Sara Hooker', 'Dumitru Erhan', 'Pieter-Jan Kindermans', 'Been Kim']",2018-06-28T00:00:00Z,arxiv,, 85490,https://arxiv.org/abs/1810.08174,Establishing Appropriate Trust via Critical States,"['Sandy H. Huang', 'Kush Bhatia', 'Pieter Abbeel', 'Anca D. Dragan']",2018-10-18T00:00:00Z,arxiv,, 85504,https://arxiv.org/abs/1809.02840,Neural Guided Constraint Logic Programming for Program Synthesis,"['Lisa Zhang', 'Gregory Rosenblatt', 'Ethan Fetaya', 'Renjie Liao', 'William E. Byrd', 'Matthew Might', 'Raquel Urtasun', 'Richard Zemel']",2018-09-08T00:00:00Z,arxiv,, 85520,https://arxiv.org/abs/2001.09977,Towards a Human-like Open-Domain Chatbot,"['Daniel Adiwardana', 'Minh-Thang Luong', 'David R. So', 'Jamie Hall', 'Noah Fiedel', 'Romal Thoppilan', 'Zi Yang', 'Apoorv Kulshreshtha', 'Gaurav Nemade', 'Yifeng Lu', 'Quoc V. Le']",2020-01-27T00:00:00Z,arxiv,, 85547,https://arxiv.org/abs/1908.00528,Neural Simplex Architecture,"['Dung T. Phan', 'Radu Grosu', 'Nils Jansen', 'Nicola Paoletti', 'Scott A. Smolka', 'Scott D. Stoller']",2019-08-01T00:00:00Z,arxiv,, 85569,https://arxiv.org/abs/2001.00463,The Offense-Defense Balance of Scientific Knowledge: Does Publishing AI Research Reduce Misuse?,"['Toby Shevlane', 'Allan Dafoe']",2019-12-27T00:00:00Z,arxiv,, 85585,https://arxiv.org/abs/2103.03386,Clusterability in Neural Networks.,"['Daniel Filan', 'Stephen Casper', 'Shlomi Hod', 'Cody Wild', 'Andrew Critch', 'Stuart Russell']",2021-08-14T00:00:00Z,arxiv,, 85606,https://arxiv.org/abs/2209.07636,Improving Language Model Prompting in Support of Semi-autonomous Task Learning,"['James R. Kirk', 'Robert E. Wray', 'Peter Lindes', 'John E. 
Laird']",2022-09-13T00:00:00Z,arxiv,, 85627,https://arxiv.org/abs/1610.02847,Situational Awareness by Risk-Conscious Skills,['Daniel J.\xa0Mankowitz'],2016-10-10T11:01:32Z,arxiv,, 85638,https://arxiv.org/abs/2105.08475,AI and Shared Prosperity,"['Katya Klinova', 'Anton Korinek']",2021-05-18T00:00:00Z,arxiv,, 85654,https://arxiv.org/abs/1903.07291,Semantic Image Synthesis with Spatially-Adaptive Normalization,"['Taesung Park', 'Ming-Yu Liu', 'Ting-Chun Wang', 'Jun-Yan Zhu']",2019-03-18T00:00:00Z,arxiv,, 85678,https://arxiv.org/abs/1907.05447,Grounding Value Alignment with Ethical Principles,"['Tae Wan Kim', 'Thomas Donaldson', 'John Hooker']",2019-07-11T00:00:00Z,arxiv,, 85694,https://arxiv.org/abs/2106.04480,There Is No Turning Back: A Self-Supervised Approach for Reversibility-Aware Reinforcement Learning,"['Nathan Grinsztajn', 'Johan Ferret', 'Olivier Pietquin', 'Philippe Preux', 'Matthieu Geist']",2021-06-08T00:00:00Z,arxiv,, 85715,https://arxiv.org/abs/1910.09508,Multi-agent Hierarchical Reinforcement Learning with Dynamic Termination,"['Dongge Han', 'Wendelin Boehmer', 'Michael Wooldridge', 'Alex Rogers']",2019-10-21T00:00:00Z,arxiv,, 85732,https://arxiv.org/abs/2009.09266,Humans learn too: Better Human-AI Interaction using Optimized Human Inputs,['Johannes Schneider'],2020-09-19T00:00:00Z,arxiv,, 85746,https://arxiv.org/abs/1810.08272,BabyAI: A Platform to Study the Sample Efficiency of Grounded Language Learning,"['Maxime Chevalier-Boisvert', 'Dzmitry Bahdanau', 'Salem Lahlou', 'Lucas Willems', 'Chitwan Saharia', 'Thien Huu Nguyen', 'Yoshua Bengio']",2018-10-18T00:00:00Z,arxiv,, 85767,https://arxiv.org/abs/1212.1625,Testing the AgreementMaker System in the Anatomy Task of OAEI 2012,"['Daniel Faria', 'Catia Pesquita', 'Emanuel Santos', 'Francisco M. Couto', 'Cosmin Stroe', 'Isabel F. Cruz']",2012-12-07T00:00:00Z,arxiv,, 85794,https://arxiv.org/abs/2106.12207,Not all users are the same: Providing personalized explanations for sequential decision making problems,"['Utkarsh Soni', 'Sarath Sreedharan', 'Subbarao Kambhampati']",2021-06-23T00:00:00Z,arxiv,, 85818,https://arxiv.org/abs/1702.03465,Enabling Robots to Communicate their Objectives,"['Sandy H. Huang', 'David Held', 'Pieter Abbeel', 'Anca D. Dragan']",2017-02-11T00:00:00Z,arxiv,, 85835,https://arxiv.org/abs/1606.07092,Artificial Fun: Mapping Minds to the Space of Fun,"['Soenke Ziesche', 'Roman V. Yampolskiy']",2016-06-22T00:00:00Z,arxiv,, 85857,https://arxiv.org/abs/1903.01611,Stabilizing the Lottery Ticket Hypothesis,"['Jonathan Frankle', 'Gintare Karolina Dziugaite', 'Daniel M. Roy', 'Michael Carbin']",2019-03-05T00:00:00Z,arxiv,, 85876,https://arxiv.org/abs/1711.08068,Deterministic Policy Optimization by Combining Pathwise and Score Function Estimators for Discrete Action Spaces,"['Daniel Levy', 'Stefano Ermon']",2017-11-21T00:00:00Z,arxiv,, 85892,https://arxiv.org/abs/1602.04019,Energetics of the brain and AI,['Anders Sandberg'],2016-02-12T00:00:00Z,arxiv,, 85907,https://arxiv.org/abs/1912.01683,Optimal Policies Tend to Seek Power,"['Alexander Matt Turner', 'Logan Smith', 'Rohin Shah', 'Andrew Critch', 'Prasad Tadepalli']",2019-12-03T00:00:00Z,arxiv,, 85927,https://arxiv.org/abs/1602.06462,The Singularity May Never Be Near,['Toby Walsh'],2016-02-20T00:00:00Z,arxiv,, 85951,https://arxiv.org/abs/1808.09572,Cycle-of-Learning for Autonomous Systems from Human Interaction,"['Nicholas R. Waytowich', 'Vinicius G. Goecks', 'Vernon J. 
Lawhern']",2018-08-28T00:00:00Z,arxiv,, 85974,https://arxiv.org/abs/1209.3734,RIO: Minimizing User Interaction in Ontology Debugging,"['Patrick Rodler', 'Kostyantyn Shchekotykhin', 'Philipp Fleiss', 'Gerhard Friedrich']",2012-09-17T00:00:00Z,arxiv,, 85993,https://arxiv.org/abs/1905.09335,Imitation Learning from Video by Leveraging Proprioception,"['Faraz Torabi', 'Garrett Warnell', 'Peter Stone']",2019-05-22T00:00:00Z,arxiv,, 86017,https://arxiv.org/abs/2107.02692,ML-Quadrat & DriotData: A Model-Driven Engineering Tool and a Low-Code Platform for Smart IoT Services,"['Armin Moin', 'Andrei Mituca', 'Moharram Challenger', 'Atta Badii', 'Stephan Günnemann']",2021-07-06T00:00:00Z,arxiv,, 86029,https://arxiv.org/abs/2104.07143,An Interpretability Illusion for BERT,"['Tolga Bolukbasi', 'Adam Pearce', 'Ann Yuan', 'Andy Coenen', 'Emily Reif', 'Fernanda Viégas', 'Martin Wattenberg']",2021-04-14T22:04:48Z,arxiv,, 86048,https://arxiv.org/abs/2101.08153,Shielding Atari Games with Bounded Prescience,"['Mirco Giacobbe', 'Mohammadhosein Hasanbeig', 'Daniel Kroening', 'Hjalmar Wijk']",2021-01-20T00:00:00Z,arxiv,, 86063,https://arxiv.org/abs/1811.06272,"Woulda, Coulda, Shoulda: Counterfactually-Guided Policy Search","['Lars Buesing', 'Theophane Weber', 'Yori Zwols', 'Sebastien Racaniere', 'Arthur Guez', 'Jean-Baptiste Lespiau', 'Nicolas Heess']",2018-11-15T00:00:00Z,arxiv,, 86081,https://arxiv.org/abs/1810.02334,Unsupervised Learning via Meta-Learning,"['Kyle Hsu', 'Sergey Levine', 'Chelsea Finn']",2018-10-04T00:00:00Z,arxiv,, 86092,https://arxiv.org/abs/2205.03824,A Survey on AI Sustainability: Emerging Trends on Learning Algorithms and Research Challenges,"['Zhenghua Chen', 'Min Wu', 'Alvin Chan', 'Xiaoli Li', 'Yew-Soon Ong']",2022-05-08T00:00:00Z,arxiv,, 86143,https://arxiv.org/abs/2103.12142,Combining Reward Information from Multiple Sources,"['Dmitrii Krasheninnikov', 'Rohin Shah', 'Herke van Hoof']",2021-03-22T00:00:00Z,arxiv,, 86166,https://arxiv.org/abs/1810.12715,On the Effectiveness of Interval Bound Propagation for Training Verifiably Robust Models,"['Sven Gowal', 'Krishnamurthy Dvijotham', 'Robert Stanforth', 'Rudy Bunel', 'Chongli Qin', 'Jonathan Uesato', 'Relja Arandjelovic', 'Timothy Mann', 'Pushmeet Kohli']",2018-10-30T00:00:00Z,arxiv,, 86181,https://arxiv.org/abs/2103.05661,On complementing end-to-end human behavior predictors with planning.,"['Liting Sun', 'Xiaogang Jia', 'Anca D', 'Dragan']",2021-08-14T00:00:00Z,arxiv,, 86202,https://arxiv.org/abs/1910.03466,Can We Distinguish Machine Learning from Human Learning?,"['Vicki Bier', 'Paul B. Kantor', 'Gary Lupyan', 'Xiaojin Zhu']",2019-10-08T00:00:00Z,arxiv,, 86225,https://arxiv.org/abs/2108.03298,What Matters in Learning from Offline Human Demonstrations for Robot Manipulation,"['Ajay Mandlekar', 'Danfei Xu', 'Josiah Wong', 'Soroush Nasiriany', 'Chen Wang', 'Rohun Kulkarni', 'Li Fei-Fei', 'Silvio Savarese', 'Yuke Zhu', 'Roberto Martín-Martín']",2021-08-06T00:00:00Z,arxiv,, 86266,https://arxiv.org/abs/2008.01848,Forecasting AI Progress: A Research Agenda,"['Ross Gruetzemacher', 'Florian Dorner', 'Niko Bernaola-Alvarez', 'Charlie Giattino', 'David Manheim']",2020-08-04T00:00:00Z,arxiv,, 86294,https://arxiv.org/abs/1709.03981,Aggregating incoherent agents who disagree,['Richard Pettigrew'],2017-09-12T00:00:00Z,arxiv,, 86311,https://arxiv.org/abs/1811.07807,Deeper Interpretability of Deep Networks,"['Tian Xu', 'Jiayu Zhan', 'Oliver G. B. Garrod', 'Philip H. S. Torr', 'Song-Chun Zhu', 'Robin A. A. Ince', 'Philippe G. 
Schyns']",2018-11-19T00:00:00Z,arxiv,, 86331,https://arxiv.org/abs/2202.04092,Machine Explanations and Human Understanding,"['Chacha Chen', 'Shi Feng', 'Amit Sharma', 'Chenhao Tan']",2022-02-08T00:00:00Z,arxiv,, 86349,https://arxiv.org/abs/2102.00311,Fairness through Social Welfare Optimization,"['Violet Xinying Chen', 'J. N. Hooker']",2021-01-30T00:00:00Z,arxiv,, 86367,https://arxiv.org/abs/1912.07242,More Data Can Hurt for Linear Regression: Sample-wise Double Descent,['Preetum Nakkiran'],2019-12-16T00:00:00Z,arxiv,, 86377,https://arxiv.org/abs/2107.08574,A Modulation Layer to Increase Neural Network Robustness Against Data Quality Issues,"['Mohamed Abdelhack', 'Jiaming Zhang', 'Sandhya Tripathi', 'Bradley A Fritz', 'Daniel Felsky', 'Michael S Avidan', 'Yixin Chen', 'Christopher R King']",2021-07-19T00:00:00Z,arxiv,, 86395,https://arxiv.org/abs/2001.08525,Optimal by Design: Model-Driven Synthesis of Adaptation Strategies for Autonomous Systems,"['Yehia Elrakaiby', 'Paola Spoletini', 'Bashar Nuseibeh']",2020-01-16T00:00:00Z,arxiv,, 86409,https://arxiv.org/abs/2202.03192,Reward is not enough: can we liberate AI from the reinforcement learning paradigm?,['Vacslav Glukhov'],2022-02-03T00:00:00Z,arxiv,, 86432,https://arxiv.org/abs/1602.04450,Bayesian Optimization with Safety Constraints: Safe and Automatic Parameter Tuning in Robotics,"['Felix Berkenkamp1', 'Andreas Krause1', 'and\nAngela P. Schoellig2']",2016-02-14T13:30:43Z,arxiv,, 86453,https://arxiv.org/abs/2104.04670,Adapting Language Models for Zero-shot Learning by Meta-tuning on Dataset and Prompt Collections,"['Ruiqi Zhong', 'Kristy Lee', 'Zheng Zhang', 'Dan Klein']",2021-04-10T00:00:00Z,arxiv,, 86495,https://arxiv.org/abs/1807.05037,Exploring Hierarchy-Aware Inverse Reinforcement Learning,"['Chris Cundy', 'Daniel Filan']",2018-07-13T00:00:00Z,arxiv,, 86512,https://arxiv.org/abs/1906.02715,Visualizing and Measuring the Geometry of BERT,"['Andy Coenen', 'Emily Reif']",2019-06-06T17:33:22Z,arxiv,, 86531,https://arxiv.org/abs/1512.04021,The Rationale behind the Concept of Goal,"['Guido Governatori', 'Francesco Olivieri', 'Simone Scannapieco', 'Antonino Rotolo', 'Matteo Cristani']",2015-12-13T00:00:00Z,arxiv,, 86555,https://arxiv.org/abs/2009.12576,Inverse Rational Control with Partially Observable Continuous Nonlinear Dynamics,"['Minhae Kwon', 'Saurabh Daptardar', 'Paul Schrater', 'Xaq Pitkow']",2020-09-26T00:00:00Z,arxiv,, 86566,https://arxiv.org/abs/1808.01688,Is Robustness the Cost of Accuracy? -- A Comprehensive Study on the Robustness of 18 Deep Image Classification Models,"['Dong Su', 'Huan Zhang', 'Hongge Chen', 'Jinfeng Yi', 'Pin-Yu Chen', 'Yupeng Gao']",2018-08-05T00:00:00Z,arxiv,, 86587,https://arxiv.org/abs/2104.04147,"Artificial intelligence, human rights, democracy, and the rule of law: a primer","['David Leslie', 'Christopher Burr', 'Mhairi Aitken', 'Josh Cowls', 'Michael Katell', 'Morgan Briggs']",2021-04-02T00:00:00Z,arxiv,, 86617,https://arxiv.org/abs/1511.08130,A Roadmap towards Machine Intelligence,['Tomas Mikolov'],2015-11-25T17:32:18Z,arxiv,, 86641,https://arxiv.org/abs/2103.06268,CUAD: An Expert-Annotated NLP Dataset for Legal Contract Review.,"['Dan Hendrycks', 'Collin Burns', 'Anya Chen', 'Spencer Ball']",2021-08-14T00:00:00Z,arxiv,, 86663,https://arxiv.org/abs/2002.04202,Leveraging Rationales to Improve Human Task Performance,"['Devleena Das', 'Sonia Chernova']",2020-02-11T00:00:00Z,arxiv,, 86678,https://arxiv.org/abs/2002.11174,TanksWorld: A Multi-Agent Environment for AI Safety Research,"['Corban G. 
Rivera', 'Olivia Lyons', 'Arielle Summitt', 'Ayman Fatima', 'Ji Pak', 'William Shao', 'Robert Chalmers', 'Aryeh Englander', 'Edward W. Staley', 'I-Jeng Wang', 'Ashley J. Llorens']",2020-02-25T00:00:00Z,arxiv,, 86702,https://arxiv.org/abs/2109.14076,RAFT: A Real-World Few-Shot Text Classification Benchmark,"['Neel Alex', 'Eli Lifland', 'Lewis Tunstall', 'Abhishek Thakur', 'Pegah Maham', 'C. Jess Riedel', 'Emmie Hine', 'Carolyn Ashurst', 'Paul Sedille', 'Alexis Carlier', 'Michael Noetel', 'Andreas Stuhlmüller']",2021-09-28T00:00:00Z,arxiv,, 86728,https://arxiv.org/abs/2110.10819,Shaking the foundations: delusions in sequence models for interaction and control,"['Pedro A. Ortega', 'Markus Kunesch', 'Grégoire Delétang', 'Tim Genewein', 'Jordi Grau-Moya', 'Joel Veness', 'Jonas Buchli', 'Jonas Degrave', 'Bilal Piot', 'Julien Perolat', 'Tom Everitt', 'Corentin Tallec', 'Emilio Parisotto', 'Tom Erez', 'Yutian Chen', 'Scott Reed', 'Marcus Hutter', 'Nando de Freitas', 'Shane Legg']",2021-10-20T00:00:00Z,arxiv,, 86744,https://arxiv.org/abs/cs/0205078,A Spectrum of Applications of Automated Reasoning,['Larry Wos'],2002-05-30T00:00:00Z,arxiv,, 86762,https://arxiv.org/abs/2008.07371,Artificial Intelligence is stupid and causal reasoning won't fix it,['John Mark Bishop'],2020-07-20T00:00:00Z,arxiv,, 86786,https://arxiv.org/abs/1604.08153,Classifying Options for Deep Reinforcement Learning,"['Kai Arulkumaran', 'Nat Dilokthanakul', 'Murray Shanahan', 'Anil Anthony Bharath']",2016-04-27T00:00:00Z,arxiv,, 86799,https://arxiv.org/abs/2307.14324,Evaluating the Moral Beliefs Encoded in LLMs Warning: This paper contains moral scenarios which are controversial and offensive in nature.,['Nino Scherrer'],2023-07-26T17:42:43Z,arxiv,, 86831,https://arxiv.org/abs/1910.07581,Scaling up psychology via Scientific Regret Minimization.,"['Mayank Agrawal', 'Joshua C. Peterson', 'Thomas L. Griffiths']",2020-08-14T00:00:00Z,arxiv,, 86843,https://arxiv.org/abs/1804.08606,Zero-Shot Visual Imitation,"['Deepak Pathak', 'Parsa Mahmoudieh', 'Guanghao Luo', 'Pulkit Agrawal', 'Dian Chen', 'Yide Shentu', 'Evan Shelhamer', 'Jitendra Malik', 'Alexei A. Efros', 'Trevor Darrell']",2018-04-23T00:00:00Z,arxiv,, 86871,https://arxiv.org/abs/2002.09571,Learning to Continually Learn,"['Shawn Beaulieu', 'Lapo Frati', 'Thomas Miconi', 'Joel Lehman', 'Kenneth O. 
Stanley', 'Jeff Clune', 'Nick Cheney']",2020-02-21T00:00:00Z,arxiv,, 86882,https://arxiv.org/abs/2105.12938,Interactive Explanations: Diagnosis and Repair of Reinforcement Learning Based Agent Behaviors,"['Christian Arzate Cruz', 'Takeo Igarashi']",2021-05-27T00:00:00Z,arxiv,, 86904,https://arxiv.org/abs/1510.04931,Bad Universal Priors and Notions of Optimality,"['Jan Leike', 'Marcus Hutter']",2015-10-16T16:22:23Z,arxiv,, 86930,https://arxiv.org/abs/2006.16241,The Many Faces of Robustness: A Critical Analysis of Out-of-Distribution Generalization.,"['Dan Hendrycks', 'Steven Basart', 'Norman Mu', 'Saurav Kadavath', 'Frank Wang', 'Evan Dorundo', 'Rahul Desai', 'Tyler Zhu', 'Samyak Parajuli', 'Mike Guo', 'Dawn Song', 'Jacob Steinhardt', 'Justin Gilmer']",2020-08-14T00:00:00Z,arxiv,, 86957,https://arxiv.org/abs/2007.13544,Combining Deep Reinforcement Learning and Search for Imperfect-Information Games,"['Noam Brown', 'Anton Bakhtin', 'Adam Lerer', 'Qucheng Gong']",2020-07-27T00:00:00Z,arxiv,, 86980,https://arxiv.org/abs/1801.03737,"Counterfactual equivalence for POMDPs, and underlying deterministic environments",['Stuart Armstrong'],2018-01-11T00:00:00Z,arxiv,, 87002,https://arxiv.org/abs/2001.00682,Auditing and Debugging Deep Learning Models via Decision Boundaries: Individual-level and Group-level Analysis,"['Roozbeh Yousefzadeh', ""Dianne P. O'Leary""]",2020-01-03T00:00:00Z,arxiv,, 87025,https://arxiv.org/abs/1704.02882,Dynamic Safe Interruptibility for Decentralized Multi-Agent Reinforcement Learning,"['El Mahdi El Mhamdi', 'Rachid Guerraoui', 'Hadrien Hendrikx', 'Alexandre Maurer']",2017-04-10T00:00:00Z,arxiv,, 87049,https://arxiv.org/abs/astro-ph/0512204,How unlikely is a doomsday catastrophe?,"['Max Tegmark', 'Nick Bostrom']",2005-12-08T00:00:00Z,arxiv,, 87066,https://arxiv.org/abs/2107.06071,aiSTROM -- A roadmap for developing a successful AI strategy,['Dorien Herremans'],2021-06-25T00:00:00Z,arxiv,, 87102,https://arxiv.org/abs/2106.01826,Towards a Mathematical Theory of Abstraction,['Beren Millidge'],2021-06-03T00:00:00Z,arxiv,, 87115,https://arxiv.org/abs/1804.00792,Poison Frogs! Targeted Clean-Label Poisoning Attacks on Neural Networks,"['Ali Shafahi', 'W. Ronny Huang', 'Mahyar Najibi', 'Octavian Suciu', 'Christoph Studer', 'Tudor Dumitras', 'Tom Goldstein']",2018-04-03T00:00:00Z,arxiv,, 87129,https://arxiv.org/abs/2206.03271,On the Effectiveness of Fine-tuning Versus Meta-reinforcement Learning.,"['Zhao Mandi', 'Pieter Abbeel', 'Stephen James']",2022-08-14T00:00:00Z,arxiv,, 87150,https://arxiv.org/abs/2005.01539,Open Loop In Natura Economic Planning,['Spyridon Samothrakis'],2020-05-04T00:00:00Z,arxiv,, 87179,https://arxiv.org/abs/2011.08512,Preventing Repeated Real World AI Failures by Cataloging Incidents: The AI Incident Database,['Sean McGregor'],2020-11-17T00:00:00Z,arxiv,, 87201,https://arxiv.org/abs/1909.13371,Gradient Descent: The Ultimate Optimizer,"['Kartik Chandra', 'Audrey Xie', 'Jonathan Ragan-Kelley', 'Erik Meijer']",2019-09-29T00:00:00Z,arxiv,, 87224,https://arxiv.org/abs/1810.00821,"Variational Discriminator Bottleneck: Improving Imitation Learning, Inverse RL, and GANs by Constraining Information Flow","['Xue Bin Peng', 'Angjoo Kanazawa', 'Sam Toyer', 'Pieter Abbeel', 'Sergey Levine']",2018-10-01T00:00:00Z,arxiv,, 87240,https://arxiv.org/abs/2310.17230,Codebook Features: Sparse and Discrete Interpretability for Neural Networks,"['Alex Tamkin', 'Mohammad Taufeeque', 'Noah D. 
Goodman']",2023-10-26T08:28:48Z,arxiv,, 87256,https://arxiv.org/abs/2002.09595,The Pragmatic Turn in Explainable Artificial Intelligence (XAI),['Andrés Páez'],2020-02-22T00:00:00Z,arxiv,, 87279,https://arxiv.org/abs/2202.01197,VOS: Learning What You Don't Know by Virtual Outlier Synthesis,"['Xuefeng Du', 'Zhaoning Wang', 'Mu Cai', 'Yixuan Li']",2022-02-02T00:00:00Z,arxiv,, 87290,https://arxiv.org/abs/2110.03684,Cross-Domain Imitation Learning via Optimal Transport.,"['Arnaud Fickinger', 'Samuel Cohen', 'Stuart Russell', 'Brandon Amos']",2022-08-14T00:00:00Z,arxiv,, 87306,https://arxiv.org/abs/2111.15366,AI and the Everything in the Whole Wide World Benchmark,"['Inioluwa Deborah Raji', 'Emily M. Bender', 'Amandalynne Paullada', 'Emily Denton', 'Alex Hanna']",2021-11-26T00:00:00Z,arxiv,, 87327,https://arxiv.org/abs/1804.02929,First Experiments with a Flexible Infrastructure for Normative Reasoning,"['Christoph Benzmüller', 'Xavier Parent']",2018-04-09T00:00:00Z,arxiv,, 87342,https://arxiv.org/abs/1505.02449,Automating change of representation for proofs in discrete mathematics,"['Daniel Raggi', 'Alan Bundy', 'Gudmund Grov', 'Alison Pease']",2015-05-10T00:00:00Z,arxiv,, 87360,https://arxiv.org/abs/1901.05856,Amplifying the Imitation Effect for Reinforcement Learning of UCAV's Mission Execution,"['Gyeong Taek Lee', 'Chang Ouk Kim']",2019-01-17T00:00:00Z,arxiv,, 87381,https://arxiv.org/abs/2111.13786,Learning from learning machines: a new generation of AI technology to meet the needs of science,"['Luca Pion-Tonachini', 'Kristofer Bouchard', 'Hector Garcia Martin', 'Sean Peisert', 'W. Bradley Holtz', 'Anil Aswani', 'Dipankar Dwivedi', 'Haruko Wainwright', 'Ghanshyam Pilania', 'Benjamin Nachman', 'Babetta L. Marrone', 'Nicola Falco', 'Prabhat', 'Daniel Arnold', 'Alejandro Wolf-Yadlin', 'Sarah Powers', 'Sharlee Climer', 'Quinn Jackson', 'Ty Carlson', 'Michael Sohn', 'Petrus Zwart', 'Neeraj Kumar', 'Amy Justice', 'Claire Tomlin', 'Daniel Jacobson', 'Gos Micklem', 'Georgios V. Gkoutos', 'Peter J. Bickel', 'Jean-Baptiste Cazier', 'Juliane Müller', 'Bobbie-Jo Webb-Robertson', 'Rick Stevens', 'Mark Anderson', 'Ken Kreutz-Delgado', 'Michael W. Mahoney', 'James B. Brown']",2021-11-27T00:00:00Z,arxiv,, 87402,https://arxiv.org/abs/2001.07118,The Incentives that Shape Behaviour,"['Ryan Carey', 'Eric Langlois', 'Tom Everitt', 'Shane Legg']",2020-01-20T00:00:00Z,arxiv,, 87418,https://arxiv.org/abs/2203.15414,Quality Assurance of Generative Dialog Models in an Evolving Conversational Agent Used for Swedish Language Practice,"['Markus Borg', 'Johan Bengtsson', 'Harald Österling', 'Alexander Hagelborn', 'Isabella Gagner', 'Piotr Tomaszewski']",2022-03-29T00:00:00Z,arxiv,, 87456,https://arxiv.org/abs/2202.01747,The Met Dataset: Instance-level Recognition for Artworks,"['Nikolaos-Antonios Ypsilantis', 'Noa Garcia', 'Guangxing Han', 'Sarah Ibrahimi', 'Nanne Van Noord', 'Giorgos Tolias']",2022-02-03T00:00:00Z,arxiv,, 87481,https://arxiv.org/abs/2107.13270,A Reflection on Learning from Data: Epistemology Issues and Limitations,"['Ahmad Hammoudeh', 'Sara Tedmori', 'Nadim Obeid']",2021-07-28T00:00:00Z,arxiv,, 87517,https://arxiv.org/abs/1811.11298,Exploring Restart Distributions,"['Arash Tavakoli', 'Vitaly Levdik', 'Riashat Islam', 'Christopher M. 
Smith', 'Petar Kormushev']",2018-11-27T00:00:00Z,arxiv,, 87532,https://arxiv.org/abs/2107.11264,Standardized Max Logits: A Simple yet Effective Approach for Identifying Unexpected Road Obstacles in Urban-Scene Segmentation,"['Sanghun Jung', 'Jungsoo Lee', 'Daehoon Gwak', 'Sungha Choi', 'Jaegul Choo']",2021-07-23T00:00:00Z,arxiv,, 87554,https://arxiv.org/abs/2012.10033,Exploring Fluent Query Reformulations with Text-to-Text Transformers and Reinforcement Learning,"['Jerry Zikun Chen', 'Shi Yu', 'Haoran Wang']",2020-12-18T00:00:00Z,arxiv,, 87583,https://arxiv.org/abs/1901.08654,The Assistive Multi-Armed Bandit.,"['Lawrence Chan', 'Dylan Hadfield-Menell', 'Siddhartha Srinivasa', 'Anca Dragan']",2019-08-15T00:00:00Z,arxiv,, 87608,https://arxiv.org/abs/1904.01484,Are Query-Based Ontology Debuggers Really Helping Knowledge Engineers?,"['Patrick Rodler', 'Dietmar Jannach', 'Konstantin Schekotihin', 'Philipp Fleiss']",2019-04-02T00:00:00Z,arxiv,, 87633,https://arxiv.org/abs/2306.06924,TASRA: a Taxonomy and Analysis of Societal-Scale Risks from AI,"['Andrew Critch', 'Stuart Russell']",2023-06-12T00:00:00Z,arxiv,, 87676,https://arxiv.org/abs/2208.10256,Information-Theoretic Equivalence of Entropic Multi-Marginal Optimal Transport: A Theory for Multi-Agent Communication,['Shuchan Wang'],2022-08-22T00:00:00Z,arxiv,, 87693,https://arxiv.org/abs/2108.11463,With One Voice: Composing a Travel Voice Assistant from Re-purposed Models,"['Shachaf Poran', 'Gil Amsalem', 'Amit Beka', 'Dmitri Goldenberg']",2021-08-04T00:00:00Z,arxiv,, 87729,https://arxiv.org/abs/1802.01744,Shared Autonomy via Deep Reinforcement Learning,"['Siddharth Reddy', 'Anca D. Dragan', 'Sergey Levine']",2018-02-06T00:00:00Z,arxiv,, 87751,https://arxiv.org/abs/2111.00210,Mastering Atari Games with Limited Data.,"['Weirui Ye', 'Shaohuai Liu', 'Thanard Kurutach', 'Pieter Abbeel', 'Yang Gao']",2021-08-14T00:00:00Z,arxiv,, 87772,https://arxiv.org/abs/1804.01396,Artificial Intelligence and its Role in Near Future,"['Jahanzaib Shabbir', 'Tarique Anwer']",2018-04-01T00:00:00Z,arxiv,, 87805,https://arxiv.org/abs/2103.15294,"""Weak AI"" is Likely to Never Become ""Strong AI"", So What is its Greatest Value for us?",['Bin Liu'],2021-03-29T00:00:00Z,arxiv,, 87835,https://arxiv.org/abs/2008.02275,Aligning AI With Shared Human Values.,"['Dan Hendrycks', 'Collin Burns', 'Steven Basart', 'Andrew Critch', 'Jerry Li', 'Dawn Song', 'Jacob Steinhardt']",2020-08-14T00:00:00Z,arxiv,, 87853,https://arxiv.org/abs/1808.01174,Generalization Error in Deep Learning,"['Daniel Jakubovitz', 'Raja Giryes', 'Miguel R. D. 
Rodrigues']",2018-08-03T00:00:00Z,arxiv,, 87880,https://arxiv.org/abs/1902.07685,World Discovery Models,"['Mohammad Gheshlaghi Azar', 'Bilal Piot', 'Bernardo Avila Pires', 'Jean-Bastien Grill', 'Florent Altché', 'Rémi Munos']",2019-02-20T00:00:00Z,arxiv,, 87895,https://arxiv.org/abs/2007.04068,Decolonial AI: Decolonial Theory as Sociotechnical Foresight in Artificial Intelligence,"['Shakir Mohamed', 'Marie-Therese Png', 'William Isaac']",2020-07-08T00:00:00Z,arxiv,, 87922,https://arxiv.org/abs/2106.11039,Institutionalising Ethics in AI through Broader Impact Requirements,"['Carina Prunkl', 'Carolyn Ashurst', 'Markus Anderljung', 'Helena Webb', 'Jan Leike', 'Allan Dafoe']",2021-05-30T00:00:00Z,arxiv,, 87946,https://arxiv.org/abs/2005.07648,Language Conditioned Imitation Learning over Unstructured Data,"['Corey Lynch', 'Pierre Sermanet']",2020-05-15T00:00:00Z,arxiv,, 87965,https://arxiv.org/abs/1812.02900,Off-Policy Deep Reinforcement Learning without Exploration,"['Scott Fujimoto', 'David Meger', 'Doina Precup']",2018-12-07T00:00:00Z,arxiv,, 87982,https://arxiv.org/abs/2001.05919,Hidden Community Detection on Two-layer Stochastic Models: a Theoretical Perspective.,"['Jialu Bao', 'Kun He', 'Xiaodong Xin', 'Bart Selman', 'John E', 'Hopcroft']",2020-08-14T00:00:00Z,arxiv,, 87995,https://arxiv.org/abs/1210.4915,Self-confirming price-prediction strategies for simultaneous one-shot auctions.,"['Michael Wellman', 'Eric Sodomka', 'Amy Greenwald']",2017-08-14T00:00:00Z,arxiv,, 88010,https://arxiv.org/abs/2111.03913,Linguistic Cues of Deception in a Multilingual April Fools' Day Context,"['Katerina Papantoniou', 'Panagiotis Papadakos', 'Giorgos Flouris', 'Dimitris Plexousakis']",2021-11-06T00:00:00Z,arxiv,, 88029,https://arxiv.org/abs/2107.06882,Conservative Objective Models for Effective Offline Model-Based Optimization,"['Brandon Trabucco', 'Aviral Kumar', 'Xinyang Geng', 'Sergey Levine']",2021-07-14T00:00:00Z,arxiv,, 88042,https://arxiv.org/abs/1502.06512,From Seed AI to Technological Singularity via Recursively Self-Improving Software,['Roman V. Yampolskiy'],2015-02-23T17:08:30Z,arxiv,, 88068,https://arxiv.org/abs/1304.3107,A Backwards View for Assessment,"['Ross D. Shachter', 'David Heckerman']",2013-03-27T00:00:00Z,arxiv,, 88083,https://arxiv.org/abs/1612.06528,Neuro-symbolic EDA-based Optimisation using ILP-enhanced DBNs,"['Sarmimala Saikia', 'Lovekesh Vig', 'Ashwin Srinivasan', 'Gautam Shroff', 'Puneet Agarwal', 'Richa Rawat']",2016-12-20T00:00:00Z,arxiv,, 88102,https://arxiv.org/abs/2006.02689,Solving hard AI planning instances using curriculum-driven deep reinforcement learning.,"['Dieqiao Feng', 'Carla P Gomes', 'Bart Selman']",2020-08-14T00:00:00Z,arxiv,, 88130,https://arxiv.org/abs/1902.11277,A Risk-Sensitive Finite-Time Reachability Approach for Safety of Stochastic Dynamic Systems.,"['Margaret P', 'Chapman', 'Jonathan Lacotte', 'Aviv Tamar', 'Donggun Lee', 'Kevin M', 'Smith', 'Victoria Cheng', 'Jaime F', 'Fisac', 'Susmit Jha', 'Marco Pavone', 'Claire J', 'Tomlin']",2019-08-14T00:00:00Z,arxiv,, 88147,https://arxiv.org/abs/1807.05185,Model Reconstruction from Model Explanations,"['Smitha Milli', 'Ludwig Schmidt', 'Anca D. 
Dragan', 'Moritz Hardt']",2018-07-13T00:00:00Z,arxiv,, 88162,https://arxiv.org/abs/1903.11680,Gradient Descent with Early Stopping is Provably Robust to Label Noise for Overparameterized Neural Networks,"['Mingchen Li', 'Mahdi Soltanolkotabi', 'Samet Oymak']",2019-03-27T00:00:00Z,arxiv,, 88176,https://arxiv.org/abs/1804.09160,No Metrics Are Perfect: Adversarial Reward Learning for Visual Storytelling,"['Xin Wang', 'Wenhu Chen', 'Yuan-Fang Wang', 'William Yang Wang']",2018-04-24T00:00:00Z,arxiv,, 88187,https://arxiv.org/abs/1707.06658,RAIL: Risk-Averse Imitation Learning,"['Anirban Santara', 'Abhishek Naik', 'Balaraman Ravindran', 'Dipankar Das', 'Dheevatsa Mudigere', 'Sasikanth Avancha', 'Bharat Kaul']",2017-07-20T00:00:00Z,arxiv,, 88198,https://arxiv.org/abs/1805.12152,Robustness May Be at Odds with Accuracy,"['Dimitris Tsipras', 'Shibani Santurkar', 'Logan Engstrom', 'Alexander Turner', 'Aleksander Madry']",2018-05-30T00:00:00Z,arxiv,, 88210,https://arxiv.org/abs/1809.08343,Interpretable Multi-Objective Reinforcement Learning through Policy Orchestration,"['Ritesh Noothigattu', 'Djallel Bouneffouf', 'Nicholas Mattei', 'Rachita Chandra', 'Piyush Madan', 'Kush Varshney', 'Murray Campbell', 'Moninder Singh', 'Francesca Rossi']",2018-09-21T00:00:00Z,arxiv,, 88225,https://arxiv.org/abs/1906.03973,E-LPIPS: Robust Perceptual Image Similarity via Random Transformation Ensembles,"['Markus Kettunen', 'Erik Härkönen', 'Jaakko Lehtinen']",2019-06-10T00:00:00Z,arxiv,, 88240,https://arxiv.org/abs/2203.00905,Responsible-AI-by-Design: a Pattern Collection for Designing Responsible AI Systems,"['Qinghua Lu', 'Liming Zhu', 'Xiwei Xu', 'Jon Whittle']",2022-03-02T00:00:00Z,arxiv,, 88288,https://arxiv.org/abs/1609.08524,UbuntuWorld 1.0 LTS - A Platform for Automated Problem Solving & Troubleshooting in the Ubuntu OS,"['Tathagata Chakraborti', 'Kartik Talamadupula', 'Kshitij P. Fadnis', 'Murray Campbell', 'Subbarao Kambhampati']",2016-09-27T00:00:00Z,arxiv,, 88304,https://arxiv.org/abs/2102.00834,Counterfactual Planning in AGI Systems,['Koen Holtman'],2021-01-29T00:00:00Z,arxiv,, 88342,https://arxiv.org/abs/0909.0901,Assessing the Impact of Informedness on a Consultant's Profit,"['Eugen Staab', 'Martin Caminada']",2009-09-04T00:00:00Z,arxiv,, 88361,https://arxiv.org/abs/1905.00547,The relationship between Biological and Artificial Intelligence,['George Cevora'],2019-05-01T00:00:00Z,arxiv,, 88385,https://arxiv.org/abs/1803.06567,A Dual Approach to Scalable Verification of Deep Networks,"['Krishnamurthy Dvijotham', 'Robert Stanforth', 'Sven Gowal', 'Timothy Mann', 'Pushmeet Kohli']",2018-03-17T00:00:00Z,arxiv,, 88402,https://arxiv.org/abs/2104.03946,Learning What To Do by Simulating the Past,"['David Lindner', 'Rohin Shah', 'Pieter Abbeel', 'Anca Dragan']",2021-04-08T00:00:00Z,arxiv,, 88427,https://arxiv.org/abs/1702.08495,Don't Fear the Reaper: Refuting Bostrom's Superintelligence Argument,['Sebastian Benthall'],2017-02-27T00:00:00Z,arxiv,, 88443,https://arxiv.org/abs/1905.10498,Cold Case: The Lost MNIST Digits,"['Chhavi Yadav', 'Léon Bottou']",2019-05-25T00:00:00Z,arxiv,, 88460,https://arxiv.org/abs/cs/0510079,Evidence with Uncertain Likelihoods,"['Joseph Y. 
Halpern', 'Riccardo Pucella']",2005-10-25T00:00:00Z,arxiv,, 88478,https://arxiv.org/abs/1910.02330,Towards Deployment of Robust AI Agents for Human-Machine Partnerships,"['Ahana Ghosh', 'Sebastian Tschiatschek', 'Hamed Mahdavi', 'Adish Singla']",2019-10-05T00:00:00Z,arxiv,, 88495,https://arxiv.org/abs/1811.10154,Please Stop Explaining Black Box Models for High-Stakes Decisions,['Cynthia Rudin'],2018-11-26T03:00:25Z,arxiv,, 88524,https://arxiv.org/abs/2001.07455,Designing for the Long Tail of Machine Learning,"['Martin Lindvall', 'Jesper Molin']",2020-01-21T00:00:00Z,arxiv,, 88545,https://arxiv.org/abs/2005.01908,A multi-component framework for the analysis and design of explainable artificial intelligence,"['S. Atakishiyev', 'H. Babiker', 'N. Farruque', 'R. Goebel1', 'M-Y. Kima', 'M. H. Motallebi', 'J. Rabelo', 'T. Syed', 'O. R. Zaïane']",2020-05-05T00:00:00Z,arxiv,, 88565,https://arxiv.org/abs/1904.10079,The MineRL 2019 Competition on Sample Efficient Reinforcement Learning using Human Priors,"['William H. Guss', 'Cayden Codel', 'Katja Hofmann', 'Brandon Houghton', 'Noboru Kuno', 'Stephanie Milani', 'Sharada Mohanty', 'Diego Perez Liebana', 'Ruslan Salakhutdinov', 'Nicholay Topin', 'Manuela Veloso', 'Phillip Wang']",2019-04-22T00:00:00Z,arxiv,, 88589,https://arxiv.org/abs/2201.04633,Revelation of Task Difficulty in AI-aided Education,"['Yitzhak Spielberg', 'Amos Azaria']",2022-01-12T00:00:00Z,arxiv,, 88608,https://arxiv.org/abs/2210.03729,Knowledge-Grounded Reinforcement Learning,"['Zih-Yun Chiu', 'Yi-Lin Tuan', 'William Yang Wang', 'Michael C. Yip']",2022-10-07T00:00:00Z,arxiv,, 88627,https://arxiv.org/abs/1909.12673,A Constructive Prediction of the Generalization Error Across Scales,"['Jonathan S. Rosenfeld', 'Amir Rosenfeld', 'Yonatan Belinkov', 'Nir Shavit']",2019-09-27T00:00:00Z,arxiv,, 88644,https://arxiv.org/abs/2010.15578,Exploring the Nuances of Designing (with/for) Artificial Intelligence,"['Niya Stoimenova', 'Rebecca Price']",2020-10-22T00:00:00Z,arxiv,, 88670,https://arxiv.org/abs/2004.04136,CURL: Contrastive Unsupervised Representations for Reinforcement Learning,"['Aravind Srinivas', 'Michael Laskin', 'Pieter Abbeel']",2020-04-08T00:00:00Z,arxiv,, 88683,https://arxiv.org/abs/1704.06960,Translating Neuralese.,"['Jacob Andreas', 'Anca Dragan', 'Dan Klein']",2017-08-14T00:00:00Z,arxiv,, 88695,https://arxiv.org/abs/2209.07143,Autoregressive Latent Video Prediction with High-Fidelity Image Generator.,"['Younggyo Seo', 'Kimin Lee', 'Fangchen Liu', 'Stephen James', 'Pieter Abbeel']",2022-08-14T00:00:00Z,arxiv,, 88716,https://arxiv.org/abs/1707.05173,Trial without Error: Towards Safe Reinforcement Learning via Human Intervention,"['William Saunders', 'Girish Sastry', 'Andreas Stuhlmueller', 'Owain Evans']",2017-07-17T00:00:00Z,arxiv,, 88746,https://arxiv.org/abs/1906.10667,Reinforcement Learning with Competitive Ensembles of Information-Constrained Primitives,"['Anirudh Goyal', 'Shagun Sodhani', 'Jonathan Binas', 'Xue Bin Peng', 'Sergey Levine', 'Yoshua Bengio']",2019-06-25T00:00:00Z,arxiv,, 88771,https://arxiv.org/abs/2108.09586,Learning Causal Models of Autonomous Agents using Interventions,"['Pulkit Verma', 'Siddharth Srivastava']",2021-08-21T00:00:00Z,arxiv,, 88791,https://arxiv.org/abs/2002.09044,A Road Map to Strong Intelligence,['Philip Paquette'],2020-02-20T00:00:00Z,arxiv,, 88821,https://arxiv.org/abs/2105.10423,Evaluating Strategy Exploration in Empirical Game-Theoretic Analysis.,"['Yongzhao Wang', 'Qiurui Ma', 'Michael P Wellman']",2021-08-14T00:00:00Z,arxiv,, 
88838,https://arxiv.org/abs/2011.06619,Learning Latent Representations to Influence Multi-Agent Interaction,"['Annie Xie', 'Dylan P. Losey', 'Ryan Tolsma', 'Chelsea Finn', 'Dorsa Sadigh']",2020-11-12T00:00:00Z,arxiv,, 88854,https://arxiv.org/abs/1705.10720,Low Impact Artificial Intelligences,"['Stuart Armstrong', 'Benjamin Levinstein']",2017-05-30T00:00:00Z,arxiv,, 88886,https://arxiv.org/abs/2203.10050,SURF: Semi-supervised Reward Learning with Data Augmentation for Feedback-efficient Preference-based Reinforcement Learning.,"['Jongjin Park', 'Younggyo Seo', 'Jinwoo Shin', 'Honglak Lee', 'Pieter Abbeel', 'Kimin Lee']",2022-08-14T00:00:00Z,arxiv,, 88905,https://arxiv.org/abs/1206.5290,Imitation Learning with a Value-Based Prior,"['Umar Syed', 'Robert E. Schapire']",2012-06-20T00:00:00Z,arxiv,, 88923,https://arxiv.org/abs/1812.03411,Feature Denoising for Improving Adversarial Robustness,"['Cihang Xie', 'Yuxin Wu', 'Laurens van der Maaten', 'Alan Yuille', 'Kaiming He']",2018-12-09T00:00:00Z,arxiv,, 88939,https://arxiv.org/abs/1811.06606,Economics of Human-AI Ecosystem: Value Bias and Lost Utility in Multi-Dimensional Gaps,['Daniel Muller'],2018-11-15T00:00:00Z,arxiv,, 88968,https://arxiv.org/abs/1804.09849,The Best of Both Worlds: Combining Recent Advances in Neural Machine Translation,"['Mia Xu Chen', 'Orhan Firat', 'Ankur Bapna', 'Melvin Johnson', 'Wolfgang Macherey', 'George Foster', 'Llion Jones', 'Niki Parmar', 'Mike Schuster', 'Zhifeng Chen', 'Yonghui Wu', 'Macduff Hughes']",2018-04-26T00:00:00Z,arxiv,, 88999,https://arxiv.org/abs/2111.08267,Solving Probability and Statistics Problems by Program Synthesis,"['Leonard Tang', 'Elizabeth Ke', 'Nikhil Singh', 'Nakul Verma', 'Iddo Drori']",2021-11-16T00:00:00Z,arxiv,, 89016,https://arxiv.org/abs/2109.15316,Scalable Online Planning via Reinforcement Learning Fine-Tuning.,"['Arnaud Fickinger', 'Hengyuan Hu', 'Brandon Amos', 'Stuart Russell', 'Noam Brown']",2021-08-14T00:00:00Z,arxiv,, 89040,https://arxiv.org/abs/2105.08489,Modeling the Sequential Dependence among Audience Multi-step Conversions with Multi-task Learning in Targeted Display Advertising,"['Dongbo Xi', 'Zhen Chen', 'Peng Yan', 'Yinger Zhang', 'Yongchun Zhu', 'Fuzhen Zhuang', 'Yu Chen']",2021-05-18T00:00:00Z,arxiv,, 89066,https://arxiv.org/abs/1902.00506,The Hanabi Challenge: A New Frontier for AI Research,"['Nolan Bard', 'Jakob N. Foerster', 'Sarath Chandar', 'Neil Burch', 'Marc Lanctot', 'H. Francis Song', 'Emilio Parisotto', 'Vincent Dumoulin', 'Subhodeep Moitra', 'Edward Hughes', 'Iain Dunning', 'Shibl Mourad', 'Hugo Larochelle', 'Marc G. Bellemare', 'Michael Bowling']",2019-02-01T00:00:00Z,arxiv,, 89088,https://arxiv.org/abs/1606.03476,Generative Adversarial Imitation Learning,['Jonathan Ho'],2016-06-10T20:51:29Z,arxiv,, 89110,https://arxiv.org/abs/2308.14752,"AI Deception: A Survey of Examples, Risks, and Potential Solutions",['Peter S. Park'],2023-08-28T17:59:35Z,arxiv,, 89153,https://arxiv.org/abs/2211.03157,Examining the Differential Risk from High-level Artificial Intelligence and the Question of Control,"['Kyle A. Kilian', 'Christopher J. Ventura', 'Mark M. 
Bailey']",2022-11-06T00:00:00Z,arxiv,, 89189,https://arxiv.org/abs/2112.03575,MESA: Offline Meta-RL for Safe Adaptation and Fault Tolerance,"['Michael Luo', 'Ashwin Balakrishna', 'Brijen Thananjeyan', 'Suraj Nair', 'Julian Ibarz', 'Jie Tan', 'Chelsea Finn', 'Ion Stoica', 'Ken Goldberg']",2021-12-07T00:00:00Z,arxiv,, 89210,https://arxiv.org/abs/2202.13252,The Quest for a Common Model of the Intelligent Decision Maker,['Richard S. Sutton'],2022-02-26T00:00:00Z,arxiv,, 89227,https://arxiv.org/abs/2111.11401,Balancing Efficiency and Comfort in Robot-Assisted Bite Transfer.,"['Suneel Belkhale', 'Ethan Kroll Gordon', 'Yuxiao Chen', 'Siddhartha Srinivasa', 'Tapomayukh Bhattacharjee', 'Dorsa Sadigh']",2022-08-14T00:00:00Z,arxiv,, 89244,https://arxiv.org/abs/1811.03531,A Geometric Perspective on the Transferability of Adversarial Directions,"['Zachary Charles', 'Harrison Rosenberg', 'Dimitris Papailiopoulos']",2018-11-08T00:00:00Z,arxiv,, 89259,https://arxiv.org/abs/1811.09720,Representer Point Selection for Explaining Deep Neural Networks,"['Chih-Kuan Yeh', 'Joon Sik Kim', 'Ian E. H. Yen', 'Pradeep Ravikumar']",2018-11-23T00:00:00Z,arxiv,, 89278,https://arxiv.org/abs/1612.01474,Simple and Scalable Predictive Uncertainty Estimation using Deep Ensembles,"['Balaji Lakshminarayanan', 'Alexander Pritzel', 'Charles Blundell']",2016-12-05T00:00:00Z,arxiv,, 89300,https://arxiv.org/abs/2210.15906,Relative Behavioral Attributes: Filling the Gap between Symbolic Goal Specification and Reward Learning from Human Preferences,"['Lin Guan', 'Karthik Valmeekam', 'Subbarao Kambhampati']",2022-10-28T00:00:00Z,arxiv,, 89323,https://arxiv.org/abs/1412.3076,The Computational Complexity of Structure-Based Causality.,"['Gadi Aleksandrowicz', 'Hana Chockler', 'Joseph Y', 'Halpern', 'Alexander Ivrii']",2017-08-14T00:00:00Z,arxiv,, 89345,https://arxiv.org/abs/1407.7189,Evidence with Uncertain Likelihoods,"['Joseph Y. Halpern', 'Riccardo Pucella']",2014-07-27T00:00:00Z,arxiv,, 89358,https://arxiv.org/abs/1906.10536,An AGI with Time-Inconsistent Preferences,"['James D. 
Miller', 'Roman Yampolskiy']",2019-06-23T00:00:00Z,arxiv,, 89377,https://arxiv.org/abs/2009.13772,Trust-Region Method with Deep Reinforcement Learning in Analog Design Space Exploration,"['Kai-En Yang', 'Chia-Yu Tsai', 'Hung-Hao Shen', 'Chen-Feng Chiang', 'Feng-Ming Tsai', 'Chung-An Wang', 'Yiju Ting', 'Chia-Shun Yeh', 'Chin-Tang Lai']",2020-09-29T00:00:00Z,arxiv,, 89403,https://arxiv.org/abs/2110.01770,Procedure Planning in Instructional Videos via Contextual Modeling and Model-based Policy Learning,"['Jing Bi', 'Jiebo Luo', 'Chenliang Xu']",2021-10-05T00:00:00Z,arxiv,, 89424,https://arxiv.org/abs/1803.05859,Neural Network Quine,"['Oscar Chang', 'Hod Lipson']",2018-03-15T00:00:00Z,arxiv,, 89447,https://arxiv.org/abs/1310.1328,The Relevance of Proofs of the Rationality of Probability Theory to Automated Reasoning and Cognitive Models,['Ernest Davis'],2013-10-04T00:00:00Z,arxiv,, 89467,https://arxiv.org/abs/1905.12616,Defending Against Neural Fake News,"['Rowan Zellers', 'Ari Holtzman', 'Hannah Rashkin', 'Yonatan Bisk', 'Ali Farhadi', 'Franziska Roesner', 'Yejin Choi']",2019-05-29T00:00:00Z,arxiv,, 89491,https://arxiv.org/abs/2104.08440,Learning on a Budget via Teacher Imitation,"['Ercument Ilhan', 'Jeremy Gow', 'Diego Perez-Liebana']",2021-04-17T00:00:00Z,arxiv,, 89520,https://arxiv.org/abs/1912.07768,Generative Teaching Networks: Accelerating Neural Architecture Search by Learning to Generate Synthetic Training Data,"['Felipe Petroski Such', 'Aditya Rawal', 'Joel Lehman', 'Kenneth O. Stanley', 'Jeff Clune']",2019-12-17T00:00:00Z,arxiv,, 89543,https://arxiv.org/abs/2003.13590,Suphx: Mastering Mahjong with Deep Reinforcement Learning,"['Junjie Li', 'Sotetsu Koyamada', 'Qiwei Ye', 'Guoqing Liu', 'Chao Wang', 'Ruihan Yang', 'Li Zhao', 'Tao Qin', 'Tie-Yan Liu', 'Hsiao-Wuen Hon']",2020-03-30T00:00:00Z,arxiv,, 89572,https://arxiv.org/abs/1909.01492,Achieving Verified Robustness to Symbol Substitutions via Interval Bound Propagation,"['Po-Sen Huang', 'Robert Stanforth', 'Johannes Welbl', 'Chris Dyer', 'Dani Yogatama', 'Sven Gowal', 'Krishnamurthy Dvijotham', 'Pushmeet Kohli']",2019-09-03T00:00:00Z,arxiv,, 89590,https://arxiv.org/abs/2102.05207,Transfer Reinforcement Learning across Homotopy Classes,"['Zhangjie Cao', 'Minae Kwon', 'Dorsa Sadigh']",2021-02-10T00:00:00Z,arxiv,, 89606,https://arxiv.org/abs/1606.05896,Clustering with a Reject Option: Interactive Clustering as Bayesian Prior Elicitation,"['Akash Srivastava', 'James Zou', 'Ryan P. Adams', 'Charles Sutton']",2016-06-19T18:07:15Z,arxiv,, 89618,https://arxiv.org/abs/2008.01339,Collecting the Public Perception of AI and Robot Rights,"['Gabriel Lima', 'Changyeon Kim', 'Seungho Ryu', 'Chihyung Jeon', 'Meeyoung Cha']",2020-08-04T00:00:00Z,arxiv,, 89645,https://arxiv.org/abs/1808.04096,Directed Policy Gradient for Safe Reinforcement Learning with Human Advice,"['Hélène Plisnier', 'Denis Steckelmacher', 'Tim Brys', 'Diederik M. Roijers', 'Ann Nowé']",2018-08-13T00:00:00Z,arxiv,, 89660,https://arxiv.org/abs/1003.0617,Agent Based Approaches to Engineering Autonomous Space Software,"['Louise A. Dennis', 'Michael Fisher', 'Nicholas Lincoln', 'Alexei Lisitsa', 'Sandor M. 
Veres']",2010-03-02T00:00:00Z,arxiv,, 89688,https://arxiv.org/abs/1812.11118,Reconciling modern machine learning practice and the bias-variance trade-off,"['Mikhail Belkin', 'Daniel Hsu', 'Siyuan Ma', 'Soumik Mandal']",2018-12-28T00:00:00Z,arxiv,, 89708,https://arxiv.org/abs/2208.08611,Intellectual Property Evaluation Utilizing Machine Learning,"['Jinxin Ding', 'Yuxin Huang', 'Keyang Ni', 'Xueyao Wang', 'Yinxiao Wang', 'Yucheng Wang']",2022-08-18T00:00:00Z,arxiv,, 89739,https://arxiv.org/abs/1910.10362,Strategic Classification is Causal Modeling in Disguise.,"['John Miller', 'Smitha Milli', 'Moritz Hardt']",2019-08-14T00:00:00Z,arxiv,, 89760,https://arxiv.org/abs/2203.01884,Graph Neural Networks for Multimodal Single-Cell Data Integration,"['Hongzhi Wen', 'Jiayuan Ding', 'Wei Jin', 'Yiqi Wang', 'Yuying Xie', 'Jiliang Tang']",2022-03-03T00:00:00Z,arxiv,, 89776,https://arxiv.org/abs/1804.05464,On Gradient-Based Learning in Continuous Games,"['Eric Mazumdar', 'Lillian J. Ratliff', 'S. Shankar Sastry']",2018-04-16T00:00:00Z,arxiv,, 89800,https://arxiv.org/abs/1803.03146,SentRNA: Improving computational RNA design by incorporating a prior of human design strategies,"['Jade Shi', 'Rhiju Das', 'Vijay S. Pande']",2018-03-08T00:00:00Z,arxiv,, 89831,https://arxiv.org/abs/1111.3934,Model-based utility functions,['Bill Hibbard'],2011-11-16T20:13:54Z,arxiv,, 89857,https://arxiv.org/abs/1902.06766,Parenting: Safe Reinforcement Learning from Human Input,"['Christopher Frye', 'Ilya Feige']",2019-02-18T00:00:00Z,arxiv,, 89893,https://arxiv.org/abs/1905.12686,"Learning Representations by Humans, for Humans","['Sophie Hilgard', 'Nir Rosenfeld', 'Mahzarin R. Banaji', 'Jack Cao', 'David C. Parkes']",2019-05-29T00:00:00Z,arxiv,, 89913,https://arxiv.org/abs/1910.09338,An Alternative Surrogate Loss for PGD-based Adversarial Testing,"['Sven Gowal', 'Jonathan Uesato', 'Chongli Qin', 'Po-Sen Huang', 'Timothy Mann', 'Pushmeet Kohli']",2019-10-21T00:00:00Z,arxiv,, 89929,https://arxiv.org/abs/2105.00691,Hybrid Intelligence,"['Dominik Dellermann', 'Philipp Ebel', 'Matthias Soellner', 'Jan Marco Leimeister']",2021-05-03T00:00:00Z,arxiv,, 89953,https://arxiv.org/abs/1911.02320,Nonverbal Robot Feedback for Human Teachers,"['Sandy H. Huang', 'Isabella Huang', 'Ravi Pandya', 'Anca D. Dragan']",2019-11-06T00:00:00Z,arxiv,, 89974,https://arxiv.org/abs/1807.04950,Deep Learning in the Wild,"['Thilo Stadelmann', 'Mohammadreza Amirian', 'Ismail Arabaci', 'Marek Arnold', 'Gilbert François Duivesteijn', 'Ismail Elezi', 'Melanie Geiger', 'Stefan Lörwald', 'Benjamin Bruno Meier', 'Katharina Rombach', 'Lukas Tuggener']",2018-07-13T00:00:00Z,arxiv,, 90007,https://arxiv.org/abs/2106.12447,How Well do Feature Visualizations Support Causal Understanding of CNN Activations?,"['Roland S. Zimmermann', 'Judy Borowski', 'Robert Geirhos', 'Matthias Bethge', 'Thomas S. A. 
Wallis', 'Wieland Brendel']",2021-06-23T00:00:00Z,arxiv,, 90018,https://arxiv.org/abs/2305.01610,Finding Neurons in a Haystack: Case Studies with Sparse Probing,['Wes Gurnee'],2023-05-02T17:13:55Z,arxiv,, 90044,https://arxiv.org/abs/1805.03382,Automated Mechanism Design via Neural Networks,"['Weiran Shen', 'Pingzhong Tang', 'Song Zuo']",2018-05-09T00:00:00Z,arxiv,, 90063,https://arxiv.org/abs/1805.08263,Learning What Information to Give in Partially Observed Domains,"['Rohan Chitnis', 'Leslie Pack Kaelbling', 'Tomás Lozano-Pérez']",2018-05-21T00:00:00Z,arxiv,, 90074,https://arxiv.org/abs/2103.14101,Characterizing and Detecting Mismatch in Machine-Learning-Enabled Systems,"['Grace A. Lewis', 'Stephany Bellomo', 'Ipek Ozkaya']",2021-03-25T00:00:00Z,arxiv,, 90088,https://arxiv.org/abs/2009.08644,Efficient Reinforcement Learning Development with RLzoo,"['Zihan Ding', 'Tianyang Yu', 'Yanhua Huang', 'Hongming Zhang', 'Guo Li', 'Quancheng Guo', 'Luo Mai', 'Hao Dong']",2020-09-18T00:00:00Z,arxiv,, 90110,https://arxiv.org/abs/2011.12439,Contract Scheduling With Predictions,"['Spyros Angelopoulos', 'Shahin Kamali']",2020-11-24T00:00:00Z,arxiv,, 90127,https://arxiv.org/abs/2205.07395,Sociotechnical Specification for the Broader Impacts of Autonomous Vehicles.,"['Thomas Krendl Gilbert', 'Aaron J. Snoswell', 'Michael Dennis', 'Rowan McAllister', 'Cathy Wu']",2022-08-14T00:00:00Z,arxiv,, 90146,https://arxiv.org/abs/1901.01365,Hierarchical Reinforcement Learning via Advantage-Weighted Information Maximization,"['Takayuki Osa', 'Voot Tangkaratt', 'Masashi Sugiyama']",2019-01-05T00:00:00Z,arxiv,, 90163,https://arxiv.org/abs/1411.1373,Ethical Artificial Intelligence,['Bill Hibbard'],2014-11-05T00:00:00Z,arxiv,, 90206,https://arxiv.org/abs/cs/0307069,A logic for reasoning about upper probabilities,"['Joseph Y. Halpern', 'Riccardo Pucella']",2003-07-30T00:00:00Z,arxiv,, 90223,https://arxiv.org/abs/1907.08225,Dynamical Distance Learning for Semi-Supervised and Unsupervised Skill Discovery,"['Kristian Hartikainen', 'Xinyang Geng', 'Tuomas Haarnoja', 'Sergey Levine']",2019-07-18T00:00:00Z,arxiv,, 90248,https://arxiv.org/abs/1611.01578,Neural Architecture Search with Reinforcement Learning,"['Barret Zoph', 'Quoc V. Le']",2016-11-05T00:00:00Z,arxiv,, 90267,https://arxiv.org/abs/1902.01580,PUTWorkbench: Analysing Privacy in AI-intensive Systems,"['Saurabh Srivastava', 'Vinay P. Namboodiri', 'T. V. Prabhakar']",2019-02-05T00:00:00Z,arxiv,, 90284,https://arxiv.org/abs/1607.05540,Exploiting Vagueness for Multi-Agent Consensus,"['Michael Crosscombe', 'Jonathan Lawry']",2016-07-19T00:00:00Z,arxiv,, 90299,https://arxiv.org/abs/2009.04875,Importance Weighted Policy Learning and Adaptation,"['Alexandre Galashov', 'Jakub Sygnowski', 'Guillaume Desjardins', 'Jan Humplik', 'Leonard Hasenclever', 'Rae Jeong', 'Yee Whye Teh', 'Nicolas Heess']",2020-09-10T00:00:00Z,arxiv,, 90320,https://arxiv.org/abs/1901.02161,Risk-Aware Active Inverse Reinforcement Learning,"['Daniel S. 
Brown', 'Yuchen Cui', 'Scott Niekum']",2019-01-08T00:00:00Z,arxiv,, 90336,https://arxiv.org/abs/2109.07445,Challenges in Detoxifying Language Models,"['Johannes Welbl', 'Amelia Glaese', 'Jonathan Uesato', 'Sumanth Dathathri', 'John Mellor', 'Lisa Anne Hendricks', 'Kirsty Anderson', 'Pushmeet Kohli', 'Ben Coppin', 'Po-Sen Huang']",2021-09-15T00:00:00Z,arxiv,, 90369,https://arxiv.org/abs/1811.04551,Learning Latent Dynamics for Planning from Pixels,"['Danijar Hafner', 'Timothy Lillicrap', 'Ian Fischer', 'Ruben Villegas', 'David Ha', 'Honglak Lee', 'James Davidson']",2018-11-12T00:00:00Z,arxiv,, 90394,https://arxiv.org/abs/1701.04079,Agent-Agnostic Human-in-the-Loop Reinforcement Learning,"['David Abel', 'John Salvatier', 'Andreas Stuhlmüller', 'Owain Evans']",2017-01-15T00:00:00Z,arxiv,, 90416,https://arxiv.org/abs/2204.02889,A Cognitive Framework for Delegation Between Error-Prone AI and Human Agents,"['Andrew Fuchs', 'Andrea Passarella', 'Marco Conti']",2022-04-06T00:00:00Z,arxiv,, 90430,https://arxiv.org/abs/2103.10248,Systematic Mapping Study on the Machine Learning Lifecycle,"['Yuanhao Xie', 'Luís Cruz', 'Petra Heck', 'Jan S. Rellermeyer']",2021-03-11T00:00:00Z,arxiv,, 90457,https://arxiv.org/abs/1711.06431,Using KL-divergence to focus Deep Visual Explanation,"['Housam Khalifa Bashier Babiker', 'Randy Goebel']",2017-11-17T00:00:00Z,arxiv,, 90470,https://arxiv.org/abs/1806.04067,Adaptive Mechanism Design: Learning to Promote Cooperation,"['Tobias Baumann', 'Thore Graepel', 'John Shawe-Taylor']",2018-06-11T00:00:00Z,arxiv,, 90490,https://arxiv.org/abs/2103.15171,A Bayesian Approach to Identifying Representational Errors,"['Ramya Ramakrishnan', 'Vaibhav Unhelkar', 'Ece Kamar', 'Julie Shah']",2021-03-28T00:00:00Z,arxiv,, 90510,https://arxiv.org/abs/1806.07857,RUDDER: Return Decomposition for Delayed Rewards,"['Jose A. Arjona-Medina', 'Michael Gillhofer', 'Michael Widrich', 'Thomas Unterthiner', 'Johannes Brandstetter', 'Sepp Hochreiter']",2018-06-20T00:00:00Z,arxiv,, 90524,https://arxiv.org/abs/2010.14603,Learning to be Safe: Deep RL with a Safety Critic,"['Krishnan Srinivasan', 'Benjamin Eysenbach', 'Sehoon Ha', 'Jie Tan', 'Chelsea Finn']",2020-10-27T00:00:00Z,arxiv,, 90542,https://arxiv.org/abs/1912.01412,Deep Learning for Symbolic Mathematics,"['Guillaume Lample', 'François Charton']",2019-12-02T00:00:00Z,arxiv,, 90563,https://arxiv.org/abs/2204.07612,"Contextualizing Artificially Intelligent Morality: A Meta-Ethnography of Top-Down, Bottom-Up, and Hybrid Models for Theoretical and Applied Ethics in Artificial Intelligence","['Jennafer S. Roberts', 'Laura N. 
Montoya']",2022-04-15T00:00:00Z,arxiv,, 90588,https://arxiv.org/abs/1803.04765,"Deep k-Nearest Neighbors: Towards Confident, Interpretable and Robust Deep Learning","['Nicolas Papernot', 'Patrick McDaniel']",2018-03-13T00:00:00Z,arxiv,, 90611,https://arxiv.org/abs/2103.04551,Behavior From the Void: Unsupervised Active Pre-Training.,"['Hao Liu', 'Pieter Abbeel']",2021-08-14T00:00:00Z,arxiv,, 90622,https://arxiv.org/abs/1810.00482,Few-Shot Goal Inference for Visuomotor Learning and Planning,"['Annie Xie', 'Avi Singh', 'Sergey Levine', 'Chelsea Finn']",2018-09-30T00:00:00Z,arxiv,, 90638,https://arxiv.org/abs/1609.04994,Exploration Potential,['Jan Leike'],2016-09-16T00:00:00Z,arxiv,, 90649,https://arxiv.org/abs/1802.09159,Antifragility for Intelligent Autonomous Systems,"['Anusha Mujumdar', 'Swarup Kumar Mohalik', 'Ramamurthy Badrinath']",2018-02-26T00:00:00Z,arxiv,, 90668,https://arxiv.org/abs/1301.6707,Attention-Sensitive Alerting,"['Eric J. Horvitz', 'Andy Jacobs', 'David Hovel']",2013-01-23T00:00:00Z,arxiv,, 90688,https://arxiv.org/abs/2203.16497,"An Artificial Intelligence Browser Architecture (AIBA) For Our Kind and Others: A Voice Name System Speech implementation with two warrants, Wake Neutrality and Value Preservation of Personally Identifiable Information",['Brian Subirana'],2022-03-29T00:00:00Z,arxiv,, 90716,https://arxiv.org/abs/1905.06922,On Variational Bounds of Mutual Information,"['Ben Poole', 'Sherjil Ozair', 'Aaron van den Oord', 'Alexander A. Alemi', 'George Tucker']",2019-05-16T00:00:00Z,arxiv,, 90742,https://arxiv.org/abs/2201.05647,Tools and Practices for Responsible AI Engineering,"['Ryan Soklaski', 'Justin Goodwin', 'Olivia Brown', 'Michael Yee', 'Jason Matterer']",2022-01-14T00:00:00Z,arxiv,, 90766,https://arxiv.org/abs/1710.08191,Human-in-the-loop Artificial Intelligence,['Fabio Massimo Zanzotto'],2017-10-23T00:00:00Z,arxiv,, 90790,https://arxiv.org/abs/2102.06701,Explaining Neural Scaling Laws,"['Yasaman Bahri', 'Ethan Dyer', 'Jared Kaplan', 'Jaehoon Lee', 'Utkarsh Sharma']",2021-02-12T00:00:00Z,arxiv,, 90807,https://arxiv.org/abs/2105.00525,Planning for Proactive Assistance in Environments with Partial Observability,"['Anagha Kulkarni', 'Siddharth Srivastava', 'Subbarao Kambhampati']",2021-05-02T00:00:00Z,arxiv,, 90823,https://arxiv.org/abs/1012.5506,Ontology-based Queries over Cancer Data,"['Alejandra Gonzalez-Beltran', 'Ben Tagger', 'Anthony Finkelstein']",2010-12-26T00:00:00Z,arxiv,, 90842,https://arxiv.org/abs/2110.14419,Toward a Theory of Justice for Artificial Intelligence,['Iason Gabriel'],2021-10-27T00:00:00Z,arxiv,, 90870,https://arxiv.org/abs/1811.02546,A Model for General Intelligence,['Paul Yaworsky'],2018-11-06T00:00:00Z,arxiv,, 90886,https://arxiv.org/abs/2110.15907,Learning to Be Cautious,"['Montaser Mohammedalamen', 'Dustin Morrill', 'Alexander Sieusahai', 'Yash Satsangi', 'Michael Bowling']",2021-10-29T00:00:00Z,arxiv,, 90900,https://arxiv.org/abs/2212.03827,Discovering Latent Knowledge in Language Models Without Supervision,['Collin Burns'],2022-12-07T18:17:56Z,arxiv,, 90917,https://arxiv.org/abs/2202.02967,Learning from Imperfect Demonstrations via Adversarial Confidence Transfer.,"['Zhangjie Cao*', 'Zihan Wang*', 'Dorsa Sadigh']",2022-08-14T00:00:00Z,arxiv,, 90935,https://arxiv.org/abs/1107.5537,Asymptotically Optimal Agents,"['Tor Lattimore', 'Marcus Hutter']",2011-07-27T00:00:00Z,arxiv,, 90958,https://arxiv.org/abs/2209.15111,Quantifying Harm,"['Sander Beckers', 'Hana Chockler', 'Joseph Y. 
Halpern']",2022-09-29T00:00:00Z,arxiv,, 90980,https://arxiv.org/abs/1904.01033,Multitask Soft Option Learning,"['Maximilian Igl', 'Andrew Gambardella', 'Jinke He', 'Nantas Nardelli', 'N. Siddharth', 'Wendelin Böhmer', 'Shimon Whiteson']",2019-04-01T00:00:00Z,arxiv,, 91001,https://arxiv.org/abs/1804.03980,Emergent Communication through Negotiation,"['Kris Cao', 'Angeliki Lazaridou', 'Marc Lanctot', 'Joel Z Leibo', 'Karl Tuyls', 'Stephen Clark']",2018-04-11T00:00:00Z,arxiv,, 91021,https://arxiv.org/abs/1706.03741,Deep reinforcement learning from human preferences,"['Paul Christiano', 'Jan Leike', 'Tom B. Brown', 'Miljan Martic', 'Shane Legg', 'Dario Amodei']",2017-06-12T00:00:00Z,arxiv,, 91044,https://arxiv.org/abs/1604.00289,Building Machines That Learn and Think Like People,"['Brenden M. Lake', 'Tomer D. Ullman', 'Joshua B. Tenenbaum', 'and Samuel J. Gershman']",2016-01-01T00:00:00Z,arxiv,, 91068,https://arxiv.org/abs/2111.11276,Branching Time Active Inference: empirical study and complexity class analysis,"['Théophile Champion', 'Howard Bowman', 'Marek Grześ']",2021-11-22T00:00:00Z,arxiv,, 91089,https://arxiv.org/abs/1905.03030,Meta-learning of Sequential Strategies,"['Pedro A. Ortega', 'Jane X. Wang', 'Mark Rowland', 'Tim Genewein', 'Zeb Kurth-Nelson', 'Razvan Pascanu', 'Nicolas Heess', 'Joel Veness', 'Alex Pritzel', 'Pablo Sprechmann', 'Siddhant M. Jayakumar', 'Tom McGrath', 'Kevin Miller', 'Mohammad Azar', 'Ian Osband', 'Neil Rabinowitz', 'András György', 'Silvia Chiappa', 'Simon Osindero', 'Yee Whye Teh', 'Hado van Hasselt', 'Nando de Freitas', 'Matthew Botvinick', 'Shane Legg']",2019-05-08T00:00:00Z,arxiv,, 91118,https://arxiv.org/abs/2107.12808,Open-Ended Learning Leads to Generally Capable Agents,"['Open Ended Learning Team', 'Adam Stooke', 'Anuj Mahajan', 'Catarina Barros', 'Charlie Deck', 'Jakob Bauer', 'Jakub Sygnowski', 'Maja Trebacz', 'Max Jaderberg', 'Michael Mathieu', 'Nat McAleese', 'Nathalie Bradley-Schmieg', 'Nathaniel Wong', 'Nicolas Porcel', 'Roberta Raileanu', 'Steph Hughes-Fitt', 'Valentin Dalibard', 'Wojciech Marian Czarnecki']",2021-07-27T00:00:00Z,arxiv,, 91145,https://arxiv.org/abs/2112.08438,Programmatic Reward Design by Example,"['Weichao Zhou', 'Wenchao Li']",2021-12-14T00:00:00Z,arxiv,, 91162,https://arxiv.org/abs/2105.14111,Goal Misgeneralization in Deep Reinforcement Learning,"['Lauro Langosco', 'Jack Koch', 'Lee Sharkey', 'Jacob Pfau', 'Laurent Orseau', 'David Krueger']",2021-05-28T00:00:00Z,arxiv,, 91175,https://arxiv.org/abs/1905.12149,SATNet: Bridging deep learning and logical reasoning using a differentiable satisfiability solver,"['Po-Wei Wang', 'Priya L. Donti', 'Bryan Wilder', 'Zico Kolter']",2019-05-29T00:00:00Z,arxiv,, 91193,https://arxiv.org/abs/1310.1863,Empowerment -- an Introduction,"['Christoph Salge', 'Cornelius Glackin', 'Daniel Polani']",2013-10-07T00:00:00Z,arxiv,, 91218,https://arxiv.org/abs/2203.03668,A Typology for Exploring the Mitigation of Shortcut Behavior,"['Felix Friedrich', 'Wolfgang Stammer', 'Patrick Schramowski', 'Kristian Kersting']",2022-03-04T00:00:00Z,arxiv,, 91248,https://arxiv.org/abs/1901.04966,Identifying and Correcting Label Bias in Machine Learning,"['Heinrich Jiang', 'Ofir Nachum']",2019-01-15T00:00:00Z,arxiv,, 91263,https://arxiv.org/abs/2011.06118,I Know What You Meant: Learning Human Objectives by (Under)estimating Their Choice Set,"['Ananth Jonnavittula', 'Dylan P. 
Losey']",2020-11-11T00:00:00Z,arxiv,, 91279,https://arxiv.org/abs/1804.00222,Meta-Learning Update Rules for Unsupervised Representation Learning,"['Luke Metz', 'Niru Maheswaranathan', 'Brian Cheung', 'Jascha Sohl-Dickstein']",2018-03-31T00:00:00Z,arxiv,, 91302,https://arxiv.org/abs/2006.10029,Big Self-Supervised Models are Strong Semi-Supervised Learners,"['Ting Chen', 'Simon Kornblith', 'Kevin Swersky', 'Mohammad Norouzi', 'Geoffrey Hinton']",2020-06-17T00:00:00Z,arxiv,, 91326,https://arxiv.org/abs/2108.09404,Safe Transformative AI via a Windfall Clause,"['Paolo Bova', 'Jonas Emanuel Müller', 'Benjamin Harack']",2021-08-20T00:00:00Z,arxiv,, 91343,https://arxiv.org/abs/2111.14874,Weighing the Milky Way and Andromeda with Artificial Intelligence,"['Pablo Villanueva-Domingo', 'Francisco Villaescusa-Navarro', 'Shy Genel', 'Daniel Anglés-Alcázar', 'Lars Hernquist', 'Federico Marinacci', 'David N. Spergel', 'Mark Vogelsberger', 'Desika Narayanan']",2021-11-29T00:00:00Z,arxiv,, 91361,https://arxiv.org/abs/1809.10283,Adding Neural Network Controllers to Behavior Trees without Destroying Performance Guarantees,"['Christopher Iliffe Sprague', 'Petter Ögren']",2018-09-26T00:00:00Z,arxiv,, 91377,https://arxiv.org/abs/2011.06709,Active Reinforcement Learning: Observing Rewards at a Cost,"['David Krueger', 'Jan Leike', 'Owain Evans', 'John Salvatier']",2020-11-13T00:00:00Z,arxiv,, 91394,https://arxiv.org/abs/2007.14244,Automated Database Indexing using Model-free Reinforcement Learning,"['Gabriel Paludo Licks', 'Felipe Meneguzzi']",2020-07-25T00:00:00Z,arxiv,, 91412,https://arxiv.org/abs/1802.07228,"The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation","['Miles Brundage', 'Shahar Avin', 'Jack Clark', 'Helen Toner', 'Peter Eckersley', 'Ben Garfinkel', 'Allan Dafoe', 'Paul Scharre', 'Thomas Zeitzoff', 'Bobby Filar', 'Hyrum Anderson', 'Heather Roff', 'Gregory C. Allen', 'Jacob Steinhardt', 'Carrick Flynn', 'Seán Ó hÉigeartaigh', 'Simon Beard', 'Haydn Belfield', 'Sebastian Farquhar', 'Clare Lyle', 'Rebecca Crootof', 'Owain Evans', 'Michael Page', 'Joanna Bryson', 'Roman Yampolskiy', 'Dario Amodei']",2018-02-20T00:00:00Z,arxiv,, 91449,https://arxiv.org/abs/1801.08757,Safe Exploration in Continuous Action Spaces,"['Gal Dalal', 'Krishnamurthy Dvijotham', 'Matej Vecerik', 'Todd Hester', 'Cosmin Paduraru', 'Yuval Tassa']",2018-01-26T00:00:00Z,arxiv,, 91464,https://arxiv.org/abs/1907.11826,Bayesian Robustness: A Nonasymptotic Viewpoint.,"['Kush Bhatia', 'Yi-An Ma', 'Anca D', 'Dragan', 'Peter L', 'Bartlett', 'Michael I', 'Jordan']",2019-08-14T00:00:00Z,arxiv,, 91473,https://arxiv.org/abs/1807.01672,Ranked Reward: Enabling Self-Play Reinforcement Learning for Combinatorial Optimization,"['Alexandre Laterre', 'Yunguan Fu', 'Mohamed Khalil Jabri', 'Alain-Sam Cohen', 'David Kas', 'Karl Hajjar', 'Torbjorn S. Dahl', 'Amine Kerkeni', 'Karim Beguir']",2018-07-04T00:00:00Z,arxiv,, 91485,https://arxiv.org/abs/1904.09959,Optimization and Abstraction: A Synergistic Approach for Analyzing Neural Network Robustness,"['Greg Anderson', 'Shankara Pailoor', 'Isil Dillig', 'Swarat Chaudhuri']",2019-04-22T00:00:00Z,arxiv,, 91499,https://arxiv.org/abs/2101.05507,Evaluating the Robustness of Collaborative Agents,"['Paul Knott', 'Micah Carroll', 'Sam Devlin', 'Kamil Ciosek', 'Katja Hofmann', 'A. D. 
Dragan', 'Rohin Shah']",2021-01-14T00:00:00Z,arxiv,, 91532,https://arxiv.org/abs/2002.08512,The Problem with Metrics is a Fundamental Problem for AI,"['Rachel Thomas', 'David Uminsky']",2020-02-20T00:00:00Z,arxiv,, 91552,https://arxiv.org/abs/1807.06919,"Backplay: ""Man muss immer umkehren""","['Cinjon Resnick', 'Roberta Raileanu', 'Sanyam Kapoor', 'Alexander Peysakhovich', 'Kyunghyun Cho', 'Joan Bruna']",2018-07-18T00:00:00Z,arxiv,, 91569,https://arxiv.org/abs/2102.08029,Transferring Domain Knowledge with an Adviser in Continuous Tasks,"['Rukshan Wijesinghe', 'Kasun Vithanage', 'Dumindu Tissera', 'Alex Xavier', 'Subha Fernando', 'Jayathu Samarawickrama']",2021-02-16T00:00:00Z,arxiv,, 91584,https://arxiv.org/abs/2107.05383,"Not Quite 'Ask a Librarian': AI on the Nature, Value, and Future of LIS","['Jesse David Dinneen', 'Helen Bubinger']",2021-07-07T00:00:00Z,arxiv,, 91615,https://arxiv.org/abs/1810.04538,Secure Deep Learning Engineering: A Software Quality Assurance Perspective,"['Lei Ma', 'Felix Juefei-Xu', 'Minhui Xue', 'Qiang Hu', 'Sen Chen', 'Bo Li', 'Yang Liu', 'Jianjun Zhao', 'Jianxiong Yin', 'Simon See']",2018-10-10T00:00:00Z,arxiv,, 91642,https://arxiv.org/abs/2003.06417,Sparse Graphical Memory for Robust Planning.,"['Scott Emmons', 'Ajay Jain', 'Michael Laskin', 'Thanard Kurutach', 'Pieter Abbeel', 'Deepak Pathak']",2020-08-14T00:00:00Z,arxiv,, 91662,https://arxiv.org/abs/2105.13431,An Offline Risk-aware Policy Selection Method for Bayesian Markov Decision Processes,"['Giorgio Angelotti', 'Nicolas Drougard', 'Caroline Ponzoni Carvalho Chanel']",2021-05-27T00:00:00Z,arxiv,, 91678,https://arxiv.org/abs/2106.00672,What Matters for Adversarial Imitation Learning?,"['Manu Orsini', 'Anton Raichuk', 'Léonard Hussenot', 'Damien Vincent', 'Robert Dadashi', 'Sertan Girgin', 'Matthieu Geist', 'Olivier Bachem', 'Olivier Pietquin', 'Marcin Andrychowicz']",2021-06-01T00:00:00Z,arxiv,, 91712,https://arxiv.org/abs/1904.11737,Using Sub-Optimal Plan Detection to Identify Commitment Abandonment in Discrete Environments,"['Ramon Fraga Pereira', 'Nir Oren', 'Felipe Meneguzzi']",2019-04-26T00:00:00Z,arxiv,, 91729,https://arxiv.org/abs/1412.6980,Adam: A Method for Stochastic Optimization,"['Diederik P. Kingma', 'Jimmy Ba']",2014-12-22T00:00:00Z,arxiv,, 91753,https://arxiv.org/abs/2007.02823,Dynamic Awareness.,"['Joseph Y Halpern', 'Evan Piermont']",2020-08-14T00:00:00Z,arxiv,, 91771,https://arxiv.org/abs/1803.06373,Adversarial Logit Pairing,"['Harini Kannan', 'Alexey Kurakin', 'Ian Goodfellow']",2018-03-16T00:00:00Z,arxiv,, 91806,https://arxiv.org/abs/2003.03181,Can ML predict the solution value for a difficult combinatorial problem?,"['Constantine Goulimis', 'Gastón Simone']",2020-03-06T00:00:00Z,arxiv,, 91818,https://arxiv.org/abs/2109.10862,Recursively Summarizing Books with Human Feedback,"['Jeff Wu', 'Long Ouyang', 'Daniel M. 
Ziegler', 'Nisan Stiennon', 'Ryan Lowe', 'Jan Leike', 'Paul Christiano']",2021-09-22T00:00:00Z,arxiv,, 91842,https://arxiv.org/abs/2011.10804,BARS: Joint Search of Cell Topology and Layout for Accurate and Efficient Binary ARchitectures,"['Tianchen Zhao', 'Xuefei Ning', 'Xiangsheng Shi', 'Songyi Yang', 'Shuang Liang', 'Peng Lei', 'Jianfei Chen', 'Huazhong Yang', 'Yu Wang']",2020-11-21T00:00:00Z,arxiv,, 91866,https://arxiv.org/abs/1912.07045,How Should an Agent Practice?,"['Janarthanan Rajendran', 'Richard Lewis', 'Vivek Veeriah', 'Honglak Lee', 'Satinder Singh']",2020-08-14T00:00:00Z,arxiv,, 91884,https://arxiv.org/abs/1802.01536,Expressive Robot Motion Timing,"['Allan Zhou', 'Dylan Hadfield-Menell', 'Anusha Nagabandi', 'Anca Dragan']",2017-08-14T00:00:00Z,arxiv,, 91900,https://arxiv.org/abs/1804.05917,Heuristic Approaches for Goal Recognition in Incomplete Domain Models,"['Ramon Fraga Pereira', 'Felipe Meneguzzi']",2018-04-16T00:00:00Z,arxiv,, 91922,https://arxiv.org/abs/1302.4970,Is There a Role for Qualitative Risk Assessment?,"['Paul J. Krause', 'John Fox', 'Philip Judson']",2013-02-20T00:00:00Z,arxiv,, 91937,https://arxiv.org/abs/1906.09624,"On the Feasibility of Learning, Rather than Assuming, Human Biases for Reward Inference","['Rohin Shah', 'Noah Gundotra', 'Pieter Abbeel', 'Anca D. Dragan']",2019-06-23T00:00:00Z,arxiv,, 91955,https://arxiv.org/abs/1010.5445,Theory and applications of Robust Optimization,['Dimitris Bertsimas'],,arxiv,, 91975,https://arxiv.org/abs/1302.4978,Exploiting the Rule Structure for Decision Making within the Independent Choice Logic,['David L. Poole'],2013-02-20T00:00:00Z,arxiv,, 91987,https://arxiv.org/abs/1811.06032,Natural Environment Benchmarks for Reinforcement Learning,"['Amy Zhang', 'Yuxin Wu', 'Joelle Pineau']",2018-11-14T00:00:00Z,arxiv,, 92013,https://arxiv.org/abs/2101.08001,UPDeT: Universal Multi-agent Reinforcement Learning via Policy Decoupling with Transformers,"['Siyi Hu', 'Fengda Zhu', 'Xiaojun Chang', 'Xiaodan Liang']",2021-01-20T00:00:00Z,arxiv,, 92031,https://arxiv.org/abs/1908.09203,Release Strategies and the Social Impacts of Language Models,"['Irene Solaiman', 'Miles Brundage', 'Jack Clark', 'Amanda Askell', 'Ariel Herbert-Voss', 'Jeff Wu', 'Alec Radford', 'Gretchen Krueger', 'Jong Wook Kim', 'Sarah Kreps', 'Miles McCain', 'Alex Newhouse', 'Jason Blazakis', 'Kris McGuffie', 'Jasmine Wang']",2019-08-24T00:00:00Z,arxiv,, 92065,https://arxiv.org/abs/1906.05909,Stand-Alone Self-Attention in Vision Models,"['Prajit Ramachandran', 'Niki Parmar', 'Ashish Vaswani', 'Irwan Bello', 'Anselm Levskaya', 'Jonathon Shlens']",2019-06-13T00:00:00Z,arxiv,, 92084,https://arxiv.org/abs/1907.13275,Towards a Theory of Intentions for Human-Robot Collaboration,"['Rocio Gomez', 'Mohan Sridharan', 'Heather Riley']",2019-07-31T00:00:00Z,arxiv,, 92107,https://arxiv.org/abs/2010.02629,"A framework for predicting, interpreting, and improving Learning Outcomes","['Chintan Donda', 'Sayan Dasgupta', 'Soma S Dhavala', 'Keyur Faldu', 'Aditi Avasthi']",2020-10-06T00:00:00Z,arxiv,, 92126,https://arxiv.org/abs/2206.13477,Parametrically Retargetable Decision-Makers Tend To Seek Power,"['Alexander Matt Turner', 'Prasad Tadepalli']",2022-06-27T00:00:00Z,arxiv,, 92138,https://arxiv.org/abs/1907.10508,A system of different layers of abstraction for artificial intelligence,"['Alexander Serb', 'Themistoklis Prodromakis']",2019-07-22T00:00:00Z,arxiv,, 92160,https://arxiv.org/abs/2105.00385,pyBKT: An Accessible Python Library of Bayesian Knowledge Tracing Models,"['Anirudhan 
Badrinath', 'Frederic Wang', 'Zachary Pardos']",2021-05-02T00:00:00Z,arxiv,, 92180,https://arxiv.org/abs/2009.14715,Learning Rewards from Linguistic Feedback,"['Theodore R. Sumers', 'Mark K. Ho', 'Robert D. Hawkins', 'Karthik Narasimhan', 'Thomas L. Griffiths']",2020-09-30T00:00:00Z,arxiv,, 92198,https://arxiv.org/abs/2106.01901,Iterative Empirical Game Solving via Single Policy Best Response,"['Max Olan Smith', 'Thomas Anthony', 'Michael P Wellman']",2021-08-14T00:00:00Z,arxiv,, 92216,https://arxiv.org/abs/2204.11966,Estimating and Penalizing Induced Preference Shifts in Recommender Systems,"['Micah Carroll', 'Dylan Hadfield-Menell', 'Stuart Russell', 'Anca Dragan']",2022-08-14T00:00:00Z,arxiv,, 92231,https://arxiv.org/abs/2106.11022,Hard Choices in Artificial Intelligence,"['Roel Dobbe', 'Thomas Krendl Gilbert', 'Yonatan Mintz']",2021-06-10T00:00:00Z,arxiv,, 92252,https://arxiv.org/abs/2106.07643,Unsupervised Learning of Visual 3D Keypoints for Control,"['Boyuan Chen', 'Pieter Abbeel', 'Deepak Pathak']",2021-08-14T00:00:00Z,arxiv,, 92273,https://arxiv.org/abs/1810.06544,"Deep Imitative Models for Flexible Inference, Planning, and Control","['Nicholas Rhinehart', 'Rowan McAllister', 'Sergey Levine']",2018-10-15T00:00:00Z,arxiv,, 92297,https://arxiv.org/abs/2104.03741,Voluntary safety commitments provide an escape from over-regulation in AI development,"['The Anh Han', 'Tom Lenaerts', 'Francisco C. Santos', 'Luis Moniz Pereira']",2021-04-08T00:00:00Z,arxiv,, 92311,https://arxiv.org/abs/1003.5305,Rational Value of Information Estimation for Measurement Selection,"['David Tolpin', 'Solomon Eyal Shimony']",2010-03-27T00:00:00Z,arxiv,, 92324,https://arxiv.org/abs/1503.00038,Sequential Feature Explanations for Anomaly Detection,"['Md Amran Siddiqui', 'Alan Fern', 'Thomas G. Dietterich', 'Weng-Keen Wong']",2015-02-28T00:04:11Z,arxiv,, 92345,https://arxiv.org/abs/2009.10385,A narrowing of AI research?,"['Joel Klinger', 'Juan Mateos-Garcia', 'Konstantinos Stathoulopoulos']",2020-09-22T00:00:00Z,arxiv,, 92377,https://arxiv.org/abs/2002.04833,Reward-rational (implicit) choice: A unifying formalism for reward learning,"['Hong Jun Jeon', 'Smitha Milli', 'Anca D. Dragan']",2019-08-15T00:00:00Z,arxiv,, 92397,https://arxiv.org/abs/2008.04096,Impact of meta-roles on the evolution of organisational institutions,"['Amir Hosein Afshar Sedigh', 'Martin K. Purvis', 'Bastin Tony Roy Savarimuthu', 'Maryam A. Purvis', 'Christopher K. Frantz']",2020-08-07T00:00:00Z,arxiv,, 92412,https://arxiv.org/abs/2211.14648,Learning Visuo-Haptic Skewering Strategies for Robot-Assisted Feeding,"['Priya Sundaresan', 'Suneel Belkhale', 'Dorsa Sadigh']",2022-08-14T00:00:00Z,arxiv,, 92430,https://arxiv.org/abs/2006.04734,Reinforcement Learning Under Moral Uncertainty,"['Adrien Ecoffet', 'Joel Lehman']",2020-06-08T00:00:00Z,arxiv,, 92452,https://arxiv.org/abs/2003.02979,"""Other-Play"" for Zero-Shot Coordination","['Hengyuan Hu', 'Adam Lerer', 'Alex Peysakhovich', 'Jakob Foerster']",2020-03-06T00:00:00Z,arxiv,, 92470,https://arxiv.org/abs/1911.00497,A Narration-based Reward Shaping Approach using Grounded Natural Language Commands,"['Nicholas Waytowich', 'Sean L. Barton', 'Vernon Lawhern', 'Garrett Warnell']",2019-10-31T00:00:00Z,arxiv,, 92489,https://arxiv.org/abs/2204.10817,Reward Reports for Reinforcement Learning,"['Thomas Krendl Gilbert', 'Sarah Dean', 'Nathan Lambert', 'Tom Zick', 'Aaron Snoswell']",2022-08-21T00:00:00Z,arxiv,, 92508,https://arxiv.org/abs/2104.03113,Scaling Scaling Laws with Board Games,['Andy L. 
Jones'],2021-04-07T00:00:00Z,arxiv,, 92523,https://arxiv.org/abs/2102.02872,Feedback in Imitation Learning: The Three Regimes of Covariate Shift,"['Jonathan Spencer', 'Sanjiban Choudhury', 'Arun Venkatraman', 'Brian Ziebart', 'J. Andrew Bagnell']",2021-02-04T00:00:00Z,arxiv,, 92545,https://arxiv.org/abs/2210.12628,Spending Thinking Time Wisely: Accelerating MCTS with Virtual Expansions.,"['Weirui Ye', 'Pieter Abbeel', 'Yang Gao']",2022-08-14T00:00:00Z,arxiv,, 92555,https://arxiv.org/abs/1807.09341,Learning Plannable Representations with Causal InfoGAN,"['Thanard Kurutach', 'Aviv Tamar', 'Ge Yang', 'Stuart Russell', 'Pieter Abbeel']",2018-07-24T00:00:00Z,arxiv,, 92572,https://arxiv.org/abs/1705.08417,Reinforcement Learning with a Corrupted Reward Channel,"['Tom Everitt', 'Victoria Krakovna', 'Laurent Orseau', 'Marcus Hutter', 'Shane Legg']",2017-05-23T00:00:00Z,arxiv,, 92598,https://arxiv.org/abs/2107.07002,The Benchmark Lottery,"['Mostafa Dehghani', 'Yi Tay', 'Alexey A. Gritsenko', 'Zhe Zhao', 'Neil Houlsby', 'Fernando Diaz', 'Donald Metzler', 'Oriol Vinyals']",2021-07-14T00:00:00Z,arxiv,, 92630,https://arxiv.org/abs/1803.04926,Active Reinforcement Learning with Monte-Carlo Tree Search,"['Sebastian Schulze', 'Owain Evans']",2018-03-13T00:00:00Z,arxiv,, 92648,https://arxiv.org/abs/cs/0305033,Beslutstödssystemet Dezzy - en översikt,"['Ulla Bergsten', 'Johan Schubert', 'Per Svensson']",2003-05-16T00:00:00Z,arxiv,, 92663,https://arxiv.org/abs/2006.01855,Aligning Superhuman AI with Human Behavior: Chess as a Model System,"['Reid McIlroy-Young', 'Siddhartha Sen', 'Jon Kleinberg', 'Ashton Anderson']",2020-06-02T00:00:00Z,arxiv,, 92685,https://arxiv.org/abs/1911.13152,Induction of Subgoal Automata for Reinforcement Learning,"['Daniel Furelos-Blanco', 'Mark Law', 'Alessandra Russo', 'Krysia Broda', 'Anders Jonsson']",2019-11-29T00:00:00Z,arxiv,, 92702,https://arxiv.org/abs/1808.03644,Building Safer AGI by introducing Artificial Stupidity,"['Michaël Trazzi', 'Roman V. Yampolskiy']",2018-08-11T00:00:00Z,arxiv,, 92736,https://arxiv.org/abs/1907.09273,Why Build an Assistant in Minecraft?,"['Arthur Szlam', 'Jonathan Gray', 'Kavya Srinet', 'Yacine Jernite', 'Armand Joulin', 'Gabriel Synnaeve', 'Douwe Kiela', 'Haonan Yu', 'Zhuoyuan Chen', 'Siddharth Goyal', 'Demi Guo', 'Danielle Rothermel', 'C. Lawrence Zitnick', 'Jason Weston']",2019-07-22T00:00:00Z,arxiv,, 92766,https://arxiv.org/abs/2010.00403,Mediating Artificial Intelligence Developments through Negative and Positive Incentives,"['The Anh Han', 'Luis Moniz Pereira', 'Tom Lenaerts', 'Francisco C. 
Santos']",2020-10-01T00:00:00Z,arxiv,, 92791,https://arxiv.org/abs/cs/0312040,Diagnostic reasoning with A-Prolog,"['Marcello Balduccini', 'Michael Gelfond']",2003-12-18T00:00:00Z,arxiv,, 92811,https://arxiv.org/abs/2012.11538,Evaluating Agents without Rewards,"['Brendon Matusch', 'Jimmy Ba', 'Danijar Hafner']",2020-12-21T00:00:00Z,arxiv,, 92832,https://arxiv.org/abs/2103.12983,Counterfactual Explanation with Multi-Agent Reinforcement Learning for Drug Target Prediction,"['Tri Minh Nguyen', 'Thomas P Quinn', 'Thin Nguyen', 'Truyen Tran']",2021-03-24T00:00:00Z,arxiv,, 92849,https://arxiv.org/abs/1906.10918,Towards Empathic Deep Q-Learning,"['Bart Bussmann', 'Jacqueline Heinerman', 'Joel Lehman']",2019-06-26T00:00:00Z,arxiv,, 92869,https://arxiv.org/abs/2005.10243,What Makes for Good Views for Contrastive Learning?,"['Yonglong Tian', 'Chen Sun', 'Ben Poole', 'Dilip Krishnan', 'Cordelia Schmid', 'Phillip Isola']",2020-05-20T00:00:00Z,arxiv,, 92891,https://arxiv.org/abs/1607.06450,Layer Normalization,['Jimmy Lei Ba'],2016-07-21T19:57:52Z,arxiv,, 92913,https://arxiv.org/abs/1506.03030,Computational Extensive-Form Games.,"['Joseph Y', 'Halpern', 'Rafael Pass', 'Lior Seeman']",2017-08-14T00:00:00Z,arxiv,, 92936,https://arxiv.org/abs/1812.05979,Scaling shared model governance via model splitting,"['Miljan Martic', 'Jan Leike', 'Andrew Trask', 'Matteo Hessel', 'Shane Legg', 'Pushmeet Kohli']",2018-12-14T00:00:00Z,arxiv,, 92957,https://arxiv.org/abs/1910.13369,A Hamilton-Jacobi Reachability-Based Framework for Predicting and Analyzing Human Motion for Safe Planning,"['Somil Bansal', 'Andrea Bajcsy', 'Ellis Ratner', 'Anca D. Dragan', 'Claire J. Tomlin']",2019-10-29T00:00:00Z,arxiv,, 92976,https://arxiv.org/abs/1903.09328,Improving Safety in Reinforcement Learning Using Model-Based Architectures and Human Intervention,"['Bharat Prakash', 'Mohit Khatwani', 'Nicholas Waytowich', 'Tinoosh Mohsenin']",2019-03-22T00:00:00Z,arxiv,, 92997,https://arxiv.org/abs/1505.04813,What is Learning? A primary discussion about information and Representation,['Hao Wu'],2015-05-19T00:00:00Z,arxiv,, 93017,https://arxiv.org/abs/1302.6837,Anytime Decision Making with Imprecise Probabilities,['Michael Pittarelli'],2013-02-27T00:00:00Z,arxiv,, 93036,https://arxiv.org/abs/2001.07417,Explaining Data-Driven Decisions made by AI Systems: The Counterfactual Approach,"['Carlos Fernández-Loría', 'Foster Provost', 'Xintian Han']",2020-01-21T00:00:00Z,arxiv,, 93058,https://arxiv.org/abs/2201.11117,Cybertrust: From Explainable to Actionable and Interpretable AI (AI2),"['Stephanie Galaitsi', 'Benjamin D. Trump', 'Jeffrey M. 
Keisler', 'Igor Linkov', 'Alexander Kott']",2022-01-26T00:00:00Z,arxiv,, 93079,https://arxiv.org/abs/2111.15121,Pyramid Adversarial Training Improves ViT Performance,"['Charles Herrmann', 'Kyle Sargent', 'Lu Jiang', 'Ramin Zabih', 'Huiwen Chang', 'Ce Liu', 'Dilip Krishnan', 'Deqing Sun']",2021-11-30T00:00:00Z,arxiv,, 93094,https://arxiv.org/abs/2107.04303,"Integrating Planning, Execution and Monitoring in the presence of Open World Novelties: Case Study of an Open World Monopoly Solver","['Sriram Gopalakrishnan', 'Utkarsh Soni', 'Tung Thai', 'Panagiotis Lymperopoulos', 'Matthias Scheutz', 'Subbarao Kambhampati']",2021-07-09T00:00:00Z,arxiv,, 93111,https://arxiv.org/abs/1806.05635,Self-Imitation Learning,"['Junhyuk Oh', 'Yijie Guo', 'Satinder Singh', 'Honglak Lee']",2018-06-14T00:00:00Z,arxiv,, 93127,https://arxiv.org/abs/1805.07722,Task-Agnostic Meta-Learning for Few-shot Learning,"['Muhammad Abdullah Jamal', 'Guo-Jun Qi', 'Mubarak Shah']",2018-05-20T00:00:00Z,arxiv,, 93140,https://arxiv.org/abs/2104.00739,Formal Methods for the Informal Engineer: Workshop Recommendations,"['Gopal Sarma', 'James Koppel', 'Gregory Malecha', 'Patrick Schultz', 'Eric Drexler', 'Ramana Kumar', 'Cody Roux', 'Philip Zucker']",2021-04-01T00:00:00Z,arxiv,, 93171,https://arxiv.org/abs/1902.02767,Hybrid Models with Deep and Invertible Features,"['Eric Nalisnick', 'Akihiro Matsukawa', 'Yee Whye Teh', 'Dilan Gorur', 'Balaji Lakshminarayanan']",2019-02-07T00:00:00Z,arxiv,, 93191,https://arxiv.org/abs/1710.07075,Decision Trees for Helpdesk Advisor Graphs,"['Spyros Gkezerlis', 'Dimitris Kalles']",2017-10-19T00:00:00Z,arxiv,, 93208,https://arxiv.org/abs/1811.02553,A Closer Look at Deep Policy Gradients,"['Andrew Ilyas', 'Logan Engstrom', 'Shibani Santurkar', 'Dimitris Tsipras', 'Firdaus Janoos', 'Larry Rudolph', 'Aleksander Madry']",2018-11-06T00:00:00Z,arxiv,, 93235,https://arxiv.org/abs/2010.12606,Exemplary natural images explain CNN activations better than feature visualizations,"['Judy Borowski', 'Roland S. 
Zimmermann']",2020-10-23T18:31:13Z,arxiv,, 93255,https://arxiv.org/abs/2103.03872,Rissanen Data Analysis: Examining Dataset Characteristics via Description Length,"['Ethan Perez', 'Douwe Kiela', 'Kyunghyun Cho']",2021-03-05T00:00:00Z,arxiv,, 93276,https://arxiv.org/abs/2210.13382,Emergent world representations: Exploring a sequence model trained on a synthetic task,['Kenneth Li'],2022-10-24T16:29:55Z,arxiv,, 93295,https://arxiv.org/abs/2111.06956,Human irrationality: both bad and good for reward inference,"['Lawrence Chan', 'Andrew Critch', 'Anca Dragan']",2021-11-12T00:00:00Z,arxiv,, 93306,https://arxiv.org/abs/2103.02886,Improving Computational Efficiency in Visual Reinforcement Learning via Stored Embeddings.,"['Lili Chen', 'Kimin Lee', 'Aravind Srinivas', 'Pieter Abbeel']",2021-08-14T00:00:00Z,arxiv,, 93328,https://arxiv.org/abs/2106.02039,Offline Reinforcement Learning as One Big Sequence Modeling Problem,"['Michael Janner', 'Qiyang Li', 'Sergey Levine']",2021-06-03T00:00:00Z,arxiv,, 93351,https://arxiv.org/abs/2110.08058,Quantifying Local Specialization in Deep Neural Networks,"['Shlomi Hod', 'Daniel Filan', 'Stephen Casper', 'Andrew Critch', 'Stuart Russell']",2021-10-13T00:00:00Z,arxiv,, 93362,https://arxiv.org/abs/2110.10024,Risks of AI Foundation Models in Education,"['Su Lin Blodgett', 'Michael Madaio']",2021-10-19T00:00:00Z,arxiv,, 93393,https://arxiv.org/abs/2105.03414,Using reinforcement learning to design an AI assistantfor a satisfying co-op experience,"['Ajay Krishnan', 'Niranj Jyothish', 'Xun Jia']",2021-05-07T00:00:00Z,arxiv,, 93417,https://arxiv.org/abs/1909.12200,Scaling data-driven robotics with reward sketching and batch reinforcement learning,"['Serkan Cabi', 'Sergio Gómez Colmenarejo', 'Alexander Novikov', 'Ksenia Konyushkova', 'Scott Reed', 'Rae Jeong', 'Konrad Zolna', 'Yusuf Aytar', 'David Budden', 'Mel Vecerik', 'Oleg Sushkov', 'David Barker', 'Jonathan Scholz', 'Misha Denil', 'Nando de Freitas', 'Ziyu Wang']",2019-09-26T00:00:00Z,arxiv,, 93445,https://arxiv.org/abs/1812.03980,Building Ethically Bounded AI,"['Francesca Rossi', 'Nicholas Mattei']",2018-12-10T00:00:00Z,arxiv,, 93460,https://arxiv.org/abs/1806.01261,"Relational inductive biases, deep learning, and graph networks","['Peter W. Battaglia', 'Jessica B. Hamrick', 'Victor Bapst', 'Alvaro Sanchez-Gonzalez', 'Vinicius Zambaldi', 'Mateusz Malinowski', 'Andrea Tacchetti', 'David Raposo', 'Adam Santoro', 'Ryan Faulkner', 'Caglar Gulcehre', 'Francis Song', 'Andrew Ballard', 'Justin Gilmer', 'George Dahl', 'Ashish Vaswani', 'Kelsey Allen', 'Charles Nash', 'Victoria Langston', 'Chris Dyer', 'Nicolas Heess', 'Daan Wierstra', 'Pushmeet Kohli', 'Matt Botvinick', 'Oriol Vinyals', 'Yujia Li', 'Razvan Pascanu']",2018-06-04T00:00:00Z,arxiv,, 93475,https://arxiv.org/abs/2006.07558,Ethical Considerations for AI Researchers,['Kyle Dent'],2020-06-13T00:00:00Z,arxiv,, 93502,https://arxiv.org/abs/1806.11146,Adversarial Reprogramming of Neural Networks,"['Gamaleldin F. 
Elsayed', 'Ian Goodfellow', 'Jascha Sohl-Dickstein']",2018-06-28T00:00:00Z,arxiv,, 93524,https://arxiv.org/abs/1909.07528,Emergent Tool Use From Multi-Agent Autocurricula,"['Bowen Baker', 'Ingmar Kanitscheider', 'Todor Markov', 'Yi Wu', 'Glenn Powell', 'Bob McGrew', 'Igor Mordatch']",2019-09-17T00:00:00Z,arxiv,, 93555,https://arxiv.org/abs/1904.12134,Regulating AI: do we need new tools?,"['Otello Ardovino', 'Jacopo Arpetti', 'Marco Delmastro']",2019-04-27T00:00:00Z,arxiv,, 93571,https://arxiv.org/abs/1803.02912,A Brandom-ian view of Reinforcement Learning towards strong-AI,['Atrisha Sarkar'],2018-03-07T00:00:00Z,arxiv,, 93587,https://arxiv.org/abs/1808.00928,Learning Actionable Representations from Visual Observations,"['Debidatta Dwibedi', 'Jonathan Tompson', 'Corey Lynch', 'Pierre Sermanet']",2018-08-02T00:00:00Z,arxiv,, 93611,https://arxiv.org/abs/2108.01634,Triggering Failures: Out-Of-Distribution detection by learning from local adversarial attacks in Semantic Segmentation,"['Victor Besnier', 'Andrei Bursuc', 'David Picard', 'Alexandre Briot']",2021-08-03T00:00:00Z,arxiv,, 93629,https://arxiv.org/abs/1901.03327,A New Tensioning Method using Deep Reinforcement Learning for Surgical Pattern Cutting,"['Thanh Thi Nguyen', 'Ngoc Duy Nguyen', 'Fernando Bello', 'Saeid Nahavandi']",2019-01-10T00:00:00Z,arxiv,, 93644,https://arxiv.org/abs/1602.07029,Latent Skill Embedding for Personalized Lesson Sequence Recommendation,"['Siddharth Reddy', 'Igor Labutov', 'Thorsten Joachims']",2016-02-23T00:00:00Z,arxiv,, 93661,https://arxiv.org/abs/2107.02907,Learning Latent Actions to Control Assistive Robots.,"['Dylan Losey', 'Hong Jun Jeon', 'Mengxi Li', 'Krishnan Srinivasan', 'Ajay Mandlekar', 'Animesh Garg', 'Jeannette Bohg', 'Dorsa Sadigh']",2022-08-14T00:00:00Z,arxiv,, 93682,https://arxiv.org/abs/2110.03175,Fingerprinting Multi-exit Deep Neural Network Models via Inference Time,"['Tian Dong', 'Han Qiu', 'Tianwei Zhang', 'Jiwei Li', 'Hewu Li', 'Jialiang Lu']",2021-10-07T00:00:00Z,arxiv,, 93699,https://arxiv.org/abs/2210.08906,A.I. Robustness: a Human-Centered Perspective on Technological Challenges and Opportunities,"['Andrea Tocchetti', 'Lorenzo Corti', 'Agathe Balayn', 'Mireia Yurrita', 'Philip Lippmann', 'Marco Brambilla', 'Jie Yang']",2022-10-17T00:00:00Z,arxiv,, 93731,https://arxiv.org/abs/1803.10664,Autonomous Intelligent Cyber-defense Agent (AICA) Reference Architecture. Release 2.0,"['Alexander Kott', 'Paul Théron', 'Martin Drašar', 'Edlira Dushku', 'Benoît LeBlanc', 'Paul Losiewicz', 'Alessandro Guarino', 'Luigi Mancini', 'Agostino Panico', 'Mauno Pihelgas', 'Krzysztof Rzadca', 'Fabio De Gaspari']",2018-03-28T00:00:00Z,arxiv,, 93771,https://arxiv.org/abs/1805.10265,Training verified learners with learned verifiers,"['Krishnamurthy Dvijotham', 'Sven Gowal', 'Robert Stanforth', 'Relja Arandjelovic', ""Brendan O'Donoghue"", 'Jonathan Uesato', 'Pushmeet Kohli']",2018-05-25T00:00:00Z,arxiv,, 93784,https://arxiv.org/abs/2202.09039,Critical Checkpoints for Evaluating Defence Models Against Adversarial Attack and Robustness,"['Kanak Tekwani', 'Manojkumar Parmar']",2022-02-18T00:00:00Z,arxiv,, 93817,https://arxiv.org/abs/2104.06613,Detection of Dataset Shifts in Learning-Enabled Cyber-Physical Systems using Variational Autoencoder for Regression,"['Feiyang Cai', 'Ali I. 
Ozdagli', 'Xenofon Koutsoukos']",2021-04-14T00:00:00Z,arxiv,, 93827,https://arxiv.org/abs/2110.08322,Robustness of different loss functions and their impact on networks learning capability,['Vishal Rajput'],2021-10-15T00:00:00Z,arxiv,, 93848,https://arxiv.org/abs/2004.07213,Toward Trustworthy AI Development: Mechanisms for Supporting Verifiable Claims,"['Miles Brundage', 'Shahar Avin', 'Jasmine Wang', 'Haydn Belfield', 'Gretchen Krueger', 'Gillian Hadfield', 'Heidy Khlaaf', 'Jingying Yang', 'Helen Toner', 'Ruth Fong', 'Tegan Maharaj', 'Pang Wei Koh', 'Sara Hooker', 'Jade Leung', 'Andrew Trask', 'Emma Bluemke', 'Jonathan Lebensold', ""Cullen O'Keefe"", 'Mark Koren', 'Théo Ryffel', 'JB Rubinovitz', 'Tamay Besiroglu', 'Federica Carugati', 'Jack Clark', 'Peter Eckersley', 'Sarah de Haas', 'Maritza Johnson', 'Ben Laurie', 'Alex Ingerman', 'Igor Krawczuk', 'Amanda Askell', 'Rosario Cammarota', 'Andrew Lohn', 'David Krueger', 'Charlotte Stix', 'Peter Henderson', 'Logan Graham', 'Carina Prunkl', 'Bianca Martin', 'Elizabeth Seger', 'Noa Zilberman', 'Seán Ó hÉigeartaigh', 'Frens Kroeger', 'Girish Sastry', 'Rebecca Kagan', 'Adrian Weller', 'Brian Tse', 'Elizabeth Barnes', 'Allan Dafoe', 'Paul Scharre', 'Ariel Herbert-Voss', 'Martijn Rasser', 'Shagun Sodhani', 'Carrick Flynn', 'Thomas Krendl Gilbert', 'Lisa Dyer', 'Saif Khan', 'Yoshua Bengio', 'Markus Anderljung']",2020-04-15T00:00:00Z,arxiv,, 93893,https://arxiv.org/abs/2206.03378,Imitating Past Successes can be Very Suboptimal,"['Benjamin Eysenbach', 'Soumith Udatha', 'Sergey Levine', 'Ruslan Salakhutdinov']",2022-06-07T00:00:00Z,arxiv,, 93911,https://arxiv.org/abs/2206.08325,Characteristics of Harmful Text: Towards Rigorous Benchmarking of Language Models,"['Maribeth Rauh', 'John Mellor', 'Jonathan Uesato', 'Po-Sen Huang', 'Johannes Welbl', 'Laura Weidinger', 'Sumanth Dathathri', 'Amelia Glaese', 'Geoffrey Irving', 'Iason Gabriel', 'William Isaac', 'Lisa Anne Hendricks']",2022-06-16T00:00:00Z,arxiv,, 93929,https://arxiv.org/abs/1902.09725,Conservative agency via attainable utility preservation..,"['Alexander Matt Turner', 'Dylan Hadfield-Menell', 'Prasad Tadepalli']",2020-08-15T00:00:00Z,arxiv,, 93947,https://arxiv.org/abs/1504.03592,Towards Verifiably Ethical Robot Behaviour,"['Louise A. Dennis', 'Michael Fisher', 'Alan F. T. 
Winfield']",2015-04-14T00:00:00Z,arxiv,, 93958,https://arxiv.org/abs/1602.04938,"""Why Should I Trust You?"": Explaining the Predictions of Any Classifier","['Marco Tulio Ribeiro', 'Sameer Singh', 'Carlos Guestrin']",2016-02-16T00:00:00Z,arxiv,, 93974,https://arxiv.org/abs/1902.07742,From Language to Goals: Inverse Reinforcement Learning for Vision-Based Instruction Following,"['Justin Fu', 'Anoop Korattikara', 'Sergey Levine', 'Sergio Guadarrama']",2019-02-20T00:00:00Z,arxiv,, 93995,https://arxiv.org/abs/1711.00694,Interpretable and Pedagogical Examples.,"['Smitha Milli', 'Pieter Abbeel', 'Igor Mordatch']",2020-08-14T00:00:00Z,arxiv,, 94007,https://arxiv.org/abs/2202.01679,Certifying Out-of-Domain Generalization for Blackbox Functions,"['Maurice Weber', 'Linyi Li', 'Boxin Wang', 'Zhikuan Zhao', 'Bo Li', 'Ce Zhang']",2022-02-03T00:00:00Z,arxiv,, 94024,https://arxiv.org/abs/2002.01080,Bridging the Gap: Providing Post-Hoc Symbolic Explanations for Sequential Decision-Making Problems with Inscrutable Representations,"['Sarath Sreedharan', 'Utkarsh Soni', 'Mudit Verma', 'Siddharth Srivastava', 'Subbarao Kambhampati']",2020-02-04T00:00:00Z,arxiv,, 94045,https://arxiv.org/abs/2005.05960,Planning to Explore via Self-Supervised World Models,"['Ramanan Sekar', 'Oleh Rybkin', 'Kostas Daniilidis', 'Pieter Abbeel', 'Danijar Hafner', 'Deepak Pathak']",2020-05-12T00:00:00Z,arxiv,, 94064,https://arxiv.org/abs/2004.07450,Subjectifying Objectivity: Delineating Tastes in Theoretical Quantum Gravity Research,"['Thomas K. Gilbert', 'Andrew J. Loveridge']",2020-04-16T04:16:56Z,arxiv,, 94080,https://arxiv.org/abs/1912.05652,Learning Human Objectives by Evaluating Hypothetical Behavior,"['Siddharth Reddy', 'Anca D. Dragan', 'Sergey Levine', 'Shane Legg', 'Jan Leike']",2019-12-05T00:00:00Z,arxiv,, 94106,https://arxiv.org/abs/2006.05990,What Matters In On-Policy Reinforcement Learning? A Large-Scale Empirical Study,"['Marcin Andrychowicz', 'Anton Raichuk', 'Piotr Stańczyk', 'Manu Orsini', 'Sertan Girgin', 'Raphael Marinier', 'Léonard Hussenot', 'Matthieu Geist', 'Olivier Pietquin', 'Marcin Michalski', 'Sylvain Gelly', 'Olivier Bachem']",2020-06-10T00:00:00Z,arxiv,, 94135,https://arxiv.org/abs/2107.14414,Towards Understanding the Impact of Real-Time AI-Powered Educational Dashboards (RAED) on Providing Guidance to Instructors,['Ajay Kulkarni'],2021-07-30T00:00:00Z,arxiv,, 94158,https://arxiv.org/abs/1812.10144,Can rationality be measured?,['Tshilidzi Marwala'],2018-12-25T00:00:00Z,arxiv,, 94186,https://arxiv.org/abs/2101.02032,"Socially Responsible AI Algorithms: Issues, Purposes, and Challenges","['Lu Cheng', 'Kush R. Varshney', 'Huan Liu']",2021-01-01T00:00:00Z,arxiv,, 94217,https://arxiv.org/abs/1807.01697,Benchmarking Neural Network Robustness to Common Corruptions and Surface Variations,"['Dan Hendrycks', 'Thomas G. Dietterich']",2018-07-04T00:00:00Z,arxiv,, 94257,https://arxiv.org/abs/2107.01915,Logic Locking at the Frontiers of Machine Learning: A Survey on Developments and Opportunities,"['Dominik Sisejkovic', 'Lennart M. 
Reimann', 'Elmira Moussavi', 'Farhad Merchant', 'Rainer Leupers']",2021-07-05T00:00:00Z,arxiv,, 94283,https://arxiv.org/abs/1905.09130,AI-CARGO: A Data-Driven Air-Cargo Revenue Management System,"['Stefano Giovanni Rizzo', 'Ji Lucas', 'Zoi Kaoudi', 'Jorge-Arnulfo Quiane-Ruiz', 'Sanjay Chawla']",2019-05-22T00:00:00Z,arxiv,, 94304,https://arxiv.org/abs/2007.16089,Toward Campus Mail Delivery Using BDI,"['Chidiebere Onyedinma', 'Patrick Gavigan', 'Babak Esfandiari']",2020-07-23T00:00:00Z,arxiv,, 94328,https://arxiv.org/abs/1707.08759,Together We Know How to Achieve: An Epistemic Logic of Know-How (Extended Abstract),"['Pavel Naumov', 'Jia Tao']",2017-07-27T00:00:00Z,arxiv,, 94351,https://arxiv.org/abs/2006.16668,GShard: Scaling Giant Models with Conditional Computation and Automatic Sharding,"['Dmitry Lepikhin', 'HyoukJoong Lee', 'Yuanzhong Xu', 'Dehao Chen', 'Orhan Firat', 'Yanping Huang', 'Maxim Krikun', 'Noam Shazeer', 'Zhifeng Chen']",2020-06-30T00:00:00Z,arxiv,, 94384,https://arxiv.org/abs/2010.09468,Chance-Constrained Control with Lexicographic Deep Reinforcement Learning,"['Alessandro Giuseppi', 'Antonio Pietrabissa']",2020-10-19T00:00:00Z,arxiv,, 94398,https://arxiv.org/abs/1905.06876,"From What to How: An Initial Review of Publicly Available AI Ethics Tools, Methods and Research to Translate Principles into Practices","['Jessica Morley', 'Luciano Floridi', 'Libby Kinsey', 'Anat Elhalal']",2019-05-15T00:00:00Z,arxiv,, 94427,https://arxiv.org/abs/1805.11686,Variational Inverse Control with Events: A General Framework for Data-Driven Reward Definition,"['Justin Fu', 'Avi Singh', 'Dibya Ghosh', 'Larry Yang', 'Sergey Levine']",2018-05-29T00:00:00Z,arxiv,, 94443,https://arxiv.org/abs/1707.06354,Pragmatic-Pedagogic Value Alignment,"['Jaime F. Fisac', 'Monica A. Gates', 'Jessica B. Hamrick', 'Chang Liu', 'Dylan Hadfield-Menell', 'Malayandi Palaniappan', 'Dhruv Malik', 'S. Shankar Sastry', 'Thomas L. Griffiths', 'Anca D. Dragan']",2017-07-20T00:00:00Z,arxiv,, 94460,https://arxiv.org/abs/1804.00645,Universal Planning Networks,"['Aravind Srinivas', 'Allan Jabri', 'Pieter Abbeel', 'Sergey Levine', 'Chelsea Finn']",2018-04-02T00:00:00Z,arxiv,, 94480,https://arxiv.org/abs/2003.01593,Marketplace for AI Models,"['Abhishek Kumar', 'Benjamin Finley', 'Tristan Braud', 'Sasu Tarkoma', 'Pan Hui']",2020-03-03T00:00:00Z,arxiv,, 94517,https://arxiv.org/abs/1304.2713,Dempster-Shafer vs. Probabilistic Logic,['Daniel Hunter'],2013-03-27T00:00:00Z,arxiv,, 94534,https://arxiv.org/abs/2102.08686,Fully General Online Imitation Learning,"['Michael K. 
Cohen', 'Marcus Hutter', 'Neel Nanda']",2021-02-17T00:00:00Z,arxiv,, 94551,https://arxiv.org/abs/1911.00061,DeepLine: AutoML Tool for Pipelines Generation using Deep Reinforcement Learning and Hierarchical Actions Filtering,"['Yuval Heffetz', 'Roman Vainstein', 'Gilad Katz', 'Lior Rokach']",2019-10-31T00:00:00Z,arxiv,, 94572,https://arxiv.org/abs/2102.05008,Equilibrium Refinements for Multi-Agent Influence Diagrams: Theory and Practice,"['Lewis Hammond', 'James Fox', 'Tom Everitt', 'Alessandro Abate', 'Michael Wooldridge']",2021-02-09T00:00:00Z,arxiv,, 94595,https://arxiv.org/abs/2102.09430,State Entropy Maximization with Random Encoders for Efficient Exploration,"['Younggyo Seo', 'Lili Chen', 'Jinwoo Shin', 'Honglak Lee', 'Pieter Abbeel', 'Kimin Lee']",2021-08-14T00:00:00Z,arxiv,, 94618,https://arxiv.org/abs/2209.13873,InFi: End-to-End Learning to Filter Input for Resource-Efficiency in Mobile-Centric Inference,"['Mu Yuan', 'Lan Zhang', 'Fengxiang He', 'Xueting Tong', 'Miao-Hui Song', 'Zhengyuan Xu', 'Xiang-Yang Li']",2022-09-28T00:00:00Z,arxiv,, 94635,https://arxiv.org/abs/2011.10753,Emergent Road Rules In Multi-Agent Driving Environments,"['Avik Pal', 'Jonah Philion', 'Yuan-Hong Liao', 'Sanja Fidler']",2020-11-21T00:00:00Z,arxiv,, 94662,https://arxiv.org/abs/2211.10869,Uni[MASK]: Unified Inference in Sequential Decision Problems,"['M. Carroll', 'O. Paradise', 'J. Lin', 'R. Georgescu', 'M. Sun', 'D. Bignell', 'S. Milani', 'K. Hofmann', 'M. Hausknecht', 'A. D. Dragan', 'S. Devlin']",2022-08-14T00:00:00Z,arxiv,, 94676,https://arxiv.org/abs/2101.07691,Choice Set Misspecification in Reward Inference,"['Rachel Freedman', 'Rohin Shah', 'Anca Dragan']",2020-08-14T00:00:00Z,arxiv,, 94694,https://arxiv.org/abs/2111.02840,Adversarial GLUE: A Multi-Task Benchmark for Robustness Evaluation of Language Models,"['Boxin Wang', 'Chejian Xu', 'Shuohang Wang', 'Zhe Gan', 'Yu Cheng', 'Jianfeng Gao', 'Ahmed Hassan Awadallah', 'Bo Li']",2021-11-04T00:00:00Z,arxiv,, 94712,https://arxiv.org/abs/2202.11812,Investigations of Performance and Bias in Human-AI Teamwork in Hiring,"['Andi Peng', 'Besmira Nushi', 'Emre Kiciman', 'Kori Inkpen', 'Ece Kamar']",2022-02-21T00:00:00Z,arxiv,, 94737,https://arxiv.org/abs/2202.11233,Retrieval Augmented Classification for Long-Tail Visual Recognition,"['Alexander Long', 'Wei Yin', 'Thalaiyasingam Ajanthan', 'Vu Nguyen', 'Pulak Purkait', 'Ravi Garg', 'Alan Blair', 'Chunhua Shen', 'Anton van den Hengel']",2022-02-22T00:00:00Z,arxiv,, 94752,https://arxiv.org/abs/2007.07703,Failures of Contingent Thinking,"['Evan Piermont', 'Peio Zuazo-Garin']",2020-07-15T00:00:00Z,arxiv,, 94768,https://arxiv.org/abs/1409.3215,Sequence to Sequence Learning with Neural Networks,"['Ilya Sutskever', 'Oriol Vinyals', 'Quoc V. Le']",2014-09-10T00:00:00Z,arxiv,, 94791,https://arxiv.org/abs/cs/0001015,Multi-Agent Only Knowing,"['Joseph Y. Halpern', 'Gerhard Lakemeyer']",2000-01-19T00:00:00Z,arxiv,, 94809,https://arxiv.org/abs/1810.07483,O2A: One-shot Observational learning with Action vectors,"['Leo Pauly', 'Wisdom C. Agboh', 'David C. 
Hogg', 'Raul Fuentes']",2018-10-17T00:00:00Z,arxiv,, 94826,https://arxiv.org/abs/1810.10525,Toward an AI Physicist for Unsupervised Learning,"['Tailin Wu', 'Max Tegmark']",2018-10-24T00:00:00Z,arxiv,, 94848,https://arxiv.org/abs/2001.06691,Teaching Software Engineering for AI-Enabled Systems,"['Christian Kästner', 'Eunsuk Kang']",2020-01-18T00:00:00Z,arxiv,, 94867,https://arxiv.org/abs/2111.13365,Machines & Influence: An Information Systems Lens,['Shashank Yadav'],2021-11-26T00:00:00Z,arxiv,, 94888,https://arxiv.org/abs/2012.07805,Extracting Training Data from Large Language Models,"['Nicholas Carlini', 'Florian Tramer', 'Eric Wallace', 'Matthew Jagielski', 'Ariel Herbert-Voss', 'Katherine Lee', 'Adam Roberts', 'Tom Brown', 'Dawn Song', 'Ulfar Erlingsson', 'Alina Oprea', 'Colin Raffel']",2020-12-14T00:00:00Z,arxiv,, 94910,https://arxiv.org/abs/1810.10593,Inverse reinforcement learning for video games,"['Aaron Tucker', 'Adam Gleave', 'Stuart Russell']",2018-10-24T00:00:00Z,arxiv,, 94933,https://arxiv.org/abs/1703.01908,A proposal for ethically traceable artificial intelligence,['Christopher A. Tucker'],2017-03-06T00:00:00Z,arxiv,, 94946,https://arxiv.org/abs/2112.07773,Filling gaps in trustworthy development of AI,"['Shahar Avin', 'Haydn Belfield', 'Miles Brundage', 'Gretchen Krueger', 'Jasmine Wang', 'Adrian Weller', 'Markus Anderljung', 'Igor Krawczuk', 'David Krueger', 'Jonathan Lebensold', 'Tegan Maharaj', 'Noa Zilberman']",2021-12-14T00:00:00Z,arxiv,, 94973,https://arxiv.org/abs/2007.16096,On Single Point Forecasts for Fat-Tailed Variables,"['Nassim Nicholas Taleb', 'Yaneer Bar-Yam', 'Pasquale Cirillo']",2020-07-31T14:20:16Z,arxiv,, 94997,https://arxiv.org/abs/2104.02180,AMP: Adversarial Motion Priors for Stylized Physics-Based Character Control,"['Xue Bin Peng', 'Ze Ma', 'Pieter Abbeel', 'Sergey Levine', 'Angjoo Kanazawa']",2021-08-14T00:00:00Z,arxiv,, 95023,https://arxiv.org/abs/2008.12623,From Optimizing Engagement to Measuring Value,"['Smitha Milli', 'Luca Belli', 'Moritz Hardt']",2021-08-21T00:00:00Z,arxiv,, 95038,https://arxiv.org/abs/2109.05486,A Socially Aware Reinforcement Learning Agent for The Single Track Road Problem,"['Ido Shapira', 'Amos Azaria']",2021-09-12T00:00:00Z,arxiv,, 95058,https://arxiv.org/abs/2007.01223,Verifiably Safe Exploration for End-to-End Reinforcement Learning,"['Nathan Hunt', 'Nathan Fulton', 'Sara Magliacane', 'Nghia Hoang', 'Subhro Das', 'Armando Solar-Lezama']",2020-07-02T00:00:00Z,arxiv,, 95078,https://arxiv.org/abs/1803.08287,Learning-based Model Predictive Control for Safe Exploration,"['Torsten Koller', 'Felix Berkenkamp', 'Matteo Turchetta', 'Andreas Krause']",2018-03-22T00:00:00Z,arxiv,, 95095,https://arxiv.org/abs/2002.11879,State-only Imitation with Transition Dynamics Mismatch,"['Tanmay Gangwani', 'Jian Peng']",2020-02-27T00:00:00Z,arxiv,, 95108,https://arxiv.org/abs/1903.01567,Model Primitive Hierarchical Lifelong Reinforcement Learning,"['Bohan Wu', 'Jayesh K. Gupta', 'Mykel J. Kochenderfer']",2019-03-04T00:00:00Z,arxiv,, 95135,https://arxiv.org/abs/2009.11190,Enterprise AI Canvas -- Integrating Artificial Intelligence into Business,['U. 
Kerzel'],2020-09-18T00:00:00Z,arxiv,, 95156,https://arxiv.org/abs/2206.04114,Director: Deep Hierarchical Planning from Pixels.,"['Danijar Hafner', 'Kuang-Huei Lee', 'Ian Fischer', 'Pieter Abbeel']",2022-08-14T00:00:00Z,arxiv,, 95182,https://arxiv.org/abs/2107.00591,Offline-to-Online Reinforcement Learning via Balanced Replay and Pessimistic Q-Ensemble.,"['Seunghyun Lee', 'Younggyo Seo', 'Kimin Lee', 'Pieter Abbeel', 'Jinwoo Shin']",2021-08-14T00:00:00Z,arxiv,, 95201,https://arxiv.org/abs/1907.03848,Artificial Intelligence Governance and Ethics: Global Perspectives,"['Angela Daly', 'Thilo Hagendorff', 'Li Hui', 'Monique Mann', 'Vidushi Marda', 'Ben Wagner', 'Wei Wang', 'Saskia Witteborn']",2019-06-28T00:00:00Z,arxiv,, 95245,https://arxiv.org/abs/1701.08306,Practical Reasoning with Norms for Autonomous Software Agents (Full Edition),"['Zohreh Shams', 'Marina De Vos', 'Julian Padget', 'Wamberto W. Vasconcelos']",2017-01-28T00:00:00Z,arxiv,, 95257,https://arxiv.org/abs/1905.09397,Cognitive Model Priors for Predicting Human Decisions,"['David D. Bourgin', 'Joshua C. Peterson', 'Daniel Reichman', 'Thomas L. Griffiths', 'Stuart J. Russell']",2019-05-22T00:00:00Z,arxiv,, 95272,https://arxiv.org/abs/2109.10996,Cartesian Frames,"['Scott Garrabrant', 'Daniel A. Herrmann', 'Josiah Lopez-Wild']",2021-09-22T00:00:00Z,arxiv,, 95293,https://arxiv.org/abs/2002.05379,The Conditional Entropy Bottleneck,['Ian Fischer'],2020-02-13T00:00:00Z,arxiv,, 95317,https://arxiv.org/abs/1401.4600,Exploiting Model Equivalences for Solving Interactive Dynamic Influence Diagrams,"['Yifeng Zeng', 'Prashant Doshi']",2014-01-18T00:00:00Z,arxiv,, 95336,https://arxiv.org/abs/2210.00608,"Establishing Meta-Decision-Making for AI: An Ontology of Relevance, Representation and Reasoning","['Cosmin Badea', 'Leilani Gilpin']",2022-10-02T00:00:00Z,arxiv,, 95366,https://arxiv.org/abs/1904.07633,HARK Side of Deep Learning -- From Grad Student Descent to Automated Machine Learning,"['Oguzhan Gencoglu', 'Mark van Gils', 'Esin Guldogan', 'Chamin Morikawa', 'Mehmet Süzen', 'Mathias Gruber', 'Jussi Leinonen', 'Heikki Huttunen']",2019-04-16T00:00:00Z,arxiv,, 95397,https://arxiv.org/abs/2303.16200,Natural Selection Favors AIs over Humans,['Dan Hendrycks'],2023-03-28T17:59:12Z,arxiv,, 95435,https://arxiv.org/abs/1902.04198,Preferences Implicit in the State of the World,"['Rohin Shah', 'Dmitrii Krasheninnikov', 'Jordan Alexander', 'Pieter Abbeel', 'Anca Dragan']",2019-02-12T00:00:00Z,arxiv,, 95456,https://arxiv.org/abs/2102.13195,Reinforcement Learning of Implicit and Explicit Control Flow Instructions.,"['Ethan A', 'Brooks', 'Janarthanan Rajendran', 'Richard L', 'Lewis', 'Satinder Singh']",2021-08-14T00:00:00Z,arxiv,, 95479,https://arxiv.org/abs/1809.01999,Recurrent World Models Facilitate Policy Evolution,"['David Ha', 'Jürgen Schmidhuber']",2018-09-04T00:00:00Z,arxiv,, 95501,https://arxiv.org/abs/1811.10597,GAN Dissection: Visualizing and Understanding Generative Adversarial Networks,"['David Bau', 'Jun-Yan Zhu', 'Hendrik Strobelt', 'Bolei Zhou', 'Joshua B. Tenenbaum', 'William T. 
Freeman', 'Antonio Torralba']",2018-11-26T00:00:00Z,arxiv,, 95522,https://arxiv.org/abs/1906.00945,Adversarial Robustness as a Prior for Learned Representations,"['Logan Engstrom', 'Andrew Ilyas', 'Shibani Santurkar', 'Dimitris Tsipras', 'Brandon Tran', 'Aleksander Madry']",2019-06-03T00:00:00Z,arxiv,, 95537,https://arxiv.org/abs/2201.12427,Towards Safe Reinforcement Learning with a Safety Editor Policy,"['Haonan Yu', 'Wei Xu', 'Haichao Zhang']",2022-01-28T00:00:00Z,arxiv,, 95554,https://arxiv.org/abs/2209.07670,Reducing Variance in Temporal-Difference Value Estimation via Ensemble of Deep Networks.,"['Litian Liang', 'Yaosheng Xu', 'Stephen Mcaleer', 'Dailin Hu', 'Alexander Ihler', 'Pieter Abbeel', 'Roy Fox']",2022-08-14T00:00:00Z,arxiv,, 95567,https://arxiv.org/abs/1812.02953,Building Ethics into Artificial Intelligence,"['Han Yu', 'Zhiqi Shen', 'Chunyan Miao', 'Cyril Leung', 'Victor R. Lesser', 'Qiang Yang']",2018-12-07T00:00:00Z,arxiv,, 95606,https://arxiv.org/abs/1811.01267,Legible Normativity for AI Alignment: The Value of Silly Rules.,"['Dylan Hadfield-Menell', 'McKane Andrus', 'Gillian Hadfield']",2019-08-15T00:00:00Z,arxiv,, 95624,https://arxiv.org/abs/1912.05284,Interactive AI with a Theory of Mind,"['Mustafa Mert Çelikok', 'Tomi Peltola', 'Pedram Daee', 'Samuel Kaski']",2019-12-01T00:00:00Z,arxiv,, 95634,https://arxiv.org/abs/2202.10153,Inferring Lexicographically-Ordered Rewards from Preferences,"['Alihan Hüyük', 'William R. Zame', 'Mihaela van der Schaar']",2022-02-21T00:00:00Z,arxiv,, 95649,https://arxiv.org/abs/1805.11592,Playing hard exploration games by watching YouTube,"['Yusuf Aytar', 'Tobias Pfaff', 'David Budden', 'Tom Le Paine', 'Ziyu Wang', 'Nando de Freitas']",2018-05-29T00:00:00Z,arxiv,, 95668,https://arxiv.org/abs/2004.11434,Responsible AI and Its Stakeholders,"['Gabriel Lima', 'Meeyoung Cha']",2020-04-23T00:00:00Z,arxiv,, 95684,https://arxiv.org/abs/2011.08541,Efficient Exploration of Reward Functions in Inverse Reinforcement Learning via Bayesian Optimization,"['Sreejith Balakrishnan', 'Quoc Phong Nguyen', 'Bryan Kian Hsiang Low', 'Harold Soh']",2020-11-17T00:00:00Z,arxiv,, 95700,https://arxiv.org/abs/1907.07273,An Inductive Synthesis Framework for Verifiable Reinforcement Learning,"['He Zhu', 'Zikang Xiong', 'Stephen Magill', 'Suresh Jagannathan']",2019-07-16T00:00:00Z,arxiv,, 95718,https://arxiv.org/abs/1905.01320,Meta-learners' learning dynamics are unlike learners',['Neil C. 
Rabinowitz'],2019-05-03T00:00:00Z,arxiv,, 95734,https://arxiv.org/abs/2102.06741,Discovery of Options via Meta-Learned Subgoals,"['Vivek Veeriah', 'Tom Zahavy', 'Matteo Hessel', 'Zhongwen Xu', 'Junhyuk Oh', 'Iurii Kemaev', 'Hado van Hasselt', 'David Silver', 'Satinder Singh']",2021-02-12T00:00:00Z,arxiv,, 95750,https://arxiv.org/abs/2201.05646,ULTRA: A Data-driven Approach for Recommending Team Formation in Response to Proposal Calls,"['Biplav Srivastava', 'Tarmo Koppel', 'Sai Teja Paladi', 'Siva Likitha Valluru', 'Rohit Sharma', 'Owen Bond']",2022-01-13T00:00:00Z,arxiv,, 95766,https://arxiv.org/abs/2201.10436,Safe AI -- How is this Possible?,"['Harald Rueß', 'Simon Burton']",2022-01-25T00:00:00Z,arxiv,, 95799,https://arxiv.org/abs/2203.15913,Pretraining Graph Neural Networks for few-shot Analog Circuit Modeling and Design,"['Kourosh Hakhamaneshi', 'Marcel Nassar', 'Mariano Phielipp', 'Pieter Abbeel', 'Vladimir Stojanović']",2022-08-14T00:00:00Z,arxiv,, 95821,https://arxiv.org/abs/2108.12427,Why and How Governments Should Monitor AI Development,"['Jess Whittlestone', 'Jack Clark']",2021-08-28T00:00:00Z,arxiv,, 95842,https://arxiv.org/abs/1802.00420,"Obfuscated Gradients Give a False Sense of Security: Circumventing Defenses to Adversarial Examples","['Anish Athalye', 'Nicholas Carlini', 'David Wagner']",2018-02-01T18:20:05Z,arxiv,, 95872,https://arxiv.org/abs/1809.02591,Learning Invariances for Policy Generalization,"['Remi Tachet', 'Philip Bachman', 'Harm van Seijen']",2018-09-07T00:00:00Z,arxiv,, 95891,https://arxiv.org/abs/1805.08336,Maximum Causal Tsallis Entropy Imitation Learning,"['Kyungjae Lee', 'Sungjoon Choi', 'Songhwai Oh']",2018-05-22T00:00:00Z,arxiv,, 95906,https://arxiv.org/abs/1711.09883,AI Safety Gridworlds,"['Jan Leike', 'Miljan Martic', 'Victoria Krakovna', 'Pedro A. Ortega', 'Tom Everitt', 'Andrew Lefrancq', 'Laurent Orseau', 'Shane Legg']",2017-11-27T00:00:00Z,arxiv,, 95964,https://arxiv.org/abs/2105.02704,AI Risk Skepticism,['Roman V. Yampolskiy'],2021-05-02T00:00:00Z,arxiv,, 96003,https://arxiv.org/abs/1906.00336,The Principle of Unchanged Optimality in Reinforcement Learning Generalization,"['Alex Irpan', 'Xingyou Song']",2019-06-02T00:00:00Z,arxiv,, 96027,https://arxiv.org/abs/1602.04184,Parametric Bounded Löb's Theorem and Robust Cooperation of Bounded Agents,['Andrew Critch'],2016-02-12T00:00:00Z,arxiv,, 96050,https://arxiv.org/abs/2207.13834,Toward Supporting Perceptual Complementarity in Human-AI Collaboration via Reflection on Unobservables,"['Kenneth Holstein', 'Maria De-Arteaga', 'Lakshmi Tumati', 'Yanghuidi Cheng']",2022-07-28T00:00:00Z,arxiv,, 96069,https://arxiv.org/abs/2010.05899,SLIP: Learning to predict in unknown dynamical systems with long-term memory,"['Paria Rashidinejad', 'Jiantao Jiao', 'Stuart Russell']",2020-08-14T00:00:00Z,arxiv,, 96084,https://arxiv.org/abs/1407.5380,Representing and Reasoning about Game Strategies,"['Dongmo Zhang', 'Michael Thielscher']",2014-07-21T00:00:00Z,arxiv,, 96103,https://arxiv.org/abs/2205.13013,Learning Deterministic Finite Automata Decompositions from Examples and Demonstrations,"['N. Lauffer', 'B. Yalcinkaya', 'M. Vazquez-Chanlatte', 'A. Shah', 'S. Seshia']",2022-08-14T00:00:00Z,arxiv,, 96120,https://arxiv.org/abs/2204.05091,Linguistic communication as (inverse) reward design,"['Theodore R. Sumers', 'Robert D. Hawkins', 'Mark K. Ho', 'Thomas L. 
Griffiths', 'Dylan Hadfield-Menell']",2022-04-11T00:00:00Z,arxiv,, 96139,https://arxiv.org/abs/1910.01741,Improving Sample Efficiency in Model-Free Reinforcement Learning from Images,"['Denis Yarats', 'Amy Zhang', 'Ilya Kostrikov', 'Brandon Amos', 'Joelle Pineau', 'Rob Fergus']",2019-10-02T00:00:00Z,arxiv,, 96158,https://arxiv.org/abs/2102.04897,Learning State Representations from Random Deep Action-Conditional Predictions.,"['Zeyu Zheng', 'Vivek Veeriah', 'Risto Vuorio', 'Richard Lewis', 'Satinder Singh']",2021-08-14T00:00:00Z,arxiv,, 96176,https://arxiv.org/abs/2006.10720,IReEn: Reverse-Engineering of Black-Box Functions via Iterative Neural Program Synthesis,"['Hossein Hajipour', 'Mateusz Malinowski', 'Mario Fritz']",2020-06-18T00:00:00Z,arxiv,, 96193,https://arxiv.org/abs/1809.05188,CM3: Cooperative Multi-goal Multi-stage Multi-agent Reinforcement Learning,"['Jiachen Yang', 'Alireza Nakhaei', 'David Isele', 'Kikuo Fujimura', 'Hongyuan Zha']",2018-09-13T00:00:00Z,arxiv,, 96211,https://arxiv.org/abs/2207.10050,Discriminator-Weighted Offline Imitation Learning from Suboptimal Demonstrations,"['Haoran Xu', 'Xianyuan Zhan', 'Honglei Yin', 'Huiling Qin']",2022-07-20T00:00:00Z,arxiv,, 96237,https://arxiv.org/abs/1808.04468,Risk-Sensitive Generative Adversarial Imitation Learning,"['Jonathan Lacotte', 'Mohammad Ghavamzadeh', 'Yinlam Chow', 'Marco Pavone']",2018-08-13T00:00:00Z,arxiv,, 96255,https://arxiv.org/abs/1809.07802,Playing the Game of Universal Adversarial Perturbations,"['Julien Perolat', 'Mateusz Malinowski', 'Bilal Piot', 'Olivier Pietquin']",2018-09-20T00:00:00Z,arxiv,, 96274,https://arxiv.org/abs/2203.12918,A Rationale-Centric Framework for Human-in-the-loop Machine Learning,"['Jinghui Lu', 'Linyi Yang', 'Brian Mac Namee', 'Yue Zhang']",2022-03-24T00:00:00Z,arxiv,, 96290,https://arxiv.org/abs/1805.07894,Constructing Unrestricted Adversarial Examples with Generative Models,"['Yang Song', 'Rui Shu', 'Nate Kushman', 'Stefano Ermon']",2018-05-21T00:00:00Z,arxiv,, 96303,https://arxiv.org/abs/2201.00764,Have I done enough planning or should I plan more?,"['Ruiqi He', 'Yash Raj Jain', 'Falk Lieder']",2022-01-03T00:00:00Z,arxiv,, 96318,https://arxiv.org/abs/1609.08144,Google's Neural Machine Translation System: Bridging the Gap between Human and Machine Translation,"['Yonghui Wu', 'Mike Schuster', 'Zhifeng Chen', 'Quoc V. 
Le', 'Mohammad Norouzi', 'Wolfgang Macherey', 'Maxim Krikun', 'Yuan Cao', 'Qin Gao', 'Klaus Macherey', 'Jeff Klingner', 'Apurva Shah', 'Melvin Johnson', 'Xiaobing Liu', 'Łukasz Kaiser', 'Stephan Gouws', 'Yoshikiyo Kato', 'Taku Kudo', 'Hideto Kazawa', 'Keith Stevens', 'George Kurian', 'Nishant Patil', 'Wei Wang', 'Cliff Young', 'Jason Smith', 'Jason Riesa', 'Alex Rudnick', 'Oriol Vinyals', 'Greg Corrado', 'Macduff Hughes', 'Jeffrey Dean']",2016-09-26T00:00:00Z,arxiv,, 96350,https://arxiv.org/abs/2005.14363,Extracting low-dimensional psychological representations from convolutional neural networks.,"['Aditi Jha', 'Joshua Peterson', 'Thomas L', 'Griffiths']",2020-08-14T00:00:00Z,arxiv,, 96366,https://arxiv.org/abs/1912.02624,Learning Efficient Representation for Intrinsic Motivation,"['Ruihan Zhao', 'Stas Tiomkin', 'Pieter Abbeel']",2019-12-04T00:00:00Z,arxiv,, 96386,https://arxiv.org/abs/2102.13515,Beyond Fine-Tuning: Transferring Behavior in Reinforcement Learning,"['Víctor Campos', 'Pablo Sprechmann', 'Steven Hansen', 'Andre Barreto', 'Steven Kapturowski', 'Alex Vitvitskyi', 'Adrià Puigdomènech Badia', 'Charles Blundell']",2021-02-24T00:00:00Z,arxiv,, 96410,https://arxiv.org/abs/2111.10493,Discrete Representations Strengthen Vision Transformer Robustness,"['Chengzhi Mao', 'Lu Jiang', 'Mostafa Dehghani', 'Carl Vondrick', 'Rahul Sukthankar', 'Irfan Essa']",2021-11-20T00:00:00Z,arxiv,, 96423,https://arxiv.org/abs/2102.06362,A Decentralized Approach towards Responsible AI in Social Ecosystems,['Wenjing Chu'],2021-02-12T00:00:00Z,arxiv,, 96451,https://arxiv.org/abs/2109.06160,Augmenting Decision Making via Interactive What-If Analysis,"['Sneha Gathani', 'Madelon Hulsebos', 'James Gale', 'Peter J. Haas', 'Çağatay Demiralp']",2021-09-13T00:00:00Z,arxiv,, 96467,https://arxiv.org/abs/2110.09506,MEMO: Test Time Robustness via Adaptation and Augmentation,"['Marvin Zhang', 'Sergey Levine', 'Chelsea Finn']",2021-10-18T00:00:00Z,arxiv,, 96488,https://arxiv.org/abs/1703.10987,On the Impossibility of Supersized Machines,"['Ben Garfinkel', 'Miles Brundage', 'Daniel Filan', 'Carrick Flynn', 'Jelena Luketina', 'Michael Page', 'Anders Sandberg', 'Andrew Snyder-Beattie', 'Max Tegmark']",2017-03-31T00:00:00Z,arxiv,, 96504,https://arxiv.org/abs/2106.10268,MADE: Exploration via Maximizing Deviation from Explored Regions,"['Tianjun Zhang', 'Paria Rashidinejad', 'Jiantao Jiao', 'Yuandong Tian', 'Joseph Gonzalez', 'Stuart Russell']",2021-06-18T00:00:00Z,arxiv,, 96521,https://arxiv.org/abs/1911.04252,Self-training with Noisy Student improves ImageNet classification,"['Qizhe Xie', 'Minh-Thang Luong', 'Eduard Hovy', 'Quoc V. 
Le']",2019-11-11T00:00:00Z,arxiv,, 96540,https://arxiv.org/abs/2205.02850,A Deep Reinforcement Learning Framework for Rapid Diagnosis of Whole Slide Pathological Images,"['Tingting Zheng', 'Weixing chen', 'Shuqin Li', 'Hao Quan', 'Qun Bai', 'Tianhang Nan', 'Song Zheng', 'Xinghua Gao', 'Yue Zhao', 'Xiaoyu Cui']",2022-05-05T00:00:00Z,arxiv,, 96558,https://arxiv.org/abs/1604.05280,Asymptotic Convergence in Online Learning with Unbounded Delays,"['Scott Garrabrant', 'Nate Soares', 'Jessica Taylor']",2016-04-18T00:00:00Z,arxiv,, 96577,https://arxiv.org/abs/2102.04074,Learning Curve Theory,['Marcus Hutter'],2021-02-08T00:00:00Z,arxiv,, 96594,https://arxiv.org/abs/1701.01302,Toward negotiable reinforcement learning: shifting priorities in Pareto optimal sequential decision-making,['Andrew Critch'],2017-01-05T00:00:00Z,arxiv,, 96610,https://arxiv.org/abs/1905.02825,Toybox: A Suite of Environments for Experimental Evaluation of Deep Reinforcement Learning,"['Emma Tosch', 'Kaleigh Clary', 'John Foley', 'David Jensen']",2019-05-07T00:00:00Z,arxiv,, 96626,https://arxiv.org/abs/1910.02910,Scaled Autonomy: Enabling Human Operators to Control Robot Fleets,"['Gokul Swamy', 'Siddharth Reddy', 'Sergey Levine', 'Anca D. Dragan']",2019-09-22T00:00:00Z,arxiv,, 96644,https://arxiv.org/abs/2210.08340,Toward Next-Generation Artificial Intelligence: Catalyzing the NeuroAI Revolution,"['Anthony Zador', 'Sean Escola', 'Blake Richards', 'Bence Ölveczky', 'Yoshua Bengio', 'Kwabena Boahen', 'Matthew Botvinick', 'Dmitri Chklovskii', 'Anne Churchland', 'Claudia Clopath', 'James DiCarlo', 'Surya Ganguli', 'Jeff Hawkins', 'Konrad Koerding', 'Alexei Koulakov', 'Yann LeCun', 'Timothy Lillicrap', 'Adam Marblestone', 'Bruno Olshausen', 'Alexandre Pouget', 'Cristina Savin', 'Terrence Sejnowski', 'Eero Simoncelli', 'Sara Solla', 'David Sussillo', 'Andreas S. Tolias', 'Doris Tsao']",2022-10-15T00:00:00Z,arxiv,, 96674,https://arxiv.org/abs/1806.01830,Relational Deep Reinforcement Learning,"['Vinicius Zambaldi', 'David Raposo', 'Adam Santoro', 'Victor Bapst', 'Yujia Li', 'Igor Babuschkin', 'Karl Tuyls', 'David Reichert', 'Timothy Lillicrap', 'Edward Lockhart', 'Murray Shanahan', 'Victoria Langston', 'Razvan Pascanu', 'Matthew Botvinick', 'Oriol Vinyals', 'Peter Battaglia']",2018-06-05T00:00:00Z,arxiv,, 96687,https://arxiv.org/abs/1903.10396,The LogBarrier adversarial attack: making effective use of decision boundary information,"['Chris Finlay', 'Aram-Alexandre Pooladian', 'Adam M. Oberman']",2019-03-25T00:00:00Z,arxiv,, 96703,https://arxiv.org/abs/2207.09712,The Need for a Meta-Architecture for Robot Autonomy,"['Stalin Muñoz Gutiérrez', 'Gerald Steinbauer-Wagner']",2022-07-20T00:00:00Z,arxiv,, 96723,https://arxiv.org/abs/1904.10386,Risk Structures: Towards Engineering Risk-aware Autonomous Systems,['Mario Gleirscher'],2019-04-23T00:00:00Z,arxiv,, 96734,https://arxiv.org/abs/1808.04730,Analyzing Inverse Problems with Invertible Neural Networks,"['Lynton Ardizzone', 'Jakob Kruse', 'Sebastian Wirkert', 'Daniel Rahner', 'Eric W. Pellegrini', 'Ralf S. Klessen', 'Lena Maier-Hein', 'Carsten Rother', 'Ullrich Köthe']",2018-08-14T00:00:00Z,arxiv,, 96748,https://arxiv.org/abs/2112.01455,"Zero-Shot Text-Guided Object Generation with Dream Fields,.","['Ajay Jain', 'Ben Mildenhall', 'Jonathan T', 'Barron', 'Pieter Abbeel', 'Ben Poole']",2022-08-14T00:00:00Z,arxiv,, 96778,https://arxiv.org/abs/1608.08225,Why does deep and cheap learning work so well?,"['Henry W. 
Lin', 'Max Tegmark', 'David Rolnick']",2016-08-29T00:00:00Z,arxiv,, 96798,https://arxiv.org/abs/1807.08364,EnsembleDAgger: A Bayesian Approach to Safe Imitation Learning,"['Kunal Menda', 'Katherine Driggs-Campbell', 'Mykel J. Kochenderfer']",2018-07-22T00:00:00Z,arxiv,, 96811,https://arxiv.org/abs/2002.12156,Cautious Reinforcement Learning with Logical Constraints,"['Mohammadhosein Hasanbeig', 'Alessandro Abate', 'Daniel Kroening']",2020-02-26T00:00:00Z,arxiv,, 96829,https://arxiv.org/abs/2210.17368,Teacher-student curriculum learning for reinforcement learning,['Yanick Schraner'],2022-10-31T00:00:00Z,arxiv,, 96855,https://arxiv.org/abs/2102.03896,Consequences of Misaligned AI,"['Simon Zhuang', 'Dylan Hadfield-Menell']",2021-02-07T00:00:00Z,arxiv,, 96872,https://arxiv.org/abs/1910.14599,"Adversarial NLI: A New Benchmark for Natural Language Understanding",['Yixin Nie'],2019-10-31T16:50:43Z,arxiv,, 96891,https://arxiv.org/abs/1811.12231,ImageNet-trained CNNs are biased towards texture; increasing shape bias improves accuracy and robustness,['Robert Geirhos'],2018-01-01T00:00:00Z,arxiv,, 96907,https://arxiv.org/abs/2305.00586,How does GPT-2 compute greater-than?: Interpreting mathematical abilities in a pre-trained language model,['Michael Hanna'],2023-04-30T21:44:21Z,arxiv,, 96924,https://arxiv.org/abs/1805.07468,Unsupervised Learning of Neural Networks to Explain Neural Networks,"['Quanshi Zhang', 'Yu Yang', 'Yuchen Liu', 'Ying Nian Wu', 'Song-Chun Zhu']",2018-05-18T00:00:00Z,arxiv,, 96942,https://arxiv.org/abs/1909.13392,Learning from Observations Using a Single Video Demonstration and Human Feedback,"['Sunil Gandhi', 'Tim Oates', 'Tinoosh Mohsenin', 'Nicholas Waytowich']",2019-09-29T00:00:00Z,arxiv,, 96961,https://arxiv.org/abs/2211.12740,Masked Autoencoding for Scalable and Generalizable Decision Making.,"['Fangchen Liu', 'Hao Liu', 'Aditya Grover', 'Pieter Abbeel']",2022-08-14T00:00:00Z,arxiv,, 96978,https://arxiv.org/abs/2202.01288,Imitation Learning by Estimating Expertise of Demonstrators.,"['Mark Beliaev*', 'Andy Shih*', 'Stefano Ermon', 'Dorsa Sadigh', 'Ramtin Pedarsani']",2022-08-14T00:00:00Z,arxiv,, 96994,https://arxiv.org/abs/1904.11455,Ray Interference: a Source of Plateaus in Deep Reinforcement Learning,"['Tom Schaul', 'Diana Borsa', 'Joseph Modayil', 'Razvan Pascanu']",2019-04-25T00:00:00Z,arxiv,, 97018,https://arxiv.org/abs/2201.01448,Conditional Imitation Learning for Multi-Agent Games.,"['Andy Shih', 'Stefano Ermon', 'Dorsa Sadigh']",2022-08-14T00:00:00Z,arxiv,, 97036,https://arxiv.org/abs/1804.05296,Adversarial Attacks Against Medical Deep Learning Systems,"['Samuel G. Finlayson', 'Hyung Won Chung', 'Isaac S. Kohane', 'Andrew L. Beam']",2018-04-15T00:00:00Z,arxiv,, 97061,https://arxiv.org/abs/1706.01303,The Singularity May Be Near,['Roman V. Yampolskiy'],2017-05-31T00:00:00Z,arxiv,, 97077,https://arxiv.org/abs/1806.02404,Dissolving the Fermi Paradox,"['Anders Sandberg', 'Eric Drexler', 'Toby Ord']",2018-06-06T00:00:00Z,arxiv,, 97091,https://arxiv.org/abs/1202.6177,Can Intelligence Explode?,['Marcus Hutter'],2012-02-28T00:00:00Z,arxiv,, 97121,https://arxiv.org/abs/1302.3568,Independence with Lower and Upper Probabilities,['Lonnie Chrisman'],2013-02-13T00:00:00Z,arxiv,, 97132,https://arxiv.org/abs/1303.5719,Probability Estimation in Face of Irrelevant Information,"['Adam J. 
Grove', 'Daphne Koller']",2013-03-20T00:00:00Z,arxiv,, 97148,https://arxiv.org/abs/2204.06407,Flexible Multiple-Objective Reinforcement Learning for Chip Placement,"['Fu-Chieh Chang', 'Yu-Wei Tseng', 'Ya-Wen Yu', 'Ssu-Rui Lee', 'Alexandru Cioba', 'I-Lun Tseng', 'Da-shan Shiu', 'Jhih-Wei Hsu', 'Cheng-Yuan Wang', 'Chien-Yi Yang', 'Ren-Chu Wang', 'Yao-Wen Chang', 'Tai-Chen Chen', 'Tung-Chieh Chen']",2022-04-13T00:00:00Z,arxiv,, 97165,https://arxiv.org/abs/1903.03129,SLIDE : In Defense of Smart Algorithms over Hardware Acceleration for Large-Scale Deep Learning Systems,"['Beidi Chen', 'Tharun Medini', 'James Farwell', 'Sameh Gobriel', 'Charlie Tai', 'Anshumali Shrivastava']",2019-03-07T00:00:00Z,arxiv,, 97180,https://arxiv.org/abs/1903.04102,Blameworthiness in Multi-Agent Settings,"['Meir Friedenberg', 'Joseph Y. Halpern']",2019-08-14T00:00:00Z,arxiv,, 97197,https://arxiv.org/abs/1602.02658,Graying the black box: Understanding DQNs,"['Tom Zahavy', 'Nir Ben Zrihem', 'Shie Mannor']",2016-02-08T17:27:31Z,arxiv,, 97218,https://arxiv.org/abs/1901.01291,On the Utility of Model Learning in HRI,"['Rohan Choudhury', 'Gokul Swamy', 'Dylan Hadfield-Menell', 'Anca D. Dragan']",2019-08-15T00:00:00Z,arxiv,, 97244,https://generative.ink/posts/anomalous-tokens-reveal-the-original-identities-of-instruct-models/,Anomalous tokens reveal the original identities of Instruct models,['janus'],2023-02-09T00:00:00Z,blogs,, 97259,https://intelligence.org/2013/03/07/upcoming-miri-research-workshops/,Upcoming MIRI Research Workshops,['Luke Muehlhauser'],2013-03-07T04:12:06Z,blogs,, 97273,https://intelligence.org/2014/10/18/new-report-corrigibility/,New paper: “Corrigibility”,['Luke Muehlhauser'],2014-10-19T00:14:19Z,blogs,, 97284,https://aiimpacts.org/what-do-ml-researchers-think-about-ai-in-2022/,What do ML researchers think about AI in 2022?,['Katja Grace'],2022-08-04T15:37:41Z,blogs,, 97309,https://intelligence.org/2016/10/11/miri-ama-and-a-talk-on-logical-induction/,"MIRI AMA, and a talk on logical induction",['Rob Bensinger'],2016-10-12T02:47:00Z,blogs,, 97318,https://intelligence.org/2014/01/13/miri-strategy-conversation-with-steinhardt-karnofsky-and-amodei/,"MIRI strategy conversation with Steinhardt, Karnofsky, and Amodei",['Luke Muehlhauser'],2014-01-14T07:22:08Z,blogs,, 97342,https://carado.moe/where-next-piracy.html,Where next for piracy ?,['Tamsin Leake'],2020-10-03T23:00:00Z,blogs,, 97352,https://jsteinhardt.wordpress.com/2010/08/22/least-squares-and-fourier-analysis/,Least Squares and Fourier Analysis,['jsteinhardt'],2010-08-22T00:33:43Z,blogs,, 97372,https://carado.moe/cev-coherent-enough.html,CEV can be coherent enough,['Tamsin Leake'],2023-02-09T00:00:00Z,blogs,, 97384,https://generative.ink/posts/parsing-by-counterfactual/,Parsing by counterfactual,['janus'],2020-12-05T00:00:00Z,blogs,, 97396,https://aiimpacts.org/brain-wiring-the-long-and-short-of-it/,Brain wiring: The long and short of it,['Tegan McCaslin'],2018-03-30T07:02:35Z,blogs,, 97406,https://carado.moe/lamenting-nerds.html,lamenting nerds,['Tamsin Leake'],2021-10-24T23:00:00Z,blogs,, 97418,https://www.gwern.net/Backstop.page,Evolution as Backstop for Reinforcement Learning,['Gwern Branwen'],2021-07-04T00:00:00Z,blogs,, 97433,https://intelligence.org/2020/01/15/january-2020-newsletter/,January 2020 Newsletter,['Rob Bensinger'],2020-01-15T17:41:49Z,blogs,, 97460,https://www.cold-takes.com/tool-assisted-speedrunning/,Tool-assisted speedrunning,['Holden Karnofsky'],2021-11-19T00:00:00Z,blogs,, 97475,https://blog.eleuther.ai/prompts-gpt-fewshot/,Evaluating Different Fewshot 
Description Prompts on GPT-3,['Leo Gao'],2021-05-24T00:00:00Z,blogs,, 97484,https://aiimpacts.org/rate-of-neuron-firing/,Neuron firing rates in humans,['Katja Grace'],2015-04-14T18:57:50Z,blogs,, 97495,https://carado.moe/epistemic-range.html,epistemic range,['Tamsin Leake'],2023-07-09T23:00:00Z,blogs,, 97507,https://intelligence.org/2015/03/12/rationality-ai-zombies/,Rationality: From AI to Zombies,['Rob Bensinger'],2015-03-13T01:23:17Z,blogs,, 97516,https://intelligence.org/2023/02/02/what-i-mean-by-alignment-is-in-large-part-about-making-cognition-aimable-at-all/,What I mean by “alignment is in large part about making cognition aimable at all”,['Nate Soares'],2023-02-03T03:33:08Z,blogs,, 97526,https://www.cold-takes.com/spreading-messages-to-help-with-the-most-important-century/,Spreading messages to help with the most important century,['Holden Karnofsky'],2023-01-25T00:00:00Z,blogs,, 97562,https://aiimpacts.org/the-biggest-technological-leaps/,The Biggest Technological Leaps,['Katja Grace'],2015-01-09T21:59:04Z,blogs,, 97576,https://jsteinhardt.wordpress.com/2017/03/17/sets-with-small-intersection/,Sets with Small Intersection,['jsteinhardt'],2017-03-17T04:42:29Z,blogs,, 97585,https://intelligence.org/2017/03/18/new-paper-cheating-death-in-damascus/,New paper: “Cheating Death in Damascus”,['Rob Bensinger'],2017-03-19T03:30:27Z,blogs,, 97601,https://carado.moe/deobfuscation-conjecture.html,the deobfuscation conjecture,['Tamsin Leake'],2021-12-05T00:00:00Z,blogs,, 97614,https://www.cold-takes.com/visualizing-utopia/,Visualizing Utopia,['Holden Karnofsky'],2021-12-14T00:00:00Z,blogs,, 97634,https://carado.moe/unviable-moral-patient.html,unviable moral patients,['Tamsin Leake'],2022-08-10T23:00:00Z,blogs,, 97643,https://intelligence.org/2014/07/21/2014-summer-matching-challenge/,2014 Summer Matching Challenge!,['Luke Muehlhauser'],2014-07-21T12:51:25Z,blogs,, 97669,https://aiimpacts.org/historic-trends-in-book-production/,Historic trends in book production,['Katja Grace'],2020-02-08T01:56:44Z,blogs,, 97685,https://aiimpacts.org/kurzweil-the-singularity-is-near/,"Kurzweil, The Singularity is Near",['Katja Grace'],2015-03-12T12:15:11Z,blogs,, 97698,https://generative.ink/posts/alchemical-marriage-gpt-3-x-clip/,Alchemical marriage: GPT-3 x CLIP,['janus'],2021-02-08T00:00:00Z,blogs,, 97707,https://intelligence.org/2016/08/15/csrbai-talks-on-robustness-and-error-tolerance/,CSRBAI talks on robustness and error-tolerance,['Alex Vermeer'],2016-08-15T17:28:40Z,blogs,, 97734,https://aiimpacts.org/automation-of-music-production/,Automation of music production,['Katja Grace'],2017-09-13T00:03:34Z,blogs,, 97747,https://carado.moe/you-are-your-information-system.html,You are your information system,['Tamsin Leake'],2020-12-25T00:00:00Z,blogs,, 97756,https://vkrakovna.wordpress.com/2022/11/25/refining-the-sharp-left-turn-threat-model/,Refining the Sharp Left Turn threat model,['Victoria Krakovna'],2022-11-25T17:01:03Z,blogs,, 97778,https://intelligence.org/2023/03/14/comments-on-openais-planning-for-agi-and-beyond/,"Comments on OpenAI’s ""Planning for AGI and beyond""",['Nate Soares'],2023-03-14T20:49:27Z,blogs,, 97809,https://intelligence.org/2023/04/21/the-basic-reasons-i-expect-agi-ruin/,The basic reasons I expect AGI ruin,['Rob Bensinger'],2023-04-21T21:15:47Z,blogs,, 97833,https://jsteinhardt.wordpress.com/2011/07/03/verifying-stability-of-stochastic-systems/,Verifying Stability of Stochastic Systems,['jsteinhardt'],2011-07-03T00:52:25Z,blogs,, 97847,https://carado.moe/defining-freedom.html,Core 
values: Defining freedom,['Tamsin Leake'],2020-12-31T00:00:00Z,blogs,, 97856,https://carado.moe/rough-sketch-formal-aligned-ai.html,a rough sketch of formal aligned AI using QACI,['Tamsin Leake'],2022-12-11T00:00:00Z,blogs,, 97882,https://vkrakovna.wordpress.com/2016/01/16/to-contribute-to-ai-safety-consider-doing-ai-research/,"To contribute to AI safety, consider doing AI research",['Victoria Krakovna'],2016-01-16T05:19:15Z,blogs,, 97905,https://aiimpacts.org/examples-of-early-action-on-a-risk/,Examples of early action on risks,['Katja Grace'],2016-08-16T22:16:22Z,blogs,, 97930,https://intelligence.org/2015/08/31/final-fundraiser-day-announcing-our-new-team/,Final fundraiser day: Announcing our new team,['Nate Soares'],2015-08-31T07:07:30Z,blogs,, 97941,https://aiimpacts.org/primates-vs-birds-is-one-brain-architecture-better-than-the-other/,Primates vs birds: Is one brain architecture better than the other?,['Tegan McCaslin'],2019-03-01T01:26:24Z,blogs,, 97954,https://carado.moe/ai-alignment-curves.html,AI alignment curves,['Tamsin Leake'],2022-09-07T23:00:00Z,blogs,, 97964,https://carado.moe/ubi.html,For UBI,['Tamsin Leake'],2020-10-03T23:00:00Z,blogs,, 97987,https://openai.com/research/improving-mathematical-reasoning-with-process-supervision,Improving mathematical reasoning with process supervision,"['Bowen Baker', 'Teddy Lee', 'John Schulman', 'Greg Brockman', 'Kendra Rimbach', 'Hannah Wong', 'Thomas Degry']",2023-05-31T00:00:00Z,blogs,, 98010,https://intelligence.org/2014/05/30/new-report-loudness-priors-preference-relations/,New report: “Loudness: On priors over preference relations”,['Luke Muehlhauser'],2014-05-30T23:26:58Z,blogs,, 98021,https://aiimpacts.org/time-flies-when-robots-rule-the-earth/,Time flies when robots rule the earth,['Katja Grace'],2015-07-28T22:39:43Z,blogs,, 98032,https://carado.moe/estimating-populated-intelligence-explosions.html,estimating the amount of populated intelligence explosion timelines,['Tamsin Leake'],2021-07-09T23:00:00Z,blogs,, 98041,https://aiimpacts.org/promising-research-projects/,Promising research projects,['Katja Grace'],2018-04-06T06:00:47Z,blogs,, 98064,https://jsteinhardt.wordpress.com/2017/02/06/linear-algebra-fact/,Linear algebra fact,['jsteinhardt'],2017-02-06T02:33:39Z,blogs,, 98073,https://intelligence.org/2013/09/10/september-newsletter/,MIRI’s September Newsletter,['Luke Muehlhauser'],2013-09-11T00:57:27Z,blogs,, 98099,https://aiimpacts.org/investigation-into-the-relationship-between-neuron-count-and-intelligence-across-differing-cortical-architectures/,Investigation into the relationship between neuron count and intelligence across differing cortical architectures,['Tegan McCaslin'],2019-02-11T21:31:01Z,blogs,, 98115,https://www.cold-takes.com/hunter-gatherer-gender-relations-seem-bad/,Pre-agriculture gender relations seem bad,['Holden Karnofsky'],2021-10-19T00:00:00Z,blogs,, 98130,https://www.cold-takes.com/asimovs-chronology-of-science-and-discovery/,Asimov's Chronology of Science and Discovery,['Holden Karnofsky'],2021-09-17T00:00:00Z,blogs,, 98145,https://intelligence.org/2013/12/21/josef-urban-on-machine-learning-and-automated-reasoning/,Josef Urban on Machine Learning and Automated Reasoning,['Luke Muehlhauser'],2013-12-21T21:49:43Z,blogs,, 98175,https://jsteinhardt.wordpress.com/2010/06/20/linear-control/,Linear Control Theory: Part 0,['jsteinhardt'],2010-06-20T00:58:45Z,blogs,, 98192,https://carado.moe/qaci-invention-dialogue.html,an Evangelion dialogue explaining the QACI alignment plan,['Tamsin 
Leake'],2023-06-09T23:00:00Z,blogs,, 98220,https://intelligence.org/2015/07/27/miris-approach/,MIRI’s Approach,['Nate Soares'],2015-07-28T02:21:27Z,blogs,, 98233,https://newsletter.mlsafety.org/p/ml-safety-newsletter-3,ML Safety Newsletter #3,['Dan Hendrycks'],2022-03-08T12:00:40Z,blogs,, 98276,https://carado.moe/prototype-realities.html,A Prototypeness Hierarchy of Realities,['Tamsin Leake'],2020-11-18T00:00:00Z,blogs,, 98286,https://www.yudkowsky.net/rational/the-simple-truth,The Simple Truth,['Eliezer S. Yudkowsky'],2020-09-04T01:20:07Z,blogs,, 98303,https://carado.moe/quantum-suicide.html,Plausible Quantum Suicide,['Tamsin Leake'],2021-04-27T23:00:00Z,blogs,, 98324,https://aiimpacts.org/list-of-analyses-of-time-to-human-level-ai/,List of Analyses of Time to Human-Level AI,['Katja Grace'],2015-01-22T14:39:11Z,blogs,, 98334,https://vkrakovna.wordpress.com/2019/06/18/iclr-safe-ml-workshop-report/,ICLR Safe ML workshop report,['Victoria Krakovna'],2019-06-17T23:10:15Z,blogs,, 98393,https://intelligence.org/2014/10/27/singularity2014-fake/,Singularity2014.com appears to be a fake,['Luke Muehlhauser'],2014-10-28T02:59:41Z,blogs,, 98404,https://intelligence.org/2018/08/27/august-2018-newsletter/,August 2018 Newsletter,['Rob Bensinger'],2018-08-28T04:57:32Z,blogs,, 98420,https://importai.substack.com/p/import-ai-327-stable-diffusion-on,Import AI 327: Stable Diffusion on phones; GPT-Hacker; UK launches a £100m AI taskforce,['Jack Clark'],2023-05-01T12:41:06Z,blogs,, 98454,https://www.yudkowsky.net/other/fiction/x17,X17,['Eliezer S. Yudkowsky'],2020-09-04T04:08:04Z,blogs,, 98475,https://www.cold-takes.com/honesty-about-reading/,Honesty about reading,['Holden Karnofsky'],2021-07-14T00:00:00Z,blogs,, 98489,https://intelligence.org/2015/06/01/june-2015-newsletter/,June 2015 Newsletter,['Jesse Galef'],2015-06-01T21:00:06Z,blogs,, 98510,https://intelligence.org/2014/05/30/milind-tambe/,Milind Tambe on game theory in security applications,['Luke Muehlhauser'],2014-05-31T03:00:21Z,blogs,, 98537,https://intelligence.org/2015/01/07/matthias-troyer-quantum-computers/,Matthias Troyer on Quantum Computers,['Luke Muehlhauser'],2015-01-08T06:23:27Z,blogs,, 98560,https://intelligence.org/2014/01/27/existential-risk-strategy-conversation-with-holden-karnofsky/,Existential Risk Strategy Conversation with Holden Karnofsky,['Luke Muehlhauser'],2014-01-28T02:20:08Z,blogs,, 98583,https://aiimpacts.org/allen-the-singularity-isnt-near/,"Allen, The Singularity Isn’t Near",['Katja Grace'],2015-03-13T09:04:48Z,blogs,, 98593,https://intelligence.org/2014/01/31/two-miri-talks-from-agi-11/,Two MIRI talks from AGI-11,['Luke Muehlhauser'],2014-01-31T20:41:44Z,blogs,, 98608,https://intelligence.org/2013/09/14/effective-altruism-and-flow-through-effects/,Effective Altruism and Flow-Through Effects,['Luke Muehlhauser'],2013-09-15T01:01:38Z,blogs,, 98618,https://aiimpacts.org/introducing-research-bounties/,Introducing research bounties,['Katja Grace'],2015-08-07T07:23:04Z,blogs,, 98631,https://www.yudkowsky.net/rational/bayes,An Intuitive Explanation of Bayes’ Theorem,['Eliezer S. 
Yudkowsky'],2020-09-04T01:30:06Z,blogs,, 98657,https://intelligence.org/2014/09/09/hayworth/,Ken Hayworth on brain emulation prospects,['Luke Muehlhauser'],2014-09-10T01:17:54Z,blogs,, 98680,https://intelligence.org/2013/09/25/paul-rosenbloom-interview/,Paul Rosenbloom on Cognitive Architectures,['Luke Muehlhauser'],2013-09-26T03:18:23Z,blogs,, 98704,https://carado.moe/video-games-needs-a-platform.html,Video Games Needs A Platform,['Tamsin Leake'],2021-05-02T23:00:00Z,blogs,, 98724,https://www.cold-takes.com/has-life-gotten-better/,Has Life Gotten Better?,['Holden Karnofsky'],2021-10-05T00:00:00Z,blogs,, 98742,https://carado.moe/what-happens-when-you-die.html,what happens when you die?,['Tamsin Leake'],2021-08-24T23:00:00Z,blogs,, 98758,https://intelligence.org/2013/09/05/five-theses-using-only-simple-words/,"Five Theses, Using Only Simple Words",['Luke Muehlhauser'],2013-09-05T20:03:13Z,blogs,, 98773,https://intelligence.org/2015/03/18/introducing-intelligent-agent-foundations-forum/,Introducing the Intelligent Agent Foundations Forum,['Luke Muehlhauser'],2015-03-19T03:34:29Z,blogs,, 98797,https://intelligence.org/2017/10/16/october-2017-newsletter/,October 2017 Newsletter,['Rob Bensinger'],2017-10-17T02:22:28Z,blogs,, 98822,https://aiimpacts.org/brain-performance-in-teps/,Brain performance in TEPS,['Katja Grace'],2015-05-07T00:15:21Z,blogs,, 98837,https://aiimpacts.org/2019-recent-trends-in-gpu-price-per-flops/,2019 recent trends in GPU price per FLOPS,['Asya Bergal'],2020-03-25T23:46:49Z,blogs,, 98847,https://www.cold-takes.com/prediction-track-records-i-know-of/,Track records for those who have made lots of predictions,['Holden Karnofsky'],2021-07-21T00:00:00Z,blogs,, 98869,https://deepmindsafetyresearch.medium.com/alignment-of-language-agents-9fbc7dd52c6c,Alignment of Language Agents,['DeepMind Safety Research'],2021-03-30T00:00:00Z,blogs,, 98889,https://aiimpacts.org/similarity-between-historical-and-contemporary-ai-predictions/,Similarity Between Historical and Contemporary AI Predictions,['Katja Grace'],2014-12-29T18:46:50Z,blogs,, 98908,https://jsteinhardt.wordpress.com/2010/07/17/linear-control-theory-part-i/,Linear Control Theory: Part I,['jsteinhardt'],2010-07-17T04:37:19Z,blogs,, 98925,https://aiimpacts.org/price-performance-moores-law-seems-slow/,Price performance Moore’s Law seems slow,['Katja Grace'],2017-11-27T07:58:03Z,blogs,, 98935,https://aiimpacts.org/time-for-ai-to-cross-the-human-range-in-english-draughts/,Time for AI to cross the human range in English draughts,['Katja Grace'],2020-10-26T22:28:36Z,blogs,, 98957,https://aiimpacts.org/historic-trends-in-land-speed-records/,Historic trends in land speed records,['Asya Bergal'],2019-07-17T23:10:07Z,blogs,, 98966,https://aiimpacts.org/preliminary-prices-for-human-level-hardware/,Preliminary prices for human-level hardware,['Katja Grace'],2015-04-04T10:09:37Z,blogs,, 98987,https://carado.moe/blob-quantum-issue.html,QACI blob location: an issue with firstness,['Tamsin Leake'],2023-03-19T00:00:00Z,blogs,, 99004,https://jsteinhardt.wordpress.com/2013/01/31/quadratically-independent-monomials/,Quadratically Independent Monomials,['jsteinhardt'],2013-01-31T08:42:05Z,blogs,, 99016,https://carado.moe/fas-solution-everything.html,fully aligned singleton as a solution to everything,['Tamsin Leake'],2022-11-12T00:00:00Z,blogs,, 99027,https://intelligence.org/2019/12/02/miris-2019-fundraiser/,MIRI’s 2019 Fundraiser,['Malo Bourgon'],2019-12-02T22:50:35Z,blogs,, 99054,https://intelligence.org/2017/02/11/chcai-miri/,CHCAI/MIRI research 
internship in AI safety,['Rob Bensinger'],2017-02-11T23:31:03Z,blogs,, 99069,https://intelligence.org/2013/10/01/upcoming-talks-at-harvard-and-mit/,Upcoming Talks at Harvard and MIT,['Luke Muehlhauser'],2013-10-02T00:43:11Z,blogs,, 99092,https://aiimpacts.org/time-for-ai-to-cross-the-human-range-in-starcraft/,Time for AI to cross the human range in StarCraft,['Katja Grace'],2020-10-20T20:55:24Z,blogs,, 99108,https://intelligence.org/2017/11/03/november-2017-newsletter/,November 2017 Newsletter,['Rob Bensinger'],2017-11-04T01:40:48Z,blogs,, 99139,https://carado.moe/against-unicode.html,Against Unicode,['Tamsin Leake'],2020-12-21T00:00:00Z,blogs,, 99152,https://intelligence.org/2014/02/01/robust-cooperation-a-case-study-in-friendly-ai-research/,Robust Cooperation: A Case Study in Friendly AI Research,['Luke Muehlhauser'],2014-02-01T16:55:13Z,blogs,, 99176,https://intelligence.org/2017/11/16/announcing-inadequate-equilibria/,Announcing “Inadequate Equilibria”,['Rob Bensinger'],2017-11-17T02:23:09Z,blogs,, 99185,https://aiimpacts.org/argument-from-large-impacts/,Argument for AI x-risk from large impacts,['Katja Grace'],2021-09-28T18:32:00Z,blogs,, 99199,https://aiimpacts.org/framing-ai-strategy/,Framing AI strategy,['Zach Stein-Perlman'],2023-02-06T19:00:27Z,blogs,, 99220,https://intelligence.org/2016/12/31/new-paper-optimal-polynomial-time-estimators/,New paper: “Optimal polynomial-time estimators”,['Rob Bensinger'],2017-01-01T01:07:43Z,blogs,, 99234,https://blog.eleuther.ai/year-two-preface/,"The View from 30,000 Feet: Preface to the Second EleutherAI Retrospective","['Stella Biderman', 'Curtis Huebner', 'Connor Leahy', 'Eric Hallahan']",2023-03-02T00:00:00Z,blogs,, 99256,https://carado.moe/bracing-alignment-tunnel.html,bracing for the alignment tunnel,['Tamsin Leake'],2022-04-09T23:00:00Z,blogs,, 99266,https://intelligence.org/2017/03/31/two-new-researchers-join-miri/,Two new researchers join MIRI,['Rob Bensinger'],2017-04-01T01:46:30Z,blogs,, 99287,https://vkrakovna.wordpress.com/2018/06/05/measuring-and-avoiding-side-effects-using-relative-reachability/,Measuring and avoiding side effects using relative reachability,['Victoria Krakovna'],2018-06-05T14:15:06Z,blogs,, 99304,https://intelligence.org/2014/04/22/martin-hilbert/,Martin Hilbert on the world’s information capacity,['Luke Muehlhauser'],2014-04-22T19:43:18Z,blogs,, 99327,https://www.gwern.net/Hyperbolic-Time-Chamber.page,'The Hyperbolic Time Chamber & Brain Emulation',['Gwern Branwen'],2018-09-02T00:00:00Z,blogs,, 99344,https://intelligence.org/2019/04/01/new-grants-open-phil-beri/,New grants from the Open Philanthropy Project and BERI,['Rob Bensinger'],2019-04-01T21:49:53Z,blogs,, 99354,https://www.cold-takes.com/summary-of-history-empowerment-and-well-being-lens/,Summary of history (empowerment and well-being lens),['Holden Karnofsky'],2021-09-28T00:00:00Z,blogs,, 99369,https://vkrakovna.wordpress.com/2016/10/15/openai-unconference-on-machine-learning/,OpenAI unconference on machine learning,['Victoria Krakovna'],2016-10-15T23:51:44Z,blogs,, 99401,https://carado.moe/psi-rewriting.html,psi rewriting,['Tamsin Leake'],2021-12-09T00:00:00Z,blogs,, 99417,https://intelligence.org/2014/12/23/new-technical-research-agenda-overview/,Our new technical research agenda overview,['Luke Muehlhauser'],2014-12-23T23:06:33Z,blogs,, 99443,https://intelligence.org/2013/04/18/miri-april-newsletter-relaunch-celebration-and-a-new-math-result/,MIRI’s April newsletter: Relaunch Celebration and a New Math Result,['Jake'],2013-04-18T22:44:24Z,blogs,, 
99457,https://www.cold-takes.com/phil-birnbaums-regression-analysis/,"Phil Birnbaum's ""bad regression"" puzzles",['Holden Karnofsky'],2021-07-15T00:00:00Z,blogs,, 99472,https://intelligence.org/2013/04/13/miris-strategy-for-2013/,MIRI’s Strategy for 2013,['Luke Muehlhauser'],2013-04-13T21:42:59Z,blogs,, 99492,https://jsteinhardt.wordpress.com/2010/06/26/the-underwater-cartpole/,The Underwater Cartpole,['jsteinhardt'],2010-06-26T21:08:22Z,blogs,, 99517,https://aiimpacts.org/examples-of-ai-systems-producing-unconventional-solutions/,Examples of AI systems producing unconventional solutions,['Katja Grace'],2018-02-12T03:58:01Z,blogs,, 99538,https://intelligence.org/2019/11/28/giving-tuesday-2019/,Giving Tuesday 2019,['Colm Ó Riain'],2019-11-28T21:15:12Z,blogs,, 99553,https://openai.com/research/efficient-training-of-language-models-to-fill-in-the-middle,Efficient training of language models to fill in the middle,['OpenAI Research'],2022-07-28T00:00:00Z,blogs,, 99572,https://jsteinhardt.wordpress.com/2012/10/31/beyond-bayesians-and-frequentists/,Beyond Bayesians and Frequentists,['jsteinhardt'],2012-10-31T06:39:00Z,blogs,, 99595,https://intelligence.org/2014/03/20/randal-a-koene-on-whole-brain-emulation/,Randal Koene on whole brain emulation,['Luke Muehlhauser'],2014-03-20T08:00:46Z,blogs,, 99621,https://carado.moe/simulation-hypotheses.html,some simulation hypotheses,['Tamsin Leake'],2022-10-11T23:00:00Z,blogs,, 99641,https://intelligence.org/2013/05/01/agi-impacts-experts-and-friendly-ai-experts/,AGI Impact Experts and Friendly AI Experts,['Luke Muehlhauser'],2013-05-01T01:00:48Z,blogs,, 99665,https://intelligence.org/2015/01/09/new-report-questions-reasoning-logical-uncertainty/,New report: “Questions of reasoning under logical uncertainty”,['Luke Muehlhauser'],2015-01-09T17:54:59Z,blogs,, 99675,https://aisafety.camp/2018/12/07/aisc2-research-summaries/,AISC2: Research Summaries,['Johannes'],2018-12-07T14:39:37Z,blogs,, 99714,https://carado.moe/core-vals-exist-selfdet.html,Determining core values & existential self-determination,['Tamsin Leake'],2020-09-08T23:00:00Z,blogs,, 99726,https://carado.moe/gender-bootstrappism.html,Gender Bootstrappism,['Tamsin Leake'],2020-10-03T23:00:00Z,blogs,, 99736,https://intelligence.org/2020/07/08/july-2020-newsletter/,July 2020 Newsletter,['Rob Bensinger'],2020-07-08T20:18:02Z,blogs,, 99770,https://blog.eleuther.ai/tuning-on-eval-harness/,Finetuning Models on Downstream Tasks,['Leo Gao'],2021-05-24T00:00:00Z,blogs,, 99784,https://intelligence.org/2014/09/12/nate-soares-speaking-purdue-september-18th/,Nate Soares speaking at Purdue University,['Luke Muehlhauser'],2014-09-12T21:47:31Z,blogs,, 99795,https://carado.moe/topia-layer-0.html,Topia: Layer 0,['Tamsin Leake'],2020-03-29T23:00:00Z,blogs,, 99822,https://intelligence.org/2016/03/18/seeking-research-fellows-in-type-theory-and-machine-self-reference/,Seeking Research Fellows in Type Theory and Machine Self-Reference,['Rob Bensinger'],2016-03-18T18:55:17Z,blogs,, 99836,https://intelligence.org/2014/04/20/why-miri/,Why MIRI?,['Luke Muehlhauser'],2014-04-21T01:17:27Z,blogs,, 99856,https://www.cold-takes.com/why-describing-utopia-goes-badly/,Why Describing Utopia Goes Badly,['Holden Karnofsky'],2021-12-07T00:00:00Z,blogs,, 99874,https://intelligence.org/2022/03/01/christiano-and-yudkowsky-on-ai-predictions-and-human-intelligence/,Christiano and Yudkowsky on AI predictions and human intelligence,['Rob Bensinger'],2022-03-01T19:24:28Z,blogs,, 
99890,https://intelligence.org/2011/08/07/new-intelligence-explosion-website/,New Intelligence Explosion Website,['Luke Muehlhauser'],2011-08-08T04:09:09Z,blogs,, 99900,https://aiimpacts.org/conversation-with-paul-christiano/,Conversation with Paul Christiano,['Asya Bergal'],2019-09-11T23:05:12Z,blogs,, 99922,https://carado.moe/alignment-bits.html,where are your alignment bits?,['Tamsin Leake'],2022-06-09T23:00:00Z,blogs,, 99935,https://intelligence.org/2007/09/30/three-major-singularity-schools/,Three Major Singularity Schools,['Eliezer Yudkowsky'],2007-09-30T23:11:14Z,blogs,, 99955,https://www.yudkowsky.net/rational/lobs-theorem,(The Cartoon Guide to) Löb’s Theorem,['Eliezer S. Yudkowsky'],2020-09-04T02:45:35Z,blogs,, 99969,https://intelligence.org/2014/06/06/new-report-distributions-allowing-tiling-staged-subjective-eu-maximizers/,New report: “Distributions allowing tiling of staged subjective EU maximizers”,['Luke Muehlhauser'],2014-06-06T21:56:07Z,blogs,, 99978,https://carado.moe/predca.html,PreDCA: vanessa kosoy's alignment protocol,['Tamsin Leake'],2022-08-19T23:00:00Z,blogs,, 99999,https://www.cold-takes.com/how-we-could-stumble-into-ai-catastrophe/,How we could stumble into AI catastrophe,['Holden Karnofsky'],2023-01-13T00:00:00Z,blogs,, 100038,https://aiimpacts.org/on-the-inapplicability-of-corporate-rights-cases-to-digital-minds/,On the (in)applicability of corporate rights cases to digital minds,['Katja Grace'],2018-09-28T22:27:26Z,blogs,, 100053,https://www.cold-takes.com/ideal-governance-for-companies-countries-and-more/,"Ideal governance (for companies, countries and more)",['Holden Karnofsky'],2022-04-05T00:00:00Z,blogs,, 100088,https://intelligence.org/2018/10/31/embedded-decisions/,Decision Theory,['Abram Demski'],2018-10-31T18:25:38Z,blogs,, 100105,https://intelligence.org/2016/03/02/john-horgan-interviews-eliezer-yudkowsky/,John Horgan interviews Eliezer Yudkowsky,['Rob Bensinger'],2016-03-03T04:38:35Z,blogs,, 100123,https://intelligence.org/2013/10/19/russell-and-norvig-on-friendly-ai/,Russell and Norvig on Friendly AI,['Luke Muehlhauser'],2013-10-19T18:09:39Z,blogs,, 100139,https://aiimpacts.org/diabetic-retinopathy-as-a-case-study-in-time-for-ai-to-cross-the-range-of-human-performance/,Time for AI to cross the human performance range in diabetic retinopathy,['Aysja Johnson'],2018-11-21T22:34:37Z,blogs,, 100150,https://jsteinhardt.wordpress.com/2013/03/15/probabilistic-abstractions-i/,Probabilistic Abstractions I,['jsteinhardt'],2013-03-15T03:45:04Z,blogs,, 100169,https://aiimpacts.org/observed-patterns-around-major-technological-advancements/,Observed patterns around major technological advancements,['richardkorzekwa'],2022-02-03T00:22:53Z,blogs,, 100186,https://carado.moe/cant-simulate-the-universe.html,you can't simulate the universe from the beginning?,['Tamsin Leake'],2023-03-19T00:00:00Z,blogs,, 100200,https://intelligence.org/2014/02/17/miris-february-2014-newsletter/,MIRI’s February 2014 Newsletter,['Luke Muehlhauser'],2014-02-17T23:28:10Z,blogs,, 100227,https://aiimpacts.org/2018-price-of-performance-by-tensor-processing-units/,2018 price of performance by Tensor Processing Units,['Katja Grace'],2018-02-13T23:55:29Z,blogs,, 100237,https://aiimpacts.org/ai-timelines-and-strategies/,AI timelines and strategies,['Katja Grace'],2015-08-21T06:55:25Z,blogs,, 100256,https://www.yudkowsky.net/rational/cognitive-biases,Cognitive Biases Potentially Affecting Judgment of Global Risks,['Eliezer S. 
Yudkowsky'],2020-09-04T01:16:03Z,blogs,, 100276,https://intelligence.org/2018/10/29/embedded-agents/,Embedded Agents,['Scott Garrabrant'],2018-10-29T19:59:33Z,blogs,, 100303,https://intelligence.org/2012/01/12/qa-2-with-luke-muehlhauser-singularity-institute-executive-director/,"Q&A #2 with Luke Muehlhauser, Machine Intelligence Research Institute Executive Director",['Louie Helm'],2012-01-12T23:34:24Z,blogs,, 100339,https://intelligence.org/2017/12/06/december-2017-newsletter/,December 2017 Newsletter,['Rob Bensinger'],2017-12-06T15:07:02Z,blogs,, 100361,https://carado.moe/growth-doesnt-care-about-crises.html,Growth Doesn't Care About Crises,['Tamsin Leake'],2021-03-04T00:00:00Z,blogs,, 100370,https://carado.moe/takeoff-speeds-define.html,my takeoff speeds? depends how you define that,['Tamsin Leake'],2023-02-11T00:00:00Z,blogs,, 100384,https://importai.substack.com/p/import-ai-330-palantirs-ai-war-future,Import AI 330: Palantir's AI-War future; BLOOMChat; and more money for distributed AI training,['Jack Clark'],2023-05-22T12:31:51Z,blogs,, 100406,https://aiimpacts.org/historic-trends-in-flight-airspeed-records/,Historic trends in flight airspeed records,['Asya Bergal'],2020-02-07T22:47:35Z,blogs,, 100416,https://intelligence.org/2013/06/05/friendly-ai-research-as-effective-altruism/,Friendly AI Research as Effective Altruism,['Luke Muehlhauser'],2013-06-06T00:55:23Z,blogs,, 100433,https://intelligence.org/2016/07/27/alignment-machine-learning/,New paper: “Alignment for advanced machine learning systems”,['Rob Bensinger'],2016-07-27T23:48:52Z,blogs,, 100466,https://aiimpacts.org/ai-impacts-2020-review/,AI Impacts 2020 review,['Asya Bergal'],2020-12-22T06:24:04Z,blogs,, 100503,https://www.cold-takes.com/future-proof-ethics/,Future-proof ethics,['Holden Karnofsky'],2022-02-02T00:00:00Z,blogs,, 100518,https://carado.moe/persistent-data-structures-consciousness.html,the persistent data structure argument against linear consciousness,['Tamsin Leake'],2021-06-15T23:00:00Z,blogs,, 100527,https://carado.moe/program-search.html,program searches,['Tamsin Leake'],2022-09-04T23:00:00Z,blogs,, 100542,https://www.cold-takes.com/the-track-record-of-futurists-seems-fine/,The Track Record of Futurists Seems ... 
Fine,['Holden Karnofsky'],2022-06-30T00:00:00Z,blogs,, 100558,https://intelligence.org/2012/04/06/singularity-institute-progress-report-march-2012/,"Machine Intelligence Research Institute Progress Report, March 2012",['Louie Helm'],2012-04-06T08:38:49Z,blogs,, 100585,https://carado.moe/anthropic-reasoning-coordination.html,anthropic reasoning coordination,['Tamsin Leake'],2022-06-17T23:00:00Z,blogs,, 100594,https://intelligence.org/2015/07/17/miris-2015-summer-fundraiser/,MIRI’s 2015 Summer Fundraiser!,['Nate Soares'],2015-07-17T22:35:17Z,blogs,, 100609,https://blog.eleuther.ai/activation-fns/,Activation Function Ablation,['Leo Gao'],2021-05-24T00:00:00Z,blogs,, 100619,https://www.deepmind.com/blog/byol-explore-exploration-with-bootstrapped-prediction,BYOL-Explore: Exploration with Bootstrapped Prediction,"['Zhaohan Daniel Guo', 'Shantanu Thakoor', 'Miruna Pîslar', 'Bernardo Avila Pires', 'Florent Altché', 'Corentin Tallec', 'Alaa Saade', 'Daniele Calandriello', 'Jean-Bastien Grill', 'Yunhao Tang', 'Michal Valko', 'Rémi Munos', 'Mohammad Gheshlaghi Azar', 'Bilal Piot']",2022-06-20T00:00:00Z,blogs,, 100632,https://intelligence.org/2015/03/09/fallenstein-talk-aps-march-meeting-2015/,Fallenstein talk for APS March Meeting 2015,['Luke Muehlhauser'],2015-03-09T21:15:37Z,blogs,, 100650,https://aiimpacts.org/lets-think-about-slowing-down-ai/,Let’s think about slowing down AI,['Katja Grace'],2022-12-22T17:30:40Z,blogs,, 100677,https://aiimpacts.org/update-on-all-the-ai-predictions/,Update on all the AI predictions,['Katja Grace'],2015-06-06T05:27:59Z,blogs,, 100698,https://intelligence.org/2013/02/28/welcome-to-intelligence-org/,Welcome to Intelligence.org,['Luke Muehlhauser'],2013-02-28T02:00:38Z,blogs,, 100708,https://intelligence.org/2016/10/06/csrbai-talks-agent-models/,CSRBAI talks on agent models and multi-agent dilemmas,['Alex Vermeer'],2016-10-07T00:17:35Z,blogs,, 100735,https://intelligence.org/2013/11/06/amazonsmile/,Support MIRI by Shopping at AmazonSmile,['Alex Vermeer'],2013-11-07T01:36:49Z,blogs,, 100744,https://www.deepmind.com/blog/simple-sensor-intentions-for-exploration,Simple Sensor Intentions for Exploration,"['Tim Hertweck', 'Martin Riedmiller', 'Michael Bloesch', 'Jost Tobias Springenberg', 'Noah Siegel', 'Markus Wulfmeier', 'Roland Hafner', 'Nicolas Heess']",2020-05-12T00:00:00Z,blogs,, 100753,https://aiimpacts.org/discontinuity-from-the-burj-khalifa/,Historic trends in structure heights,['Katja Grace'],2018-07-12T17:15:02Z,blogs,, 100762,https://carado.moe/multiverse-argument-automated-alignment.html,the multiverse argument argument against automated alignment,['Tamsin Leake'],2023-04-21T23:00:00Z,blogs,, 100780,https://intelligence.org/2018/11/28/2017-in-review/,2017 in review,['Malo Bourgon'],2018-11-29T07:59:35Z,blogs,, 100801,https://intelligence.org/2013/05/30/miri-may-newsletter-intelligence-explosion-microeconomics-and-other-publications/,MIRI May Newsletter: Intelligence Explosion Microeconomics and Other Publications,['Jake'],2013-05-30T18:48:11Z,blogs,, 100828,https://carado.moe/ordering-capability-thresholds.html,ordering capability thresholds,['Tamsin Leake'],2022-09-15T23:00:00Z,blogs,, 100851,https://carado.moe/values-tdd.html,values system as test-driven development,['Tamsin Leake'],2022-03-21T00:00:00Z,blogs,, 100861,https://carado.moe/the-peerless.html,The Peerless,['Tamsin Leake'],2022-04-11T23:00:00Z,blogs,, 100884,https://aiimpacts.org/penicillin-and-syphilis/,Penicillin and syphilis,['Katja Grace'],2015-02-02T19:21:52Z,blogs,, 
100897,https://www.cold-takes.com/unraveling-the-evidence-about-violence-among-very-early-humans/,Unraveling the evidence about violence among very early humans,['Holden Karnofsky'],2021-11-02T00:00:00Z,blogs,, 100918,https://intelligence.org/2013/06/06/new-research-page-and-two-new-articles/,New Research Page and Two New Articles,['Luke Muehlhauser'],2013-06-07T06:54:02Z,blogs,, 100935,https://intelligence.org/2014/10/30/new-report-udt-known-search-order/,New report: “UDT with known search order”,['Luke Muehlhauser'],2014-10-30T22:30:51Z,blogs,, 100945,https://www.deepmind.com/blog/challenges-in-detoxifying-language-models,Challenges in Detoxifying Language Models,"['Johannes Welbl', 'Mia Glaese', 'Jonathan Uesato', 'Sumanth Dathathri', 'John Mellor', 'Lisa Anne Hendricks', 'Kirsty Anderson', 'Pushmeet Kohli', 'Ben Coppin', 'Po-Sen Huang']",2021-09-15T00:00:00Z,blogs,, 100973,https://carado.moe/brittle-physics.html,brittle physics and the nature of X-risks,['Tamsin Leake'],2022-01-05T00:00:00Z,blogs,, 100996,https://carado.moe/disclosing-subjectivity.html,disclosing subjectivity,['Tamsin Leake'],2021-06-28T23:00:00Z,blogs,, 101005,https://www.cold-takes.com/why-ai-alignment-could-be-hard-with-modern-deep-learning/,Why AI alignment could be hard with modern deep learning,['Holden Karnofsky'],2021-09-21T00:00:00Z,blogs,, 101029,https://www.cold-takes.com/what-does-bing-chat-tell-us-about-ai-risk/,What does Bing Chat tell us about AI risk?,['Holden Karnofsky'],2023-02-28T00:00:00Z,blogs,, 101053,https://aiimpacts.org/transmitting-fibers-in-the-brain-total-length-and-distribution-of-lengths/,Transmitting fibers in the brain: Total length and distribution of lengths,['Tegan McCaslin'],2018-03-30T06:51:49Z,blogs,, 101066,https://aiimpacts.org/energy-efficiency-of-wandering-albatross-flight/,Energy efficiency of wandering albatross flight,['Katja Grace'],2020-11-25T00:13:13Z,blogs,, 101076,https://www.deepmind.com/blog/perceiver-ar-general-purpose-long-context-autoregressive-generation,"Perceiver AR: general-purpose, long-context autoregressive generation","['Curtis Hawthorne', 'Andrew Jaegle', 'Cătălina Cangea', 'Sebastian Borgeaud', 'Charlie Nash', 'Mateusz Malinowski', 'Sander Dieleman', 'Oriol Vinyals', 'Matthew Botvinick', 'Ian Simon', 'Hannah Sheahan', 'Neil Zeghidour', 'Jean-Baptiste Alayrac', 'João Carreira', 'Jesse Engel']",2022-07-16T00:00:00Z,blogs,, 101090,https://www.cold-takes.com/imagining-yourself-as-a-digital-person-two-sketches/,Imagining yourself as a digital person (two sketches),['Holden Karnofsky'],2021-07-29T00:00:00Z,blogs,, 101112,https://intelligence.org/2013/04/19/altairs-timeless-decision-theory-paper-published/,Altair’s Timeless Decision Theory Paper Published,['Luke Muehlhauser'],2013-04-19T00:15:48Z,blogs,, 101121,https://carado.moe/global-era.html,The Last Global Era,['Tamsin Leake'],2019-03-26T00:00:00Z,blogs,, 101132,https://aiimpacts.org/how-ai-timelines-are-estimated/,How AI timelines are estimated,['Katja Grace'],2015-02-09T15:00:32Z,blogs,, 101154,https://intelligence.org/2017/11/08/major-grant-open-phil/,A major grant from the Open Philanthropy Project,['Malo Bourgon'],2017-11-09T00:05:42Z,blogs,, 101169,https://www.cold-takes.com/racing-through-a-minefield-the-ai-deployment-problem/,Racing through a minefield: the AI deployment problem,['Holden Karnofsky'],2022-12-22T00:00:00Z,blogs,, 101200,https://generative.ink/posts/simulators/,Simulators,['janus'],2022-09-02T00:00:00Z,blogs,, 101238,https://intelligence.org/2020/02/13/our-2019-fundraiser-review/,Our 
2019 Fundraiser Review,['Colm Ó Riain'],2020-02-13T16:48:44Z,blogs,, 101259,https://openai.com/research/hierarchical-text-conditional-image-generation-with-clip-latents,Hierarchical text-conditional image generation with CLIP latents,['OpenAI Research'],2022-04-13T00:00:00Z,blogs,, 101283,https://intelligence.org/2014/01/28/how-big-is-ai/,How Big is the Field of Artificial Intelligence? (initial findings),['Luke Muehlhauser'],2014-01-28T18:13:46Z,blogs,, 101306,https://carado.moe/generalized-computation-interpretability.html,generalized computation interpretability,['Tamsin Leake'],2022-06-17T23:00:00Z,blogs,, 101323,https://www.yudkowsky.net/singularity/simplified,Transhumanism as Simplified Humanism,['Eliezer S. Yudkowsky'],2020-09-04T03:04:47Z,blogs,, 101337,https://jsteinhardt.wordpress.com/2010/06/18/robotics/,Robotics,['jsteinhardt'],2010-06-18T04:05:50Z,blogs,, 101362,https://www.yudkowsky.net/singularity/power,The Power of Intelligence,['Eliezer S. Yudkowsky'],2020-09-04T03:01:43Z,blogs,, 101373,https://intelligence.org/2016/09/12/new-paper-logical-induction/,New paper: “Logical induction”,['Nate Soares'],2016-09-13T00:33:40Z,blogs,, 101391,https://aiimpacts.org/effects-of-breech-loading-rifles-on-historic-trends-in-firearm-progress/,Effects of breech loading rifles on historic trends in firearm progress,['Katja Grace'],2019-12-23T07:53:06Z,blogs,, 101401,https://intelligence.org/2021/08/03/july-2021-newsletter/,July 2021 Newsletter,['Rob Bensinger'],2021-08-04T03:26:55Z,blogs,, 101433,https://intelligence.org/2012/03/03/singularity-institute-progress-report-february-2012/,"Machine Intelligence Research Institute Progress Report, February 2012",['Louie Helm'],2012-03-03T11:31:06Z,blogs,, 101462,https://intelligence.org/2014/10/31/financial-times-story-miri/,The Financial Times story on MIRI,['Luke Muehlhauser'],2014-10-31T22:32:47Z,blogs,, 101477,https://aiimpacts.org/occasional-update-july-5-2018/,Occasional update July 5 2018,['Katja Grace'],2018-07-05T15:36:59Z,blogs,, 101498,https://www.cold-takes.com/moral-progress-vs-the-simple-passage-of-time/,"""Moral progress"" vs. 
the simple passage of time",['Holden Karnofsky'],2022-02-08T00:00:00Z,blogs,, 101508,https://intelligence.org/2015/12/15/jed-mccaleb-on-why-miri-matters/,Jed McCaleb on Why MIRI Matters,['Guest'],2015-12-15T20:11:11Z,blogs,, 101521,https://carado.moe/alignment-researchspace-is-malign.html,alignment researchspace is potentially malign,['Tamsin Leake'],2022-08-16T23:00:00Z,blogs,, 101538,https://carado.moe/purposes-for-art.html,purposes for art,['Tamsin Leake'],2021-07-08T23:00:00Z,blogs,, 101550,https://aiimpacts.org/friendly-ai-as-a-global-public-good/,Friendly AI as a global public good,['Michael Wulfsohn'],2016-08-08T19:31:51Z,blogs,, 101572,https://aiimpacts.org/conversation-with-rohin-shah/,Conversation with Rohin Shah,['Asya Bergal'],2019-10-31T12:02:15Z,blogs,, 101593,https://jsteinhardt.wordpress.com/2017/02/28/advice-for-authors/,Advice for Authors,['jsteinhardt'],2017-02-28T01:11:50Z,blogs,, 101616,https://vkrakovna.wordpress.com/2016/08/01/clopen-ai-openness-in-different-aspects-of-ai-development/,Clopen AI: Openness in different aspects of AI development,['Victoria Krakovna'],2016-08-01T16:23:05Z,blogs,, 101638,https://intelligence.org/2022/07/04/a-central-ai-alignment-problem/,"A central AI alignment problem: capabilities generalization, and the sharp left turn",['Nate Soares'],2022-07-05T05:22:45Z,blogs,, 101649,https://www.cold-takes.com/the-gloves-are-off-the-pants-are-on/,"The gloves are off, the pants are on",['Holden Karnofsky'],2021-08-26T00:00:00Z,blogs,, 101668,https://intelligence.org/2014/06/23/new-report-non-omniscience-probabilistic-inference-metamathematics/,"New report: “Non-omniscience, probabilistic inference, and metamathematics”",['Luke Muehlhauser'],2014-06-23T18:13:50Z,blogs,, 101677,https://newsletter.mlsafety.org/p/ml-safety-newsletter-1,ML Safety Newsletter #1,['Dan Hendrycks'],2021-10-18T00:03:15Z,blogs,, 101700,https://blog.eleuther.ai/year-two-full/,EleutherAI Second Retrospective: The long version,"['Stella Biderman', 'Shivanshu Purohit', 'Curtis Huebner', 'Leo Gao', 'Connor Leahy', 'Eric Hallahan']",2023-03-26T00:00:00Z,blogs,, 101726,https://carado.moe/ai-risk-plans.html,AI risk plans,['Tamsin Leake'],2022-05-11T23:00:00Z,blogs,, 101742,https://carado.moe/on-economics.html,On Economics,['Tamsin Leake'],2020-03-28T00:00:00Z,blogs,, 101753,https://www.cold-takes.com/are-we-trending-toward-transformative-ai-how-would-we-know/,"Are we ""trending toward"" transformative AI? 
(How would we know?)",['Holden Karnofsky'],2021-08-24T00:00:00Z,blogs,, 101769,https://intelligence.org/2014/05/06/new-paper-problems-of-self-reference-in-self-improving-space-time-embedded-intelligence/,New paper: “Problems of self-reference in self-improving space-time embedded intelligence”,['Luke Muehlhauser'],2014-05-06T10:47:27Z,blogs,, 101784,https://www.gwern.net/Tool-AI.page,"""Why Tool AIs Want to Be Agent AIs""",['Gwern Branwen'],2018-08-28T00:00:00Z,blogs,, 101800,https://intelligence.org/2016/05/13/may-2016-newsletter/,May 2016 Newsletter,['Rob Bensinger'],2016-05-14T02:09:24Z,blogs,, 101818,https://aiimpacts.org/progress-in-general-purpose-factoring/,Progress in general purpose factoring,['Katja Grace'],2017-03-16T11:03:21Z,blogs,, 101835,https://aiimpacts.org/historic-trends-in-chess-ai/,Historic trends in chess AI,['Asya Bergal'],2020-02-08T00:00:27Z,blogs,, 101850,https://aiimpacts.org/precedents-for-economic-n-year-doubling-before-4n-year-doubling/,Precedents for economic n-year doubling before 4n-year doubling,['Katja Grace'],2020-04-14T20:42:41Z,blogs,, 101860,https://intelligence.org/2017/01/13/response-to-ceglowski-on-superintelligence/,Response to Cegłowski on superintelligence,['Matthew Graves'],2017-01-13T23:55:52Z,blogs,, 101877,https://aiimpacts.org/time-for-ai-to-cross-the-human-performance-range-in-imagenet-image-classification/,Time for AI to cross the human performance range in ImageNet image classification,['Katja Grace'],2020-10-19T23:52:50Z,blogs,, 101887,https://carado.moe/how-far-are-things-that-care.html,how far are things that care?,['Tamsin Leake'],2022-12-15T00:00:00Z,blogs,, 101897,https://www.deepmind.com/blog/artificial-intelligence-values-and-alignment,"Artificial Intelligence, Values and Alignment",['Iason Gabriel'],2020-01-13T00:00:00Z,blogs,, 101918,https://www.deepmind.com/blog/making-efficient-use-of-demonstrations-to-solve-hard-exploration-problems,Making Efficient Use of Demonstrations to Solve Hard Exploration Problems,"['Caglar Gülçehre', 'Tom Le Paine', 'Bobak Shahriari', 'Misha Denil', 'Matt Hoffman', 'Hubert Soyer', 'Richard Tanburn', 'Steven Kapturowski', 'Neil Rabinowitz', 'Duncan Williams', 'Gabriel Barth-Maron', 'Ziyu Wang', 'Nando de Freitas', 'Worlds Team']",2019-09-05T00:00:00Z,blogs,, 101951,https://aiimpacts.org/the-tyranny-of-the-god-scenario/,The tyranny of the god scenario,['Michael Wulfsohn'],2018-04-06T15:00:00Z,blogs,, 101964,https://carado.moe/less-quantum-immortality.html,less quantum immortality?,['Tamsin Leake'],2021-12-27T00:00:00Z,blogs,, 101976,https://intelligence.org/2014/10/01/october-newsletter/,MIRI’s October Newsletter,['Luke Muehlhauser'],2014-10-01T21:00:19Z,blogs,, 101987,https://intelligence.org/2018/03/27/categorizing-goodhart/,New paper: “Categorizing variants of Goodhart’s Law”,['Scott Garrabrant'],2018-03-28T02:08:17Z,blogs,, 102016,https://www.deepmind.com/blog/dynamic-language-understanding-adaptation-to-new-knowledge-in-parametric-and-semi-parametric-models,Dynamic language understanding: adaptation to new knowledge in parametric and semi-parametric models,"['Elena Gribovskaya', 'Angeliki Lazaridou', 'Tomáš Kočiský']",2022-05-26T00:00:00Z,blogs,, 102040,https://www.cold-takes.com/the-most-important-century-in-a-nutshell/,The Most Important Century (in a nutshell),['Holden Karnofsky'],2021-09-23T00:00:00Z,blogs,, 102059,https://www.yudkowsky.net/rational/technical,A Technical Explanation of Technical Explanation,['Eliezer S. 
Yudkowsky'],2020-09-04T02:43:02Z,blogs,, 102076,https://intelligence.org/2021/12/31/december-2021-newsletter/,December 2021 Newsletter,['Rob Bensinger'],2022-01-01T07:32:12Z,blogs,, 102103,https://generative.ink/posts/loom-interface-to-the-multiverse/,Loom: interface to the multiverse,['janus'],2021-02-09T00:00:00Z,blogs,, 102114,https://jsteinhardt.wordpress.com/2012/12/17/algebra-trick-of-the-day/,Algebra trick of the day,['jsteinhardt'],2012-12-17T04:43:19Z,blogs,, 102124,https://www.cold-takes.com/biological-anchors-is-about-bounding-not-pinpointing-ai-timelines/,"“Biological anchors” is about bounding, not pinpointing, AI timelines",['Holden Karnofsky'],2021-11-18T00:00:00Z,blogs,, 102139,https://intelligence.org/2007/07/10/the-power-of-intelligence/,The Power of Intelligence,['Eliezer Yudkowsky'],2007-07-11T02:41:10Z,blogs,, 102150,https://carado.moe/why-timelines-short.html,why my timelines are short: all roads lead to doom,['Tamsin Leake'],2022-08-13T23:00:00Z,blogs,, 102159,https://aiimpacts.org/muller-and-bostrom-ai-progress-poll/,Müller and Bostrom AI Progress Poll,['Katja Grace'],2014-12-29T18:47:11Z,blogs,, 102169,https://vkrakovna.wordpress.com/2018/04/02/specification-gaming-examples-in-ai/,Specification gaming examples in AI,['Victoria Krakovna'],2018-04-01T23:33:20Z,blogs,, 102178,https://intelligence.org/2019/04/24/delegative-reinforcement-learning/,New paper: “Delegative reinforcement learning”,['Rob Bensinger'],2019-04-24T22:52:40Z,blogs,, 102195,https://openai.com/research/gpts-are-gpts,GPTs are GPTs: An early look at the labor market impact potential of large language models,"['Tyna Eloundou', 'Sam Manning', 'Pamela Mishkin', 'Daniel Rock']",2023-03-17T00:00:00Z,blogs,, 102228,https://carado.moe/unsatisfactory-property.html,The Unsatisfactorily Far Reach Of Property,['Tamsin Leake'],2021-05-02T23:00:00Z,blogs,, 102249,https://carado.moe/so-you-think-not-qualified-alignment.html,so you think you're not qualified to do technical alignment research?,['Tamsin Leake'],2023-02-07T00:00:00Z,blogs,, 102262,https://intelligence.org/2021/12/09/conversation-on-technology-forecasting-and-gradualism/,Conversation on technology forecasting and gradualism,['Rob Bensinger'],2021-12-09T21:29:42Z,blogs,, 102286,https://aiimpacts.org/energy-efficiency-of-monarch-butterfly-flight/,Energy efficiency of monarch butterfly flight,['Katja Grace'],2020-11-26T06:30:47Z,blogs,, 102297,https://vkrakovna.wordpress.com/2014/12/26/open-and-closed-mental-states/,Open and closed mental states,['Victoria Krakovna'],2014-12-26T06:21:09Z,blogs,, 102306,https://www.cold-takes.com/jobs-that-can-help-with-the-most-important-century/,Jobs that can help with the most important century,['Holden Karnofsky'],2023-02-10T00:00:00Z,blogs,, 102341,https://carado.moe/value-crystallization.html,Value Crystallization,['Tamsin Leake'],2021-03-04T00:00:00Z,blogs,, 102352,https://www.cold-takes.com/cold-links-misc/,Cold Links: misc,['Holden Karnofsky'],2021-12-17T00:00:00Z,blogs,, 102361,https://aiimpacts.org/glial-signaling/,Glial Signaling,['Katja Grace'],2015-04-16T23:29:27Z,blogs,, 102374,https://www.deepmind.com/blog/data-architecture-or-losses-what-contributes-most-to-multimodal-transformer-success,"Data, Architecture, or Losses: What Contributes Most to Multimodal Transformer Success?","['Aida Nematzadeh', 'Lisa Anne Hendricks', 'Jean-baptiste Alayrac', 'Rosalia Schneider', 'John Mellor']",2021-02-02T00:00:00Z,blogs,, 102395,https://aiimpacts.org/methodology-for-discontinuity-investigation/,Methodology for 
discontinuous progress investigation,['Asya Bergal'],2019-06-05T19:55:08Z,blogs,, 102410,https://carado.moe/outer-alignment-past-user.html,outer alignment: two failure modes and past-user satisfaction,['Tamsin Leake'],2022-10-10T23:00:00Z,blogs,, 102424,https://intelligence.org/2017/08/31/incorrigibility-in-cirl/,New paper: “Incorrigibility in the CIRL Framework”,['Matthew Graves'],2017-09-01T02:06:32Z,blogs,, 102438,https://aiimpacts.org/error-in-armstrong-and-sotala-2012/,Error in Armstrong and Sotala 2012,['Katja Grace'],2016-05-17T21:04:30Z,blogs,, 102453,https://aiimpacts.org/was-the-industrial-revolution-a-drastic-departure-from-historic-trends/,Was the industrial revolution a drastic departure from historic trends?,['Katja Grace'],2020-11-17T22:14:30Z,blogs,, 102468,https://carado.moe/refusing-negative.html,refusing to answer ≠ giving a negative answer,['Tamsin Leake'],2021-06-15T23:00:00Z,blogs,, 102478,https://carado.moe/word-report-3.html,word report #3,['Tamsin Leake'],2023-02-07T00:00:00Z,blogs,, 102490,https://carado.moe/qaci.html,the QACI alignment plan: table of contents,['Tamsin Leake'],2023-03-20T00:00:00Z,blogs,, 102500,https://www.cold-takes.com/call-to-vigilance/,Call to Vigilance,['Holden Karnofsky'],2021-09-15T00:00:00Z,blogs,, 102525,https://intelligence.org/2015/07/20/why-now-matters/,Why Now Matters,['Nate Soares'],2015-07-20T22:31:03Z,blogs,, 102540,https://intelligence.org/2015/08/03/when-ai-accelerates-ai/,When AI Accelerates AI,['Rob Bensinger'],2015-08-04T06:55:56Z,blogs,, 102560,https://aiimpacts.org/scale-of-the-human-brain/,Scale of the Human Brain,['Katja Grace'],2015-04-16T23:00:46Z,blogs,, 102576,https://carado.moe/terminal-alignment-solutions.html,some thoughts about terminal alignment,['Tamsin Leake'],2023-02-26T00:00:00Z,blogs,, 102593,https://carado.moe/solomonoff-deism.html,"solomonoff induction, time penalty, the universal program, and deism",['Tamsin Leake'],2022-06-17T23:00:00Z,blogs,, 102608,https://carado.moe/insulated-goal-program.html,the Insulated Goal-Program idea,['Tamsin Leake'],2022-08-10T23:00:00Z,blogs,, 102625,https://importai.substack.com/p/import-ai-319-sovereign-ai-facebooks,Import AI 319: Sovereign AI; Facebook's weights leak on torrent networks; Google might have made a better optimizer than Adam!,['Jack Clark'],2023-03-06T14:06:00Z,blogs,, 102652,https://carado.moe/spu-review.html,"Semantics: Primes and Universals, a book review",['Tamsin Leake'],2019-04-10T23:00:00Z,blogs,, 102669,https://intelligence.org/2015/08/14/what-sets-miri-apart/,What Sets MIRI Apart?,['Nate Soares'],2015-08-15T01:22:48Z,blogs,, 102686,https://carado.moe/hope-infinite-compute.html,hope for infinite compute,['Tamsin Leake'],2022-05-11T23:00:00Z,blogs,, 102707,https://intelligence.org/2021/10/07/october-2021-newsletter/,October 2021 Newsletter,['Rob Bensinger'],2021-10-08T05:25:23Z,blogs,, 102722,https://aiimpacts.org/ai-risk-terminology/,Glossary of AI Risk Terminology and common AI terms,['Katja Grace'],2015-10-30T22:58:34Z,blogs,, 102743,https://aiimpacts.org/predictions-of-human-level-ai-timelines/,Predictions of Human-Level AI Timelines,['Katja Grace'],2015-06-05T15:36:50Z,blogs,, 102755,https://carado.moe/cosmic-missing-outs.html,cosmic missing outs,['Tamsin Leake'],2021-10-13T23:00:00Z,blogs,, 102766,https://aiimpacts.org/event-multipolar-ai-workshop-with-robin-hanson/,Event: Multipolar AI workshop with Robin Hanson,['Katja Grace'],2015-01-14T18:52:42Z,blogs,, 102787,https://carado.moe/nonscarce-compute-optimize-out.html,non-scarce 
compute means moral patients might not get optimized out,['Tamsin Leake'],2021-12-25T00:00:00Z,blogs,, 102796,https://intelligence.org/2016/08/02/2016-summer-program-recap/,2016 summer program recap,['Alex Vermeer'],2016-08-02T17:53:56Z,blogs,, 102829,https://intelligence.org/2021/07/01/june-2021-newsletter/,June 2021 Newsletter,['Rob Bensinger'],2021-07-02T00:04:17Z,blogs,, 102850,https://intelligence.org/2012/12/06/2012-winter-matching-challenge/,2012 Winter Matching Challenge!,['Luke Muehlhauser'],2012-12-06T19:42:41Z,blogs,, 102861,https://intelligence.org/2013/07/11/july-newsletter/,MIRI’s July Newsletter: Fundraiser and New Papers,['Luke Muehlhauser'],2013-07-11T20:44:14Z,blogs,, 102888,https://intelligence.org/2014/12/16/new-report-toward-idealized-decision-theory/,New report: “Toward Idealized Decision Theory”,['Luke Muehlhauser'],2014-12-17T00:09:49Z,blogs,, 102899,https://www.cold-takes.com/making-the-best-of-the-most-important-century/,How to make the best of the most important century?,['Holden Karnofsky'],2021-09-14T00:00:00Z,blogs,, 102924,https://carado.moe/ruling-out-intuitions-materially-acausal-intuitions.html,ruling out intuitions about materially acausal things,['Tamsin Leake'],2022-08-09T23:00:00Z,blogs,, 102934,https://carado.moe/blob-causality.html,"QACI: the problem of blob location, causality, and counterfactuals",['Tamsin Leake'],2023-03-05T00:00:00Z,blogs,, 102953,https://intelligence.org/2020/10/23/october-2020-newsletter/,October 2020 Newsletter,['Rob Bensinger'],2020-10-23T14:15:19Z,blogs,, 102974,https://aiimpacts.org/whole-bird-emulation-requires-quantum-mechanics/,Whole Bird Emulation requires Quantum Mechanics,['Jeffrey Heninger'],2023-02-14T23:47:25Z,blogs,, 102985,https://intelligence.org/2015/10/07/new-report-leo-szilard-and-the-danger-of-nuclear-weapons/,New report: “Leó Szilárd and the Danger of Nuclear Weapons”,['Rob Bensinger'],2015-10-08T04:38:07Z,blogs,, 103000,https://aiimpacts.org/description-vs-simulated-prediction/,Description vs simulated prediction,['richardkorzekwa'],2020-04-22T16:30:00Z,blogs,, 103019,https://vkrakovna.wordpress.com/2018/01/07/2017-18-new-year-review/,2017-18 New Year review,['Victoria Krakovna'],2018-01-07T01:01:30Z,blogs,, 103043,https://carado.moe/clippy-in-panpsychia.html,clippy in panpsychia,['Tamsin Leake'],2022-09-14T23:00:00Z,blogs,, 103052,https://carado.moe/fermi-paradox.html,my answer to the fermi paradox,['Tamsin Leake'],2021-06-15T23:00:00Z,blogs,, 103062,https://carado.moe/overcoming-narratives.html,Overcoming Narratives,['Tamsin Leake'],2021-06-03T23:00:00Z,blogs,, 103071,https://intelligence.org/2017/12/01/miris-2017-fundraiser/,MIRI’s 2017 Fundraiser,['Malo Bourgon'],2017-12-01T23:00:21Z,blogs,, 103105,https://intelligence.org/2017/07/04/updates-to-the-research-team-and-a-major-donation/,"Updates to the research team, and a major donation",['Malo Bourgon'],2017-07-04T15:35:45Z,blogs,, 103122,https://carado.moe/ai-doom.html,a casual intro to AI doom and alignment,['Tamsin Leake'],2022-11-01T00:00:00Z,blogs,, 103151,https://jsteinhardt.wordpress.com/2014/02/10/a-fervent-defense-of-frequentist-statistics/,A Fervent Defense of Frequentist Statistics,['jsteinhardt'],2014-02-10T04:53:39Z,blogs,, 103177,https://intelligence.org/2017/10/20/alphago/,AlphaGo Zero and the Foom Debate,['Eliezer Yudkowsky'],2017-10-21T02:37:19Z,blogs,, 103188,https://www.deepmind.com/blog/unlocking-high-accuracy-differentially-private-image-classification-through-scale,Unlocking High-Accuracy Differentially Private Image Classification through Scale,"['Soham De', 'Leonard Berrada', 'Jamie Hayes', 'Samuel L. Smith', 'Borja Balle']",2022-06-17T00:00:00Z,blogs,,
103201,https://openai.com/research/scaling-laws-for-reward-model-overoptimization,Scaling laws for reward model overoptimization,['OpenAI Research'],2022-10-19T00:00:00Z,blogs,, 103231,https://aiimpacts.org/trends-in-algorithmic-progress/,Trends in algorithmic progress,['Katja Grace'],2017-03-02T07:03:26Z,blogs,, 103241,https://intelligence.org/2014/02/28/the-worlds-distribution-of-computation-initial-findings/,The world’s distribution of computation (initial findings),['Luke Muehlhauser'],2014-03-01T01:49:59Z,blogs,, 103265,https://intelligence.org/2019/02/25/february-2019-newsletter/,February 2019 Newsletter,['Rob Bensinger'],2019-02-25T23:29:48Z,blogs,, 103298,https://www.cold-takes.com/what-counts-as-death/,What counts as death?,['Holden Karnofsky'],2021-12-28T00:00:00Z,blogs,, 103317,https://vkrakovna.wordpress.com/2017/10/30/tokyo-ai-society-symposium/,Tokyo AI & Society Symposium,['Victoria Krakovna'],2017-10-30T10:51:27Z,blogs,, 103346,https://intelligence.org/2015/08/07/target-3-taking-it-to-the-next-level/,Target 3: Taking It To The Next Level,['Nate Soares'],2015-08-08T03:57:38Z,blogs,, 103366,https://jsteinhardt.wordpress.com/2017/02/05/prekopa-leindler-inequality/,Prékopa–Leindler inequality,['jsteinhardt'],2017-02-05T22:02:43Z,blogs,, 103375,https://carado.moe/deduplication-ethics.html,experience/moral patient deduplication and ethics,['Tamsin Leake'],2022-03-06T00:00:00Z,blogs,, 103397,https://carado.moe/ai-risk-drone.html,"""AI risk drone""",['Tamsin Leake'],2022-06-09T23:00:00Z,blogs,, 103406,https://aiimpacts.org/partially-plausible-fictional-ai-futures/,Fiction relevant to AI futurism,['Katja Grace'],2021-04-13T00:51:04Z,blogs,, 103415,https://generative.ink/posts/gpt-3-x-clip-worldbuilding/,GPT-3 x CLIP worldbuilding,['janus'],2021-02-03T00:00:00Z,blogs,, 103424,https://aiimpacts.org/bainbridge-survey/,Bainbridge Survey,['Katja Grace'],2014-12-29T18:50:48Z,blogs,, 103439,https://intelligence.org/2013/06/07/miris-july-2013-workshop/,MIRI’s July 2013 Workshop,['Luke Muehlhauser'],2013-06-07T22:15:07Z,blogs,, 103453,https://blog.eleuther.ai/announcing-20b/,Announcing GPT-NeoX-20B,['Connor Leahy'],2022-02-02T00:00:00Z,blogs,, 103469,https://aiimpacts.org/cost-of-teps/,The cost of TEPS,['Katja Grace'],2015-03-21T22:53:27Z,blogs,, 103478,https://generative.ink/posts/the-internet-mirrored-by-gpt-3/,"The Internet, mirrored by GPT-3",['janus'],2021-01-23T00:00:00Z,blogs,, 103494,https://aiimpacts.org/historic-trends-in-bridge-span-length/,Historic trends in bridge span length,['Katja Grace'],2020-02-08T02:39:02Z,blogs,, 103510,https://carado.moe/instrumentality-alienating.html,"to me, it's instrumentality that is alienating",['Tamsin Leake'],2023-01-27T00:00:00Z,blogs,, 103521,https://intelligence.org/2017/04/30/2017-updates-and-strategy/,2017 Updates and Strategy,['Rob Bensinger'],2017-05-01T03:42:46Z,blogs,, 103543,https://aiimpacts.org/michie-and-overoptimism/,Michie and overoptimism,['Katja Grace'],2015-01-13T01:24:59Z,blogs,, 103552,https://www.deepmind.com/blog/in-conversation-with-ai-building-better-language-models,In conversation with AI: building better language models,['Atoosa Kasirzadeh and Iason Gabriel'],2022-09-06T00:00:00Z,blogs,, 103570,https://jsteinhardt.wordpress.com/2016/08/25/two-strange-facts/,Two Strange Facts,['jsteinhardt'],2016-08-25T06:50:00Z,blogs,, 103585,https://intelligence.org/2014/11/01/miris-november-newsletter/,MIRI’s November Newsletter,['Jake'],2014-11-02T03:00:52Z,blogs,,
103600,https://intelligence.org/2010/12/21/announcing-the-tallinn-evans-125000-singularity-holiday-challenge/,"Announcing the Tallinn-Evans $125,000 Singularity Challenge",['Louie Helm'],2010-12-21T23:26:54Z,blogs,, 103609,https://aiimpacts.org/cost-of-human-level-information-storage/,Cost of human-level information storage,['Katja Grace'],2015-07-23T20:33:08Z,blogs,, 103618,https://carado.moe/value.html,Value and Earning,['Tamsin Leake'],2021-03-31T23:00:00Z,blogs,, 103627,https://aiimpacts.org/what-if-you-turned-the-worlds-hardware-into-ai-minds/,What if you turned the world’s hardware into AI minds?,['Katja Grace'],2016-09-04T19:09:21Z,blogs,, 103645,https://vkrakovna.wordpress.com/2020/07/05/tradeoff-between-desirable-properties-for-baseline-choices-in-impact-measures/,Tradeoff between desirable properties for baseline choices in impact measures,['Victoria Krakovna'],2020-07-05T17:40:53Z,blogs,, 103665,https://aiimpacts.org/counterarguments-to-the-basic-ai-x-risk-case/,Counterarguments to the basic AI x-risk case,['Katja Grace'],2022-10-14T12:58:58Z,blogs,, 103692,https://jsteinhardt.wordpress.com/2015/09/07/maximal-maximum-entropy-sets/,Maximal Maximum-Entropy Sets,['jsteinhardt'],2015-09-07T19:33:41Z,blogs,, 103701,https://intelligence.org/2014/06/11/mid-2014-strategic-plan/,Our mid-2014 strategic plan,['Luke Muehlhauser'],2014-06-11T17:29:50Z,blogs,, 103732,https://vkrakovna.wordpress.com/2016/12/28/ai-safety-highlights-from-nips-2016/,AI Safety Highlights from NIPS 2016,['Victoria Krakovna'],2016-12-28T18:04:04Z,blogs,, 103769,https://aiimpacts.org/a-summary-of-ai-surveys/,A summary of AI surveys,['Katja Grace'],2015-01-10T23:19:17Z,blogs,, 103778,https://intelligence.org/2014/05/23/sandor-veres/,Sandor Veres on autonomous agents,['Luke Muehlhauser'],2014-05-23T11:00:06Z,blogs,, 103797,https://deepmindsafetyresearch.medium.com/what-mechanisms-drive-agent-behaviour-e7b8d9aee88,What mechanisms drive agent behaviour?,['DeepMind Safety Research'],2021-03-05T00:00:00Z,blogs,, 103812,https://aiimpacts.org/agi-in-a-vulnerable-world/,AGI in a vulnerable world,['Asya Bergal'],2020-03-26T00:05:46Z,blogs,, 103834,https://vkrakovna.wordpress.com/2016/09/30/looking-back-at-my-grad-school-journey/,Looking back at my grad school journey,['Victoria Krakovna'],2016-09-30T05:03:21Z,blogs,, 103849,https://intelligence.org/2013/09/12/how-well-will-policy-makers-handle-agi-initial-findings/,How well will policy-makers handle AGI? (initial findings),['Luke Muehlhauser'],2013-09-12T07:17:44Z,blogs,,
103870,https://generative.ink/posts/hitl-thought-experiment/,HITL thought experiment,['janus'],2020-10-16T00:00:00Z,blogs,, 103891,https://intelligence.org/2011/09/15/interview-with-new-singularity-institute-research-fellow-luke-muehlhuaser-september-2011/,Interview with New MIRI Research Fellow Luke Muehlhauser,['Louie Helm'],2011-09-15T10:17:50Z,blogs,, 103921,https://carado.moe/emergency-unaligned-ai-goals.html,goals for emergency unaligned AI,['Tamsin Leake'],2022-03-22T00:00:00Z,blogs,, 103936,https://carado.moe/cultural-and-memetic-hygiene.html,Cultural and Memetic Hygiene,['Tamsin Leake'],2021-03-28T00:00:00Z,blogs,, 103957,https://www.deepmind.com/blog/learning-to-segment-actions-from-observation-and-narration,Learning to Segment Actions from Observation and Narration,"['Daniel Fried*', 'Jean-Baptiste Alayrac', 'Phil Blunsom', 'Chris Dyer', 'Stephen Clark', 'Aida Nematzadeh']",2020-05-07T00:00:00Z,blogs,, 103972,https://intelligence.org/2013/10/03/proofs/,"Mathematical Proofs Improve But Don’t Guarantee Security, Safety, and Friendliness",['Luke Muehlhauser'],2013-10-03T19:33:10Z,blogs,, 103992,https://deepmindsafetyresearch.medium.com/goal-misgeneralisation-why-correct-specifications-arent-enough-for-correct-goals-cf96ebc60924,Goal Misgeneralisation: Why Correct Specifications Aren’t Enough For Correct Goals,['DeepMind Safety Research'],2022-10-07T00:00:00Z,blogs,, 104018,https://aiimpacts.org/trends-in-the-cost-of-computing/,Trends in the cost of computing,['Katja Grace'],2015-03-10T22:15:38Z,blogs,, 104030,https://aiimpacts.org/reinterpreting-ai-and-compute/,Reinterpreting “AI and Compute”,['Justis Mills'],2018-12-18T22:51:50Z,blogs,, 104045,https://carado.moe/timeline-codes.html,AI alignment timeline codes,['Tamsin Leake'],2021-07-17T23:00:00Z,blogs,, 104063,https://vkrakovna.wordpress.com/2019/12/20/retrospective-on-the-specification-gaming-examples-list/,Retrospective on the specification gaming examples list,['Victoria Krakovna'],2019-12-20T16:58:11Z,blogs,, 104085,https://intelligence.org/2017/09/24/september-2017-newsletter/,September 2017 Newsletter,['Rob Bensinger'],2017-09-25T05:14:16Z,blogs,, 104123,https://aiimpacts.org/mysteries-of-global-hardware/,Mysteries of global hardware,['Katja Grace'],2016-03-08T00:45:24Z,blogs,, 104139,https://intelligence.org/2015/03/18/new-report-introduction-lobs-theorem-miri-research/,New report: “An Introduction to Löb’s Theorem in MIRI Research”,['Luke Muehlhauser'],2015-03-19T03:35:03Z,blogs,, 104148,https://intelligence.org/2014/04/09/paulo-tabuada/,Paulo Tabuada on program synthesis for cyber-physical systems,['Luke Muehlhauser'],2014-04-09T20:42:42Z,blogs,, 104166,https://carado.moe/all-claw-no-world.html,"all claw, no world — and other thoughts on the universal distribution",['Tamsin Leake'],2022-12-14T00:00:00Z,blogs,, 104185,https://carado.moe/question-answer-counterfactual-intervals.html,QACI: question-answer counterfactual intervals,['Tamsin Leake'],2022-10-23T23:00:00Z,blogs,, 104208,https://newsletter.mlsafety.org/p/ml-safety-newsletter-6,ML Safety Newsletter #6,['Dan Hendrycks'],2022-10-13T14:00:56Z,blogs,, 104252,https://carado.moe/gpt-dangerous-useful.html,GPT is dangerous because it is useful at all,['Tamsin Leake'],2023-02-11T00:00:00Z,blogs,, 104269,https://intelligence.org/2015/09/30/new-paper-asymptotic-logical-uncertainty-and-the-benford-test/,New paper: “Asymptotic logical uncertainty and the Benford test”,['Rob Bensinger'],2015-10-01T02:07:44Z,blogs,,
104279,https://intelligence.org/2018/06/27/forecasting-using-incomplete-models/,New paper: “Forecasting using incomplete models”,['Rob Bensinger'],2018-06-27T23:48:57Z,blogs,, 104288,https://intelligence.org/2018/11/22/2018-update-our-new-research-directions/,2018 Update: Our New Research Directions,['Nate Soares'],2018-11-22T23:27:01Z,blogs,, 104306,https://www.cold-takes.com/investigating-musical-genius-by-listening-to-the-beach-boys-a-lot/,Investigating musical genius by listening to the Beach Boys a lot,['Holden Karnofsky'],2022-02-03T00:00:00Z,blogs,, 104315,https://newsletter.mlsafety.org/p/ml-safety-newsletter-2,ML Safety Newsletter #2,['Dan Hendrycks'],2021-12-09T16:40:04Z,blogs,, 104342,https://aisafety.camp/2018/06/05/aisc-1-research-summaries/,AISC 1: Research Summaries,['Johannes'],2018-06-05T11:56:43Z,blogs,, 104377,https://aiimpacts.org/the-public-supports-regulating-ai-for-safety/,The public supports regulating AI for safety,['Zach Stein-Perlman'],2023-02-17T04:00:53Z,blogs,, 104387,https://carado.moe/capabilities-away-great-problem.html,being only polynomial capabilities away from alignment: what a great problem to have that would be!,['Tamsin Leake'],2022-12-22T00:00:00Z,blogs,, 104398,https://intelligence.org/2018/11/08/embedded-curiosities/,Embedded Curiosities,['Abram Demski'],2018-11-08T17:31:41Z,blogs,, 104411,https://carado.moe/unoptimal-superint-doesnt-lose.html,unoptimal superintelligence doesn't lose,['Tamsin Leake'],2021-12-09T00:00:00Z,blogs,, 104420,https://carado.moe/canonical-bit-varints.html,A canonical bit-encoding for ranged integers,['Tamsin Leake'],2021-01-14T00:00:00Z,blogs,, 104429,https://intelligence.org/2018/06/23/june-2018-newsletter/,June 2018 Newsletter,['Rob Bensinger'],2018-06-23T23:36:32Z,blogs,, 104440,https://intelligence.org/2015/01/15/new-report-vingean-reflection-reliable-reasoning-self-improving-agents/,New report: “Vingean Reflection: Reliable Reasoning for Self-Improving Agents”,['Luke Muehlhauser'],2015-01-15T23:31:01Z,blogs,, 104451,https://vkrakovna.wordpress.com/2015/05/17/hamming-questions-and-bottlenecks/,Hamming questions and bottlenecks,['Victoria Krakovna'],2015-05-17T17:35:53Z,blogs,, 104475,https://intelligence.org/2014/04/24/domitilla-del-vecchio/,Domitilla del Vecchio on hybrid control for autonomous vehicles,['Luke Muehlhauser'],2014-04-25T00:00:06Z,blogs,, 104500,https://intelligence.org/2023/03/22/truth-and-advantage-response-to-a-draft-of-ai-safety-seems-hard-to-measure/,Truth and Advantage: Response to a draft of “AI safety seems hard to measure”,['Nate Soares'],2023-03-23T03:27:11Z,blogs,, 104516,https://intelligence.org/2015/11/26/new-paper-formalizing-convergent-instrumental-goals/,New paper: “Formalizing convergent instrumental goals”,['Rob Bensinger'],2015-11-26T18:48:08Z,blogs,, 104537,https://vkrakovna.wordpress.com/2023/01/06/2022-23-new-year-review/,2022-23 New Year review,['Victoria Krakovna'],2023-01-06T17:58:07Z,blogs,, 104554,https://www.cold-takes.com/no-need-to-click/,No need to click,['Holden Karnofsky'],2021-12-02T00:00:00Z,blogs,, 104564,https://vkrakovna.wordpress.com/2021/11/11/reflections-on-the-first-year-of-parenting/,Reflections on the first year of parenting,['Victoria Krakovna'],2021-11-11T16:02:19Z,blogs,, 104605,https://www.cold-takes.com/candidate-for-highest-stakes-question-of-the-next-several-months-rare-hot-take/,Candidate for “highest-stakes question of the next several months” (rare hot take),['Holden Karnofsky'],2021-12-06T00:00:00Z,blogs,,
104624,https://intelligence.org/2021/11/25/christiano-cotra-and-yudkowsky-on-ai-progress/,"Christiano, Cotra, and Yudkowsky on AI progress",['Rob Bensinger'],2021-11-25T18:33:22Z,blogs,, 104658,https://intelligence.org/2014/01/05/donor-story-1-giving-after-critique/,Donor Story #1: Noticing Inferential Distance,['Luke Muehlhauser'],2014-01-06T01:09:09Z,blogs,, 104672,https://carado.moe/essential-inequality-vs-functional-inequivalence.html,essential inequality vs functional inequivalence,['Tamsin Leake'],2022-08-14T23:00:00Z,blogs,, 104689,https://intelligence.org/2014/12/01/2014-winter-matching-challenge/,2014 Winter Matching Challenge!,['Luke Muehlhauser'],2014-12-02T03:16:17Z,blogs,, 104699,https://jsteinhardt.wordpress.com/2014/01/05/another-critique-of-effective-altruism/,Another Critique of Effective Altruism,['jsteinhardt'],2014-01-05T09:41:20Z,blogs,, 104722,https://intelligence.org/2016/07/05/july-2016-newsletter/,July 2016 Newsletter,['Rob Bensinger'],2016-07-06T13:17:18Z,blogs,, 104749,https://aiimpacts.org/historic-trends-in-long-range-military-payload-delivery/,Historic trends in long-range military payload delivery,['Katja Grace'],2020-02-08T02:39:27Z,blogs,, 104759,https://intelligence.org/2013/04/29/intelligence-explosion-microeconomics-released/,“Intelligence Explosion Microeconomics” Released,['Luke Muehlhauser'],2013-04-29T21:28:24Z,blogs,, 104769,https://carado.moe/implement-free-will.html,should we implement free will?,['Tamsin Leake'],2022-03-29T23:00:00Z,blogs,, 104779,https://intelligence.org/2014/04/10/miris-april-2014-newsletter/,MIRI’s April 2014 Newsletter,['Luke Muehlhauser'],2014-04-11T06:00:04Z,blogs,, 104796,https://www.cold-takes.com/more-on-multiple-world-size-economies-per-atom/,More on “multiple world-size economies per atom”,['Holden Karnofsky'],2021-08-27T00:00:00Z,blogs,, 104806,https://intelligence.org/2016/02/06/february-2016-newsletter/,February 2016 Newsletter,['Rob Bensinger'],2016-02-06T12:37:48Z,blogs,, 104834,https://carado.moe/alignment-optimization-processes.html,alignment is an optimization processes problem,['Tamsin Leake'],2021-10-22T23:00:00Z,blogs,, 104861,https://aiimpacts.org/when-do-ml-researchers-think-specific-tasks-will-be-automated/,When do ML Researchers Think Specific Tasks will be Automated?,['Katja Grace'],2017-09-26T22:33:51Z,blogs,, 104876,https://www.deepmind.com/blog/unsupervised-deep-learning-identifies-semantic-disentanglement-in-single-inferotemporal-face-patch-neurons,Unsupervised deep learning identifies semantic disentanglement in single inferotemporal face patch neurons,"['Irina Higgins', 'L Chang*', 'V Langston', 'Demis Hassabis', 'Christopher Summerfield', 'Doris Tsao*', 'Matt Botvinick']",2021-11-09T00:00:00Z,blogs,, 104890,https://intelligence.org/2021/11/18/ngo-and-yudkowsky-on-ai-capability-gains/,Ngo and Yudkowsky on AI capability gains,['Rob Bensinger'],2021-11-19T01:53:23Z,blogs,, 104913,https://intelligence.org/2014/04/23/dave-doty/,Dave Doty on algorithmic self-assembly,['Luke Muehlhauser'],2014-04-23T21:24:14Z,blogs,, 104937,https://aiimpacts.org/energy-efficiency-of-boeing-747-400/,Energy efficiency of Boeing 747-400,['Katja Grace'],2020-11-06T05:10:05Z,blogs,, 104948,https://carado.moe/moral-patient-term.html,"let's stick with the term ""moral patient""",['Tamsin Leake'],2022-11-20T00:00:00Z,blogs,, 104957,https://carado.moe/ai-capability-risk-biases.html,cognitive biases regarding the evaluation of AI risk when doing AI capabilities work,['Tamsin Leake'],2022-05-13T23:00:00Z,blogs,, 
104987,https://aiimpacts.org/are-human-engineered-flight-designs-better-or-worse-than-natural-ones/,How energy efficient are human-engineered flight designs relative to natural ones?,['Katja Grace'],2020-12-10T22:48:00Z,blogs,, 105001,https://www.cold-takes.com/assorted-cold-ish-links/,Assorted cold-ish links,['Holden Karnofsky'],2022-01-14T00:00:00Z,blogs,, 105025,https://intelligence.org/2015/06/30/new-report-the-asilomar-conference-a-case-study-in-risk-mitigation/,New report: “The Asilomar Conference: A Case Study in Risk Mitigation”,['Rob Bensinger'],2015-07-01T04:49:44Z,blogs,, 105036,https://importai.substack.com/p/import-ai-322-huaweis-trillion-parameter,Import AI 322: Huawei's trillion parameter model; AI systems as moral patients; parasocial bots via Character.ai,['Jack Clark'],2023-03-27T11:30:54Z,blogs,, 105069,https://deepmindsafetyresearch.medium.com/discovering-when-an-agent-is-present-in-a-system-41154de11e7b,Discovering when an agent is present in a system,['DeepMind Safety Research'],2022-08-25T00:00:00Z,blogs,, 105081,https://aiimpacts.org/why-work-at-ai-impacts/,Why work at AI Impacts?,['Katja Grace'],2022-03-06T22:07:24Z,blogs,, 105097,https://vkrakovna.wordpress.com/2020/01/09/2019-20-new-year-review/,2019-20 New Year review,['Victoria Krakovna'],2020-01-09T01:01:55Z,blogs,, 105119,https://carado.moe/ethic-juice-anthropic-juice.html,ethics juice and anthropic juice,['Tamsin Leake'],2022-09-06T23:00:00Z,blogs,, 105128,https://aiimpacts.org/how-popular-is-chatgpt-part-2-slower-growth-than-pokemon-go/,How popular is ChatGPT? Part 2: slower growth than Pokémon GO,['richardkorzekwa'],2023-03-03T23:36:46Z,blogs,, 105142,https://jsteinhardt.wordpress.com/2017/02/07/model-mis-specification-and-inverse-reinforcement-learning/,Model Mis-specification and Inverse Reinforcement Learning,['jsteinhardt'],2017-02-07T21:25:15Z,blogs,, 105169,https://aiimpacts.org/klein-agi-survey/,Klein AGI Survey,['Katja Grace'],2014-12-29T18:47:11Z,blogs,, 105179,https://intelligence.org/2013/01/30/yudkowsky-on-what-can-we-do-now/,Yudkowsky on “What can we do now?”,['staff'],2013-01-30T15:21:18Z,blogs,, 105200,https://www.cold-takes.com/the-bayesian-mindset/,Bayesian Mindset,['Holden Karnofsky'],2021-12-21T00:00:00Z,blogs,, 105222,https://aiimpacts.org/what-weve-learned-so-far-from-our-technological-temptations-project/,What we’ve learned so far from our technological temptations project,['richardkorzekwa'],2023-04-14T00:04:40Z,blogs,, 105244,https://www.cold-takes.com/useful-vices-for-wicked-problems/,Useful Vices for Wicked Problems,['Holden Karnofsky'],2022-04-12T00:00:00Z,blogs,, 105266,https://aiimpacts.org/superintelligence-is-not-omniscience/,Superintelligence Is Not Omniscience,['Jeffrey Heninger'],2023-04-07T16:25:58Z,blogs,, 105287,https://carado.moe/generalized-values-testing-patterns.html,generalized values: testing for patterns in computation,['Tamsin Leake'],2022-07-01T23:00:00Z,blogs,, 105303,https://intelligence.org/2015/08/28/ai-and-effective-altruism/,AI and Effective Altruism,['Rob Bensinger'],2015-08-28T23:42:40Z,blogs,, 105313,https://vkrakovna.wordpress.com/2021/01/03/2020-21-new-year-review/,2020-21 New Year review,['Victoria Krakovna'],2021-01-03T15:33:20Z,blogs,, 105337,https://carado.moe/outer-alignment-politics-philosophy.html,outer alignment: politics & philosophy,['Tamsin Leake'],2022-06-09T23:00:00Z,blogs,, 105349,https://carado.moe/degrees-of-runtime-metaprogrammability.html,degrees of runtime metaprogrammability,['Tamsin Leake'],2021-06-24T23:00:00Z,blogs,, 
105365,https://www.cold-takes.com/cost-disease-and-civilizational-decline/,Cost disease and civilizational decline,['Holden Karnofsky'],2022-01-27T00:00:00Z,blogs,, 105391,https://intelligence.org/2014/08/22/new-paper-exploratory-engineering-artificial-intelligence/,New paper: “Exploratory engineering in artificial intelligence”,['Luke Muehlhauser'],2014-08-23T04:04:59Z,blogs,, 105405,https://www.cold-takes.com/a-few-quick-links-re-covid-19-delta/,A few quick links re: COVID-19/Delta,['Holden Karnofsky'],2021-08-12T00:00:00Z,blogs,, 105435,https://carado.moe/categories-of-knowledge.html,categories of knowledge representation,['Tamsin Leake'],2021-06-17T23:00:00Z,blogs,, 105444,https://intelligence.org/2013/10/12/miris-october-newsletter/,MIRI’s October Newsletter,['Luke Muehlhauser'],2013-10-12T21:24:44Z,blogs,, 105467,https://intelligence.org/2013/01/09/january-2013-newsletter/,January 2013 Newsletter,['staff'],2013-01-09T18:06:54Z,blogs,, 105478,https://carado.moe/love-not-competition.html,"love, not competition",['Tamsin Leake'],2022-10-29T23:00:00Z,blogs,, 105501,https://intelligence.org/2013/12/01/new-paper-predicting-agi-what-can-we-say-when-we-know-so-little/,New Paper: “Predicting AGI: What can we say when we know so little?”,['Luke Muehlhauser'],2013-12-01T14:00:04Z,blogs,, 105513,https://intelligence.org/2015/11/03/november-2015-newsletter/,November 2015 Newsletter,['Rob Bensinger'],2015-11-04T02:33:11Z,blogs,, 105542,https://generative.ink/posts/list-sorting-does-not-play-well-with-few-shot/,List sorting does not play well with few-shot,['janus'],2021-02-27T00:00:00Z,blogs,, 105554,https://intelligence.org/2014/02/08/2013-in-review-strategic-and-expository-research/,2013 in Review: Strategic and Expository Research,['Luke Muehlhauser'],2014-02-08T17:24:11Z,blogs,, 105585,https://intelligence.org/2019/03/14/march-2019-newsletter/,March 2019 Newsletter,['Rob Bensinger'],2019-03-15T03:20:18Z,blogs,, 105609,https://www.cold-takes.com/bet-with-zvi-about-omicron/,Bet with Zvi about Omicron,['Holden Karnofsky'],2021-12-22T00:00:00Z,blogs,, 105629,https://intelligence.org/2023/03/21/deep-deceptiveness/,Deep Deceptiveness,['Nate Soares'],2023-03-21T16:36:54Z,blogs,, 105649,https://www.deepmind.com/blog/red-teaming-language-models-with-language-models,Red Teaming Language Models with Language Models,"['Ethan Perez', 'Saffron Huang', 'Francis Song', 'Trevor Cai', 'Roman Ring', 'John Aslanides', 'Amelia Glaese', 'Nat McAleese', 'Geoffrey Irving']",2022-02-07T00:00:00Z,blogs,, 105663,https://carado.moe/12-rules-for-life.html,Book Review: 12 Rules For Life,['Tamsin Leake'],2020-03-30T23:00:00Z,blogs,, 105679,https://aiimpacts.org/historic-trends-in-transatlantic-message-speed/,Historic trends in transatlantic message speed,['Katja Grace'],2020-02-08T02:39:47Z,blogs,, 105692,https://carado.moe/quantum-amplitude-deduplication.html,the quantum amplitude argument against ethics deduplication,['Tamsin Leake'],2023-03-12T00:00:00Z,blogs,, 105706,https://vkrakovna.wordpress.com/2016/04/15/using-humility-to-counteract-shame/,Using humility to counteract shame,['Victoria Krakovna'],2016-04-15T18:23:38Z,blogs,, 105717,https://aisafety.camp/2021/06/23/aisc5-research-summaries/,AISC5: Research Summaries,['Remmelt Ellen'],2021-06-23T13:52:23Z,blogs,, 105756,https://intelligence.org/2014/04/28/david-j-atkinson/,David J. Atkinson on autonomous systems,['Luke Muehlhauser'],2014-04-28T11:00:38Z,blogs,,
105776,https://aiimpacts.org/making-or-breaking-a-thinking-machine/,Making or breaking a thinking machine,['Katja Grace'],2015-01-18T20:59:27Z,blogs,, 105786,https://intelligence.org/2014/01/20/2013-in-review-outreach/,2013 in Review: Outreach,['Luke Muehlhauser'],2014-01-20T19:11:22Z,blogs,, 105808,https://intelligence.org/2021/03/02/february-2021-newsletter/,February 2021 Newsletter,['Rob Bensinger'],2021-03-02T20:21:01Z,blogs,, 105831,https://intelligence.org/2014/04/22/suzana-herculano-houzel/,Suzana Herculano-Houzel on cognitive ability and brain size,['Luke Muehlhauser'],2014-04-22T23:08:14Z,blogs,, 105851,https://vkrakovna.wordpress.com/2016/06/22/new-ai-safety-research-agenda-from-google-brain/,New AI safety research agenda from Google Brain,['Victoria Krakovna'],2016-06-22T19:19:01Z,blogs,, 105888,https://intelligence.org/2018/11/26/november-2018-newsletter/,November 2018 Newsletter,['Rob Bensinger'],2018-11-27T05:46:17Z,blogs,, 105934,https://intelligence.org/2017/03/15/march-2017-newsletter/,March 2017 Newsletter,['Rob Bensinger'],2017-03-16T02:59:13Z,blogs,, 105955,https://intelligence.org/2015/03/22/2014-review/,2014 in review,['Luke Muehlhauser'],2015-03-22T21:19:35Z,blogs,, 105977,https://aiimpacts.org/conversation-with-ernie-davis/,Conversation with Ernie Davis,['Rob Long'],2019-08-23T23:35:20Z,blogs,, 106021,https://www.cold-takes.com/has-violence-declined-when-we-include-the-world-wars-and-other-major-atrocities/,"Falling everyday violence, bigger wars and atrocities: how do they net out?",['Holden Karnofsky'],2021-11-16T00:00:00Z,blogs,, 106036,https://intelligence.org/2012/02/05/singularity-institute-progress-report-january-2012/,"Machine Intelligence Research Institute Progress Report, January 2012",['Louie Helm'],2012-02-05T22:06:40Z,blogs,, 106063,https://aiimpacts.org/index-of-hardware-articles/,Index of articles about hardware,['Katja Grace'],2015-07-26T17:38:34Z,blogs,, 106088,https://intelligence.org/2017/01/25/negotiable-rll/,New paper: “Toward negotiable reinforcement learning”,['Rob Bensinger'],2017-01-26T04:11:25Z,blogs,, 106104,https://openai.com/research/teaching-models-to-express-their-uncertainty-in-words,Teaching models to express their uncertainty in words,['OpenAI Research'],2022-05-28T00:00:00Z,blogs,, 106127,https://www.cold-takes.com/defending-one-dimensional-ethics/,Defending One-Dimensional Ethics,['Holden Karnofsky'],2022-02-15T00:00:00Z,blogs,, 106142,https://intelligence.org/2013/10/18/richard-posner-on-ai-dangers/,Richard Posner on AI Dangers,['Luke Muehlhauser'],2013-10-18T07:23:51Z,blogs,, 106156,https://intelligence.org/2022/06/10/agi-ruin/,AGI Ruin: A List of Lethalities,['Eliezer Yudkowsky'],2022-06-11T04:07:22Z,blogs,, 106176,https://aiimpacts.org/energy-efficiency-of-vickers-vimy-plane/,Energy efficiency of Vickers Vimy plane,['Katja Grace'],2020-11-05T20:54:31Z,blogs,, 106189,https://intelligence.org/2015/12/11/openai-and-other-news/,OpenAI and other news,['Nate Soares'],2015-12-12T06:50:16Z,blogs,, 106207,https://blog.eleuther.ai/minetester-intro/,Minetester: A fully open RL environment built on Minetest,"['Curtis Huebner', 'Robert Klassert', 'Stepan Shabalin', 'Edwin Fennell', 'Delta Hessler']",2023-07-08T00:00:00Z,blogs,, 106239,https://intelligence.org/2013/11/27/new-paper-racing-to-the-precipice/,New Paper: “Racing to the Precipice”,['Luke Muehlhauser'],2013-11-27T12:58:18Z,blogs,,
106259,https://aiimpacts.org/wikipedia-history-of-gflops-costs/,Wikipedia history of GFLOPS costs,['Katja Grace'],2015-03-11T01:58:46Z,blogs,, 106270,https://intelligence.org/2015/08/02/august-2015-newsletter/,August 2015 Newsletter,['Rob Bensinger'],2015-08-02T18:19:24Z,blogs,, 106292,https://aiimpacts.org/guide-to-pages-on-ai-timeline-predictions/,Guide to pages on AI timeline predictions,['Katja Grace'],2017-04-07T07:35:45Z,blogs,, 106315,https://intelligence.org/2016/05/04/announcing-a-new-research-program/,A new MIRI research program with a machine learning focus,['admin'],2016-05-05T06:53:31Z,blogs,, 106348,https://aiimpacts.org/product-safety-is-a-poor-model-for-ai-governance/,Product safety is a poor model for AI governance,['richardkorzekwa'],2023-02-01T22:38:03Z,blogs,, 106363,https://www.cold-takes.com/misc-thematic-links/,Misc thematic links,['Holden Karnofsky'],2022-02-18T00:00:00Z,blogs,, 106374,https://intelligence.org/2015/12/01/miri-2015-winter-fundraiser/,MIRI’s 2015 Winter Fundraiser!,['Nate Soares'],2015-12-02T00:24:15Z,blogs,, 106407,https://aiimpacts.org/we-dont-trade-with-ants/,We don’t trade with ants,['Katja Grace'],2023-01-10T23:52:13Z,blogs,, 106418,https://intelligence.org/2022/03/01/february-2022-newsletter/,February 2022 Newsletter,['Rob Bensinger'],2022-03-01T09:59:24Z,blogs,, 106445,https://aiimpacts.org/three-kinds-of-competitiveness/,Three kinds of competitiveness,['Daniel Kokotajlo'],2020-03-31T00:55:04Z,blogs,, 106459,https://carado.moe/socialism-conspiracy.html,Socialism as a conspiracy theory,['Tamsin Leake'],2020-10-03T23:00:00Z,blogs,, 106469,https://intelligence.org/2014/03/02/armando-tacchella/,Armando Tacchella on Safety in Future AI Systems,['Luke Muehlhauser'],2014-03-02T22:04:41Z,blogs,, 106499,https://blog.eleuther.ai/trlx-exploratory-analysis/,Exploratory Analysis of TRLX RLHF Transformers with TransformerLens,['Curt Tigges'],2023-04-02T00:00:00Z,blogs,, 106514,https://intelligence.org/2020/06/08/june-2020-newsletter/,June 2020 Newsletter,['Rob Bensinger'],2020-06-09T01:40:53Z,blogs,, 106543,https://importai.substack.com/p/import-ai-335-synth-data-is-a-bad,Import AI 335: Synth data is a bad AI drug; Facebook changes the internet with LLaMa release; and Chinese researchers use AI to figure out chip design,['Jack Clark'],2023-07-31T12:50:09Z,blogs,, 106580,https://intelligence.org/2014/05/12/exponential-and-non-exponential/,Exponential and non-exponential trends in information technology,['Luke Muehlhauser'],2014-05-12T08:18:08Z,blogs,, 106597,https://carado.moe/logical-indexical-dignity.html,logical vs indexical dignity,['Tamsin Leake'],2022-11-19T00:00:00Z,blogs,, 106606,https://intelligence.org/2016/10/20/white-house-submissions-and-report-on-ai-safety/,White House submissions and report on AI safety,['Rob Bensinger'],2016-10-21T01:50:43Z,blogs,, 106645,https://www.cold-takes.com/has-life-gotten-better-the-post-industrial-era/,Has life gotten better?: the post-industrial era,['Holden Karnofsky'],2021-10-12T00:00:00Z,blogs,, 106667,https://intelligence.org/2014/02/11/gerwin-klein-on-formal-methods/,Gerwin Klein on Formal Methods,['Luke Muehlhauser'],2014-02-11T16:00:50Z,blogs,, 106702,https://intelligence.org/2009/02/16/introducing-myself/,Introducing Myself,['Michael Vassar'],2009-02-16T18:04:43Z,blogs,, 106731,https://carado.moe/everything-is-okay.html,everything is okay,['Tamsin Leake'],2022-08-20T23:00:00Z,blogs,, 106745,https://intelligence.org/2018/01/28/january-2018-newsletter/,January 2018 Newsletter,['Rob Bensinger'],2018-01-28T20:25:59Z,blogs,,
106771,https://aiimpacts.org/december-2022-updates-and-fundraising/,December 2022 updates and fundraising,['Katja Grace'],2022-12-22T17:11:31Z,blogs,, 106792,https://aiimpacts.org/why-do-agi-researchers-expect-ai-so-soon/,Why do AGI researchers expect AI so soon?,['Katja Grace'],2015-05-25T00:03:56Z,blogs,, 106811,https://aiimpacts.org/recently-at-ai-impacts/,Recently at AI Impacts,['Katja Grace'],2015-11-24T17:09:48Z,blogs,, 106836,https://vkrakovna.wordpress.com/2022/01/04/2021-22-new-year-review/,2021-22 New Year review,['Victoria Krakovna'],2022-01-04T22:38:55Z,blogs,, 106857,https://intelligence.org/2021/12/06/more-christiano-cotra-and-yudkowsky-on-ai-progress/,"More Christiano, Cotra, and Yudkowsky on AI progress",['Rob Bensinger'],2021-12-07T00:59:17Z,blogs,, 106872,https://www.cold-takes.com/how-digital-people-could-change-the-world/,Digital People Would Be An Even Bigger Deal,['Holden Karnofsky'],2021-07-27T00:00:00Z,blogs,, 106903,https://intelligence.org/2014/04/30/new-paper-the-errors-insights-and-lessons-of-famous-ai-predictions/,"New Paper: “The errors, insights, and lessons of famous AI predictions”",['Luke Muehlhauser'],2014-04-30T23:58:53Z,blogs,, 106920,https://aiimpacts.org/how-popular-is-chatgpt-part-1-more-popular-than-taylor-swift/,How popular is ChatGPT? Part 1: more popular than Taylor Swift,['Harlan Stewart'],2023-02-24T02:47:04Z,blogs,, 106937,https://intelligence.org/2016/04/21/two-new-papers-uniform/,New papers dividing logical uncertainty into two subproblems,['Nate Soares'],2016-04-21T16:17:03Z,blogs,, 106956,https://intelligence.org/2013/06/19/what-is-intelligence-2/,What is Intelligence?,['Luke Muehlhauser'],2013-06-19T19:59:12Z,blogs,, 106978,https://vkrakovna.wordpress.com/2015/01/11/2014-15-new-year-review/,2014-15 New Year review,['Victoria Krakovna'],2015-01-11T03:21:43Z,blogs,, 106993,https://carado.moe/values-complex-not-objective.html,your terminal values are complex and not objective,['Tamsin Leake'],2023-03-13T00:00:00Z,blogs,, 107014,https://carado.moe/generalized-wireheading.html,generalized wireheading,['Tamsin Leake'],2022-11-18T00:00:00Z,blogs,, 107029,https://www.yudkowsky.net/singularity/intro,5-Minute Singularity Intro,['Eliezer S. Yudkowsky'],2020-09-04T03:05:58Z,blogs,, 107045,https://intelligence.org/2014/04/09/diana-spears/,Diana Spears on the safety of adaptive agents,['Luke Muehlhauser'],2014-04-09T20:10:21Z,blogs,, 107085,https://aiimpacts.org/2022-expert-survey-on-progress-in-ai/,2022 Expert Survey on Progress in AI,['Katja Grace'],2022-08-04T13:25:21Z,blogs,, 107106,https://intelligence.org/2018/02/28/sam-harris-and-eliezer-yudkowsky/,Sam Harris and Eliezer Yudkowsky on “AI: Racing Toward the Brink”,['Rob Bensinger'],2018-03-01T01:19:11Z,blogs,, 107145,https://intelligence.org/2016/01/12/end-of-the-year-fundraiser-and-grant-successes/,End-of-the-year fundraiser and grant successes,['Nate Soares'],2016-01-12T22:40:18Z,blogs,, 107154,https://www.yudkowsky.net/other/artifacts,Artifacts,['Eliezer S. Yudkowsky'],2020-09-04T04:10:06Z,blogs,,
107163,https://intelligence.org/2021/01/27/january-2021-newsletter/,January 2021 Newsletter,['Rob Bensinger'],2021-01-27T21:04:42Z,blogs,, 107201,https://vkrakovna.wordpress.com/2015/12/24/highlights-and-impressions-from-nips-conference-on-machine-learning/,Highlights and impressions from NIPS conference on machine learning,['Victoria Krakovna'],2015-12-25T01:31:06Z,blogs,, 107235,https://intelligence.org/2013/05/24/sign-up-for-daggre-to-improve-science-technology-forecasting/,Sign up for DAGGRE to improve science & technology forecasting,['Luke Muehlhauser'],2013-05-25T01:24:32Z,blogs,, 107245,https://carado.moe/udassa-time-steps.html,making the UD and UDASSA less broken: identifying time steps,['Tamsin Leake'],2022-03-21T00:00:00Z,blogs,, 107260,https://carado.moe/post-words.html,Some post- words for the future,['Tamsin Leake'],2019-07-27T23:00:00Z,blogs,, 107275,https://carado.moe/free-will.html,"Real quick, on free will",['Tamsin Leake'],2020-10-03T23:00:00Z,blogs,, 107284,https://intelligence.org/2018/05/31/may-2018-newsletter/,May 2018 Newsletter,['Rob Bensinger'],2018-06-01T01:37:29Z,blogs,, 107299,https://intelligence.org/2021/11/29/visible-thoughts-project-and-bounty-announcement/,Visible Thoughts Project and Bounty Announcement,['Nate Soares'],2021-11-30T07:52:56Z,blogs,, 107322,https://www.cold-takes.com/why-it-matters-if-ideas-get-harder-to-find/,"Why it matters if ""ideas get harder to find""",['Holden Karnofsky'],2022-01-11T00:00:00Z,blogs,, 107341,https://aiimpacts.org/event-exercises-in-economic-futurism/,Event: Exercises in Economic Futurism,['Katja Grace'],2015-07-15T19:27:03Z,blogs,, 107350,https://carado.moe/outlook-ai-risk-mitigation.html,my current outlook on AI risk mitigation,['Tamsin Leake'],2022-10-02T23:00:00Z,blogs,, 107369,https://jsteinhardt.wordpress.com/2013/06/12/convexity-counterexample/,Convexity counterexample,['jsteinhardt'],2013-06-12T21:04:27Z,blogs,, 107381,https://www.deepmind.com/blog/active-offline-policy-selection,Active offline policy selection,"['Yutian Chen', 'Ksenia Konyushkova', 'Tom Paine', 'Caglar Gulcehre', 'Cosmin Paduraru', 'Daniel J. Mankowitz', 'Misha Denil', 'Nando de Freitas']",2022-05-06T00:00:00Z,blogs,, 107393,https://intelligence.org/2019/03/10/applications-are-open-for-msfp/,Applications are open for the MIRI Summer Fellows Program!,['Colm Ó Riain'],2019-03-11T04:58:57Z,blogs,, 107402,https://intelligence.org/2017/01/04/january-2017-newsletter/,January 2017 Newsletter,['Rob Bensinger'],2017-01-05T05:26:18Z,blogs,, 107433,https://intelligence.org/2013/04/25/singularity-hypotheses-published/,“Singularity Hypotheses” Published,['Luke Muehlhauser'],2013-04-25T03:43:00Z,blogs,, 107449,https://intelligence.org/2016/08/03/august-2016-newsletter/,August 2016 Newsletter,['Rob Bensinger'],2016-08-04T13:07:38Z,blogs,, 107465,https://www.yudkowsky.net/other/fiction/girl-intercorrupted,Girl Intercorrupted,['Eliezer S. Yudkowsky'],2020-09-04T04:13:26Z,blogs,,
107481,https://intelligence.org/2014/03/19/max-tegmark/,Max Tegmark on the mathematical universe,['Luke Muehlhauser'],2014-03-19T08:00:37Z,blogs,, 107503,https://intelligence.org/2012/11/26/once-again-a-reporter-thinks-our-positions-are-the-opposite-of-what-they-are/,"Once again, a reporter thinks our positions are the opposite of what they are",['Luke Muehlhauser'],2012-11-27T00:23:57Z,blogs,, 107517,https://intelligence.org/2020/04/01/march-2020-newsletter/,March 2020 Newsletter,['Rob Bensinger'],2020-04-01T16:28:35Z,blogs,, 107540,https://carado.moe/fuzzies-utils-check-getting-either.html,fuzzies & utils: check that you're getting either,['Tamsin Leake'],2023-02-12T00:00:00Z,blogs,, 107549,https://aiimpacts.org/a-policy-guaranteed-to-increase-ai-timelines/,A policy guaranteed to increase AI timelines,['richardkorzekwa'],2023-04-01T20:41:43Z,blogs,, 107559,https://generative.ink/posts/methods-of-prompt-programming/,Methods of prompt programming,['janus'],2021-01-12T00:00:00Z,blogs,, 107596,https://www.cold-takes.com/the-duplicator/,The Duplicator: Instant Cloning Would Make the World Economy Explode,['Holden Karnofsky'],2021-07-20T00:00:00Z,blogs,, 107613,https://www.cold-takes.com/ai-alignment-research-links/,AI alignment research links,['Holden Karnofsky'],2022-01-05T00:00:00Z,blogs,, 107637,https://aiimpacts.org/atari-early/,Atari early,['Katja Grace'],2020-04-02T06:02:18Z,blogs,, 107652,https://carado.moe/sharp-left-turn-what-wins-first.html,before the sharp left turn: what wins first?,['Tamsin Leake'],2023-03-06T00:00:00Z,blogs,, 107667,https://intelligence.org/2018/04/10/april-2018-newsletter/,April 2018 Newsletter,['Rob Bensinger'],2018-04-10T21:36:32Z,blogs,, 107688,https://aiimpacts.org/discontinuous-progress-in-history-an-update/,Discontinuous progress in history: an update,['Katja Grace'],2020-04-13T23:55:08Z,blogs,, 107699,https://intelligence.org/2014/04/10/new-report-botworld/,New Report: Botworld,['Luke Muehlhauser'],2014-04-11T02:43:31Z,blogs,, 107715,https://intelligence.org/2020/12/30/december-2020-newsletter/,December 2020 Newsletter,['Rob Bensinger'],2020-12-31T03:48:22Z,blogs,, 107737,https://intelligence.org/2014/04/25/may-6th-miri-participating-in-massive-24-hour-online-fundraiser/,Help MIRI in a Massive 24-Hour Fundraiser on May 6th,['Louie Helm'],2014-04-26T02:20:34Z,blogs,, 107746,https://intelligence.org/2016/11/11/post-fundraiser-update/,Post-fundraiser update,['Nate Soares'],2016-11-12T01:48:01Z,blogs,, 107764,https://intelligence.org/2016/09/16/miris-2016-fundraiser/,MIRI’s 2016 Fundraiser,['Nate Soares'],2016-09-16T07:14:17Z,blogs,, 107779,https://importai.substack.com/p/import-ai-329-compute-is-data-dont,Import AI 329: Compute IS data; don't build AI agents; AI needs a precautionary principle,['Jack Clark'],2023-05-15T13:02:02Z,blogs,, 107805,https://intelligence.org/2015/05/24/miri-related-talks-from-the-decision-theory-conference-at-cambridge-university/,MIRI-related talks from the decision theory conference at Cambridge University,['Luke Muehlhauser'],2015-05-24T15:51:02Z,blogs,, 107826,https://importai.substack.com/p/import-ai-336-financialized-ai-public,Import AI 336: Financialized AI; public and elite AI opinion; one million insects.,['Jack Clark'],2023-08-14T13:14:57Z,blogs,, 107860,https://importai.substack.com/p/import-ai-337-why-i-am-confused-about,Import AI 337: Why I am confused about AI; penguin dataset; and defending networks via RL with CYBERFORCE,['Jack Clark'],2023-08-21T13:13:19Z,blogs,,
107881,https://aiimpacts.org/energy-efficiency-of-the-spirit-of-butts-farm/,Energy efficiency of The Spirit of Butt’s Farm,['Katja Grace'],2020-11-18T23:53:25Z,blogs,, 107895,https://intelligence.org/2013/09/04/the-hanson-yudkowsky-ai-foom-debate-is-now-available-as-an-ebook/,The Hanson-Yudkowsky AI-Foom Debate is now available as an eBook!,['Alex Vermeer'],2013-09-04T19:59:11Z,blogs,, 107908,https://intelligence.org/2016/10/09/october-2016-newsletter/,October 2016 Newsletter,['Rob Bensinger'],2016-10-10T02:26:24Z,blogs,, 107919,https://www.gwern.net/Scaling-hypothesis.page,"""The Scaling Hypothesis""",['Gwern Branwen'],2022-01-02T00:00:00Z,blogs,, 107937,https://intelligence.org/2014/02/24/nik-weaver-on-paradoxes-of-rational-agency/,Nik Weaver on Paradoxes of Rational Agency,['Luke Muehlhauser'],2014-02-24T23:36:47Z,blogs,, 107958,https://www.deepmind.com/blog/rl-unplugged-benchmarks-for-offline-reinforcement-learning,RL Unplugged: Benchmarks for Offline Reinforcement Learning,"['Caglar Gülçehre', 'Ziyu Wang', 'Alexander Novikov', 'Tom Le Paine', 'Sergio Gómez Colmenarejo', 'K Zolna', 'Rishabh Agarwal*', 'Josh Merel', 'Daniel Mankowitz', 'Cosmin Paduraru', 'Gabriel Dulac-Arnold*', 'Jerry Li', 'Mohammad Norouzi *', 'Matt Hoffman', 'Ofir Nachum *', 'George Tucker *', 'Nicolas Heess', 'Nando de Freitas']",2020-06-24T00:00:00Z,blogs,, 107973,https://intelligence.org/2014/05/02/kasper-stoy/,Kasper Stoy on self-reconfigurable robots,['Luke Muehlhauser'],2014-05-02T12:00:34Z,blogs,, 107994,https://intelligence.org/2016/09/03/september-2016-newsletter/,September 2016 Newsletter,['Rob Bensinger'],2016-09-04T07:26:51Z,blogs,, 108008,https://intelligence.org/2018/10/03/rocket-alignment/,The Rocket Alignment Problem,['Eliezer Yudkowsky'],2018-10-03T22:28:00Z,blogs,, 108018,https://intelligence.org/2014/04/12/jonathan-millen/,Jonathan Millen on covert channel communication,['Luke Muehlhauser'],2014-04-12T08:00:08Z,blogs,, 108056,https://aiimpacts.org/effect-of-marginal-hardware-on-artificial-general-intelligence/,Effect of marginal hardware on artificial general intelligence,['Katja Grace'],2017-12-29T03:35:17Z,blogs,, 108067,https://carado.moe/limiting-real-universes.html,Limiting Real Universes,['Tamsin Leake'],2020-04-26T23:00:00Z,blogs,, 108081,https://www.cold-takes.com/weak-point-in-most-important-century-lock-in/,Weak point in “most important century”: lock-in,['Holden Karnofsky'],2021-11-11T00:00:00Z,blogs,, 108103,https://aiimpacts.org/cases-of-discontinuous-technological-progress/,Cases of Discontinuous Technological Progress,['Katja Grace'],2014-12-31T23:44:11Z,blogs,, 108113,https://carado.moe/unfair-feedback-loops.html,Unfair feedback loops,['Tamsin Leake'],2020-12-23T00:00:00Z,blogs,, 108128,https://carado.moe/uncertainty-2+2=4.html,the uncertainty of 2+2=4,['Tamsin Leake'],2022-04-29T23:00:00Z,blogs,, 108142,https://intelligence.org/2016/01/03/january-2016-newsletter/,January 2016 Newsletter,['Rob Bensinger'],2016-01-03T11:55:26Z,blogs,, 108157,https://intelligence.org/2013/05/05/five-theses-two-lemmas-and-a-couple-of-strategic-implications/,"Five theses, two lemmas, and a couple of strategic implications",['Eliezer Yudkowsky'],2013-05-06T01:36:33Z,blogs,, 108182,https://vkrakovna.wordpress.com/2023/08/09/when-discussing-ai-risks-talk-about-capabilities-not-intelligence/,"When discussing AI risks, talk about capabilities, not intelligence",['Victoria Krakovna'],2023-08-09T21:27:01Z,blogs,, 108193,https://intelligence.org/2014/07/28/scott-frickel/,Scott Frickel on intellectual movements,['Luke Muehlhauser'],2014-07-28T08:00:56Z,blogs,,
108209,https://aiimpacts.org/were-nuclear-weapons-cost-effective-explosives/,Were nuclear weapons cost-effective explosives?,['Katja Grace'],2015-01-12T01:30:06Z,blogs,, 108226,https://carado.moe/above-paperclips-2.html,yes room above paperclips?,['Tamsin Leake'],2021-12-25T00:00:00Z,blogs,, 108236,https://aiimpacts.org/ai-vignettes-project/,AI Vignettes Project,['Katja Grace'],2021-10-13T04:58:10Z,blogs,, 108245,https://intelligence.org/2021/11/22/yudkowsky-and-christiano-discuss-takeoff-speeds/,Yudkowsky and Christiano discuss “Takeoff Speeds”,['Rob Bensinger'],2021-11-22T20:15:54Z,blogs,, 108276,https://newsletter.mlsafety.org/p/ml-safety-newsletter-7,ML Safety Newsletter #7,['Dan Hendrycks'],2023-01-09T15:30:20Z,blogs,, 108303,https://carado.moe/appreciating-grand-political-visions.html,emotionally appreciating grand political visions,['Tamsin Leake'],2021-12-09T00:00:00Z,blogs,, 108313,https://intelligence.org/2014/01/17/miris-january-2014-newsletter/,MIRI’s January 2014 Newsletter,['Luke Muehlhauser'],2014-01-17T17:06:06Z,blogs,, 108327,https://intelligence.org/2014/05/07/harry-buhrman/,Harry Buhrman on quantum algorithms and cryptography,['Luke Muehlhauser'],2014-05-08T00:45:26Z,blogs,, 108353,https://carado.moe/existential-selfdet.html,existential self-determination,['Tamsin Leake'],2022-09-26T23:00:00Z,blogs,, 108367,https://aiimpacts.org/walsh-2017-survey/,Walsh 2017 survey,['Asya Bergal'],2019-12-25T02:17:21Z,blogs,, 108382,https://aiimpacts.org/particle-accelerator-performance-progress/,Historic trends in particle accelerator performance,['Katja Grace'],2019-03-27T00:07:42Z,blogs,, 108392,https://carado.moe/human-values-unaligned-incoherent.html,"""humans aren't aligned"" and ""human values are incoherent""",['Tamsin Leake'],2022-11-18T00:00:00Z,blogs,, 108416,https://intelligence.org/2021/11/06/november-2021-newsletter/,November 2021 Newsletter,['Rob Bensinger'],2021-11-06T07:23:02Z,blogs,, 108449,https://jsteinhardt.wordpress.com/2010/10/02/generalizing-across-categories/,Generalizing Across Categories,['jsteinhardt'],2010-10-02T19:10:31Z,blogs,, 108465,https://intelligence.org/2019/06/01/june-2019-newsletter/,June 2019 Newsletter,['Rob Bensinger'],2019-06-02T06:43:41Z,blogs,, 108480,https://www.cold-takes.com/forecasting-transformative-ai-whats-the-burden-of-proof/,Forecasting transformative AI: what's the burden of proof?,['Holden Karnofsky'],2021-08-17T00:00:00Z,blogs,, 108501,https://www.yudkowsky.net/singularity/schools,Three Major Singularity Schools,['Eliezer S. Yudkowsky'],2020-09-04T02:59:03Z,blogs,, 108524,https://carado.moe/go-with-your-gut.html,guiding your brain: go with your gut!,['Tamsin Leake'],2022-08-16T23:00:00Z,blogs,, 108535,https://carado.moe/culture-tribes-legitimacy.html,culture tribes and legitimacy,['Tamsin Leake'],2021-07-20T23:00:00Z,blogs,, 108545,https://www.deepmind.com/blog/imitating-interactive-intelligence,Imitating Interactive Intelligence,"['Josh Abramson', 'Arun Ahuja', 'Arthur Brussee', 'Federico Carnevale', 'Mary Cassin', 'Stephen Clark', 'Andrew Dudzik', 'Petko Georgiev', 'Aurelia Guy', 'Tim Harley', 'Felix Hill', 'Alden Hung', 'Zac Kenton', 'Jessica Landon', 'Timothy Lillicrap', 'Kory W. Mathewson', 'Alistair Muldal', 'Adam Santoro', 'Nikolay Savinov', 'Vikrant Varma', 'Gregory Wayne', 'Nathaniel Wong', 'Chen Yan', 'Rui Zhu']",2020-12-11T00:00:00Z,blogs,,
108575,https://www.gwern.net/Clippy.page,It Looks Like You’re Trying To Take Over The World,['Gwern Branwen'],2023-03-28T00:00:00Z,blogs,, 108605,https://aiimpacts.org/electrical-efficiency-of-computing/,Electrical efficiency of computing,['Katja Grace'],2018-02-19T01:56:40Z,blogs,, 108614,https://carado.moe/alignment-research-is-very-weird.html,alignment research is very weird,['Tamsin Leake'],2022-08-16T23:00:00Z,blogs,, 108630,https://intelligence.org/2015/12/23/need-scale-miris-methods/,The need to scale MIRI’s methods,['Rob Bensinger'],2015-12-24T04:50:11Z,blogs,, 108643,https://aiimpacts.org/accuracy-of-ai-predictions/,Accuracy of AI Predictions,['Katja Grace'],2015-06-04T08:47:48Z,blogs,, 108667,https://www.cold-takes.com/empowerment-and-stakeholder-management/,Empowerment and Stakeholder Management,['Holden Karnofsky'],2022-01-18T00:00:00Z,blogs,, 108682,https://aiimpacts.org/soft-takeoff-can-still-lead-to-decisive-strategic-advantage/,Soft takeoff can still lead to decisive strategic advantage,['Daniel Kokotajlo'],2019-09-11T18:39:38Z,blogs,, 108705,https://carado.moe/systems-and-diversity.html,systems and diversity,['Tamsin Leake'],2021-07-20T23:00:00Z,blogs,, 108726,https://intelligence.org/2013/07/08/2013-summer-matching-challenge/,2013 Summer Matching Challenge!,['Luke Muehlhauser'],2013-07-08T14:00:40Z,blogs,, 108740,https://www.cold-takes.com/wheres-todays-beethoven/,Where's Today's Beethoven?,['Holden Karnofsky'],2022-01-04T00:00:00Z,blogs,, 108761,https://intelligence.org/2014/02/21/john-baez-on-research-tactics/,John Baez on Research Tactics,['Luke Muehlhauser'],2014-02-22T02:04:28Z,blogs,, 108787,https://intelligence.org/2014/05/24/johann-schumann/,Johann Schumann on high-assurance systems,['Luke Muehlhauser'],2014-05-24T11:00:47Z,blogs,, 108808,https://intelligence.org/2014/04/13/thomas-bolander/,Thomas Bolander on self-reference and agent introspection,['Luke Muehlhauser'],2014-04-13T18:17:18Z,blogs,, 108840,https://carado.moe/generalized-adding-reality-layers.html,generalized adding reality layers,['Tamsin Leake'],2022-05-18T23:00:00Z,blogs,, 108849,https://www.yudkowsky.net/other/fiction/dark-lords-answer,Dark Lord’s Answer,['Eliezer S. Yudkowsky'],2020-09-04T04:06:24Z,blogs,,
108865,https://intelligence.org/2014/06/23/roger-schell/,Roger Schell on long-term computer security research,['Luke Muehlhauser'],2014-06-23T14:25:00Z,blogs,, 108891,https://www.deepmind.com/blog/human-centred-mechanism-design-with-democratic-ai,Human-centred mechanism design with Democratic AI,"['Raphael Koster', 'Jan Balaguer', 'Andrea Tacchetti', 'Ari Weinstein', 'Tina Zhu', 'Oliver Hauser* (University of Exeter)', 'Duncan Williams', 'Lucy Campbell-Gillingham', 'Phoebe Thacker', 'Matthew Botvinick', 'Christopher Summerfield']",2022-07-04T00:00:00Z,blogs,, 108909,https://carado.moe/think-in-what.html,think in what ?,['Tamsin Leake'],2021-12-04T00:00:00Z,blogs,, 108918,https://aiimpacts.org/nordhaus-hardware-price-performance-dataset/,Nordhaus hardware price performance dataset,['Katja Grace'],2018-02-17T03:30:05Z,blogs,, 108928,https://aiimpacts.org/joscha-bach-on-the-unfinished-steps-to-human-level-ai/,Joscha Bach on remaining steps to human-level AI,['Katja Grace'],2016-11-29T18:27:47Z,blogs,, 108953,https://aiimpacts.org/metasurvey-predict-the-predictors/,Metasurvey: predict the predictors,['Katja Grace'],2016-05-13T00:11:05Z,blogs,, 108964,https://carado.moe/genuineness-existselfdet-satisfaction-pick2.html,"Genuineness, Existential Selfdetermination, Satisfaction: pick 2",['Tamsin Leake'],2021-11-21T00:00:00Z,blogs,, 108973,https://intelligence.org/2014/03/26/anil-nerode/,Anil Nerode on hybrid systems control,['Luke Muehlhauser'],2014-03-26T23:25:31Z,blogs,, 108988,https://carado.moe/scarce-moral-patient-involvement.html,the scarcity of moral patient involvement,['Tamsin Leake'],2022-12-21T00:00:00Z,blogs,, 109002,https://www.deepmind.com/blog/learning-robust-real-time-cultural-transmission-without-human-data,Learning Robust Real-Time Cultural Transmission without Human Data,['Cultural General Intelligence Team'],2022-03-03T00:00:00Z,blogs,, 109017,https://intelligence.org/2020/02/23/february-2020-newsletter/,February 2020 Newsletter,['Rob Bensinger'],2020-02-23T16:14:35Z,blogs,, 109038,https://vkrakovna.wordpress.com/2017/06/04/takeaways-from-self-tracking-data/,Takeaways from self-tracking data,['Victoria Krakovna'],2017-06-04T22:29:45Z,blogs,, 109048,https://carado.moe/kolmogorov-objectivity-in-languagespace.html,kolmogorov complexity objectivity and languagespace,['Tamsin Leake'],2021-08-15T23:00:00Z,blogs,, 109058,https://intelligence.org/2017/02/28/using-machine-learning/,Using machine learning to address AI risk,['Jessica Taylor'],2017-03-01T03:04:41Z,blogs,, 109096,https://www.deepmind.com/blog/an-early-warning-system-for-novel-ai-risks,An early warning system for novel AI risks,['Toby Shevlane'],2023-05-25T00:00:00Z,blogs,, 109116,https://intelligence.org/2014/04/02/2013-in-review-fundraising/,2013 in Review: Fundraising,['Luke Muehlhauser'],2014-04-02T16:04:31Z,blogs,, 109139,https://www.cold-takes.com/where-ai-forecasting-stands-today/,"AI Timelines: Where the Arguments, and the ""Experts,"" Stand",['Holden Karnofsky'],2021-09-07T00:00:00Z,blogs,, 109159,https://carado.moe/nonsolving-ideologies.html,Non-solving ideologies,['Tamsin Leake'],2021-01-01T00:00:00Z,blogs,, 109168,https://www.cold-takes.com/transformative-ai-timelines-part-1-of-4-what-kind-of-ai/,"Forecasting Transformative AI, Part 1: What Kind of AI?",['Holden Karnofsky'],2021-08-10T00:00:00Z,blogs,, 109189,https://aiimpacts.org/etzioni-2016-survey/,Etzioni 2016 survey,['Katja Grace'],2019-11-06T18:41:19Z,blogs,,
109204,https://vkrakovna.wordpress.com/2016/02/28/introductory-resources-on-ai-safety-research/,Introductory resources on AI safety research,['Victoria Krakovna'],2016-02-28T05:03:08Z,blogs,, 109249,https://intelligence.org/2019/02/22/thoughts-on-human-models/,Thoughts on Human Models,['Guest'],2019-02-23T01:30:11Z,blogs,, 109271,https://intelligence.org/2016/12/13/december-2016-newsletter/,December 2016 Newsletter,['Rob Bensinger'],2016-12-13T22:41:53Z,blogs,, 109286,https://newsletter.mlsafety.org/p/ml-safety-newsletter-9,ML Safety Newsletter #9,['Dan Hendrycks'],2023-04-11T15:53:36Z,blogs,, 109331,https://aiimpacts.org/agi-11-survey/,AGI-11 survey,['Justis Mills'],2018-11-10T23:28:58Z,blogs,, 109346,https://intelligence.org/2021/05/18/saving-time/,Saving Time,['Scott Garrabrant'],2021-05-18T22:04:40Z,blogs,, 109358,https://aiimpacts.org/energy-efficiency-of-airbus-a320/,Energy efficiency of Airbus A320,['Katja Grace'],2020-11-06T04:44:03Z,blogs,, 109367,https://vkrakovna.wordpress.com/2023/03/09/near-term-motivation-for-ai-alignment/,Near-term motivation for AI alignment,['Victoria Krakovna'],2023-03-09T13:09:33Z,blogs,, 109386,https://blog.eleuther.ai/factored-cognition/,A Preliminary Exploration into Factored Cognition with Language Models,"['Leo Gao', 'Kyle McDonell', 'Laria Reynolds', 'Stella Biderman']",2021-10-25T00:00:00Z,blogs,, 109407,https://carado.moe/were-all-doomed.html,we're all doomed,['Tamsin Leake'],2021-06-29T23:00:00Z,blogs,, 109423,https://intelligence.org/2015/08/10/assessing-our-past-and-potential-impact/,Assessing our past and potential impact,['Rob Bensinger'],2015-08-11T06:50:22Z,blogs,, 109441,https://openai.com/research/frontier-ai-regulation,Frontier AI regulation: Managing emerging risks to public safety,"['Markus Anderljung', 'Joslyn Barnhart', 'Jade Leung', 'Anton Korinek', 'Cullen O’Keefe', 'Jess Whittlestone']",2023-07-06T00:00:00Z,blogs,, 109481,https://intelligence.org/2015/12/03/december-2015-newsletter/,December 2015 Newsletter,['Rob Bensinger'],2015-12-03T23:04:27Z,blogs,, 109497,https://intelligence.org/2014/04/08/will-macaskill/,Will MacAskill on normative uncertainty,['Luke Muehlhauser'],2014-04-08T17:24:42Z,blogs,, 109524,https://aiimpacts.org/resolutions-of-mathematical-conjectures-over-time/,Resolutions of mathematical conjectures over time,['Asya Bergal'],2020-04-14T20:38:13Z,blogs,, 109535,https://aiimpacts.org/discontinuity-in-altitude-records/,Historic trends in altitude,['Katja Grace'],2018-02-10T03:47:39Z,blogs,, 109545,https://intelligence.org/2018/10/29/announcing-the-ai-alignment-forum/,Announcing the new AI Alignment Forum,['Guest'],2018-10-29T23:34:29Z,blogs,, 109563,https://aiimpacts.org/brain-performance-in-flops/,Brain performance in FLOPS,['Katja Grace'],2015-07-26T19:33:41Z,blogs,, 109577,https://aiimpacts.org/group-differences-in-ai-predictions/,Group Differences in AI Predictions,['Katja Grace'],2015-05-24T20:37:27Z,blogs,, 109588,https://carado.moe/plausible-vs-likely.html,plausible vs likely,['Tamsin Leake'],2022-05-27T23:00:00Z,blogs,, 109610,https://carado.moe/counterfactual-computation-in-world-models.html,counterfactual computations in world models,['Tamsin Leake'],2022-10-27T23:00:00Z,blogs,, 109621,https://www.yudkowsky.net/other/fiction/the-sword-of-good,The Sword of Good,['Eliezer S. Yudkowsky'],2009-11-28T19:28:10Z,blogs,,
109638,https://jsteinhardt.wordpress.com/2012/12/06/log-linear-models/,Log-Linear Models,['jsteinhardt'],2012-12-06T19:46:23Z,blogs,, 109650,https://carado.moe/saving-the-web.html,Saving The Client-Side Web: just WASM and the DOM,['Tamsin Leake'],2021-05-15T23:00:00Z,blogs,, 109665,https://intelligence.org/2017/04/30/software-engineer-internship-staff-openings/,Software Engineer Internship / Staff Openings,['Alex Vermeer'],2017-04-30T18:04:14Z,blogs,, 109675,https://carado.moe/recognition.html,recognition,['Tamsin Leake'],2022-03-04T00:00:00Z,blogs,, 109690,https://intelligence.org/2018/11/06/embedded-subsystems/,Subsystem Alignment,['Scott Garrabrant'],2018-11-06T18:34:36Z,blogs,, 109712,https://intelligence.org/2014/03/26/lyle-ungar/,Lyle Ungar on forecasting,['Luke Muehlhauser'],2014-03-26T23:55:10Z,blogs,, 109742,https://carado.moe/hello-elua.html,"Hello, Elua.",['Tamsin Leake'],2023-02-23T00:00:00Z,blogs,, 109767,https://intelligence.org/2015/04/28/new-papers-reflective/,New papers on reflective oracles and agents,['Luke Muehlhauser'],2015-04-28T21:36:59Z,blogs,, 109782,https://intelligence.org/2014/04/29/ruediger-schack/,Ruediger Schack on quantum Bayesianism,['Luke Muehlhauser'],2014-04-29T16:12:35Z,blogs,, 109801,https://aiimpacts.org/possible-investigations/,Possible Empirical Investigations,['Katja Grace'],2015-02-26T00:02:11Z,blogs,, 109831,https://aiimpacts.org/funding-of-ai-research/,Funding of AI Research,['Katja Grace'],2017-02-20T11:25:13Z,blogs,, 109843,https://intelligence.org/2022/07/30/july-2022-newsletter/,July 2022 Newsletter,['Rob Bensinger'],2022-07-30T17:17:31Z,blogs,, 109862,https://blog.eleuther.ai/multiple-choice-normalization/,Multiple Choice Normalization in LM Evaluation,['Leo Gao'],2021-10-11T00:00:00Z,blogs,, 109878,https://carado.moe/say-ai-risk-mitigation-not-alignment.html,"say ""AI risk mitigation"" not ""alignment""",['Tamsin Leake'],2022-05-27T23:00:00Z,blogs,, 109887,https://aiimpacts.org/against-a-general-factor-of-doom/,Against a General Factor of Doom,['Jeffrey Heninger'],2022-11-23T16:45:49Z,blogs,, 109904,https://carado.moe/exact-minds-in-an-exact-world.html,exact minds in an exact world,['Tamsin Leake'],2021-10-12T23:00:00Z,blogs,, 109913,https://www.cold-takes.com/this-cant-go-on/,This Can't Go On,['Holden Karnofsky'],2021-08-03T00:00:00Z,blogs,, 109941,https://vkrakovna.wordpress.com/2015/03/26/negative-visualization-radical-acceptance-and-stoicism/,"Negative visualization, radical acceptance and stoicism",['Victoria Krakovna'],2015-03-27T03:36:57Z,blogs,, 109953,https://intelligence.org/2021/05/02/april-2021-newsletter/,April 2021 Newsletter,['Rob Bensinger'],2021-05-02T16:30:04Z,blogs,, 109990,https://aiimpacts.org/current-flops-prices/,Current FLOPS prices,['Katja Grace'],2015-04-02T05:16:13Z,blogs,, 110001,https://intelligence.org/2014/05/13/christof-koch-stuart-russell-machine-superintelligence/,Christof Koch and Stuart Russell on machine superintelligence,['Luke Muehlhauser'],2014-05-14T00:34:27Z,blogs,, 110018,https://intelligence.org/2013/09/04/how-effectively-can-we-plan-for-future-decades/,How effectively can we plan for future decades? (initial findings),['Luke Muehlhauser'],2013-09-04T22:38:11Z,blogs,,
110037,https://carado.moe/wonky-good-enough-alignment.html,wonky but good enough alignment schemes,['Tamsin Leake'],2022-11-19T00:00:00Z,blogs,, 110053,https://intelligence.org/2023/02/03/focus-on-the-places-where-you-feel-shocked-everyones-dropping-the-ball/,Focus on the places where you feel shocked everyone’s dropping the ball,['Nate Soares'],2023-02-03T16:29:48Z,blogs,, 110070,https://intelligence.org/2013/03/22/early-draft-of-naturalistic-reflection-paper/,Early draft of naturalistic reflection paper,['Luke Muehlhauser'],2013-03-22T03:28:16Z,blogs,, 110081,https://intelligence.org/2016/03/05/march-2016-newsletter/,March 2016 Newsletter,['Rob Bensinger'],2016-03-06T01:44:34Z,blogs,, 110098,https://carado.moe/approximate-decisions.html,making decisions as our approximately simulated selves,['Tamsin Leake'],2022-12-28T00:00:00Z,blogs,, 110111,https://intelligence.org/2020/04/27/miris-largest-grant-to-date/,MIRI’s largest grant to date!,['Rob Bensinger'],2020-04-27T14:51:13Z,blogs,, 110120,https://intelligence.org/2015/02/06/davis-ai-capability-motivation/,Davis on AI capability and motivation,['Rob Bensinger'],2015-02-06T23:45:40Z,blogs,, 110143,https://intelligence.org/2019/11/25/november-2019-newsletter/,November 2019 Newsletter,['Rob Bensinger'],2019-11-25T18:53:11Z,blogs,, 110187,https://intelligence.org/2013/04/04/the-lean-nonprofit/,The Lean Nonprofit,['Luke Muehlhauser'],2013-04-04T03:22:03Z,blogs,, 110203,https://aiimpacts.org/ai-timeline-predictions-in-surveys-and-statements/,AI Timeline predictions in surveys and statements,['Katja Grace'],2015-05-20T11:01:48Z,blogs,, 110212,https://www.deepmind.com/blog/codoc-developing-reliable-ai-tools-for-healthcare,Developing reliable AI tools for healthcare,['Krishnamurthy (Dj) Dvijotham and Taylan Cemgil on behalf of the CoDoC team'],2023-07-17T00:00:00Z,blogs,, 110229,https://aiimpacts.org/misalignment-and-misuse-whose-values-are-manifest/,Misalignment and misuse: whose values are manifest?,['Katja Grace'],2020-11-19T00:06:34Z,blogs,, 110241,https://intelligence.org/2015/05/29/two-papers-accepted-to-agi-15/,Two papers accepted to AGI-15,['Luke Muehlhauser'],2015-05-29T18:55:16Z,blogs,, 110261,https://generative.ink/posts/amplifying-gpt-3-on-closed-ended-questions/,Amplifying GPT-3 on closed-ended questions,['janus'],2020-10-30T00:00:00Z,blogs,, 110281,https://aiimpacts.org/the-maes-garreau-law/,The Maes-Garreau Law,['Katja Grace'],2015-05-20T11:18:50Z,blogs,, 110291,https://blog.eleuther.ai/safetensors-security-audit/,🐶Safetensors audited as really safe and becoming the default,"['Nicolas Patry', 'Stella Biderman', 'Garry Jean-Baptiste']",2023-05-23T00:00:00Z,blogs,, 110305,https://aisafety.camp/2020/05/30/aisc4-research-summaries/,AISC4: Research Summaries,['Sebastian Kosch'],2020-05-30T18:05:43Z,blogs,, 110350,https://carado.moe/publishing-infohazards.html,publishing alignment research and exfohazards,['Tamsin Leake'],2022-10-31T00:00:00Z,blogs,, 110360,https://carado.moe/diversity-novelty.html,diversity vs novelty,['Tamsin Leake'],2022-06-09T23:00:00Z,blogs,, 110370,https://aiimpacts.org/vignettes-workshop/,Vignettes workshop,['Daniel Kokotajlo'],2021-06-15T10:56:28Z,blogs,, 110379,https://jsteinhardt.wordpress.com/2010/09/13/nobody-understands-probability/,Nobody Understands Probability,['jsteinhardt'],2010-09-13T02:23:53Z,blogs,, 110401,https://carado.moe/qaci-math.html,formalizing the QACI alignment formal-goal,['Tamsin Leake'],2023-06-09T23:00:00Z,blogs,,
110426,https://carado.moe/political-technology.html,political technology,['Tamsin Leake'],2022-02-04T00:00:00Z,blogs,, 110435,https://www.deepmind.com/blog/emergent-bartering-behaviour-in-multi-agent-reinforcement-learning,Emergent Bartering Behaviour in Multi-Agent Reinforcement Learning,"['Mike Johanson', 'Edward Hughes', 'Finbarr Timbers', 'Joel Leibo']",2022-05-16T00:00:00Z,blogs,, 110444,https://www.yudkowsky.net/singularity/ai-risk,Artificial Intelligence as a Positive and Negative Factor in Global Risk,['Eliezer S. Yudkowsky'],2020-09-04T03:00:22Z,blogs,, 110478,https://intelligence.org/2012/06/16/singularity-institute-progress-report-may-2012/,"Machine Intelligence Research Institute Progress Report, May 2012",['Luke Muehlhauser'],2012-06-16T07:39:29Z,blogs,, 110508,https://www.cold-takes.com/ai-safety-seems-hard-to-measure/,AI Safety Seems Hard to Measure,['Holden Karnofsky'],2022-12-08T00:00:00Z,blogs,, 110539,https://intelligence.org/2011/12/27/2011-singularity-institute-winter-fundraiser/,2011 Machine Intelligence Research Institute Winter Fundraiser,['Louie Helm'],2011-12-27T21:57:07Z,blogs,, 110551,https://aiimpacts.org/penicillin-and-historic-syphilis-trends/,Penicillin and historic syphilis trends,['Asya Bergal'],2020-02-08T01:36:10Z,blogs,, 110560,https://aiimpacts.org/outcomes-of-inducement-prizes/,Outcomes of inducement prizes,['Katja Grace'],2022-08-30T01:34:00Z,blogs,, 110571,https://intelligence.org/2013/08/11/what-is-agi/,What is AGI?,['Luke Muehlhauser'],2013-08-11T18:32:36Z,blogs,, 110586,https://intelligence.org/2014/01/31/mike-frank-on-reversible-computing/,Mike Frank on reversible computing,['Luke Muehlhauser'],2014-01-31T19:48:22Z,blogs,, 110606,https://intelligence.org/2013/10/30/new-paper-embryo-selection-for-cognitive-enhancement/,New Paper: “Embryo Selection for Cognitive Enhancement”,['Luke Muehlhauser'],2013-10-30T09:47:13Z,blogs,, 110625,https://www.deepmind.com/blog/is-curiosity-all-you-need-on-the-utility-of-emergent-behaviours-from-curious-exploration,Is Curiosity All You Need? On the Utility of Emergent Behaviours from Curious Exploration,"['Oliver Groth', 'Markus Wulfmeier', 'Giulia Vezzani', 'Vibhavari Dasagi', 'Tim Hertweck', 'Roland Hafner', 'Nicolas Heess', 'and Martin Riedmiller']",2021-09-17T00:00:00Z,blogs,, 110639,https://intelligence.org/2020/12/21/2020-updates-and-strategy/,2020 Updates and Strategy,['Malo Bourgon'],2020-12-21T23:50:18Z,blogs,, 110655,https://intelligence.org/2015/03/09/bill-hibbard/,Bill Hibbard on Ethical Artificial Intelligence,['Luke Muehlhauser'],2015-03-10T04:43:22Z,blogs,, 110696,https://intelligence.org/2013/05/15/when-will-ai-be-created/,When Will AI Be Created?,['Luke Muehlhauser'],2013-05-16T05:00:20Z,blogs,, 110722,https://intelligence.org/2023/04/07/pausing-ai-developments-isnt-enough-we-need-to-shut-it-all-down/,Pausing AI Developments Isn’t Enough. We Need to Shut it All Down,['Eliezer Yudkowsky'],2023-04-08T00:14:43Z,blogs,, 110743,https://aiimpacts.org/2016-expert-survey-on-progress-in-ai/,2016 Expert Survey on Progress in AI,['Katja Grace'],2016-12-15T05:48:02Z,blogs,, 110758,https://www.yudkowsky.net/singularity/aibox,The AI-Box Experiment:,['Eliezer S. 
Yudkowsky'],2020-09-04T03:08:55Z,blogs,, 110773,https://aiimpacts.org/agi-09-survey/,AGI-09 Survey,['Katja Grace'],2014-12-29T18:50:48Z,blogs,, 110794,https://jsteinhardt.wordpress.com/2016/12/26/thinking-outside-ones-paradigm/,Thinking Outside One’s Paradigm,['jsteinhardt'],2016-12-26T17:37:45Z,blogs,, 110811,https://intelligence.org/2017/06/16/june-2017-newsletter/,June 2017 Newsletter,['Rob Bensinger'],2017-06-16T21:02:15Z,blogs,, 110832,https://intelligence.org/2012/07/03/summer-challenge/,2012 Summer Singularity Challenge,['Luke Muehlhauser'],2012-07-04T05:40:48Z,blogs,, 110860,https://carado.moe/nostalgia.html,nostalgia: a value pointing home,['Tamsin Leake'],2023-01-19T00:00:00Z,blogs,, 110872,https://intelligence.org/2013/12/02/2013-winter-matching-challenge/,2013 Winter Matching Challenge,['Luke Muehlhauser'],2013-12-02T22:33:13Z,blogs,, 110887,https://intelligence.org/2017/04/06/april-2017-newsletter/,April 2017 Newsletter,['Rob Bensinger'],2017-04-06T16:59:11Z,blogs,, 110903,https://intelligence.org/2014/04/03/erik-debenedictis/,Erik DeBenedictis on supercomputing,['Luke Muehlhauser'],2014-04-03T08:00:12Z,blogs,, 110932,https://www.deepmind.com/blog/dm-control-software-and-tasks-for-continuous-control,dm_control: Software and Tasks for Continuous Control,"['Yuval Tassa', 'Saran Tunyasuvunakool', 'Alistair Muldal', 'Yotam Doron', 'Siqi Liu', 'Steven Bohez', 'Josh Merel', 'Tom Erez', 'Timothy Lillicrap', 'Nicolas Heess']",2020-06-15T00:00:00Z,blogs,, 110959,https://intelligence.org/2018/11/28/miris-newest-recruit-edward-kmett/,MIRI’s newest recruit: Edward Kmett!,['Rob Bensinger'],2018-11-28T18:14:30Z,blogs,, 110969,https://intelligence.org/2017/04/07/decisions-are-for-making-bad-outcomes-inconsistent/,Decisions are for making bad outcomes inconsistent,['Rob Bensinger'],2017-04-07T22:02:13Z,blogs,, 110985,https://intelligence.org/2016/08/05/miri-strategy-update-2016/,MIRI strategy update: 2016,['Nate Soares'],2016-08-05T20:40:22Z,blogs,, 111006,https://intelligence.org/2021/05/23/finite-factored-sets/,Finite Factored Sets,['Scott Garrabrant'],2021-05-23T07:00:35Z,blogs,, 111019,https://intelligence.org/2019/03/09/a-new-field-guide-for-mirix/,A new field guide for MIRIx,['Rob Bensinger'],2019-03-10T03:38:01Z,blogs,, 111030,https://intelligence.org/2018/11/26/miris-2018-fundraiser/,MIRI’s 2018 Fundraiser,['Malo Bourgon'],2018-11-27T04:54:15Z,blogs,, 111045,https://intelligence.org/2013/12/13/aaronson/,Scott Aaronson on Philosophical Progress,['Luke Muehlhauser'],2013-12-13T20:33:33Z,blogs,, 111061,https://carado.moe/unoptimal-superint-loses.html,unoptimal superintelligence loses,['Tamsin Leake'],2021-11-20T00:00:00Z,blogs,, 111071,https://aiimpacts.org/coordinated-human-action-example-superhuman-intelligence/,Coordinated human action as example of superhuman intelligence,['Ben Hoffman'],2016-01-21T16:24:12Z,blogs,, 111089,https://carado.moe/communicating-clearly.html,Communicating Clearly,['Tamsin Leake'],2021-01-22T00:00:00Z,blogs,, 111098,https://www.deepmind.com/blog/discovering-when-an-agent-is-present-in-a-system,Discovering when an agent is present in a system,"['Zachary Kenton', 'Ramana Kumar', 'Sebastian Farquhar', 'Jonathan Richens', 'Matt MacDermott', 'Tom Everitt']",2022-08-18T00:00:00Z,blogs,, 111110,https://newsletter.mlsafety.org/p/ml-safety-newsletter-8,ML Safety Newsletter #8,['Dan Hendrycks'],2023-02-20T15:00:53Z,blogs,, 111148,https://openai.com/research/gpt-4,GPT-4,['OpenAI Research'],2023-03-14T00:00:00Z,blogs,, 
111180,https://carado.moe/balancing-utilitarianism.html,balancing utilitarianism,['Tamsin Leake'],2022-02-04T00:00:00Z,blogs,, 111189,https://carado.moe/psi.html,psi: a universal format for structured information,['Tamsin Leake'],2021-11-08T00:00:00Z,blogs,, 111199,https://aiimpacts.org/miri-ai-predictions-dataset/,MIRI AI Predictions Dataset,['Katja Grace'],2015-05-20T10:18:11Z,blogs,, 111219,https://intelligence.org/2018/12/16/december-2018-newsletter/,December 2018 Newsletter,['Rob Bensinger'],2018-12-16T18:08:03Z,blogs,, 111240,https://www.deepmind.com/blog/improving-language-models-by-retrieving-from-trillions-of-tokens,Improving language models by retrieving from trillions of tokens,"['Sebastian Borgeaud', 'Arthur Mensch', 'Jordan Hoffmann', 'Laurent Sifre']",2021-12-08T00:00:00Z,blogs,, 111259,https://carado.moe/botched-alignment-and-awareness.html,botched alignment and alignment awareness,['Tamsin Leake'],2021-07-18T23:00:00Z,blogs,, 111271,https://intelligence.org/2023/04/10/misgeneralization-as-a-misnomer/,Misgeneralization as a misnomer,['Nate Soares'],2023-04-10T22:55:13Z,blogs,, 111288,https://aiimpacts.org/media-discussion-of-2016-espai/,Media discussion of 2016 ESPAI,['Katja Grace'],2017-06-15T04:31:51Z,blogs,, 111307,https://intelligence.org/2021/12/04/shulman-and-yudkowsky-on-ai-progress/,Shulman and Yudkowsky on AI progress,['Rob Bensinger'],2021-12-04T16:00:23Z,blogs,, 111330,https://blog.eleuther.ai/why-release-a-large-language-model/,Why Release a Large Language Model?,['Connor Leahy'],2021-06-02T00:00:00Z,blogs,, 111351,https://www.cold-takes.com/how-governments-can-help-with-the-most-important-century/,How major governments can help with the most important century,['Holden Karnofsky'],2023-02-24T00:00:00Z,blogs,, 111385,https://intelligence.org/2015/01/29/new-report-value-learning-problem/,New report: “The value learning problem”,['Luke Muehlhauser'],2015-01-29T20:01:34Z,blogs,, 111400,https://generative.ink/posts/language-models-are-multiverse-generators/,Language models are multiverse generators,['janus'],2021-01-25T00:00:00Z,blogs,, 111420,https://intelligence.org/2017/11/25/security-mindset-ordinary-paranoia/,Security Mindset and Ordinary Paranoia,['Eliezer Yudkowsky'],2017-11-25T20:18:26Z,blogs,, 111445,https://www.deepmind.com/blog/gophercite-teaching-language-models-to-support-answers-with-verified-quotes,GopherCite: Teaching language models to support answers with verified quotes,"['Jacob Menick', 'Maja Trebacz', 'Vladimir Mikulik', 'John Aslanides', 'Francis Song', 'Martin Chadwick', 'Mia Glaese', 'Susannah Young', 'Lucy Campbell-Gillingham', 'Geoffrey Irving', 'Nat McAleese']",2022-03-16T00:00:00Z,blogs,, 111467,https://www.cold-takes.com/why-would-ai-aim-to-defeat-humanity/,"Why Would AI ""Aim"" To Defeat Humanity?",['Holden Karnofsky'],2022-11-29T00:00:00Z,blogs,, 111484,https://aiimpacts.org/are-ai-surveys-seeing-the-inside-view/,Are AI surveys seeing the inside view?,['Katja Grace'],2015-01-16T01:00:02Z,blogs,, 111503,https://importai.substack.com/p/import-ai-324-machiavellian-ais-llms,Import AI 324: Machiavellian AIs; LLMs and political campaigns; Facebook makes an excellent segmentation model,['Jack Clark'],2023-04-11T13:20:51Z,blogs,, 111536,https://www.deepmind.com/blog/a-generalist-agent,A Generalist Agent,"['Scott Reed', 'Konrad Żołna', 'Emilio Parisotto', 'Sergio Gómez Colmenarejo', 'Alexander Novikov', 'Gabriel Barth-Maron', 'Mai Giménez', 'Yury Sulsky', 'Jackie Kay', 'Jost Tobias Springenberg', 'Tom Eccles', 'Jake Bruce', 'Ali Razavi', 'Ashley 
Edwards', 'Nicolas Heess', 'Yutian Chen', 'Raia Hadsell', 'Oriol Vinyals', 'Mahyar Bordbar', 'and Nando de Freitas']",2022-05-12T00:00:00Z,blogs,, 111552,https://jsteinhardt.wordpress.com/2016/12/28/donations-for-2016/,Donations for 2016,['jsteinhardt'],2016-12-28T04:25:44Z,blogs,, 111582,https://aiimpacts.org/bias-from-optimistic-predictors/,Selection bias from optimistic experts,['Katja Grace'],2015-05-29T18:50:48Z,blogs,, 111600,https://aiimpacts.org/changes-in-funding-in-the-ai-safety-field/,Changes in funding in the AI safety field,['Katja Grace'],2017-02-20T11:40:52Z,blogs,, 111619,https://aiimpacts.org/do-neural-networks-learn-human-concepts/,Do neural networks learn human concepts?,['Katja Grace'],2021-12-06T18:40:20Z,blogs,, 111632,https://aiimpacts.org/ai50-survey/,AI@50 Survey,['Katja Grace'],2014-12-29T18:50:48Z,blogs,, 111641,https://www.cold-takes.com/minimal-trust-investigations/,Minimal-trust investigations,['Holden Karnofsky'],2021-11-23T00:00:00Z,blogs,, 111655,https://vkrakovna.wordpress.com/2015/11/29/ai-risk-without-an-intelligence-explosion/,Risks from general artificial intelligence without an intelligence explosion,['Victoria Krakovna'],2015-11-30T04:48:36Z,blogs,, 111676,https://carado.moe/noninterf-superint.html,non-interfering superintelligence and remaining philosophical progress: a deterministic utopia,['Tamsin Leake'],2021-12-09T00:00:00Z,blogs,, 111687,https://intelligence.org/2012/08/21/august-2012-newsletter/,August 2012 Newsletter,['Louie Helm'],2012-08-21T17:21:16Z,blogs,, 111711,https://www.deepmind.com/blog/benchmarking-the-next-generation-of-never-ending-learners,Benchmarking the next generation of never-ending learners,"['Marc’Aurelio Ranzato', 'Amal Rannen-Triki']",2022-11-22T00:00:00Z,blogs,, 111724,https://aiimpacts.org/survey-of-prescient-actions/,Preliminary survey of prescient actions,['richardkorzekwa'],2020-04-04T00:15:54Z,blogs,, 111742,https://blog.eleuther.ai/alignment-eleuther/,Alignment Research @ EleutherAI,['Curtis Huebner'],2023-05-03T00:00:00Z,blogs,, 111775,https://vkrakovna.wordpress.com/2019/08/19/classifying-specification-problems-as-variants-of-goodharts-law/,Classifying specification problems as variants of Goodhart’s Law,['Victoria Krakovna'],2019-08-19T20:42:00Z,blogs,, 111803,https://carado.moe/albions-seed.html,freedom and diversity in Albion's Seed,['Tamsin Leake'],2021-12-09T00:00:00Z,blogs,, 111813,https://aiimpacts.org/comparison-of-naturally-evolved-and-engineered-solutions/,Comparison of naturally evolved and engineered solutions,['Katja Grace'],2019-12-25T02:32:01Z,blogs,, 111823,https://www.cold-takes.com/ai-could-defeat-all-of-us-combined/,AI Could Defeat All Of Us Combined,['Holden Karnofsky'],2022-06-09T00:00:00Z,blogs,, 111849,https://aiimpacts.org/evidence-on-good-forecasting-practices-from-the-good-judgment-project/,Evidence on good forecasting practices from the Good Judgment Project,['Daniel Kokotajlo'],2019-02-07T22:25:19Z,blogs,, 111882,https://intelligence.org/2018/10/29/october-2018-newsletter/,October 2018 Newsletter,['Rob Bensinger'],2018-10-30T00:16:06Z,blogs,, 111908,https://www.cold-takes.com/does-x-cause-y-an-in-depth-evidence-review/,Does X cause Y? An in-depth evidence review,['Holden Karnofsky'],2021-07-28T00:00:00Z,blogs,, 111923,https://importai.substack.com/p/import-ai-333-synthetic-data-makes,Import AI 333: Synthetic data makes models stupid; chatGPT eats MTurk. 
Inflection shows off a large language model,['Jack Clark'],2023-06-26T13:40:01Z,blogs,, 111948,https://vkrakovna.wordpress.com/2017/01/09/2016-17-new-year-review/,2016-17 New Year review,['Victoria Krakovna'],2017-01-09T22:24:33Z,blogs,, 111970,https://carado.moe/guess-intrinsic-values.html,a guess at my intrinsic values,['Tamsin Leake'],2023-01-29T00:00:00Z,blogs,, 111991,https://intelligence.org/2014/04/23/ariel-procaccia/,Ariel Procaccia on economics and computation,['Luke Muehlhauser'],2014-04-23T12:00:03Z,blogs,, 112018,https://carado.moe/carmack-predictions.html,carmack predictions,['Tamsin Leake'],2022-08-16T23:00:00Z,blogs,, 112031,https://vkrakovna.wordpress.com/2015/07/26/systems-i-have-tried-an-overview/,Systems I have tried: an overview,['Victoria Krakovna'],2015-07-26T05:01:22Z,blogs,, 112059,https://intelligence.org/2014/02/03/ronald-de-wolf-on-quantum-computing/,Ronald de Wolf on Quantum Computing,['Luke Muehlhauser'],2014-02-03T09:00:13Z,blogs,, 112079,https://carado.moe/tabooing-agi.html,"tabooing ""AGI""",['Tamsin Leake'],2023-02-07T00:00:00Z,blogs,, 112093,https://intelligence.org/2014/03/10/toby-walsh/,Toby Walsh on computational social choice,['Luke Muehlhauser'],2014-03-10T19:05:14Z,blogs,, 112116,https://aiimpacts.org/ai-impacts-key-questions-of-interest/,AI Impacts key questions of interest,['Katja Grace'],2020-12-17T19:10:43Z,blogs,, 112136,https://intelligence.org/2016/03/31/new-paper-on-bounded-lob/,New paper on bounded Löb and robust cooperation of bounded agents,['Rob Bensinger'],2016-04-01T05:48:02Z,blogs,, 112152,https://newsletter.mlsafety.org/p/ml-safety-newsletter-4,ML Safety Newsletter #4,['Dan Hendrycks'],2022-06-03T01:15:35Z,blogs,, 112179,https://jsteinhardt.wordpress.com/2016/12/31/individual-project-fund-further-details/,Individual Project Fund: Further Details,['jsteinhardt'],2016-12-31T22:10:28Z,blogs,, 112190,https://aiimpacts.org/ai-and-the-big-nuclear-discontinuity/,AI and the Big Nuclear Discontinuity,['Katja Grace'],2015-01-09T22:06:46Z,blogs,, 112201,https://aiimpacts.org/argument-for-likelihood-of-superhuman-ai/,Will Superhuman AI be created?,['Katja Grace'],2022-08-08T09:07:00Z,blogs,, 112219,https://www.cold-takes.com/hunter-gatherer-happiness/,Hunter-gatherer happiness,['Holden Karnofsky'],2021-11-04T00:00:00Z,blogs,, 112241,https://carado.moe/forking-bitrate-entropy-control.html,forking bitrate and entropy control,['Tamsin Leake'],2022-02-06T00:00:00Z,blogs,, 112253,https://carado.moe/cringe-as-prejudice.html,Cringe as prejudice,['Tamsin Leake'],2020-12-20T00:00:00Z,blogs,, 112264,https://jsteinhardt.wordpress.com/2013/02/02/local-kl-divergence/,Local KL Divergence,['jsteinhardt'],2013-02-02T04:27:27Z,blogs,, 112274,https://intelligence.org/2016/06/30/grain-of-truth/,New paper: “A formal solution to the grain of truth problem”,['Rob Bensinger'],2016-06-30T22:57:28Z,blogs,, 112289,https://deepmindsafetyresearch.medium.com/your-policy-regulariser-is-secretly-an-adversary-14684c743d45,Your Policy Regulariser is Secretly an Adversary,['DeepMind Safety Research'],2022-03-24T00:00:00Z,blogs,, 112303,https://intelligence.org/2013/11/05/greg-morrisett-on-secure-and-reliable-systems-2/,Greg Morrisett on Secure and Reliable Systems,['Luke Muehlhauser'],2013-11-05T21:11:55Z,blogs,, 112333,https://aisafety.camp/2022/06/17/aisc6-research-summaries/,AISC6: Research Summaries,['Kristi Uustalu'],2022-06-17T17:03:15Z,blogs,, 112360,https://carado.moe/cc_-1.html,CC_ -1,['Tamsin Leake'],2021-04-23T23:00:00Z,blogs,, 
112369,https://aiimpacts.org/coherence-arguments-imply-a-force-for-goal-directed-behavior/,Coherence arguments imply a force for goal-directed behavior,['Katja Grace'],2021-03-26T16:06:45Z,blogs,, 112379,https://aiimpacts.org/price-performance-trend-in-top-supercomputers/,Price-performance trend in top supercomputers,['Katja Grace'],2017-11-09T07:31:27Z,blogs,, 112391,https://intelligence.org/2013/08/02/algorithmic-progress-in-six-domains-released/,“Algorithmic Progress in Six Domains” Released,['Luke Muehlhauser'],2013-08-03T02:31:09Z,blogs,, 112406,https://intelligence.org/2014/05/04/calling-all-miri-supporters/,Calling all MIRI supporters for unique giving opportunity!,['Malo Bourgon'],2014-05-04T23:01:07Z,blogs,, 112419,https://aiimpacts.org/tepsbrainestimate/,A new approach to predicting brain-computer parity,['Katja Grace'],2015-05-08T00:36:48Z,blogs,, 112430,https://vkrakovna.wordpress.com/2018/11/01/discussion-on-the-machine-learning-approach-to-ai-safety/,Discussion on the machine learning approach to AI safety,['Victoria Krakovna'],2018-11-01T20:22:43Z,blogs,, 112452,https://intelligence.org/2011/07/22/announcing-the-125000-summer-singularity-challenge/,"Announcing the $125,000 Summer Singularity Challenge",['Luke Muehlhauser'],2011-07-23T04:13:58Z,blogs,, 112465,https://intelligence.org/2014/01/30/emil-vassev-on-formal-verification/,Emil Vassev on Formal Verification,['Luke Muehlhauser'],2014-01-31T01:33:14Z,blogs,, 112490,https://intelligence.org/2014/02/15/andre-platzer-on-verifying-cyber-physical-systems/,André Platzer on Verifying Cyber-Physical Systems,['Luke Muehlhauser'],2014-02-15T15:24:55Z,blogs,, 112513,https://intelligence.org/2013/11/04/from-philosophy-to-math-to-engineering/,From Philosophy to Math to Engineering,['Luke Muehlhauser'],2013-11-04T15:36:23Z,blogs,, 112528,https://carado.moe/two-vtable.html,the two-vtable problem,['Tamsin Leake'],2021-11-21T00:00:00Z,blogs,, 112538,https://intelligence.org/2013/07/24/miris-december-2013-workshop/,MIRI’s December 2013 Workshop,['Luke Muehlhauser'],2013-07-24T23:55:35Z,blogs,, 112548,https://intelligence.org/2012/05/08/singularity-institute-progress-report-april-2012/,"Machine Intelligence Research Institute Progress Report, April 2012",['Louie Helm'],2012-05-08T21:55:17Z,blogs,, 112575,https://aiimpacts.org/human-level-hardware-timeline/,Human-level hardware timeline,['Katja Grace'],2017-12-23T07:59:19Z,blogs,, 112590,https://aiimpacts.org/2019-recent-trends-in-geekbench-score-per-cpu-price/,2019 recent trends in Geekbench score per CPU price,['Asya Bergal'],2020-04-14T21:11:41Z,blogs,, 112599,https://intelligence.org/2013/12/16/miris-december-2013-newsletter/,MIRI’s December 2013 Newsletter,['Luke Muehlhauser'],2013-12-16T14:42:46Z,blogs,, 112620,https://carado.moe/is-intelligence-program-inversion.html,is intelligence program inversion?,['Tamsin Leake'],2023-02-13T00:00:00Z,blogs,, 112629,https://aiimpacts.org/conversation-with-steve-potter/,Conversation with Steve Potter,['Katja Grace'],2015-07-14T01:49:36Z,blogs,, 112659,https://openai.com/research/a-hazard-analysis-framework-for-code-synthesis-large-language-models,A hazard analysis framework for code synthesis large language models,['OpenAI Research'],2022-07-25T00:00:00Z,blogs,, 112690,https://aiimpacts.org/takeaways-from-safety-by-default-interviews/,Takeaways from safety by default interviews,['Asya Bergal'],2020-04-03T17:10:45Z,blogs,, 112713,https://aiimpacts.org/supporting-ai-impacts/,Supporting AI Impacts,['Katja Grace'],2015-05-22T05:27:11Z,blogs,, 
112724,https://aiimpacts.org/the-ai-impacts-blog/,The AI Impacts Blog,['Katja Grace'],2015-01-09T21:56:23Z,blogs,, 112738,https://www.cold-takes.com/why-talk-about-10-000-years-from-now/,"Why talk about 10,000 years from now?",['Holden Karnofsky'],2021-08-05T00:00:00Z,blogs,, 112754,https://intelligence.org/2015/01/22/new-report-formalizing-two-problems-realistic-world-models/,New report: “Formalizing Two Problems of Realistic World Models”,['Luke Muehlhauser'],2015-01-22T23:19:02Z,blogs,, 112763,https://intelligence.org/2013/11/01/robin-hanson/,Robin Hanson on Serious Futurism,['Luke Muehlhauser'],2013-11-01T17:00:50Z,blogs,, 112786,https://intelligence.org/2015/05/01/may-2015-newsletter/,May 2015 Newsletter,['Jesse Galef'],2015-05-01T21:00:22Z,blogs,, 112813,https://carado.moe/cm21.html,"cm21, a pixel art editor",['Tamsin Leake'],2021-06-19T23:00:00Z,blogs,, 112822,https://intelligence.org/2013/09/14/double-your-donation/,Double Your Donations via Corporate Matching,['Luke Muehlhauser'],2013-09-15T00:42:27Z,blogs,, 112832,https://intelligence.org/2020/05/01/april-2020-newsletter/,April 2020 Newsletter,['Rob Bensinger'],2020-05-01T18:26:09Z,blogs,, 112855,https://aiimpacts.org/robin-hanson-on-the-futurist-focus-on-ai/,Robin Hanson on the futurist focus on AI,['Asya Bergal'],2019-11-13T21:40:42Z,blogs,, 112864,https://intelligence.org/2016/11/20/november-2016-newsletter/,November 2016 Newsletter,['Rob Bensinger'],2016-11-20T22:09:38Z,blogs,, 112889,https://intelligence.org/2014/05/29/aaron-tomb/,Aaron Tomb on crowd-sourced formal verification,['Luke Muehlhauser'],2014-05-29T11:00:30Z,blogs,, 112905,https://vkrakovna.wordpress.com/2015/02/16/flis-recent-milestones-in-ai-safety/,Future of Life Institute’s recent milestones in AI safety,['Victoria Krakovna'],2015-02-16T19:32:20Z,blogs,, 112928,https://aiimpacts.org/historic-trends-in-transatlantic-passenger-travel/,Historic trends in transatlantic passenger travel,['Katja Grace'],2019-12-05T00:07:19Z,blogs,, 112946,https://intelligence.org/2021/11/15/ngo-and-yudkowsky-on-alignment-difficulty/,Ngo and Yudkowsky on alignment difficulty,['Rob Bensinger'],2021-11-16T02:00:22Z,blogs,, 112969,https://aiimpacts.org/time-for-ai-to-cross-the-human-performance-range-in-chess/,Time for AI to cross the human performance range in chess,['Katja Grace'],2020-10-15T23:36:00Z,blogs,, 112985,https://importai.substack.com/p/import-ai-332-mini-ai-safety-through,Import AI 332: Mini-AI; safety through evals; Facebook releases a RLHF dataset,['Jack Clark'],2023-06-12T12:20:50Z,blogs,, 113006,https://intelligence.org/2016/06/01/new-paper-safely-interruptible-agents/,New paper: “Safely interruptible agents”,['Rob Bensinger'],2016-06-02T06:58:38Z,blogs,, 113023,https://intelligence.org/2016/03/28/announcing-a-new-colloquium-series-and-fellows-program/,Announcing a new colloquium series and fellows program,['Rob Bensinger'],2016-03-29T01:43:42Z,blogs,, 113035,https://intelligence.org/2014/09/14/kris-thorisson/,Kristinn Thórisson on constructivist AI,['Luke Muehlhauser'],2014-09-15T03:03:09Z,blogs,, 113056,https://intelligence.org/2013/12/18/new-paper-why-we-need-friendly-ai/,New Paper: “Why We Need Friendly AI”,['Luke Muehlhauser'],2013-12-18T19:44:10Z,blogs,, 113080,https://www.deepmind.com/blog/enabling-high-accuracy-protein-structure-prediction-at-the-proteome-scale,Enabling high-accuracy protein structure prediction at the proteome scale,"['Kathryn Tunyasuvunakool', 'Jonas Adler', 'Zachary Wu', 'Tim Green', 'Michal Zielinski', 'Augustin Žídek', 'Alex Bridgland', 
'Andrew Cowie', 'Clemens Meyer', 'Agata Laydon', 'Sameer Velanka *', 'Gerard J Kleywegt *', 'Alex Bateman *', 'Richard Evans', 'Alexander Pritzel', 'Michael Figurnov', 'Olaf Ronneberger', 'Russ Bates', 'Simon A. A. Kohl', 'Anna Potapenko', 'Andrew J Ballard', 'Bernardino Romera-Paredes', 'Stanislav Nikolov', 'Rishub Jain', 'Ellen Clancy', 'David Reiman', 'Stig Petersen', 'Andrew Senior', 'Koray Kavukcuoglu', 'Ewan Birney *', 'Pushmeet Kohli', 'John Jumper', 'Demis Hassabis']",2021-07-22T00:00:00Z,blogs,, 113103,https://aiimpacts.org/ernie-davis-on-the-landscape-of-ai-risks/,Ernie Davis on the landscape of AI risks,['Rob Long'],2019-08-24T00:23:38Z,blogs,, 113112,https://intelligence.org/2013/08/25/holden-karnofsky-interview/,Holden Karnofsky on Transparent Research Analyses,['Luke Muehlhauser'],2013-08-25T15:12:43Z,blogs,, 113134,https://carado.moe/universal-complete.html,universal complete,['Tamsin Leake'],2021-07-15T23:00:00Z,blogs,, 113147,https://carado.moe/metatracking.html,meta-tracking,['Tamsin Leake'],2021-10-10T23:00:00Z,blogs,, 113158,https://intelligence.org/2014/05/17/new-paper-program-equilibrium-prisoners-dilemma-via-lobs-theorem/,New Paper: “Program Equilibrium in the Prisoner’s Dilemma via Löb’s Theorem”,['Luke Muehlhauser'],2014-05-18T05:00:32Z,blogs,, 113168,https://intelligence.org/2013/07/31/ai-risk-and-the-security-mindset/,AI Risk and the Security Mindset,['Luke Muehlhauser'],2013-08-01T03:19:14Z,blogs,, 113188,https://intelligence.org/2019/12/05/december-2019-newsletter/,December 2019 Newsletter,['Rob Bensinger'],2019-12-05T20:00:25Z,blogs,, 113209,https://vkrakovna.wordpress.com/2019/01/01/2018-19-new-year-review/,2018-19 New Year review,['Victoria Krakovna'],2019-01-01T21:43:12Z,blogs,, 113225,https://intelligence.org/2014/03/02/bob-constable/,Robert Constable on correct-by-construction programming,['Luke Muehlhauser'],2014-03-03T00:41:34Z,blogs,, 113250,https://aiimpacts.org/human-level-ai/,Human-Level AI,['Katja Grace'],2014-01-23T23:36:27Z,blogs,, 113273,https://vkrakovna.wordpress.com/2018/01/27/is-there-a-tradeoff-between-safety-concerns-about-current-and-future-ai-systems/,Is there a tradeoff between immediate and longer-term AI safety efforts?,['Victoria Krakovna'],2018-01-27T18:08:19Z,blogs,, 113293,https://intelligence.org/2021/05/13/two-major-donations/,"Our all-time largest donation, and major crypto support from Vitalik Buterin",['Colm Ó Riain'],2021-05-14T01:00:03Z,blogs,, 113302,https://intelligence.org/2014/12/05/new-paper-concept-learning-safe-autonomous-ai/,New paper: “Concept learning for safe autonomous AI”,['Luke Muehlhauser'],2014-12-05T08:56:11Z,blogs,, 113323,https://www.deepmind.com/blog/restoring-ancient-text-using-deep-learning-a-case-study-on-greek-epigraphy,Restoring ancient text using deep learning: a case study on Greek epigraphy,"['Yannis Assael', 'Thea Sommerschield*', 'Jonathan Prag*']",2019-10-15T00:00:00Z,blogs,, 113338,https://intelligence.org/2019/07/19/july-2019-newsletter/,July 2019 Newsletter,['Rob Bensinger'],2019-07-20T04:58:05Z,blogs,, 113363,https://carado.moe/clarifying-formal-alignment-implementation.html,clarifying formal alignment implementation,['Tamsin Leake'],2023-02-25T00:00:00Z,blogs,, 113378,https://aiimpacts.org/paul-christiano-on/,Paul Christiano on the safety of future AI systems,['Asya Bergal'],2019-09-11T21:40:03Z,blogs,, 113402,https://intelligence.org/2019/02/11/our-2018-fundraiser-review/,Our 2018 Fundraiser Review,['Colm Ó Riain'],2019-02-11T21:20:31Z,blogs,, 
113424,https://intelligence.org/2021/11/11/discussion-with-eliezer-yudkowsky-on-agi-interventions/,Discussion with Eliezer Yudkowsky on AGI interventions,['Rob Bensinger'],2021-11-11T20:40:44Z,blogs,, 113450,https://www.deepmind.com/blog/spurious-normativity-enhances-learning-of-compliance-and-enforcement-behavior-in-artificial-agents,Spurious normativity enhances learning of compliance and enforcement behavior in artificial agents,"['Raphael Koster', 'Dylan Hadfield-Menell *', 'Richard Everett', 'Laura Weidinger', 'G Hadfield *', 'Joel Leibo']",2022-01-18T00:00:00Z,blogs,, 113467,https://intelligence.org/2016/12/28/ai-alignment-why-its-hard-and-where-to-start/,"AI Alignment: Why It’s Hard, and Where to Start",['Eliezer Yudkowsky'],2016-12-28T21:51:47Z,blogs,, 113503,https://aiimpacts.org/stuart-russells-description-of-ai-risk/,Stuart Russell’s description of AI risk,['Katja Grace'],2017-09-11T20:54:40Z,blogs,, 113521,https://carado.moe/communicating-successful-alignment.html,communicating with successful alignment timelines,['Tamsin Leake'],2023-01-29T00:00:00Z,blogs,, 113539,https://carado.moe/life-refocus.html,life refocus,['Tamsin Leake'],2022-05-12T23:00:00Z,blogs,, 113556,https://aiimpacts.org/trends-in-dram-price-per-gigabyte/,Trends in DRAM price per gigabyte,['Asya Bergal'],2020-04-14T20:03:46Z,blogs,, 113569,https://intelligence.org/2016/07/23/ostp/,Submission to the OSTP on AI outcomes,['Nate Soares'],2016-07-24T03:36:47Z,blogs,, 113611,https://carado.moe/formal-alignment.html,"formal alignment: what it is, and some proposals",['Tamsin Leake'],2023-01-29T00:00:00Z,blogs,, 113637,https://blog.eleuther.ai/eu-aia/,EleutherAI's Thoughts on the EU AI Act,"['Aviya Skowron', 'Stella Biderman']",2023-07-26T00:00:00Z,blogs,, 113662,https://intelligence.org/2017/10/22/fdt/,New paper: “Functional Decision Theory”,['Matthew Graves'],2017-10-22T17:05:35Z,blogs,, 113679,https://intelligence.org/2013/10/21/hadi-esmaeilzadeh-on-dark-silicon/,Hadi Esmaeilzadeh on Dark Silicon,['Luke Muehlhauser'],2013-10-22T01:42:27Z,blogs,, 113698,https://carado.moe/my-life-so-far.html,end of 2022: my life so far,['Tamsin Leake'],2022-12-31T00:00:00Z,blogs,, 113708,https://carado.moe/continue-working-hard-alignment.html,continue working on hard alignment! 
don't give up!,['Tamsin Leake'],2023-03-23T00:00:00Z,blogs,, 113718,https://intelligence.org/2013/07/12/james-miller-interview/,James Miller on Unusual Incentives Facing AGI Companies,['Luke Muehlhauser'],2013-07-13T01:39:45Z,blogs,, 113731,https://aiimpacts.org/cortes-pizarro-and-afonso-as-precedents-for-ai-takeover/,"Cortés, Pizarro, and Afonso as precedents for takeover",['Daniel Kokotajlo'],2020-03-01T03:43:04Z,blogs,, 113747,https://carado.moe/weird-chance.html,isn't it weird that we have a chance at all?,['Tamsin Leake'],2022-08-02T23:00:00Z,blogs,, 113760,https://aiimpacts.org/conversation-with-robin-hanson/,Conversation with Robin Hanson,['Asya Bergal'],2019-11-13T21:40:05Z,blogs,, 113788,https://www.cold-takes.com/first-post/,First Post,['Holden Karnofsky'],2021-07-13T00:00:00Z,blogs,, 113797,https://intelligence.org/2021/12/03/biology-inspired-agi-timelines-the-trick-that-never-works/,Biology-Inspired AGI Timelines: The Trick That Never Works,['Eliezer Yudkowsky'],2021-12-03T08:00:08Z,blogs,, 113814,https://carado.moe/homomorphically-encrypted-computations.html,ethics and anthropics of homomorphically encrypted computations,['Tamsin Leake'],2022-09-08T23:00:00Z,blogs,, 113828,https://intelligence.org/2022/03/02/shah-and-yudkowsky-on-alignment-failures/,Shah and Yudkowsky on alignment failures,['Rob Bensinger'],2022-03-02T15:30:14Z,blogs,, 113855,https://aiimpacts.org/chance-date-bias/,Chance date bias,['Katja Grace'],2017-12-12T07:59:02Z,blogs,, 113864,https://intelligence.org/2021/08/31/august-2021-newsletter/,August 2021 Newsletter,['Rob Bensinger'],2021-08-31T22:22:54Z,blogs,, 113887,https://intelligence.org/2018/05/19/challenges-to-christianos-capability-amplification-proposal/,Challenges to Christiano’s capability amplification proposal,['Eliezer Yudkowsky'],2018-05-19T18:24:28Z,blogs,, 113918,https://carado.moe/css-for-pixeley-images.html,CSS for pixeley images,['Tamsin Leake'],2020-12-24T00:00:00Z,blogs,, 113927,https://carado.moe/value-yourself-surviving.html,what does it mean to value our survival?,['Tamsin Leake'],2022-08-11T23:00:00Z,blogs,, 113941,https://www.cold-takes.com/was-life-better-in-hunter-gatherer-times/,Was life better in hunter-gatherer times?,['Holden Karnofsky'],2021-10-26T00:00:00Z,blogs,, 113954,https://carado.moe/formal-alignment-theory-change.html,Orthogonal's,['Tamsin Leake'],2023-05-05T23:00:00Z,blogs,, 113976,https://www.deepmind.com/blog/creating-interactive-agents-with-imitation-learning,Creating Interactive Agents with Imitation Learning,"['Josh Abramson', 'Arun Ahuja', 'Arthur Brussee', 'Federico Carnevale', 'Mary Cassin', 'Felix Fischer', 'Petko Georgiev', 'Alex Goldin', 'Tim Harley', 'Felix Hill', 'Peter C Humphreys', 'Alden Hung', 'Jessica Landon', 'Timothy Lillicrap', 'Hamza Merzic', 'Alistair Muldal', 'Adam Santoro', 'Guy Scully', 'Tamara von Glehn', 'Gregory Wayne', 'Nathaniel Wong', 'Chen Yan', 'Rui Zhu', 'Mary Cassin', 'Hamza Merzic']",2021-12-08T00:00:00Z,blogs,, 114005,https://intelligence.org/2015/07/24/four-background-claims/,Four Background Claims,['Nate Soares'],2015-07-25T04:30:08Z,blogs,, 114022,https://aiimpacts.org/ai-conference-attendance/,AI conference attendance,['Katja Grace'],2019-03-07T00:44:04Z,blogs,, 114031,https://www.cold-takes.com/to-match-the-greats-dont-follow-in-their-footsteps/,"To Match the Greats, Don’t Follow In Their Footsteps",['Holden Karnofsky'],2022-02-11T00:00:00Z,blogs,, 114043,https://carado.moe/endiannesses.html,endiannesses,['Tamsin Leake'],2021-11-20T00:00:00Z,blogs,, 
114054,https://intelligence.org/2015/05/31/introductions/,Introductions,['Nate Soares'],2015-06-01T01:00:40Z,blogs,, 114068,https://intelligence.org/2012/09/21/september-2012-newsletter/,September 2012 Newsletter,['Jake'],2012-09-21T18:16:48Z,blogs,, 114089,https://intelligence.org/2014/04/25/roland-siegwart/,Roland Siegwart on autonomous mobile robots,['Luke Muehlhauser'],2014-04-25T11:00:46Z,blogs,, 114110,https://carado.moe/normies-are-in-hell-too.html,Normies Are in Hell Too,['Tamsin Leake'],2021-03-04T00:00:00Z,blogs,, 114121,https://intelligence.org/2015/11/29/new-paper-quantilizers/,New paper: “Quantilizers”,['Rob Bensinger'],2015-11-30T04:10:00Z,blogs,, 114134,https://intelligence.org/2012/08/06/july-2012-newsletter/,July 2012 Newsletter,['Louie Helm'],2012-08-06T12:34:32Z,blogs,, 114161,https://intelligence.org/2015/01/11/improved-ai-impacts-website/,An improved “AI Impacts” website,['Luke Muehlhauser'],2015-01-11T17:10:07Z,blogs,, 114179,https://aiimpacts.org/evidence-on-good-forecasting-practices-from-the-good-judgment-project-an-accompanying-blog-post/,Evidence on good forecasting practices from the Good Judgment Project: an accompanying blog post,['Daniel Kokotajlo'],2019-02-07T22:25:29Z,blogs,, 114209,https://intelligence.org/2017/12/06/chollet/,A reply to Francois Chollet on intelligence explosion,['Eliezer Yudkowsky'],2017-12-07T03:21:58Z,blogs,, 114222,https://generative.ink/posts/quantifying-curation/,Quantifying curation,['janus'],2021-07-07T00:00:00Z,blogs,, 114241,https://intelligence.org/2018/02/25/february-2018-newsletter/,February 2018 Newsletter,['Rob Bensinger'],2018-02-26T02:07:18Z,blogs,, 114262,https://intelligence.org/2014/03/02/anders-sandberg/,Anders Sandberg on Space Colonization,['Luke Muehlhauser'],2014-03-02T22:00:03Z,blogs,, 114291,https://intelligence.org/2014/12/16/new-report-tiling-agents-causal-graphs/,New report: “Tiling agents in causal graphs”,['Luke Muehlhauser'],2014-12-16T15:17:57Z,blogs,, 114302,https://aiimpacts.org/ai-timeline-surveys/,AI Timeline Surveys,['Katja Grace'],2015-01-10T09:37:48Z,blogs,, 114323,https://intelligence.org/2016/02/29/new-paper-defining-human-values-for-value-learners/,New paper: “Defining human values for value learners”,['Rob Bensinger'],2016-03-01T07:09:20Z,blogs,, 114336,https://aiimpacts.org/historic-trends-in-telecommunications-performance/,Historic trends in telecommunications performance,['Katja Grace'],2020-02-08T01:56:41Z,blogs,, 114354,https://carado.moe/blob-location.html,QACI blob location: no causality & answer signature,['Tamsin Leake'],2023-03-08T00:00:00Z,blogs,, 114365,https://intelligence.org/2014/12/16/new-report-computable-probability-distributions-converge/,New report: “Computable probability distributions which converge…”,['Luke Muehlhauser'],2014-12-17T00:33:37Z,blogs,, 114375,https://carado.moe/finite-patients.html,are there finitely many moral patients?,['Tamsin Leake'],2022-03-21T00:00:00Z,blogs,, 114387,https://intelligence.org/2014/05/11/benjamin-pierce/,Benjamin Pierce on clean-slate security architectures,['Luke Muehlhauser'],2014-05-11T18:00:06Z,blogs,, 114409,https://intelligence.org/2013/08/04/benja-interview/,Benja Fallenstein on the Löbian Obstacle to Self-Modifying Systems,['Luke Muehlhauser'],2013-08-04T19:53:10Z,blogs,, 114431,https://carado.moe/concentric-rings-illiberalism.html,concentric rings of illiberalism,['Tamsin Leake'],2022-05-28T23:00:00Z,blogs,, 114440,https://aiimpacts.org/information-storage-in-the-brain/,Information storage in the brain,['Katja 
Grace'],2015-07-23T16:26:18Z,blogs,, 114450,https://aiimpacts.org/how-bad-a-future-do-ml-researchers-expect/,How bad a future do ML researchers expect?,['Katja Grace'],2023-03-09T04:49:24Z,blogs,, 114459,https://aiimpacts.org/conversation-with-tom-griffiths/,Conversation with Tom Griffiths,['Katja Grace'],2016-09-08T11:18:03Z,blogs,, 114481,https://blog.eleuther.ai/transformer-math/,Transformer Math 101,"['Quentin Anthony', 'Stella Biderman', 'Hailey Schoelkopf']",2023-04-18T00:00:00Z,blogs,, 114521,https://intelligence.org/2014/03/08/john-ridgway-on-safety-critical-systems/,John Ridgway on safety-critical systems,['Luke Muehlhauser'],2014-03-08T10:15:09Z,blogs,, 114547,https://importai.substack.com/p/import-ai-321-open-source-gpt3-giving,Import AI 321: Open source GPT3; giving away democracy to AGI companies; GPT-4 is a political artifact,['Jack Clark'],2023-03-20T12:02:10Z,blogs,, 114573,https://intelligence.org/2022/03/01/ngo-and-yudkowsky-on-scientific-reasoning-and-pivotal-acts/,Ngo and Yudkowsky on scientific reasoning and pivotal acts,['Rob Bensinger'],2022-03-02T03:30:40Z,blogs,, 114590,https://intelligence.org/2019/06/07/new-paper-learned-optimization/,New paper: “Risks from learned optimization”,['Rob Bensinger'],2019-06-07T23:53:56Z,blogs,, 114610,https://aiimpacts.org/metabolic-estimates-of-rate-of-cortical-firing/,Metabolic Estimates of Rate of Cortical Firing,['Katja Grace'],2015-04-10T17:47:11Z,blogs,, 114619,https://carado.moe/game.html,I'm creating a world simulation video game,['Tamsin Leake'],2021-06-04T23:00:00Z,blogs,, 114634,https://carado.moe/lets-not-generalize-politics.html,Let's not generalize over people,['Tamsin Leake'],2021-04-23T23:00:00Z,blogs,, 114646,https://intelligence.org/2016/04/11/april-2016-newsletter/,April 2016 Newsletter,['Rob Bensinger'],2016-04-11T15:44:33Z,blogs,, 114677,https://aiimpacts.org/fhi-ai-timelines-survey/,FHI Winter Intelligence Survey,['Katja Grace'],2014-12-29T18:47:28Z,blogs,, 114686,https://intelligence.org/2014/01/10/kathleen-fisher-on-high-assurance-systems/,Kathleen Fisher on High-Assurance Systems,['Luke Muehlhauser'],2014-01-10T20:17:53Z,blogs,, 114707,https://www.gwern.net/Tanks.page,The Neural Net Tank Urban Legend,['Gwern Branwen'],2023-07-04T00:00:00Z,blogs,, 114727,https://intelligence.org/2013/07/15/roman-interview/,Roman Yampolskiy on AI Safety Engineering,['Luke Muehlhauser'],2013-07-16T05:33:39Z,blogs,, 114748,https://aiimpacts.org/is-the-range-of-human-intelligence-small/,The range of human intelligence,['Katja Grace'],2015-01-18T20:58:12Z,blogs,, 114763,https://aiimpacts.org/concrete-ai-tasks-for-forecasting/,Concrete AI tasks for forecasting,['Katja Grace'],2016-12-15T05:40:58Z,blogs,, 114780,https://www.deepmind.com/blog/how-undesired-goals-can-arise-with-correct-rewards,How undesired goals can arise with correct rewards,"['Rohin Shah', 'Victoria Krakovna', 'Vikrant Varma', 'Zachary Kenton']",2022-10-07T00:00:00Z,blogs,, 114800,https://www.cold-takes.com/technological-unemployment-ai-vs-most-important-century-ai-how-far-apart/,“Technological unemployment” AI vs. 
“most important century” AI: how far apart?,['Holden Karnofsky'],2021-10-07T00:00:00Z,blogs,, 114816,https://openai.com/research/point-e,Point-E: A system for generating 3D point clouds from complex prompts,['OpenAI Research'],2022-12-16T00:00:00Z,blogs,, 114839,https://aiimpacts.org/automated-intelligence-is-not-ai/,Automated intelligence is not AI,['Katja Grace'],2020-11-01T23:38:44Z,blogs,, 114849,https://carado.moe/probability-hardware-failure.html,probability under potential hardware failure,['Tamsin Leake'],2022-08-06T23:00:00Z,blogs,, 114865,https://intelligence.org/2016/09/06/grant-open-philanthropy/,Grant announcement from the Open Philanthropy Project,['Nate Soares'],2016-09-06T20:04:27Z,blogs,, 114881,https://carado.moe/delegated-embedded-agency-decision-theory.html,"one-shot AI, delegating embedded agency and decision theory, and one-shot QACI",['Tamsin Leake'],2022-12-22T00:00:00Z,blogs,, 114905,https://intelligence.org/2014/05/27/lennart-beringer/,Lennart Beringer on the Verified Software Toolchain,['Luke Muehlhauser'],2014-05-27T11:00:04Z,blogs,, 114924,https://aiimpacts.org/energy-efficiency-of-paramotors/,Energy efficiency of paramotors,['Katja Grace'],2020-11-24T21:11:22Z,blogs,, 114939,https://www.cold-takes.com/utopia-links/,Utopia links,['Holden Karnofsky'],2021-12-23T00:00:00Z,blogs,, 114961,https://intelligence.org/2015/12/04/new-paper-proof-producing-reflection-for-hol/,New paper: “Proof-producing reflection for HOL”,['Rob Bensinger'],2015-12-04T23:31:15Z,blogs,, 114975,https://aiimpacts.org/2016-esopai-questions-printout/,2016 ESPAI questions printout,['Katja Grace'],2017-06-27T03:51:44Z,blogs,, 114985,https://intelligence.org/2015/02/01/february-2015-newsletter/,February 2015 Newsletter,['Jake'],2015-02-02T04:00:19Z,blogs,, 115004,https://generative.ink/posts/this-museum-does-not-exist-gpt-3-x-clip/,This Museum Does Not Exist: GPT-3 x CLIP,['janus'],2021-02-08T00:00:00Z,blogs,, 115019,https://www.cold-takes.com/did-life-get-better-during-the-pre-industrial-era-ehhhh/,Did life get better during the pre-industrial era? 
(Ehhhh),['Holden Karnofsky'],2021-11-30T00:00:00Z,blogs,, 115034,https://intelligence.org/2014/04/30/suresh-jagannathan-on-higher-order-program-verification/,Suresh Jagannathan on higher-order program verification,['Luke Muehlhauser'],2014-04-30T11:00:51Z,blogs,, 115061,https://aisafety.camp/2018/06/06/the-first-ai-safety-camp-onwards/,The first AI Safety Camp & onwards,['Johannes'],2018-06-06T15:09:15Z,blogs,, 115075,https://aiimpacts.org/historic-trends-in-ship-size/,Historic trends in ship size,['Katja Grace'],2019-12-23T07:57:11Z,blogs,, 115090,https://carado.moe/portable-programs.html,to wasm and back again: the essence of portable programs,['Tamsin Leake'],2021-10-21T23:00:00Z,blogs,, 115113,https://generative.ink/posts/gpt-3-on-coherent-extrapolated-volition/,GPT-3 on Coherent Extrapolated Volition,['janus'],2021-04-02T00:00:00Z,blogs,, 115129,https://intelligence.org/2013/12/20/2013-in-review-operations/,2013 in Review: Operations,['Luke Muehlhauser'],2013-12-20T19:14:51Z,blogs,, 115150,https://intelligence.org/2013/01/30/yudkowsky-on-logical-uncertainty/,Yudkowsky on Logical Uncertainty,['staff'],2013-01-30T15:22:34Z,blogs,, 115167,https://intelligence.org/2016/07/29/2015-in-review/,2015 in review,['Malo Bourgon'],2016-07-30T06:17:38Z,blogs,, 115208,https://aiimpacts.org/returns-to-scale-in-research/,Returns to scale in research,['Michael Wulfsohn'],2016-07-06T16:13:56Z,blogs,, 115219,https://carado.moe/∀V.html,∀V: A Utopia For Ever,['Tamsin Leake'],2021-08-30T23:00:00Z,blogs,, 115249,https://aiimpacts.org/kruel-ai-survey/,Kruel AI Interviews,['Katja Grace'],2014-12-29T18:47:11Z,blogs,, 115271,https://aiimpacts.org/hardware-overhang/,Hardware overhang,['Katja Grace'],2018-07-16T16:37:30Z,blogs,, 115280,https://carado.moe/two-principles-for-topia.html,Two Principles For Topia,['Tamsin Leake'],2020-11-15T00:00:00Z,blogs,, 115301,https://www.deepmind.com/blog/acme-a-new-framework-for-distributed-reinforcement-learning,Acme: A new framework for distributed reinforcement learning,"['Matt Hoffman', 'Bobak Shahriari', 'John Aslanides', 'Gabriel Barth-Maron', 'Feryal Behbahani', 'Tamara Norman', 'Abbas Abdolmaleki', 'Albin Cassirer', 'Fan Yang', 'Kate Baumli', 'Sarah Henderson', 'Alex Novikov', 'Sergio Gómez Colmenarejo', 'Serkan Cabi', 'Caglar Gülçehre', 'Tom Le Paine', 'Andrew Cowie', 'Ziyu Wang', 'Bilal Piot', 'Nando de Freitas']",2020-06-01T00:00:00Z,blogs,, 115322,https://vkrakovna.wordpress.com/2022/06/02/paradigms-of-ai-alignment-components-and-enablers/,Paradigms of AI alignment: components and enablers,['Victoria Krakovna'],2022-06-02T01:36:18Z,blogs,, 115347,https://aiimpacts.org/historical-growth-trends/,Historical economic growth trends,['Katja Grace'],2019-03-06T08:06:27Z,blogs,, 115360,https://intelligence.org/2021/09/29/september-2021-newsletter/,September 2021 Newsletter,['Rob Bensinger'],2021-09-29T20:20:04Z,blogs,, 115381,https://carado.moe/systematic-unlibertarianism.html,the systematic absence of libertarian thought,['Tamsin Leake'],2021-06-15T23:00:00Z,blogs,, 115390,https://aiimpacts.org/maccready-gossamer-albatross/,Energy efficiency of MacCready Gossamer Albatross,['Katja Grace'],2020-11-10T02:15:26Z,blogs,, 115399,https://intelligence.org/2016/06/12/june-2016-newsletter/,June 2016 Newsletter,['Rob Bensinger'],2016-06-12T20:05:35Z,blogs,, 115416,https://carado.moe/state-research-agenda.html,"state of my alignment research, and what needs work",['Tamsin Leake'],2023-03-03T00:00:00Z,blogs,, 115436,https://intelligence.org/2013/08/13/august-newsletter/,August 
Newsletter: New Research and Expert Interviews,['Jake'],2013-08-14T00:37:04Z,blogs,, 115463,https://intelligence.org/2014/03/23/michael-carbin/,Michael Carbin on integrity properties in approximate computing,['Luke Muehlhauser'],2014-03-23T08:00:07Z,blogs,, 115483,https://deepmindsafetyresearch.medium.com/model-free-risk-sensitive-reinforcement-learning-5a12ba5ce662,Model-Free Risk-Sensitive Reinforcement Learning,['DeepMind Safety Research'],2021-11-11T00:00:00Z,blogs,, 115498,https://intelligence.org/2014/07/25/bostrom/,Nick Bostrom to speak about Superintelligence at UC Berkeley,['Luke Muehlhauser'],2014-07-25T19:48:46Z,blogs,, 115508,https://aiimpacts.org/concrete-ai-tasks-bleg/,Concrete AI tasks bleg,['Katja Grace'],2016-03-30T18:09:28Z,blogs,, 115517,https://aiimpacts.org/what-do-coherence-arguments-imply-about-the-behavior-of-advanced-ai/,What do coherence arguments imply about the behavior of advanced AI?,['Katja Grace'],2021-04-08T21:28:28Z,blogs,, 115532,https://www.cold-takes.com/tv-shows-i-wish-i-could-watch-intergalactic-immigration-wars/,TV shows I wish I could watch: Intergalactic Immigration Wars,['Holden Karnofsky'],2021-12-10T00:00:00Z,blogs,, 115542,https://intelligence.org/2015/07/05/july-2015-newsletter/,July 2015 Newsletter,['Rob Bensinger'],2015-07-05T17:36:37Z,blogs,, 115569,https://intelligence.org/2013/10/18/ben-goertzel/,Ben Goertzel on AGI as a Field,['Luke Muehlhauser'],2013-10-18T07:17:02Z,blogs,, 115591,https://carado.moe/ai-alignment-wolfram-physics.html,AI alignment and wolfram physics,['Tamsin Leake'],2021-07-16T23:00:00Z,blogs,, 115613,https://www.cold-takes.com/past-and-future-of-economic-growth-paper/,One Cold Link: “The Past and Future of Economic Growth: A Semi-Endogenous Perspective”,['Holden Karnofsky'],2021-09-09T00:00:00Z,blogs,, 115637,https://vkrakovna.wordpress.com/2016/08/25/highlights-from-the-deep-learning-summer-school/,Highlights from the Deep Learning Summer School,['Victoria Krakovna'],2016-08-26T02:29:56Z,blogs,, 115668,https://importai.substack.com/p/import-ai-331-16x-smaller-language,Import AI 331: 16X smaller language models; could AMD compete with NVIDIA?; and BERT for the dark web,['Jack Clark'],2023-05-29T13:18:57Z,blogs,, 115683,https://carado.moe/canonical-byte-varints.html,A canonical and efficient byte-encoding for ints,['Tamsin Leake'],2020-12-29T00:00:00Z,blogs,, 115699,https://intelligence.org/2014/03/07/david-cook/,David Cook on the VV&A process,['Luke Muehlhauser'],2014-03-07T23:11:30Z,blogs,, 115727,https://intelligence.org/2014/09/04/john-fox/,John Fox on AI safety,['Luke Muehlhauser'],2014-09-04T19:00:13Z,blogs,, 115755,https://www.yudkowsky.net/other/yehuda,"Yehuda Yudkowsky, 1985-2004",['Eliezer S. 
Yudkowsky'],2020-09-04T03:57:13Z,blogs,, 115777,https://carado.moe/foundation-book.html,the foundation book,['Tamsin Leake'],2022-08-12T23:00:00Z,blogs,, 115793,https://carado.moe/a-cognitively-hazardous-idea.html,a cognitively hazardous idea,['Tamsin Leake'],2022-02-02T00:00:00Z,blogs,, 115810,https://intelligence.org/2013/10/25/bas-steunebrink-on-sleight/,Bas Steunebrink on Self-Reflective Programming,['Luke Muehlhauser'],2013-10-25T13:00:40Z,blogs,, 115837,https://jsteinhardt.wordpress.com/2017/01/10/latent-variables-and-model-mis-specification/,Latent Variables and Model Mis-specification,['jsteinhardt'],2017-01-10T02:45:46Z,blogs,, 115849,https://intelligence.org/2013/07/17/beckstead-interview/,Nick Beckstead on the Importance of the Far Future,['Luke Muehlhauser'],2013-07-18T06:13:22Z,blogs,, 115876,https://aiimpacts.org/what-do-ml-researchers-think-you-are-wrong-about/,What do ML researchers think you are wrong about?,['Katja Grace'],2017-09-26T05:04:13Z,blogs,, 115891,https://deepmindsafetyresearch.medium.com/avoiding-unsafe-states-in-3d-environments-using-human-feedback-5869ed9fb94c,Avoiding Unsafe States in 3D Environments using Human Feedback,['DeepMind Safety Research'],2022-01-21T00:00:00Z,blogs,, 115905,https://carado.moe/cyoas-futurism.html,CYOAs and futurism,['Tamsin Leake'],2022-11-20T00:00:00Z,blogs,, 115917,https://aiimpacts.org/some-survey-results/,Some survey results!,['Katja Grace'],2017-06-08T21:57:51Z,blogs,, 115939,https://openai.com/research/vpt,Learning to play Minecraft with Video PreTraining,"['This was a large effort by a dedicated team. Each author made huge contributions on many fronts over long time periods. All members were full time on the project for over six months. BB', 'IA', 'PZ', 'and JC were on the original VPT project team', 'and thus were involved for even longer']",2022-06-23T00:00:00Z,blogs,, 115961,https://carado.moe/when-in-doubt-kill-everyone.html,"when in doubt, kill everyone",['Tamsin Leake'],2021-07-17T23:00:00Z,blogs,, 115972,https://carado.moe/safer-quantum-suicide-experiment.html,a safer experiment than quantum suicide,['Tamsin Leake'],2022-11-13T00:00:00Z,blogs,, 115981,https://deepmindsafetyresearch.medium.com/understanding-meta-trained-algorithms-through-a-bayesian-lens-5042a1acc1c2,Understanding meta-trained algorithms through a Bayesian lens,['DeepMind Safety Research'],2020-12-03T00:00:00Z,blogs,, 116003,https://carado.moe/against-ai-alignment.html,against AI alignment ?,['Tamsin Leake'],2021-11-08T00:00:00Z,blogs,, 116021,https://intelligence.org/2019/05/31/2018-in-review/,2018 in review,['Malo Bourgon'],2019-06-01T05:00:59Z,blogs,, 116044,https://aiimpacts.org/2015-flops-prices/,2015 FLOPS prices,['Katja Grace'],2018-01-19T03:43:14Z,blogs,, 116062,https://carado.moe/quantum-immortality-local-deaths.html,quantum immortality and local deaths under X-risk,['Tamsin Leake'],2022-08-06T23:00:00Z,blogs,, 116074,https://carado.moe/right-to-death-therefore.html,"right to death, therefore",['Tamsin Leake'],2021-08-22T23:00:00Z,blogs,, 116090,https://intelligence.org/2015/01/01/january-2015-newsletter/,January 2015 Newsletter,['Jake'],2015-01-02T04:00:24Z,blogs,, 116115,https://jsteinhardt.wordpress.com/2013/12/30/linfty-strong-convexity/,Convex Conditions for Strong Convexity,['jsteinhardt'],2013-12-30T18:14:18Z,blogs,, 116129,https://intelligence.org/2023/03/14/yudkowsky-on-agi-risk-on-the-bankless-podcast/,Yudkowsky on AGI risk on the Bankless podcast,['Rob Bensinger'],2023-03-14T21:54:49Z,blogs,, 
116165,https://blog.eleuther.ai/year-one/,"What A Long, Strange Trip It's Been: EleutherAI One Year Retrospective","['Connor Leahy', 'Eric Hallahan', 'Leo Gao', 'Stella Biderman']",2021-07-07T00:00:00Z,blogs,, 116192,https://carado.moe/what-is-value.html,what is value?,['Tamsin Leake'],2021-07-24T23:00:00Z,blogs,, 116209,https://carado.moe/strongly-generally-coherent-agents.html,on strong/general coherent agents,['Tamsin Leake'],2023-03-01T00:00:00Z,blogs,, 116221,https://aiimpacts.org/evidence-against-current-methods-leading-to-human-level-artificial-intelligence/,Evidence against current methods leading to human level artificial intelligence,['Asya Bergal'],2019-08-13T00:55:13Z,blogs,, 116257,https://carado.moe/hackable-multiverse.html,hackable multiverse,['Tamsin Leake'],2022-02-03T00:00:00Z,blogs,, 116268,https://aiimpacts.org/multipolar-research-projects/,List of multipolar research projects,['Katja Grace'],2015-02-11T18:54:51Z,blogs,, 116283,https://openai.com/research/economic-impacts,A research agenda for assessing the economic impacts of code generation models,"['Gillian Hadfield', 'Tyna Eloundou', 'Emily Eisner']",2022-03-03T00:00:00Z,blogs,, 116315,https://www.cold-takes.com/gell-mann-earworms/,Gell-Mann Earworms,['Holden Karnofsky'],2021-10-01T00:00:00Z,blogs,, 116327,https://intelligence.org/2015/04/01/april-2015-newsletter/,April 2015 newsletter,['Jesse Galef'],2015-04-01T23:00:39Z,blogs,, 116342,https://www.cold-takes.com/how-artistic-ideas-could-get-harder-to-find/,How artistic ideas could get harder to find,['Holden Karnofsky'],2022-01-07T00:00:00Z,blogs,, 116357,https://www.cold-takes.com/the-wicked-problem-experience/,The Wicked Problem Experience,['Holden Karnofsky'],2022-03-02T00:00:00Z,blogs,, 116373,https://intelligence.org/2014/04/11/wolf-kohn/,Wolf Kohn on hybrid systems control,['Luke Muehlhauser'],2014-04-11T08:00:30Z,blogs,, 116393,https://aiimpacts.org/effect-of-eli-whitneys-cotton-gin-on-historic-trends-in-cotton-ginning/,Effect of Eli Whitney’s cotton gin on historic trends in cotton ginning,['Katja Grace'],2020-02-07T22:58:55Z,blogs,, 116405,https://intelligence.org/2013/09/06/laurent-orseau-on-agi/,Laurent Orseau on Artificial General Intelligence,['Luke Muehlhauser'],2013-09-07T02:29:53Z,blogs,, 116428,https://openai.com/research/critiques,AI-written critiques help humans notice flaws,['OpenAI Research'],2022-06-13T00:00:00Z,blogs,, 116452,https://vkrakovna.wordpress.com/2020/05/31/possible-takeaways-from-the-coronavirus-pandemic-for-slow-ai-takeoff/,Possible takeaways from the coronavirus pandemic for slow AI takeoff,['Victoria Krakovna'],2020-05-31T17:48:04Z,blogs,, 116471,https://jsteinhardt.wordpress.com/2013/03/13/pairwise-independence-vs-independence/,Pairwise Independence vs. Independence,['jsteinhardt'],2013-03-13T06:53:52Z,blogs,,
116481,https://carado.moe/questions-cosmos-computations.html,questions about the cosmos and rich computations,['Tamsin Leake'],2022-01-07T00:00:00Z,blogs,, 116503,https://vkrakovna.wordpress.com/2014/10/18/importance-motivation-a-double-edged-sword/,Importance motivation: a double-edged sword,['Victoria Krakovna'],2014-10-18T14:28:24Z,blogs,, 116520,https://aiimpacts.org/discontinuous-progress-investigation/,Discontinuous progress investigation,['Katja Grace'],2015-02-02T19:11:36Z,blogs,, 116536,https://carado.moe/saving-server-internet.html,"saving the server-side of the internet: just WASM,",['Tamsin Leake'],2021-11-01T00:00:00Z,blogs,, 116554,https://intelligence.org/2015/07/31/a-new-miri-faq-and-other-announcements/,"A new MIRI FAQ, and other announcements",['Rob Bensinger'],2015-07-31T21:54:23Z,blogs,, 116563,https://aiimpacts.org/2016-espai-narrow-ai-task-forecast-timeline/,2016 ESPAI Narrow AI task forecast timeline,['Katja Grace'],2017-10-04T18:23:07Z,blogs,, 116573,https://intelligence.org/2021/11/29/soares-tallinn-and-yudkowsky-discuss-agi-cognition/,"Soares, Tallinn, and Yudkowsky discuss AGI cognition",['Rob Bensinger'],2021-11-29T19:15:22Z,blogs,, 116605,https://intelligence.org/2020/09/10/september-2020-newsletter/,September 2020 Newsletter,['Rob Bensinger'],2020-09-11T01:25:23Z,blogs,, 116626,https://intelligence.org/2020/11/30/november-2020-newsletter/,November 2020 Newsletter,['Rob Bensinger'],2020-11-30T18:19:44Z,blogs,, 116657,https://carado.moe/above-paperclips.html,no room above paperclips,['Tamsin Leake'],2021-11-20T00:00:00Z,blogs,, 116667,https://intelligence.org/2017/11/26/security-mindset-and-the-logistic-success-curve/,Security Mindset and the Logistic Success Curve,['Eliezer Yudkowsky'],2017-11-26T16:11:36Z,blogs,, 116680,https://intelligence.org/2017/12/12/ml-living-library/,ML Living Library Opening,['Alex Vermeer'],2017-12-12T19:38:20Z,blogs,, 116689,https://aiimpacts.org/relevant-pre-agi-possibilities/,Relevant pre-AGI possibilities,['Daniel Kokotajlo'],2020-06-19T13:40:00Z,blogs,, 116733,https://carado.moe/spoiler-fire-upon-deep.html,just enough spoilers for,['Tamsin Leake'],2022-11-22T00:00:00Z,blogs,, 116748,https://carado.moe/explaining-dot.html,"explaining "".""",['Tamsin Leake'],2023-02-14T00:00:00Z,blogs,, 116757,https://vkrakovna.wordpress.com/2015/12/31/2015-16-new-year-review/,2015-16 New Year review,['Victoria Krakovna'],2015-12-31T05:04:51Z,blogs,, 116771,https://vkrakovna.wordpress.com/2017/04/30/highlights-from-the-iclr-conference-food-ships-and-ml-security/,"Highlights from the ICLR conference: food, ships, and ML security",['Victoria Krakovna'],2017-04-30T20:54:46Z,blogs,, 116799,https://carado.moe/our-deepest-wishes.html,our deepest wishes,['Tamsin Leake'],2022-12-19T00:00:00Z,blogs,, 116815,https://www.deepmind.com/blog/melting-pot-an-evaluation-suite-for-multi-agent-reinforcement-learning,Melting Pot: an evaluation suite for multi-agent reinforcement learning,"['Joel Z. Leibo', 'Edgar Duéñez-Guzmán', 'Alexander Vezhnevets', 'John Agapiou', 'Peter Sunehag', 'Raphael Koster', 'Jayd Matyas', 'Charlie Beattie', 'Igor Mordatch *', 'Thore Graepel']",2021-07-14T00:00:00Z,blogs,,
116825,https://aiimpacts.org/conversation-with-adam-gleave/,Conversation with Adam Gleave,['Asya Bergal'],2019-12-24T03:08:20Z,blogs,, 116854,https://aiimpacts.org/michie-survey/,Michie Survey,['Katja Grace'],2015-01-10T01:58:11Z,blogs,, 116865,https://aiimpacts.org/energy-efficiency-of-north-american-p-51-mustang/,Energy efficiency of North American P-51 Mustang,['Katja Grace'],2020-11-06T01:04:24Z,blogs,, 116874,https://aiimpacts.org/short-prediction-publication-biases/,Publication biases toward shorter predictions,['Katja Grace'],2015-05-29T21:46:50Z,blogs,, 116895,https://aiimpacts.org/effect-of-alexnet-on-historic-trends-in-image-recognition/,Effect of AlexNet on historic trends in image recognition,['Katja Grace'],2020-02-08T02:40:36Z,blogs,, 116905,https://carado.moe/dont-censor-yourself-silly.html,"don't censor yourself, silly !",['Tamsin Leake'],2023-02-16T00:00:00Z,blogs,, 116914,https://aiimpacts.org/discontinuity-from-nuclear-weapons/,Effect of nuclear weapons on historic trends in explosives,['Katja Grace'],2014-12-31T11:45:48Z,blogs,, 116925,https://www.deepmind.com/blog/from-motor-control-to-embodied-intelligence,From motor control to embodied intelligence,"['Siqi Liu', 'Leonard Hasenclever', 'Steven Bohez', 'Guy Lever', 'Zhe Wang', 'S. M. Ali Eslami', 'Nicolas Heess']",2022-08-31T00:00:00Z,blogs,, 116942,https://carado.moe/rationalist-by-necessity.html,Rationalist by necessity,['Tamsin Leake'],2020-12-22T00:00:00Z,blogs,, 116951,https://aiimpacts.org/hanson-ai-expert-survey/,Hanson AI Expert Survey,['Katja Grace'],2014-12-29T18:47:27Z,blogs,, 116961,https://carado.moe/from-above-fine-grain-diversity.html,From-above vs Fine-grain diversity,['Tamsin Leake'],2021-03-04T00:00:00Z,blogs,, 116971,https://intelligence.org/2014/02/18/2013-in-friendly-ai-research/,2013 in Review: Friendly AI Research,['Luke Muehlhauser'],2014-02-18T21:13:05Z,blogs,, 117010,https://intelligence.org/2018/11/04/embedded-delegation/,Robust Delegation,['Abram Demski'],2018-11-04T16:56:37Z,blogs,, 117038,https://www.deepmind.com/blog/international-evaluation-of-an-ai-system-for-breast-cancer-screening,International evaluation of an AI system for breast cancer screening,"['Scott Mayer McKinney *', 'Marcin T. Sieniek *', 'Varun Godbole *', 'Jonathan Godwin', 'Natasha Antropova', 'Hutan Ashrafian *', 'Trevor Back', 'Mary Chesus', 'Greg C Corrado *', 'Ara Darzi *', 'Mozziyar Etemadi *', 'Florencia Garcia-Vicente *', 'Fiona J Gilbert *', 'Mark Halling-Brown *', 'Demis Hassabis', 'Sunny Jansen *', 'Alan Karthikesalingam', 'Christopher J Kelly', 'Dominic King', 'Joseph Ledsam', 'David Melnick *', 'Hormuz Mostofi *', 'Bernardino Romera Paredes', 'Lily Peng *', 'Joshua Jay Reicher *', 'Richard Sidebottom *', 'Mustafa Suleyman', 'Daniel Tse *', 'Kenneth C. Young *', 'Jeffrey De Fauw', 'Shravya Shetty *']",2020-01-01T00:00:00Z,blogs,, 117051,https://carado.moe/narrative-explanation-qaci.html,a narrative explanation of the QACI alignment plan,['Tamsin Leake'],2023-02-15T00:00:00Z,blogs,, 117080,https://www.yudkowsky.net/singularity/fun-theory,Singularity Fun Theory,['Eliezer S. Yudkowsky'],2020-09-04T03:10:31Z,blogs,,
117104,https://aiimpacts.org/whats-up-with-nuclear-weapons/,What’s up with nuclear weapons?,['Katja Grace'],2015-02-27T08:07:44Z,blogs,, 117116,https://openai.com/research/evolution-through-large-models,Evolution through large models,['OpenAI Research'],2022-06-17T00:00:00Z,blogs,, 117146,https://aiimpacts.org/scoring-forecasts-from-the-2016-expert-survey-on-progress-in-ai/,Scoring forecasts from the 2016 “Expert Survey on Progress in AI”,['Harlan Stewart'],2023-03-02T00:37:23Z,blogs,, 117167,https://www.cold-takes.com/nonprofit-boards-are-weird-2/,Nonprofit Boards are Weird,['Holden Karnofsky'],2022-06-23T00:00:00Z,blogs,, 117185,https://aiimpacts.org/historic-trends-in-light-intensity/,Historic trends in light intensity,['Katja Grace'],2020-02-08T02:38:35Z,blogs,, 117202,https://aiimpacts.org/trend-in-compute-used-in-training-for-headline-ai-results/,Trend in compute used in training for headline AI results,['Katja Grace'],2018-05-17T21:33:13Z,blogs,, 117212,https://intelligence.org/2014/10/16/agi-outcomes-civilizational-competence/,AGI outcomes and civilizational competence,['Luke Muehlhauser'],2014-10-16T11:00:57Z,blogs,, 117232,https://intelligence.org/2017/04/12/ensuring/,Ensuring smarter-than-human intelligence has a positive outcome,['Nate Soares'],2017-04-12T21:00:06Z,blogs,, 117265,https://intelligence.org/2016/03/30/miri-has-a-new-coo-malo-bourgon/,MIRI has a new COO: Malo Bourgon,['Nate Soares'],2016-03-31T00:52:00Z,blogs,, 117278,https://www.yudkowsky.net/rational/virtues,Twelve Virtues of Rationality,['Eliezer S. Yudkowsky'],2006-05-08T01:38:00Z,blogs,, 117292,https://intelligence.org/2018/03/31/2018-research-plans/,2018 research plans and predictions,['Rob Bensinger'],2018-04-01T03:38:52Z,blogs,, 117323,https://aiimpacts.org/energy-efficiency-of-wright-flyer/,Energy efficiency of Wright Flyer,['Katja Grace'],2020-11-04T18:58:55Z,blogs,, 117338,https://intelligence.org/2013/08/31/stephen-hsu-on-cognitive-genomics/,Stephen Hsu on Cognitive Genomics,['Luke Muehlhauser'],2013-08-31T23:54:56Z,blogs,, 117361,https://www.cold-takes.com/all-possible-views-about-humanitys-future-are-wild/,All Possible Views About Humanity's Future Are Wild,['Holden Karnofsky'],2021-07-13T00:00:00Z,blogs,, 117383,https://www.yudkowsky.net/other/fiction/npc,“Non-Player Character”,['Eliezer S. Yudkowsky'],2003-01-01T19:22:37Z,blogs,,
117403,https://aiimpacts.org/costs-of-extinction-risk-mitigation/,Costs of extinction risk mitigation,['Michael Wulfsohn'],2016-08-04T20:57:56Z,blogs,, 117424,https://intelligence.org/2018/11/02/embedded-models/,Embedded World-Models,['Scott Garrabrant'],2018-11-02T16:03:35Z,blogs,, 117445,https://aiimpacts.org/beyond-fire-alarms-freeing-the-groupstruck/,Beyond fire alarms: freeing the groupstruck,['Katja Grace'],2021-09-26T09:09:34Z,blogs,, 117468,https://carado.moe/confusion-about-alignment-requirements.html,confusion about alignment requirements,['Tamsin Leake'],2022-10-05T23:00:00Z,blogs,, 117483,https://carado.moe/not-hold-on-to-values.html,do not hold on to your believed intrinsic values — follow your heart!,['Tamsin Leake'],2022-03-02T00:00:00Z,blogs,, 117499,https://carado.moe/utopia-scopes.html,scopes of utopia,['Tamsin Leake'],2022-08-11T23:00:00Z,blogs,, 117515,https://openai.com/research/confidence-building-measures-for-artificial-intelligence,Confidence-Building Measures for Artificial Intelligence: Workshop proceedings,"['Sarah Shoker', 'Andrew Reddie']",2023-08-01T00:00:00Z,blogs,, 117556,https://intelligence.org/2015/07/18/targets-1-and-2-growing-miri/,Targets 1 and 2: Growing MIRI,['Nate Soares'],2015-07-19T02:14:56Z,blogs,, 117580,https://intelligence.org/2021/04/01/march-2021-newsletter/,March 2021 Newsletter,['Rob Bensinger'],2021-04-01T17:57:26Z,blogs,, 117599,https://www.cold-takes.com/digital-people-faq/,Digital People FAQ,['Holden Karnofsky'],2021-07-27T00:00:00Z,blogs,, 117616,https://www.deepmind.com/blog/ai-for-the-board-game-diplomacy,AI for the board game Diplomacy,"['Yoram Bachrach', 'János Kramár']",2022-12-06T00:00:00Z,blogs,, 117632,https://intelligence.org/2013/07/08/miris-september-2013-workshop/,MIRI’s September 2013 Workshop,['Luke Muehlhauser'],2013-07-08T13:50:22Z,blogs,, 117641,https://importai.substack.com/p/import-distributed-ai-chinese-ai,"Import AI 326: Chinese AI regulations; Stability's new LMs If AI is fashionable in 2023, then what will be fashionable in 2024?",['Jack Clark'],2023-04-24T12:45:15Z,blogs,, 117672,https://importai.substack.com/p/import-ai-328-cheaper-stablediffusion,Import AI 328: Cheaper StableDiffusion; sim2soccer; AI refinement,['Jack Clark'],2023-05-08T13:04:00Z,blogs,, 117702,https://carado.moe/tiling-unavoidable.html,tiling the cosmos might be unavoidable,['Tamsin Leake'],2022-08-03T23:00:00Z,blogs,, 117717,https://aiimpacts.org/you-cant-predict-a-game-of-pinball/,You Can’t Predict a Game of Pinball,['Jeffrey Heninger'],2023-03-30T00:39:34Z,blogs,, 117733,https://importai.substack.com/p/import-ai-323-ai-researcher-warns,Import AI 323: AI researcher warns about AI; BloombergGPT; and an open source Flamingo,['Jack Clark'],2023-04-03T12:45:47Z,blogs,, 117763,https://intelligence.org/2013/10/28/markus-schmidt-on-risks-from-novel-biotechnologies/,Markus Schmidt on Risks from Novel Biotechnologies,['Luke Muehlhauser'],2013-10-28T15:48:39Z,blogs,, 117794,https://www.deepmind.com/blog/intuitive-physics-learning-in-a-deep-learning-model-inspired-by-developmental-psychology,Intuitive physics learning in a deep-learning model inspired by developmental psychology,"['Luis Piloto', 'Ari Weinstein', 'Peter Battaglia', 'Matt Botvinick']",2022-07-11T00:00:00Z,blogs,, 117813,https://intelligence.org/2015/07/01/wanted-office-manager/,Wanted: Office Manager (aka Force Multiplier),['Alex Vermeer'],2015-07-01T19:04:14Z,blogs,,
117822,https://intelligence.org/2022/06/07/six-dimensions-of-operational-adequacy-in-agi-projects/,Six Dimensions of Operational Adequacy in AGI Projects,['Eliezer Yudkowsky'],2022-06-08T00:43:25Z,blogs,, 117852,https://aiimpacts.org/research-topic-hardware-software-and-ai/,"Research topic: Hardware, software and AI",['Katja Grace'],2015-02-20T05:10:27Z,blogs,, 117871,https://aiimpacts.org/costs-of-human-level-hardware/,Costs of human-level hardware,['Katja Grace'],2015-07-26T23:21:54Z,blogs,, 117885,https://aiimpacts.org/global-computing-capacity/,Global computing capacity,['Katja Grace'],2016-02-17T01:21:19Z,blogs,, 117895,https://www.deepmind.com/blog/visual-grounding-in-video-for-unsupervised-word-translation,Visual Grounding in Video for Unsupervised Word Translation,"['Gunnar Sigurdsson*', 'Jean-Baptiste Alayrac', 'Aida Nematzadeh', 'Lucas Smaira', 'Mateusz Malinowski', 'Joao Carreira', 'Phil Blunsom', 'Andrew Zisserman']",2020-03-11T00:00:00Z,blogs,, 117906,https://jsteinhardt.wordpress.com/2015/06/24/long-term-and-short-term-challenges-to-ensuring-the-safety-of-ai-systems/,Long-Term and Short-Term Challenges to Ensuring the Safety of AI Systems,['jsteinhardt'],2015-06-24T22:53:47Z,blogs,, 117946,https://aiimpacts.org/likelihood-of-discontinuous-progress-around-the-development-of-agi/,Likelihood of discontinuous progress around the development of AGI,['Katja Grace'],2018-02-23T21:23:41Z,blogs,, 117962,https://deepmindsafetyresearch.medium.com/an-epic-way-to-evaluate-reward-functions-c2c6d41b61cc,An EPIC way to evaluate reward functions,['DeepMind Safety Research'],2021-04-16T00:00:00Z,blogs,, 117978,https://intelligence.org/2019/10/25/october-2019-newsletter/,October 2019 Newsletter,['Rob Bensinger'],2019-10-26T04:48:44Z,blogs,, 118015,https://jsteinhardt.wordpress.com/2013/02/09/a-fun-optimization-problem/,A Fun Optimization Problem,['jsteinhardt'],2013-02-09T06:56:29Z,blogs,, 118024,https://intelligence.org/2014/03/13/hires/,Recent Hires at MIRI,['Luke Muehlhauser'],2014-03-13T20:57:36Z,blogs,, 118041,https://aiimpacts.org/the-unexpected-difficulty-of-comparing-alphastar-to-humans/,The unexpected difficulty of comparing AlphaStar to humans,['richardkorzekwa'],2019-09-18T02:11:55Z,blogs,, 118065,https://intelligence.org/2015/09/14/september-2015-newsletter/,September 2015 Newsletter,['Rob Bensinger'],2015-09-15T06:20:05Z,blogs,, 118086,https://www.deepmind.com/blog/probing-image-language-transformers-for-verb-understanding,Probing Image-Language Transformers for Verb Understanding,"['Lisa Anne Hendricks', 'Aida Nematzadeh']",2022-02-23T00:00:00Z,blogs,, 118099,https://carado.moe/how-timelines-fall.html,how timelines fall,['Tamsin Leake'],2022-01-11T00:00:00Z,blogs,, 118111,https://importai.substack.com/p/import-ai-325-automated-mad-science,Import AI 325: Automated mad science; AI vs democracy; and a 12B parameter language model,['Jack Clark'],2023-04-17T13:20:46Z,blogs,, 118141,https://intelligence.org/2014/06/19/new-chapter-cambridge-handbook-artificial-intelligence/,New chapter in Cambridge Handbook of Artificial Intelligence,['Luke Muehlhauser'],2014-06-20T00:44:04Z,blogs,, 118164,https://www.deepmind.com/blog/evaluating-multimodal-interactive-agents,Evaluating Multimodal Interactive Agents,"['Josh Abramson', 'Arun Ahuja', 'Federico Carnevale', 'Petko Georgiev', 'Alex Goldin', 'Jessica Landon', 'Timothy Lillicrap', 'Alistair Muldal', 'Adam Santoro', 'Tamara von Glehn', 'Gregory Wayne', 'Nathaniel Wong', 'Chen Yan', 'Blake Richards*', 'Alden Hung*']",2022-05-27T00:00:00Z,blogs,, 
118181,https://intelligence.org/2012/12/19/december-2012-newsletter/,December 2012 Newsletter,['Louie Helm'],2012-12-19T17:52:38Z,blogs,, 118214,https://intelligence.org/2014/02/22/miris-may-2014-workshop/,MIRI’s May 2014 Workshop,['Alex Vermeer'],2014-02-23T02:04:23Z,blogs,, 118223,https://aiimpacts.org/rohin-shah-on-reasons-for-ai-optimism/,Rohin Shah on reasons for AI optimism,['Asya Bergal'],2019-10-31T12:02:46Z,blogs,, 118238,https://intelligence.org/2020/08/13/august-2020-newsletter/,August 2020 Newsletter,['Rob Bensinger'],2020-08-13T19:20:03Z,blogs,, 118259,https://intelligence.org/2015/01/08/brooks-searle-agi-volition-timelines/,Brooks and Searle on AI volition and timelines,['Rob Bensinger'],2015-01-08T22:43:49Z,blogs,, 118276,https://intelligence.org/2021/12/14/ngos-view-on-alignment-difficulty/,Ngo’s view on alignment difficulty,['Rob Bensinger'],2021-12-15T06:14:05Z,blogs,, 118308,https://intelligence.org/2014/03/18/miris-march-2014-newsletter/,MIRI’s March 2014 Newsletter,['Luke Muehlhauser'],2014-03-18T18:53:57Z,blogs,, 118319,https://www.cold-takes.com/reading-books-vs-engaging-with-them/,Reading books vs. engaging with them,['Holden Karnofsky'],2021-10-20T00:00:00Z,blogs,, 118334,https://intelligence.org/2014/05/09/michael-fisher/,Michael Fisher on verifying autonomous systems,['Luke Muehlhauser'],2014-05-09T17:33:49Z,blogs,, 118357,https://www.cold-takes.com/weak-point-in-most-important-century-full-automation/,Weak point in “most important century”: full automation,['Holden Karnofsky'],2021-10-28T00:00:00Z,blogs,, 118373,https://intelligence.org/2014/09/08/friendly-ai-research-help-miri/,Friendly AI Research Help from MIRI,['Luke Muehlhauser'],2014-09-08T22:20:55Z,blogs,, 118383,https://carado.moe/utils-unit.html,a unit for utils,['Tamsin Leake'],2022-04-29T23:00:00Z,blogs,, 118393,https://intelligence.org/2013/01/09/new-transcript-eliezer-yudkowsky-and-massimo-pigliucci-on-the-singularity/,New Transcript: Eliezer Yudkowsky and Massimo Pigliucci on the Intelligence Explosion,['Jake'],2013-01-09T21:34:33Z,blogs,, 118410,https://intelligence.org/2015/07/01/grants-fundraisers/,Grants and fundraisers,['Nate Soares'],2015-07-02T03:15:45Z,blogs,, 118431,https://carado.moe/trading-with-superint.html,trading with superintelligence: a wonky proto-alignment scheme,['Tamsin Leake'],2022-08-14T23:00:00Z,blogs,, 118444,https://intelligence.org/2015/11/01/edge-org-contributors-discuss-the-future-of-ai/,Edge.org contributors discuss the future of AI,['Rob Bensinger'],2015-11-02T01:13:08Z,blogs,, 118473,https://carado.moe/do-not-form-your-own-opinion.html,do not form your own opinion,['Tamsin Leake'],2021-09-11T23:00:00Z,blogs,, 118490,https://aiimpacts.org/historic-trends-in-slow-light-technology/,Historic trends in slow light technology,['Katja Grace'],2020-02-08T01:56:25Z,blogs,, 118500,https://aiimpacts.org/computing-hardware-performance-data-collections/,Computing hardware performance data collections,['Katja Grace'],2017-10-26T22:34:47Z,blogs,, 118515,https://carado.moe/psi-json.html,thinking about psi: as a more general json,['Tamsin Leake'],2021-12-25T00:00:00Z,blogs,, 118524,https://carado.moe/faces-chaos-magick.html,the many faces of chaos magick,['Tamsin Leake'],2021-06-15T23:00:00Z,blogs,, 118541,https://intelligence.org/2014/09/04/daniel-roy/,Daniel Roy on probabilistic programming and AI,['Luke Muehlhauser'],2014-09-04T15:03:31Z,blogs,, 118561,https://carado.moe/anthropics-example.html,an anthropics example,['Tamsin Leake'],2022-07-26T23:00:00Z,blogs,, 
118571,https://carado.moe/qaci-blobs-interval-illustrated.html,QACI blobs and interval illustrated,['Tamsin Leake'],2023-03-09T00:00:00Z,blogs,, 118586,https://generative.ink/posts/language-models-are-0-shot-interpreters/,Language models are 0-shot interpreters,['janus'],2021-02-10T00:00:00Z,blogs,, 118612,https://www.deepmind.com/blog/an-empirical-analysis-of-compute-optimal-large-language-model-training,An empirical analysis of compute-optimal large language model training,"['Jordan Hoffmann', 'Sebastian Borgeaud', 'Arthur Mensch', 'Laurent Sifre']",2022-04-12T00:00:00Z,blogs,, 118627,https://intelligence.org/2021/05/18/may-2021-newsletter/,May 2021 Newsletter,['Rob Bensinger'],2021-05-19T04:01:51Z,blogs,, 118650,https://carado.moe/goal-program-bricks.html,goal-program bricks,['Tamsin Leake'],2022-08-12T23:00:00Z,blogs,, 118671,https://carado.moe/database-transactions-wasm.html,"database transactions: you guessed it, it's WASM again",['Tamsin Leake'],2021-12-25T00:00:00Z,blogs,, 118682,https://blog.eleuther.ai/gpt3-model-sizes/,On the Sizes of OpenAI API Models,['Leo Gao'],2021-05-24T00:00:00Z,blogs,, 118694,https://intelligence.org/2015/12/31/safety-engineering-target-selection-and-alignment-theory/,"Safety engineering, target selection, and alignment theory",['Nate Soares'],2015-12-31T08:14:33Z,blogs,, 118726,https://www.gwern.net/complexity.page,Complexity no Bar to AI,['Gwern Branwen'],2019-06-09T00:00:00Z,blogs,, 118747,https://jsteinhardt.wordpress.com/2016/01/13/difficulty-of-predicting-the-maximum-of-gaussians/,Difficulty of Predicting the Maximum of Gaussians,['jsteinhardt'],2016-01-13T05:45:15Z,blogs,, 118756,https://intelligence.org/2019/05/10/may-2019-newsletter/,May 2019 Newsletter,['Rob Bensinger'],2019-05-10T17:30:47Z,blogs,, 118772,https://intelligence.org/2018/01/10/fundraising-success/,Fundraising success!,['Malo Bourgon'],2018-01-11T00:13:07Z,blogs,, 118787,https://intelligence.org/2014/03/09/randall-larsen-and-lynne-kidder/,Randall Larsen and Lynne Kidder on USA bio-response,['Luke Muehlhauser'],2014-03-09T18:10:17Z,blogs,, 118817,https://www.cold-takes.com/learning-by-writing/,Learning By Writing,['Holden Karnofsky'],2022-02-22T00:00:00Z,blogs,, 118840,https://intelligence.org/2013/05/15/advise-miri-with-your-domain-specific-expertise/,Advise MIRI with Your Domain-Specific Expertise,['Luke Muehlhauser'],2013-05-15T21:38:56Z,blogs,, 118849,https://importai.substack.com/p/import-ai-334-better-distillation,Import AI 334: Better distillation; the UK's AI taskforce; money and AI,['Jack Clark'],2023-07-10T13:12:03Z,blogs,, 118859,https://carado.moe/classifying-computational-frameworks.html,classifying computational frameworks,['Tamsin Leake'],2021-06-24T23:00:00Z,blogs,, 118874,https://aiimpacts.org/ai-hopes-and-fears-in-numbers/,AI hopes and fears in numbers,['Katja Grace'],2017-06-29T02:18:41Z,blogs,, 118884,https://aiimpacts.org/time-for-ai-to-cross-the-human-performance-range-in-go/,Time for AI to cross the human performance range in Go,['Katja Grace'],2020-10-16T00:05:43Z,blogs,, 118893,https://www.deepmind.com/blog/using-unity-to-help-solve-intelligence,Using Unity to Help Solve Intelligence,"['Simon Carter', 'Manuel Sanchez', 'Ricardo Barreira', 'Seb Noury', 'Keith Anderson', 'Jay Lemmon', 'Jonathan Coe', 'Piotr Trochim', 'Tom Handley', 'Adrian Bolton']",2020-11-18T00:00:00Z,blogs,, 118909,https://aiimpacts.org/at-least-human-level-at-human-cost-ai/,At-least-human-level-at-human-cost AI,['Katja Grace'],2015-02-07T14:00:15Z,blogs,, 
118920,https://intelligence.org/2017/08/16/august-2017-newsletter/,August 2017 Newsletter,['Rob Bensinger'],2017-08-16T18:02:08Z,blogs,, 118941,https://www.cold-takes.com/forecasting-transformative-ai-the-biological-anchors-method-in-a-nutshell/,"Forecasting transformative AI: the ""biological anchors"" method in a nutshell",['Holden Karnofsky'],2021-08-31T00:00:00Z,blogs,, 118953,https://aiimpacts.org/will-ai-see-sudden-progress/,Will AI see sudden progress?,['Katja Grace'],2018-02-25T05:07:23Z,blogs,, 118963,https://carado.moe/implementing-the-platonic-realm.html,implementing the platonic realm,['Tamsin Leake'],2022-05-22T23:00:00Z,blogs,, 118974,https://aiimpacts.org/incentives-to-create-x-risky-ai-systems/,Incentives to create AI systems known to pose extinction risks,['Katja Grace'],2022-08-06T19:30:00Z,blogs,, 118989,https://www.cold-takes.com/rowing-steering-anchoring-equity-mutiny/,"Rowing, Steering, Anchoring, Equity, Mutiny",['Holden Karnofsky'],2021-11-09T00:00:00Z,blogs,, 119018,https://blog.eleuther.ai/rotary-embeddings/,Rotary Embeddings: A Relative Revolution,"['Stella Biderman', 'Sid Black', 'Charles Foster', 'Leo Gao', 'Eric Hallahan', 'Horace He', 'Ben Wang', 'Phil Wang']",2021-04-20T00:00:00Z,blogs,, 119030,https://carado.moe/smaller-x-risk.html,smaller X-risk,['Tamsin Leake'],2022-05-15T23:00:00Z,blogs,, 119041,https://intelligence.org/2022/01/31/january-2022-newsletter/,January 2022 Newsletter,['Rob Bensinger'],2022-02-01T07:57:20Z,blogs,, 119062,https://transformer-circuits.pub/2021/garcon/index.html,Garcon,"['Nelson Elhage', 'Neel Nanda', 'Catherine Olsson', 'Tom Henighan', 'Nicholas Joseph', 'Ben Mann', 'Amanda Askell', 'Yuntao Bai', 'Anna Chen', 'Tom Conerly', 'Nova DasSarma', 'Dawn Drain', 'Deep Ganguli', 'Zac Hatfield-Dodds', 'Danny Hernandez', 'Andy Jones', 'Jackson Kernion', 'Liane Lovitt', 'Kamal Ndousse', 'Dario Amodei', 'Tom Brown', 'Jack Clark', 'Jared Kaplan', 'Sam McCandlish', 'Chris Olah']",2021-12-22T00:00:00Z,blogs,, 119072,https://intelligence.org/2018/09/01/summer-miri-updates/,Summer MIRI Updates,['Malo Bourgon'],2018-09-02T01:35:32Z,blogs,, 119096,https://carado.moe/surprise-you-want.html,surprise! you want what you want,['Tamsin Leake'],2022-09-26T23:00:00Z,blogs,,
119106,https://carado.moe/formal-alignment-problems.html,problems for formal alignment,['Tamsin Leake'],2023-03-11T00:00:00Z,blogs,, 119143,https://jsteinhardt.wordpress.com/2010/09/18/uncertain-observations/,Uncertain Observations,['jsteinhardt'],2010-09-18T19:26:46Z,blogs,, 119158,https://intelligence.org/2013/08/25/transparency-in-safety-critical-systems/,Transparency in Safety-Critical Systems,['Luke Muehlhauser'],2013-08-25T18:28:23Z,blogs,, 119180,https://intelligence.org/2012/01/16/singularity-institute-progress-report-december-2011/,"Machine Intelligence Research Institute Progress Report, December 2011",['Luke Muehlhauser'],2012-01-16T23:35:26Z,blogs,, 119203,https://blog.eleuther.ai/rotary-embeddings-eval-harness/,Downstream Evaluations of Rotary Position Embeddings,['Leo Gao'],2021-08-16T00:00:00Z,blogs,, 119212,https://aiimpacts.org/historic-trends-in-the-maximum-superconducting-temperature/,Historic trends in the maximum superconducting temperature,['Asya Bergal'],2020-02-08T00:22:32Z,blogs,, 119224,https://distill.pub/2020/growing-ca,Growing Neural Cellular Automata,"['Alexander Mordvintsev', 'Eyvind Niklasson', 'Michael Levin']",2020-02-11T20:00:00Z,distill,, 119245,https://distill.pub/2020/circuits/frequency-edges,High-Low Frequency Detectors,"['Ludwig Schubert', 'Chelsea Voss', 'Nick Cammarata', 'Gabriel Goh', 'Chris Olah']",2021-01-27T20:00:00Z,distill,, 119266,https://distill.pub/2020/circuits,Thread: Circuits,"['Nick Cammarata', 'Shan Carter', 'Gabriel Goh', 'Chris Olah', 'Michael Petrov', 'Ludwig Schubert', 'Chelsea Voss', 'Swee Kiat Lim', 'Chris Olah', 'Nick Cammarata', 'Ludwig Schubert', 'Gabriel Goh', 'Michael Petrov', 'Shan Carter', 'Chris Olah', 'Nick Cammarata', 'Ludwig Schubert', 'Gabriel Goh', 'Michael Petrov', 'Shan Carter', 'Nick Cammarata', 'Gabriel Goh', 'Shan Carter', 'Ludwig Schubert', 'Michael Petrov', 'Chris Olah', 'Chris Olah', 'Nick Cammarata', 'Chelsea Voss', 'Ludwig Schubert', 'Gabriel Goh', 'Ludwig Schubert', 'Chelsea Voss', 'Nick Cammarata', 'Gabriel Goh', 'Chris Olah', 'Nick Cammarata', 'Gabriel Goh', 'Shan Carter', 'Chelsea Voss', 'Ludwig Schubert', 'Chris Olah', 'Chelsea Voss', 'Nick Cammarata', 'Gabriel Goh', 'Michael Petrov', 'Ludwig Schubert', 'Swee Kiat Lim', 'Chris Olah', 'Chelsea Voss', 'Gabriel Goh', 'Nick Cammarata', 'Michael Petrov', 'Ludwig Schubert', 'Chris Olah', 'Michael Petrov', 'Chelsea Voss', 'Ludwig Schubert', 'Nick Cammarata', 'Gabriel Goh', 'Chris Olah']",2020-03-10T20:00:00Z,distill,, 119282,https://distill.pub/2019/advex-bugs-discussion/response-2,"A Discussion of 'Adversarial Examples Are Not Bugs, They Are Features': Robust Feature Leakage",['Gabriel Goh'],2019-08-06T20:00:00Z,distill,, 119297,https://distill.pub/2018/differentiable-parameterizations,Differentiable Image Parameterizations,"['Alexander Mordvintsev', 'Nicola Pezzotti', 'Ludwig Schubert', 'Chris Olah']",2018-07-25T20:00:00Z,distill,, 119318,https://distill.pub/2017/research-debt,Research Debt,"['Chris Olah', 'Shan Carter']",2017-03-22T20:00:00Z,distill,, 119338,https://distill.pub/2019/activation-atlas,Activation Atlas,"['Shan Carter', 'Zan Armstrong', 'Ludwig Schubert', 'Ian Johnson', 'Chris Olah']",2019-03-06T20:00:00Z,distill,, 119351,https://distill.pub/2020/understanding-rl-vision,Understanding RL Vision,"['Jacob Hilton', 'Nick Cammarata', 'Shan Carter', 'Gabriel Goh', 'Chris Olah']",2020-11-17T20:00:00Z,distill,, 119381,https://distill.pub/selforg/2021/adversarial,Adversarial Reprogramming of Neural Cellular Automata,"['Ettore Randazzo', 'Alexander Mordvintsev', 'Eyvind Niklasson', 'Michael Levin']",2021-05-06T20:00:00Z,distill,,
119402,https://distill.pub/2017/ctc,Sequence Modeling with CTC,['Awni Hannun'],2017-11-27T20:00:00Z,distill,, 119423,http://distill.pub/2016/misread-tsne,How to Use t-SNE Effectively,['Distill'],2016-10-13T16:00:00Z,distill,, 119447,https://distill.pub/2019/advex-bugs-discussion/response-3,"A Discussion of 'Adversarial Examples Are Not Bugs, They Are Features': Two Examples of Useful, Non-Robust Features",['Gabriel Goh'],2019-08-06T20:00:00Z,distill,, 119466,https://distill.pub/2020/circuits/early-vision,An Overview of Early Vision in InceptionV1,"['Chris Olah', 'Nick Cammarata', 'Ludwig Schubert', 'Gabriel Goh', 'Michael Petrov', 'Shan Carter']",2020-04-01T20:00:00Z,distill,, 119487,https://distill.pub/2021/gnn-intro,A Gentle Introduction to Graph Neural Networks,['Adam Pearce'],2021-09-02T20:00:00Z,distill,, 119524,https://distill.pub/2019/visual-exploration-gaussian-processes,A Visual Exploration of Gaussian Processes,"['Jochen Görtler', 'Rebecca Kehlbeck', 'Oliver Deussen']",2019-04-02T20:00:00Z,distill,, 119537,https://distill.pub/2019/computing-receptive-fields,Computing Receptive Fields of Convolutional Neural Networks,"['André Araujo', 'Wade Norris']",2019-11-04T20:00:00Z,distill,, 119553,https://distill.pub/2020/bayesian-optimization,Exploring Bayesian Optimization,"['Apoorv Agnihotri', 'Nipun Batra']",2020-05-05T20:00:00Z,distill,, 119574,https://distill.pub/2019/gan-open-problems,Open Questions about Generative Adversarial Networks,['Augustus Odena'],2019-04-09T20:00:00Z,distill,, 119601,https://distill.pub/2017/aia,Using Artificial Intelligence to Augment Human Intelligence,"['Shan Carter', 'Michael Nielsen']",2017-12-04T20:00:00Z,distill,, 119621,https://distill.pub/2018/feature-wise-transformations,Feature-wise transformations,"['Vincent Dumoulin', 'Ethan Perez', 'Nathan Schucher', 'Florian Strub', 'Harm de Vries', 'Aaron Courville', 'Yoshua Bengio']",2018-07-09T20:00:00Z,distill,, 119639,https://distill.pub/2019/advex-bugs-discussion/response-1,"A Discussion of 'Adversarial Examples Are Not Bugs, They Are Features': Adversarial Example Researchers Need to Expand What is Meant by 'Robustness'","['Justin Gilmer', 'Dan Hendrycks']",2019-08-06T20:00:00Z,distill,, 119654,https://distill.pub/2020/selforg/mnist,Self-classifying MNIST Digits,"['Ettore Randazzo', 'Alexander Mordvintsev', 'Eyvind Niklasson', 'Michael Levin', 'Sam Greydanus']",2020-08-27T20:00:00Z,distill,, 119674,http://distill.pub/2017/momentum,Why Momentum Really Works,['Distill'],2017-04-04T16:00:00Z,distill,, 119691,https://distill.pub/2020/circuits/visualizing-weights,Visualizing Weights,"['Chelsea Voss', 'Nick Cammarata', 'Gabriel Goh', 'Michael Petrov', 'Ludwig Schubert', 'Swee Kiat Lim', 'Chris Olah']",2021-02-04T20:00:00Z,distill,, 119712,https://distill.pub/2018/building-blocks,The Building Blocks of Interpretability,"['Chris Olah', 'Arvind Satyanarayan', 'Ian Johnson', 'Shan Carter', 'Ludwig Schubert', 'Katherine Ye', 'Alexander Mordvintsev']",2018-03-06T20:00:00Z,distill,, 119734,https://distill.pub/2019/advex-bugs-discussion/response-6,"A Discussion of 'Adversarial Examples Are Not Bugs, They Are Features': Learning from Incorrectly Labeled Data",['Eric Wallace'],2019-08-06T20:00:00Z,distill,, 119752,http://distill.pub/2016/handwriting,Experiments in Handwriting with a Neural Network,['Distill'],2016-12-06T15:00:00Z,distill,,
119769,https://distill.pub/2018/editorial-update,Distill Update 2018,['Distill Editors'],2018-08-14T20:00:00Z,distill,, 119796,https://distill.pub/2019/advex-bugs-discussion,"A Discussion of 'Adversarial Examples Are Not Bugs, They Are Features'","['Logan Engstrom', 'Justin Gilmer', 'Gabriel Goh', 'Dan Hendrycks', 'Andrew Ilyas', 'Aleksander Madry', 'Reiichiro Nakano', 'Shibani Santurkar', 'Dimitris Tsipras', 'Eric Wallace', 'Justin Gilmer', 'Dan Hendrycks', 'Gabriel Goh', 'Gabriel Goh', 'Reiichiro Nakano', 'Preetum Nakkiran', 'Eric Wallace', 'Logan Engstrom', 'Andrew Ilyas', 'Aleksander Madry', 'Shibani Santurkar', 'Dimitris Tsipras']",2019-08-06T20:00:00Z,distill,, 119818,https://distill.pub/2020/grand-tour,Visualizing Neural Networks with the Grand Tour,"['Mingwei Li', 'Zhenge Zhao', 'Carlos Scheidegger']",2020-03-16T20:00:00Z,distill,, 119839,http://distill.pub/2016/augmented-rnns,Attention and Augmented Recurrent Neural Networks,['Distill'],2016-09-08T16:00:00Z,distill,, 119870,https://distill.pub/2017/feature-visualization,Feature Visualization,"['Chris Olah', 'Alexander Mordvintsev', 'Ludwig Schubert']",2017-11-07T20:00:00Z,distill,, 119891,https://distill.pub/2021/distill-hiatus,Distill Hiatus,['Editorial Team'],2021-07-02T20:00:00Z,distill,, 119917,https://distill.pub/2020/selforg,Thread: Differentiable Self-organizing Systems,"['Alexander Mordvintsev', 'Ettore Randazzo', 'Eyvind Niklasson', 'Michael Levin', 'Sam Greydanus', 'Alexander Mordvintsev', 'Ettore Randazzo', 'Eyvind Niklasson', 'Michael Levin', 'Ettore Randazzo', 'Alexander Mordvintsev', 'Eyvind Niklasson', 'Michael Levin', 'Sam Greydanus', 'Eyvind Niklasson', 'Alexander Mordvintsev', 'Ettore Randazzo', 'Michael Levin', 'Ettore Randazzo', 'Alexander Mordvintsev', 'Eyvind Niklasson', 'Michael Levin']",2020-08-27T20:00:00Z,distill,, 119933,https://distill.pub/2020/communicating-with-interactive-articles,Communicating with Interactive Articles,"['Fred Hohman', 'Matthew Conlen', 'Jeffrey Heer', 'Duen Horng (Polo) Chau']",2020-09-11T20:00:00Z,distill,, 119965,https://distill.pub/2019/safety-needs-social-scientists,AI Safety Needs Social Scientists,"['Geoffrey Irving', 'Amanda Askell']",2019-02-19T20:00:00Z,distill,, 119986,https://distill.pub/selforg/2021/textures,Self-Organising Textures,"['Eyvind Niklasson', 'Alexander Mordvintsev', 'Michael Levin']",2021-02-11T20:00:00Z,distill,, 120008,https://distill.pub/2019/advex-bugs-discussion/response-4,"A Discussion of 'Adversarial Examples Are Not Bugs, They Are Features': Adversarially Robust Neural Style Transfer",['Reiichiro Nakano'],2019-08-06T20:00:00Z,distill,, 120025,https://distill.pub/2019/paths-perspective-on-value-learning,The Paths Perspective on Value Learning,"['Sam Greydanus', 'Chris Olah']",2019-09-30T20:00:00Z,distill,, 120055,https://distill.pub/2020/circuits/zoom-in,Zoom In: An Introduction to Circuits,"['Chris Olah', 'Nick Cammarata', 'Ludwig Schubert', 'Gabriel Goh', 'Michael Petrov', 'Shan Carter']",2020-03-10T20:00:00Z,distill,, 120083,http://distill.pub/2016/deconv-checkerboard,Deconvolution and Checkerboard Artifacts,['Distill'],2016-10-17T16:00:00Z,distill,, 120102,https://distill.pub/2021/understanding-gnns,Understanding Convolutions on Graphs,"['Ameya Daigavane', 'Balaraman Ravindran', 'Gaurav Aggarwal']",2021-09-02T20:00:00Z,distill,, 120132,https://distill.pub/2019/advex-bugs-discussion/original-authors,"A Discussion of 'Adversarial Examples Are Not Bugs, They Are Features': Discussion and Author Responses","['Logan Engstrom', 'Andrew Ilyas', 'Aleksander Madry', 'Shibani Santurkar', 'Dimitris Tsipras']",2019-08-06T20:00:00Z,distill,,
120149,https://distill.pub/2019/advex-bugs-discussion/response-5,"A Discussion of 'Adversarial Examples Are Not Bugs, They Are Features': Adversarial Examples are Just Bugs, Too",['Preetum Nakkiran'],2019-08-06T20:00:00Z,distill,, 120173,https://distill.pub/2019/memorization-in-rnns,Visualizing memorization in RNNs,['Andreas Madsen'],2019-03-25T20:00:00Z,distill,, 120192,https://distill.pub/2020/circuits/weight-banding,Weight Banding,"['Michael Petrov', 'Chelsea Voss', 'Ludwig Schubert', 'Nick Cammarata', 'Gabriel Goh', 'Chris Olah']",2021-04-08T20:00:00Z,distill,, 120210,https://forum.effectivealtruism.org/posts/yxArBdibQejHEYT4F/will-ai-kill-everyone-here-s-what-the-godfathers-of-ai-have,Will AI kill everyone? Here's what the godfathers of AI have to say [RA video],"['Writer', 'jai']",2023-08-19T17:29:03Z,eaforum,, 120219,https://forum.effectivealtruism.org/posts/bd7yr3eozzzhMuKCi/what-is-the-eu-ai-act-and-why-should-you-care-about-it,What is the EU AI Act and why should you care about it?,['MathiasKB'],2021-09-10T07:47:57Z,eaforum,, 120242,https://forum.effectivealtruism.org/posts/nGQJEYp5X2pCbeweg/new-report-on-how-much-computational-power-it-takes-to-match,New report on how much computational power it takes to match the human brain (Open Philanthropy),['Aaron Gertler'],2020-09-15T01:06:20Z,eaforum,, 120259,https://forum.effectivealtruism.org/posts/TMCWXTayji7gvRK9p/is-democracy-a-fad,Is Democracy a Fad?,['bgarfinkel'],2021-03-13T12:40:35Z,eaforum,, 120281,https://forum.effectivealtruism.org/posts/kyG2thuWmi6bu3sKo/grokking-semi-informative-priors-over-ai-timelines,Grokking “Semi-informative priors over AI timelines”,['anson'],2022-06-12T22:15:44Z,eaforum,, 120298,https://forum.effectivealtruism.org/posts/SH3Es2Q9XuYPMhA5H/aisn-25-white-house-executive-order-on-ai-uk-ai-safety,"AISN #25: White House Executive Order on AI, UK AI Safety Summit, and Progress on Voluntary Evaluations of AI Risks","['Center for AI Safety', 'aogara', 'Dan H']",2023-10-31T19:24:19Z,eaforum,, 120325,https://forum.effectivealtruism.org/posts/sqsE4x2BEjK6sS2GG/seeking-student-submissions-edit-your-source-code-contest,Seeking Student Submissions: Edit Your Source Code Contest,['Aris Richardson'],2022-08-26T02:06:50Z,eaforum,, 120340,https://forum.effectivealtruism.org/posts/F2DkHdKS8G3tD4CaG/possible-directions-in-ai-ideal-governance-research,Possible directions in AI ideal governance research,['RoryG'],2022-08-10T08:36:21Z,eaforum,, 120359,https://forum.effectivealtruism.org/posts/BFNxoQnqc9zmoB3wi/ai-emergency-eject-criteria-survey,'AI Emergency Eject Criteria' Survey,['tcelferact'],2023-04-19T21:55:45Z,eaforum,, 120368,https://forum.effectivealtruism.org/posts/zLkdQRFBeyyMLKoNj/still-no-strong-evidence-that-llms-increase-bioterrorism,Still no strong evidence that LLMs increase bioterrorism risk,['freedomandutility'],2023-11-02T21:23:14Z,eaforum,, 120380,https://forum.effectivealtruism.org/posts/QbGLmkohgJADdHzsp/on-scaling-academia,On Scaling Academia,['kirchner.jan'],2021-09-20T14:54:32Z,eaforum,, 120407,https://forum.effectivealtruism.org/posts/EQEYFpGdf2evJmqCw/how-dath-ilan-coordinates-around-solving-ai-alignment,How dath ilan coordinates around solving AI alignment,['Thomas Kwa'],2022-04-14T01:53:45Z,eaforum,, 120431,https://forum.effectivealtruism.org/posts/xQBcrPsH57MjCcgTb/announcing-the-cambridge-boston-alignment-initiative-hiring,Announcing the Cambridge Boston Alignment Initiative [Hiring!],"['kuhanj', 'levin', 'Xander Davies', 'Alexandra Bates']",2022-12-02T01:07:44Z,eaforum,,
120446,https://forum.effectivealtruism.org/posts/p84rqHBsRjb78xuNa/bio-risk-and-ai-ai-progress-might-soon-lead-to-much-faster,Bio-risk and AI: AI progress might soon lead to much faster research and engineering,['JanBrauner'],2023-04-20T19:48:21Z,eaforum,, 120456,https://forum.effectivealtruism.org/posts/ARwFCMpgTbmJ89hBP/ai-bio-cannot-be-half-of-ai-catastrophe-risk-right,"AI+bio cannot be half of AI catastrophe risk, right?",['Ulrik Horn'],2023-10-10T03:17:39Z,eaforum,, 120476,https://forum.effectivealtruism.org/posts/ARwvpA4dLvpPxNNRD/a-new-proposal-for-regulating-ai-in-the-eu,A new proposal for regulating AI in the EU,['EdoArad'],2021-04-26T17:25:07Z,eaforum,, 120494,https://forum.effectivealtruism.org/posts/GzHwFz4ihnfXpPGz2/ethical-considerations-in-regard-to-outsourcing-labour-needs,Ethical Considerations in regard to Outsourcing Labour Needs to the Global South,"[""Nicole Mutung'a""]",2023-10-04T09:18:41Z,eaforum,, 120512,https://forum.effectivealtruism.org/posts/6dphu3p8d5mQZEZzk/intrinsic-limitations-of-gpt-4-and-other-large-language,"Intrinsic limitations of GPT-4 and other large language models, and why I'm not (very) worried about GPT-n",['Fods12'],2023-06-03T13:09:59Z,eaforum,, 120528,https://forum.effectivealtruism.org/posts/jo7hmLrhy576zEyiL/prizes-for-ml-safety-benchmark-ideas,Prizes for ML Safety Benchmark Ideas,"['Joshc', 'Dan H']",2022-10-28T02:44:53Z,eaforum,, 120537,https://forum.effectivealtruism.org/posts/sFemFbiFTntgtQDbD/katja-grace-let-s-think-about-slowing-down-ai,Katja Grace: Let's think about slowing down AI,['peterhartree'],2022-12-23T00:57:19Z,eaforum,, 120551,https://forum.effectivealtruism.org/posts/fqXLT7NHZGsLmjH4o/paper-summary-the-effectiveness-of-ai-existential-risk,Paper Summary: The Effectiveness of AI Existential Risk Communication to the American and Dutch Public,['Otto'],2023-03-09T10:40:07Z,eaforum,, 120566,https://forum.effectivealtruism.org/posts/sL8doR3TDjEhNcwGh/by-how-much-should-meta-s-blenderbot-being-really-bad-cause,By how much should Meta's BlenderBot being really bad cause me to update on how justifiable it is for OpenAI and DeepMind to be making significant progress on AI capabilities?,['Sisi'],2022-08-10T06:40:29Z,eaforum,, 120581,https://forum.effectivealtruism.org/posts/nZxcd5LGBNfFC9ejY/what-s-the-exact-way-you-predict-probability-of-ai,What's the exact way you predict probability of AI extinction?,['jackchang110'],2023-06-13T15:11:12Z,eaforum,, 120591,https://forum.effectivealtruism.org/posts/vdqBn65Qaw77MpqXz/on-ai-weapons,On AI Weapons,['kbog'],2019-11-13T12:48:16Z,eaforum,, 120636,https://forum.effectivealtruism.org/posts/BbpYq9iwGzC4YBfMk/the-universe-of-minds-call-for-reviewers-seeds-of-science,"""The Universe of Minds"" - call for reviewers (Seeds of Science)",['rogersbacon1'],2023-07-25T16:55:47Z,eaforum,, 120645,https://forum.effectivealtruism.org/posts/Bj4SBXtnjGmfH4QFq/linkpost-ai-now-institute-s-2023-annual-report-and-roadmap,[linkpost] AI NOW Institute's 2023 Annual Report & Roadmap,['Tristan Williams'],2023-04-12T20:00:47Z,eaforum,, 120675,https://forum.effectivealtruism.org/posts/Prxqvhr9JFj7JyJRX/alignment-newsletter-one-year-retrospective,Alignment Newsletter One Year Retrospective,['Rohin Shah'],2019-04-10T07:00:34Z,eaforum,, 120712,https://forum.effectivealtruism.org/posts/6y4nS9A6WYiaFS9Kp/health-morality-and-goal-alignment-of-systems-agents-and,"Health, morality, and goal alignment of systems, agents, and organs",['FalseCogs'],2023-08-24T14:05:03Z,eaforum,,
120734,https://forum.effectivealtruism.org/posts/d4W4inHhts5Y2szBf/the-space-of-systems-and-the-space-of-maps,The space of systems and the space of maps,"['Jan_Kulveit', 'rosehadshar', 'nora', 'clem']",2023-03-22T16:05:52Z,eaforum,, 120747,https://forum.effectivealtruism.org/posts/cAgTyxg4azaeD6xAW/link-post-ai-could-fuel-factory-farming-or-end-it,[Link post] AI could fuel factory farming—or end it,['BrianK'],2022-10-18T11:16:44Z,eaforum,, 120757,https://forum.effectivealtruism.org/posts/pZPDQmyEoaqBB8szD/conclusion-and-bibliography-for-understanding-the-diffusion,"Conclusion and Bibliography for ""Understanding the diffusion of large language models""",['Ben Cottier'],2022-12-21T13:50:46Z,eaforum,, 120769,https://forum.effectivealtruism.org/posts/a2KEyLaXzBADb8jgg/can-we-evaluate-the-tool-versus-agent-agi-prediction,"Can we evaluate the ""tool versus agent"" AGI prediction?",['Ben_West'],2023-04-08T18:35:08Z,eaforum,, 120786,https://forum.effectivealtruism.org/posts/MGbdhjgd2v6cg3vjv/apply-to-the-redwood-research-mechanistic-interpretability,"Apply to the Redwood Research Mechanistic Interpretability Experiment (REMIX), a research program in Berkeley","['Max Nadeau', 'Xander Davies', 'Buck', 'Nate Thomas']",2022-10-27T01:39:12Z,eaforum,, 120803,https://forum.effectivealtruism.org/posts/jra9axWurjMMYqxR5/introducing-the-ai-objectives-institute-s-research,Introducing the AI Objectives Institute's Research: Differential Paths toward Safe and Beneficial AI,"['cmck', 'Peli Grietzer', 'Deger Turan', 'particlemania']",2023-05-05T20:26:48Z,eaforum,, 120828,https://forum.effectivealtruism.org/posts/da4S6pbkbQ8azcfSA/long-term-ai-policy-strategy-research-and-implementation,Long-term AI policy strategy research and implementation,['Benjamin_Todd'],2021-11-09T00:00:00Z,eaforum,, 120846,https://forum.effectivealtruism.org/posts/CpyAgKt4gRza7npYf/recruit-the-world-s-best-for-agi-alignment,Recruit the World’s best for AGI Alignment,['Greg_Colbourn'],2023-03-30T16:41:35Z,eaforum,, 120883,https://forum.effectivealtruism.org/posts/6odj5iN8zoDL3t224/language-agents-reduce-the-risk-of-existential-catastrophe,Language Agents Reduce the Risk of Existential Catastrophe,['cdkg'],2023-05-29T09:59:37Z,eaforum,, 120907,https://forum.effectivealtruism.org/posts/zcAxoAHcSECyewr2t/the-centre-for-the-governance-of-ai-is-becoming-a-nonprofit,The Centre for the Governance of AI is becoming a nonprofit,['MarkusAnderljung'],2021-07-09T10:05:39Z,eaforum,, 120919,https://forum.effectivealtruism.org/posts/KpAa9uGoMY3b2htru/doing-global-priorities-or-ai-policy-research-from-remote,Doing Global Priorities or AI Policy research from remote location?,['With Love from Israel'],2019-10-29T09:34:35Z,eaforum,, 120935,https://forum.effectivealtruism.org/posts/dgk2eLf8DLxEG6msd/how-would-a-language-model-become-goal-directed,How would a language model become goal-directed?,['David Mears'],2022-07-16T14:50:16Z,eaforum,, 120945,https://forum.effectivealtruism.org/posts/4vphSKe9aSSGuQRap/ai-wellbeing,AI Wellbeing,"['Simon', 'cdkg']",2023-07-11T00:34:23Z,eaforum,, 120961,https://forum.effectivealtruism.org/posts/L9ogLxNWuCbPM9AsP/new-series-of-posts-answering-one-of-holden-s-important,"New series of posts answering one of Holden's ""Important, actionable research questions""",['Evan R. Murphy'],2022-05-12T21:22:34Z,eaforum,,
120970,https://forum.effectivealtruism.org/posts/zb22pAKoFGsqKwnCg/the-shutdown-problem-three-theorems,The Shutdown Problem: Three Theorems,['EJT'],2023-10-23T15:36:03Z,eaforum,, 120987,https://forum.effectivealtruism.org/posts/pq5tS6WGmeaTWi5uu/ai-policy-and-governance-in-australia-notes-from-an-initial,AI policy & governance in Australia: notes from an initial discussion,['AlexanderSaeri'],2023-05-15T00:00:38Z,eaforum,, 121009,https://forum.effectivealtruism.org/posts/8yaQ6i3oaFLprsFyb/ai-alignment-researchers-may-have-a-comparative-advantage-in,AI alignment researchers may have a comparative advantage in reducing s-risks,['Lukas_Gloor'],2023-02-15T13:01:51Z,eaforum,, 121034,https://forum.effectivealtruism.org/posts/2sZudkyLtNqsuskE5/the-uk-ai-safety-summit-tomorrow,The UK AI Safety Summit tomorrow,['SebastianSchmidt'],2023-10-31T19:09:18Z,eaforum,, 121061,https://forum.effectivealtruism.org/posts/aTztN2FFRQ4GB6CRt/how-does-one-find-out-their-agi-timelines,How does one find out their AGI timelines?,['Yadav'],2022-11-07T22:34:43Z,eaforum,, 121070,https://forum.effectivealtruism.org/posts/isTXkKprgHh5j8WQr/strategic-perspectives-on-transformative-ai-governance,Strategic Perspectives on Transformative AI Governance: Introduction,['MMMaas'],2022-07-02T11:20:27Z,eaforum,, 121085,https://forum.effectivealtruism.org/posts/LdzWExZBLBBXScaog/usd1-000-bounty-for-an-ai-programme-lead-recommendation,"$1,000 bounty for an AI Programme Lead recommendation","['Cillian Crosson', 'Training for Good']",2023-08-14T13:11:48Z,eaforum,, 121095,https://forum.effectivealtruism.org/posts/3Rz4TGR9T8JELGGG9/transformative-ai-and-compute-reading-list,Transformative AI and Compute - Reading List,['Frederik Berg'],2023-09-04T06:21:44Z,eaforum,, 121110,https://forum.effectivealtruism.org/posts/GcYTpFBfXx7WCgza6/how-to-pursue-a-career-in-ai-governance-and-coordination,How to pursue a career in AI governance and coordination,"['Cody_Fenwick', '80000_Hours']",2023-09-25T12:00:38Z,eaforum,, 121143,https://forum.effectivealtruism.org/posts/WJZAc6fTYNbb5DeAW/a-primer-and-some-reflections-on-recent-cser-work-eab-talk,A primer & some reflections on recent CSER work (EAB talk),['MMMaas'],2022-04-12T12:56:26Z,eaforum,, 121179,https://forum.effectivealtruism.org/posts/Bi8av6iknHFXkSxnS/should-chatgpt-make-us-downweight-our-belief-in-the,Should ChatGPT make us downweight our belief in the consciousness of non-human animals?,['splinter'],2023-02-18T23:29:38Z,eaforum,, 121196,https://forum.effectivealtruism.org/posts/raFAKyw7ofSo9mRQ3/ought-s-theory-of-change,Ought's theory of change,"['stuhlmueller', 'jungofthewon']",2022-04-12T00:09:21Z,eaforum,, 121214,https://forum.effectivealtruism.org/posts/JQQAQrunyGGhzE23a/database-of-existential-risk-estimates,Database of existential risk estimates,['MichaelA'],2020-04-15T12:43:08Z,eaforum,, 121231,https://forum.effectivealtruism.org/posts/zpReK9a8gkpGNYmBt/preventing-an-ai-related-catastrophe-problem-profile,Preventing an AI-related catastrophe - Problem profile,"['Benjamin Hilton', '80000_Hours']",2022-08-29T18:49:18Z,eaforum,, 121252,https://forum.effectivealtruism.org/posts/zozxDjHkizsfWLEC3/m-and-a-in-ai,M&A in AI,['Hauke Hillebrandt'],2023-10-30T17:43:59Z,eaforum,, 121273,https://forum.effectivealtruism.org/posts/yWHhb5MBBD7gb46Ch/is-there-much-need-for-frontend-engineers-in-ai-alignment,Is there much need for frontend engineers in AI alignment?,['Michael G'],2023-09-21T20:48:06Z,eaforum,,
121283,https://forum.effectivealtruism.org/posts/r8kZ78uBs6XTMhKes/congressional-hearing-oversight-of-a-i-legislating-on,[Congressional Hearing] Oversight of A.I.: Legislating on Artificial Intelligence,['Tristan Williams'],2023-11-01T18:15:47Z,eaforum,, 121323,https://forum.effectivealtruism.org/posts/vZWkDkvc3zhdLaPpd/crosspost-ai-regulation-may-be-more-important-than-ai,[Crosspost] AI Regulation May Be More Important Than AI Alignment For Existential Safety,['Otto'],2023-08-24T16:01:08Z,eaforum,, 121337,https://forum.effectivealtruism.org/posts/qy25pydHAYZoCFsAG/ai-safety-newsletter-3-ai-policy-proposals-and-a-new,AI Safety Newsletter #3: AI policy proposals and a new challenger approaches,"['Oliver Z', 'Dan H', 'Akash', 'aogara']",2023-04-25T16:15:17Z,eaforum,, 121371,https://forum.effectivealtruism.org/posts/zYSAFtjasxsfm3nmh/cost-effectiveness-of-student-programs-for-ai-safety,Cost-effectiveness of student programs for AI safety research,['Center for AI Safety'],2023-07-10T17:23:37Z,eaforum,, 121388,https://forum.effectivealtruism.org/posts/h2EaaDchr9QYuKz9z/rabbits-robots-and-resurrection,"Rabbits, robots and resurrection",['Patrick Wilson'],2022-05-10T15:00:18Z,eaforum,, 121418,https://forum.effectivealtruism.org/posts/743io3WyzezJbHowW/the-concept-of-boundary-layer-in-language-games-and-its,The Concept of Boundary Layer in Language Games and Its Implications for AI,['Mirage'],2023-03-24T13:50:41Z,eaforum,, 121440,https://forum.effectivealtruism.org/posts/vGiyvfaGGFEzQsETR/what-s-happening-in-australia,What's Happening in Australia,"['Bradley Tjandra', 'Nathan Sherburn']",2022-11-07T01:03:57Z,eaforum,, 121462,https://forum.effectivealtruism.org/posts/oo96uRHNbGjr4DHut/linkpost-governance-of-superintelligence-by-openai,"[Linkpost] ""Governance of superintelligence"" by OpenAI",['Daniel_Eth'],2023-05-22T20:15:25Z,eaforum,, 121486,https://forum.effectivealtruism.org/posts/dJy59zjes5qRKwc9S/catholic-theologians-and-priests-on-artificial-intelligence,Catholic theologians and priests on artificial intelligence,['anonymous6'],2022-06-14T18:53:49Z,eaforum,, 121513,https://forum.effectivealtruism.org/posts/yNxn4HxDSMdRyrv6E/collection-of-work-on-should-you-focus-on-the-eu-if-you-re,Collection of work on 'Should you focus on the EU if you're interested in AI governance for longtermist/x-risk reasons?',['MichaelA'],2022-08-06T16:49:44Z,eaforum,, 121522,https://forum.effectivealtruism.org/posts/8CM9vZ2nnQsWJNsHx/existential-risk-from-ai-survey-results,"""Existential risk from AI"" survey results",['RobBensinger'],2021-06-01T20:19:33Z,eaforum,, 121537,https://forum.effectivealtruism.org/posts/aztshctf3PxBnKHqF/international-ai-institutions-a-literature-review-of-models,"International AI Institutions: a literature review of models, examples, and proposals","['MMMaas', 'JJ_Villalobos', 'Legal Priorities Project']",2023-09-26T15:26:18Z,eaforum,, 121581,https://forum.effectivealtruism.org/posts/RYNtykh5xM467zRNj/why-some-people-disagree-with-the-cais-statement-on-ai,Why some people disagree with the CAIS statement on AI,"['David_Moss', 'WillemSleegers']",2023-08-15T13:39:49Z,eaforum,, 121606,https://forum.effectivealtruism.org/posts/MSkxRv8hviGvGgasD/ai-risk-reward-a-simple-model,AI risk/reward: A simple model,['Nathan Young'],2023-05-04T19:12:25Z,eaforum,, 121629,https://forum.effectivealtruism.org/posts/WhDa26A3AKaStvuD9/critique-of-superintelligence-part-5,Critique of Superintelligence Part 5,['Fods12'],2018-12-13T05:19:05Z,eaforum,, 
121646,https://forum.effectivealtruism.org/posts/xqBaBjXYy5yHbpXou/l-importanza-delle-ia-come-possibile-minaccia-per-l-umanita,L’importanza delle IA come possibile minaccia per l’umanità,['EA Italy'],2023-01-17T22:24:34Z,eaforum,, 121674,https://forum.effectivealtruism.org/posts/jPW3jgfYPBrwHEbog/linkpost-ny-times-feature-on-anthropic,[Linkpost] NY Times Feature on Anthropic,['Garrison'],2023-07-12T19:30:53Z,eaforum,, 121692,https://forum.effectivealtruism.org/posts/TQdNRM9gofsN9thYv/theory-waw-might-be-of-higher-impact-than-x-risk-prevention,Theory: “WAW might be of higher impact than x-risk prevention based on utilitarianism”,['Jens Aslaug'],2023-09-12T13:11:15Z,eaforum,, 121715,https://forum.effectivealtruism.org/posts/J7LwudixQX5FHrcP3/governing-high-impact-ai-systems-understanding-canada-s-2,"Governing High-Impact AI Systems: Understanding Canada’s Proposed AI Bill. April 15, Carleton University, Ottawa",['Liav.Koren'],2023-03-27T23:11:39Z,eaforum,, 121724,https://forum.effectivealtruism.org/posts/6qaaRnu6oN4pdAnWF/enabling-more-feedback,Enabling more feedback,['JJ Hepburn'],2021-12-10T06:52:12Z,eaforum,, 121736,https://forum.effectivealtruism.org/posts/6ercwC6JAPFTdKsy6/please-someone-make-a-dataset-of-supposed-cases-of-tech,"Please, someone make a dataset of supposed cases of ""tech panic""",['Harrison Durland'],2023-11-07T02:49:15Z,eaforum,, 121745,https://forum.effectivealtruism.org/posts/2unCFr2pnFHuNDT9z/apply-to-spring-2024-policy-internships-we-can-help,Apply to Spring 2024 policy internships (we can help),"['Elika', 'Vaidehi Agarwalla']",2023-10-04T14:45:40Z,eaforum,, 121754,https://forum.effectivealtruism.org/posts/ebYdBNpGnshhm2Gkq/neartermists-should-consider-agi-timelines-in-their-spending,Neartermists should consider AGI timelines in their spending decisions,['Tristan Cook'],2022-07-26T17:01:34Z,eaforum,, 121771,https://forum.effectivealtruism.org/posts/Zcy8EDfQ9TXFGL75m/platform-for-project-spitballing-e-g-for-ai-field-building,"Platform for Project Spitballing? (e.g., for AI field building)",['Harrison Durland'],2023-04-03T15:45:55Z,eaforum,,
121793,https://forum.effectivealtruism.org/posts/kWbBwoqgSaadMtzf5/how-to-become-more-agentic-by-gpt-ea-forum-v1,"How to become more agentic, by GPT-EA-Forum-v1",['JoyOptimizer'],2022-06-20T06:50:15Z,eaforum,, 121807,https://forum.effectivealtruism.org/posts/zpaz2n5pT4xLeF3K9/three-kinds-of-competitiveness,Three kinds of competitiveness,['AI Impacts'],2020-04-02T03:46:37Z,eaforum,, 121829,https://forum.effectivealtruism.org/posts/7WvoCbWfa6kWsgbA9/168-whether-deep-history-says-we-re-heading-for-an,"#168 – Whether deep history says we’re heading for an intelligence explosion (Ian Morris on the 80,000 Hours Podcast)",['80000_Hours'],2023-10-24T15:24:58Z,eaforum,, 121844,https://forum.effectivealtruism.org/posts/GDdvdhbGfCehnoJzY/strongest-real-world-examples-supporting-ai-risk-claims,Strongest real-world examples supporting AI risk claims?,['rosehadshar'],2023-09-05T15:11:48Z,eaforum,, 121857,https://forum.effectivealtruism.org/posts/aaa9pEwnvyeE2TYvg/announcing-timaeus,Announcing Timaeus,"['Stan van Wingerden', 'Jesse Hoogland', 'Alexander Gietelink Oldenziel']",2023-10-22T13:32:22Z,eaforum,, 121875,https://forum.effectivealtruism.org/posts/gYoB8vZcPGcL3cAKH/language-models-surprised-us,Language models surprised us,['Ajeya'],2023-08-29T21:18:28Z,eaforum,, 121893,https://forum.effectivealtruism.org/posts/XQJNnFryp8ZFqk6td/ai-safety-strategy-a-new-organization-for-better-timelines,AI Safety Strategy - A new organization for better timelines,['Prometheus'],2023-06-14T20:41:18Z,eaforum,, 121916,https://forum.effectivealtruism.org/posts/k6K3iktCLCTHRMJsY/the-possibility-of-an-indefinite-ai-pause,The possibility of an indefinite AI pause,['Matthew_Barnett'],2023-09-19T12:28:06Z,eaforum,, 121943,https://forum.effectivealtruism.org/posts/ogwD28mzJy8dkwtmc/values-lock-in-is-already-happening-without-agi,Values lock-in is already happening (without AGI),['anonymous'],2022-09-01T22:21:46Z,eaforum,, 121964,https://forum.effectivealtruism.org/posts/DiGL5FuLgWActPBsf/how-much-should-governments-pay-to-prevent-catastrophes,How much should governments pay to prevent catastrophes? Longtermism’s limited role,"['EJT', 'CarlShulman']",2023-03-19T16:50:01Z,eaforum,,
122003,https://forum.effectivealtruism.org/posts/gsdM7hbDNrD5kpuZR/agi-safety-fundamentals-programme-is-contracting-a-low-code,AGI Safety Fundamentals programme is contracting a low-code engineer,['Jamie Bernardi'],2022-08-26T15:43:07Z,eaforum,, 122018,https://forum.effectivealtruism.org/posts/pB6zRSH4Pekmh9Gmo/join-asap-ai-safety-accountability-programme,Join ASAP (AI Safety Accountability Programme) 🚀,['TheMcDouglas'],2022-09-10T11:15:28Z,eaforum,, 122030,https://forum.effectivealtruism.org/posts/CcJsh4JcxEqYDaSte/spreading-messages-to-help-with-the-most-important-century,Spreading messages to help with the most important century,['Holden Karnofsky'],2023-01-25T20:35:25Z,eaforum,, 122060,https://forum.effectivealtruism.org/posts/HexzSqmfx9APAdKnh/distillation-of-how-likely-is-deceptive-alignment,"Distillation of ""How Likely is Deceptive Alignment?""",['NickGabs'],2022-12-01T20:22:36Z,eaforum,, 122080,https://forum.effectivealtruism.org/posts/5MZpxbJJ5pkEBpAAR/the-case-for-long-term-corporate-governance-of-ai,The case for long-term corporate governance of AI,"['SethBaum', 'jonasschuett']",2021-11-03T10:50:10Z,eaforum,, 122104,https://forum.effectivealtruism.org/posts/adgGhehBAAjpdwKJT/apply-now-for-the-eu-tech-policy-fellowship-2023,Apply now for the EU Tech Policy Fellowship 2023,"['Jan-Willem', 'Cillian Crosson', 'Training for Good', 'SteveThompson']",2022-11-11T06:16:26Z,eaforum,, 122116,https://forum.effectivealtruism.org/posts/prvzqAxbRtzAcorq6/no-impending-agi-doesn-t-make-everything-else-unimportant,No. Impending AGI doesn't make everything else unimportant.,['Igor Ivanov'],2023-09-04T18:56:03Z,eaforum,, 122130,https://forum.effectivealtruism.org/posts/nALZEFPcFd5sHJ2ew/four-reasons-i-find-ai-safety-emotionally-compelling,Four reasons I find AI safety emotionally compelling,"['Kat Woods', 'Amber Dawn']",2022-06-28T14:01:33Z,eaforum,, 122139,https://forum.effectivealtruism.org/posts/JmzTk4GBzRQj4NeLb/credo-ai-is-hiring,Credo AI is hiring!,['IanEisenberg'],2022-03-03T18:02:23Z,eaforum,, 122156,https://forum.effectivealtruism.org/posts/cJSuDrGdDW9y7AfBt/how-to-hedge-investment-portfolio-against-ai-risk,How to hedge investment portfolio against AI risk?,['Timothy_Liptrot'],2023-01-31T08:04:33Z,eaforum,, 122169,https://forum.effectivealtruism.org/posts/gY9TjxNgSkgsMRDnL/i-designed-an-ai-safety-course-for-a-philosophy-department,I designed an AI safety course (for a philosophy department),['Eleni_A'],2023-09-23T21:56:53Z,eaforum,, 122181,https://forum.effectivealtruism.org/posts/A8ndMGC4FTQq46RRX/critique-of-superintelligence-part-1,Critique of Superintelligence Part 1,['Fods12'],2018-12-13T05:10:52Z,eaforum,, 122194,https://forum.effectivealtruism.org/posts/wfc46sHzSNspcem3k/jeffrey-ding-bringing-techno-globalism-back-a-romantically,Jeffrey Ding: Bringing techno-globalism back: a romantically realist reframing of the US-China tech relationship,['EA Global'],2020-11-21T08:12:00Z,eaforum,, 122205,https://forum.effectivealtruism.org/posts/juhMehg89FrLX9pTj/a-grand-strategy-to-recruit-ai-capabilities-researchers-into,A grand strategy to recruit AI capabilities researchers into AI safety research,['Peter S. Park'],2022-04-15T17:11:36Z,eaforum,,
122222,https://forum.effectivealtruism.org/posts/HmYfoKW6FuyFHmwcJ/why-is-argument-mapping-not-more-common-in-ea-rationality,"Why is ""Argument Mapping"" Not More Common in EA/Rationality (And What Objections Should I Address in a Post on the Topic?)",['Harrison Durland'],2022-12-23T21:55:56Z,eaforum,, 122232,https://forum.effectivealtruism.org/posts/BPcwvcqvuzScrfhrf/desirable-ai-qualities,Desirable? AI qualities,['brb243'],2022-03-21T22:05:14Z,eaforum,, 122247,https://forum.effectivealtruism.org/posts/aoBxSba4CsEAtHqRy/largest-ai-model-in-2-years-from-usd10b,Largest AI model in 2 years from $10B,['Péter Drótos'],2023-10-24T15:14:17Z,eaforum,, 122265,https://forum.effectivealtruism.org/posts/S4hpXjJ5cHvn6LkLu/announcing-the-ea-project-ideas-database,Announcing the EA Project Ideas Database,['Joe Rogero'],2023-06-22T20:20:26Z,eaforum,, 122277,https://forum.effectivealtruism.org/posts/e2dK25iWou3irqFss/the-orthogonality-thesis-is-not-obviously-true,The Orthogonality Thesis is Not Obviously True,['Omnizoid'],2023-04-05T21:08:02Z,eaforum,, 122297,https://forum.effectivealtruism.org/posts/4kRPYuogoSKnHNBhY/an-intervention-to-shape-policy-dialogue-communication-and,"An intervention to shape policy dialogue, communication, and AI research norms for AI safety",['Lee_Sharkey'],2017-10-01T18:29:16Z,eaforum,, 122311,https://forum.effectivealtruism.org/posts/b2kSos3JqQCjKayHR/ai-alignment-2018-2019-review,AI Alignment 2018-2019 Review,['Habryka'],2020-01-28T21:14:03Z,eaforum,, 122350,https://forum.effectivealtruism.org/posts/EhPKbX5JkwxvhfGhC/link-post-ai-should-be-terrified-of-humans,[link post] AI Should Be Terrified of Humans,['BrianK'],2023-07-24T11:13:01Z,eaforum,, 122362,https://forum.effectivealtruism.org/posts/ZnPLPFC49nJym7y8g/agi-x-animal-welfare-a-high-ev-outreach-opportunity,AGI x Animal Welfare: A High-EV Outreach Opportunity?,['simeon_c'],2023-06-28T20:44:25Z,eaforum,, 122372,https://forum.effectivealtruism.org/posts/DCtqgsywCRakLvHn6/alignment-s-phlogiston,Alignment's phlogiston,['Eleni_A'],2022-08-18T01:41:16Z,eaforum,, 122385,https://forum.effectivealtruism.org/posts/gomS2ocBzXJA2Mg3w/discussing-ai-human-collaboration-through-fiction-the-story,Discussing AI-Human Collaboration Through Fiction: The Story of Laika and GPT-∞,['Laika'],2023-07-27T06:04:54Z,eaforum,, 122397,https://forum.effectivealtruism.org/posts/ixua7wT7ZwuGfSLLi/a-request-to-keep-pessimistic-ai-posts-actionable-1,A request to keep pessimistic AI posts actionable.,['tcelferact'],2023-05-11T15:35:10Z,eaforum,, 122408,https://forum.effectivealtruism.org/posts/gbPthwLw3NovHAJdp/software-engineering-career-review,Software engineering - Career review,"['Benjamin Hilton', '80000_Hours']",2022-02-08T06:11:29Z,eaforum,, 122442,https://forum.effectivealtruism.org/posts/KDtg6dzjcETnJGaQr/how-would-you-estimate-the-value-of-delaying-agi-by-1-day-in,"How would you estimate the value of delaying AGI by 1 day, in marginal donations to GiveWell?",['AnonymousAccount'],2022-12-16T09:25:34Z,eaforum,, 122457,https://forum.effectivealtruism.org/posts/wn9PkfWWWhpCypep6/misha-yagudin-and-ozzie-gooen-discuss-llms-and-effective,Misha Yagudin and Ozzie Gooen Discuss LLMs and Effective Altruism,"['Ozzie Gooen', 'Misha_Yagudin']",2023-01-06T22:59:28Z,eaforum,, 122500,https://forum.effectivealtruism.org/posts/yasigF54XKCzuxcfh/emerging-technologies-more-to-explore,Emerging Technologies: More to explore,['EA Handbook'],2021-01-01T11:06:51Z,eaforum,,
122523,https://forum.effectivealtruism.org/posts/DEFJkvzHeBdpmKQNR/england-and-wales-and-windfalls,England & Wales & Windfalls,['John Bridge'],2022-06-03T10:26:47Z,eaforum,,
122573,https://forum.effectivealtruism.org/posts/BZiJ7C6cxrHsDbHsh/a-fictional-ai-law-laced-w-alignment-theory,A fictional AI law laced w/ alignment theory,['Miguel'],2023-07-17T03:26:33Z,eaforum,,
122587,https://forum.effectivealtruism.org/posts/mjB9osLTJJM4zKhoq/2022-ai-expert-survey-results,2022 AI expert survey results,['Zach Stein-Perlman'],2022-08-04T15:54:10Z,eaforum,,
122602,https://forum.effectivealtruism.org/posts/k2tBL2nNStZEoc4tF/cause-exploration-prizes-expanding-communication-about-agi,[Cause Exploration Prizes] Expanding communication about AGI risks,['Ines'],2022-09-22T05:30:59Z,eaforum,,
122628,https://forum.effectivealtruism.org/posts/aYg2ceChLMRbwqkyQ/ai-research-considerations-for-human-existential-safety,AI Research Considerations for Human Existential Safety (ARCHES),['Andrew Critch'],2020-05-21T06:55:32Z,eaforum,,
122643,https://forum.effectivealtruism.org/posts/Y5eQHtEB29nW6FfQE/are-we-confident-that-superintelligent-artificial,Are we confident that superintelligent artificial intelligence disempowering humans would be bad?,['Vasco Grilo'],2023-06-10T09:24:22Z,eaforum,,
122654,https://forum.effectivealtruism.org/posts/cPuXn8oDJpTDxGGdB/there-s-no-fire-alarm-for-artificial-general-intelligence,There's No Fire Alarm for Artificial General Intelligence,['EA Forum Archives'],2017-10-14T02:41:52Z,eaforum,,
122675,https://forum.effectivealtruism.org/posts/Xyu5NJ5tyJM8xFcJJ/job-managing-director-at-the-cooperative-ai-foundation,[Job] Managing Director at the Cooperative AI Foundation ($5000 Referral Bonus),['Lewis Hammond'],2023-07-03T16:02:25Z,eaforum,,
122685,https://forum.effectivealtruism.org/posts/cgcSoDqGYsiBfPEy4/we-did-agisf-s-8-week-course-in-3-days-here-s-how-it-went,We Did AGISF’s 8-week Course in 3 Days. Here’s How it Went,"['ag4000', 'Logan Riggs']",2022-07-24T16:46:30Z,eaforum,,
122701,https://forum.effectivealtruism.org/posts/ijeBndPQdx8kCcM2R/cambridge-ai-safety-hub-is-looking-for-full-or-part-time,Cambridge AI Safety Hub is looking for full- or part-time organisers,['hannah'],2023-07-15T14:31:47Z,eaforum,,
122713,https://forum.effectivealtruism.org/posts/MK9AfXfkiz2mku6fv/it-s-ok-not-to-go-into-ai-for-students,It's OK not to go into AI (for students),['ruthgrace'],2022-07-14T15:16:54Z,eaforum,,
122730,https://forum.effectivealtruism.org/posts/szwZkDBtW5sECHucy/brian-tse-sino-western-cooperation-in-ai-safety,Brian Tse: Sino-Western cooperation in AI safety,['EA Global'],2020-01-30T22:02:37Z,eaforum,,
122769,https://forum.effectivealtruism.org/posts/nAFavriWTLzmqTCcJ/how-agi-could-end-up-being-many-different-specialized-ai-s,"How ""AGI"" could end up being many different specialized AI's stitched together",['titotal'],2023-05-08T12:32:40Z,eaforum,,
122792,https://forum.effectivealtruism.org/posts/aWr4rMf7ZhoCAtoMc/skill-up-in-ml-for-ai-safety-with-the-intro-to-ml-safety,Skill up in ML for AI safety with the Intro to ML Safety course (Spring 2023),"['james', 'Oliver Z']",2023-01-05T11:02:34Z,eaforum,,
122834,https://forum.effectivealtruism.org/posts/fkN9zcqNeZGrXeeMF/international-cooperation-as-a-tool-to-reduce-two,International cooperation as a tool to reduce two existential risks.,['johl@umich.edu'],2021-04-19T16:51:37Z,eaforum,,
122861,https://forum.effectivealtruism.org/posts/S8xfeJGER74xwqLta/apply-to-the-cavendish-labs-fellowship-by-4-15,Apply to the Cavendish Labs Fellowship (by 4/15),"['Derik K', 'dyusha']",2023-04-03T23:06:43Z,eaforum,,
122870,https://forum.effectivealtruism.org/posts/juWCs6gyRvsXxPLgt/a-modest-case-for-hope,A modest case for hope,['xavier rg'],2022-10-17T06:03:20Z,eaforum,,
122879,https://forum.effectivealtruism.org/posts/GNfWT8Xqh89wRaaSg/unions-for-ai-safety,Unions for AI safety?,['dEAsign'],2023-09-24T00:13:00Z,eaforum,,
122897,https://forum.effectivealtruism.org/posts/tBkAg7Cys84eGyew6/assessing-china-s-importance-as-an-ai-superpower,Assessing China's importance as an AI superpower,['JulianHazell'],2023-02-03T11:08:43Z,eaforum,,
122914,https://forum.effectivealtruism.org/posts/3tkYQi7eyHnARzfPu/closing-the-feedback-loop-on-ai-safety-research,Closing the Feedback Loop on AI Safety Research.,['Ben.Hartley'],2022-07-29T21:46:22Z,eaforum,,
122924,https://forum.effectivealtruism.org/posts/5HhHJjP5qkCwvJozA/please-help-me-sense-check-my-assumptions-about-the-needs-of,Please help me sense-check my assumptions about the needs of the AI Safety community and related career plans,['PeterSlattery'],2023-03-27T08:11:23Z,eaforum,,
122948,https://forum.effectivealtruism.org/posts/3yAtGF3bCHqkSN52h/stampy-s-ai-safety-info-new-distillations-3-may-2023,Stampy's AI Safety Info - New Distillations #3 [May 2023],['markov'],2023-06-06T14:27:25Z,eaforum,,
122967,https://forum.effectivealtruism.org/posts/dYf4w6qvidP7x5AND/data-collection-for-ai-alignment-career-review,Data collection for AI alignment - Career review,"['Benjamin Hilton', '80000_Hours']",2022-06-03T11:44:22Z,eaforum,,
122990,https://forum.effectivealtruism.org/posts/NfgMAS67nKTGzmQMB/x-distracts-from-y-as-a-thinly-disguised-fight-over-group,“X distracts from Y” as a thinly-disguised fight over group status / politics,['Steven Byrnes'],2023-09-25T15:29:42Z,eaforum,,
123009,https://forum.effectivealtruism.org/posts/3EWpLid8tkyYJakfm/announcing-bluedot-impact,Announcing BlueDot Impact,"['Dewi', 'Jamie Bernardi', 'Will Saunter']",2022-12-09T16:45:48Z,eaforum,,
123025,https://forum.effectivealtruism.org/posts/4rMxiyPTPdzaFMyGm/high-impact-careers-in-formal-verification-artificial,High Impact Careers in Formal Verification: Artificial Intelligence,['quinn'],2021-06-05T14:45:20Z,eaforum,,
123048,https://forum.effectivealtruism.org/posts/L8kEmQgghxS9LXF3H/ea-is-underestimating-intelligence-agencies-and-this-is,EA is underestimating intelligence agencies and this is dangerous,['trevor1'],2023-08-26T16:52:54Z,eaforum,,
123068,https://forum.effectivealtruism.org/posts/DK7N5YofbM2cfPi8h/european-union-ai-development-and-governance-partnerships,European Union AI Development and Governance Partnerships,['EU AI Governance'],2022-01-19T10:26:15Z,eaforum,,
123087,https://forum.effectivealtruism.org/posts/hCwDNq6sZofgSEN3s/ai-safety-7-months-of-discussion-in-17-minutes,AI Safety - 7 months of discussion in 17 minutes,['Zoe Williams'],2023-03-15T23:41:37Z,eaforum,,
123128,https://forum.effectivealtruism.org/posts/NvzeAtoynxGjDnWkp/announcing-the-harvard-ai-safety-team,Announcing the Harvard AI Safety Team,['Xander Davies'],2022-06-30T18:34:04Z,eaforum,,
123161,https://forum.effectivealtruism.org/posts/eqnDKGjaujNWN3t3i/values-and-control,Values and control,['dotsam'],2022-08-04T18:28:45Z,eaforum,,
123174,https://forum.effectivealtruism.org/posts/CbBcrqkPCEc2tSgyq/eliciting-responses-to-marc-andreessen-s-why-ai-will-save,"Eliciting responses to Marc Andreessen's ""Why AI Will Save the World""",['Coleman@21stTalks'],2023-07-17T19:58:07Z,eaforum,,
123183,https://forum.effectivealtruism.org/posts/Y2xbKLjEmL6dCd2Z6/uk-government-to-host-first-global-summit-on-ai-safety,UK government to host first global summit on AI Safety,['DavidNash'],2023-06-08T13:24:16Z,eaforum,,
123204,https://forum.effectivealtruism.org/posts/gMri6G4LajzBHgmz4/i-m-interviewing-kat-woods-ea-powerhouse-what-should-i-ask,"I'm Interviewing Kat Woods, EA Powerhouse. What Should I Ask?",['SereneDesiree'],2022-09-20T09:49:07Z,eaforum,,
123217,https://forum.effectivealtruism.org/posts/AtdApEsvPr8QhdoBa/metaculus-predicts-weak-agi-in-2-years-and-agi-in-10,Metaculus Predicts Weak AGI in 2 Years and AGI in 10,['Chris Leong'],2023-03-24T19:43:18Z,eaforum,,
123227,https://forum.effectivealtruism.org/posts/KxDgeyyhppRD5qdfZ/link-post-how-plausible-are-ai-takeover-scenarios,[Link post] How plausible are AI Takeover scenarios?,"['SammyDMartin', 'Sam Clarke']",2021-09-27T13:03:54Z,eaforum,,
123250,https://forum.effectivealtruism.org/posts/jE844jDBytBK8dWhw/simulating-a-possible-alignment-solution-in-gpt2-medium,Simulating a possible alignment solution in GPT2-medium using Archetypal Transfer Learning,['Miguel'],2023-05-02T16:23:42Z,eaforum,,
123263,https://forum.effectivealtruism.org/posts/CfFpEoibJTrTmiWtF/ai-safety-newsletter-5-geoffrey-hinton-speaks-out-on-ai-risk,"AI Safety Newsletter #5: Geoffrey Hinton speaks out on AI risk, the White House meets with AI labs, and Trojan attacks on language models","['Center for AI Safety', 'Dan H', 'Akash', 'aogara']",2023-05-09T15:26:51Z,eaforum,,
123291,https://forum.effectivealtruism.org/posts/6LTh4foNuC3NdtmZH/ai-could-defeat-all-of-us-combined,AI Could Defeat All Of Us Combined,['Holden Karnofsky'],2022-06-10T23:25:51Z,eaforum,,
123317,https://forum.effectivealtruism.org/posts/MskKEsj8nWREoMjQK/introduction-to-pragmatic-ai-safety-pragmatic-ai-safety-1,Introduction to Pragmatic AI Safety [Pragmatic AI Safety #1],"['ThomasW', 'Dan H']",2022-05-09T17:02:00Z,eaforum,,
123338,https://forum.effectivealtruism.org/posts/LgmCk9Lzpiot6G4Xa/how-to-get-more-academics-enthusiastic-about-doing-ai-safety,How to get more academics enthusiastic about doing AI Safety research?,['PabloAMC'],2021-09-04T14:10:15Z,eaforum,,
123355,https://forum.effectivealtruism.org/posts/ExtCWHofqmBwDqfcb/illusion-of-truth-effect-and-ambiguity-effect-bias-in,Illusion of truth effect and Ambiguity effect: Bias in Evaluating AGI X-Risks,['Remmelt'],2023-01-05T04:05:18Z,eaforum,,
123365,https://forum.effectivealtruism.org/posts/o3zdvskr2DZPTRnkF/manifund-x-ai-worldviews,Manifund x AI Worldviews,"['Austin', 'Rachel Weinberg']",2023-03-31T15:32:04Z,eaforum,,
123374,https://forum.effectivealtruism.org/posts/HcGnjibaHe6To9eaG/force-multipliers-for-ea-research,‘Force multipliers’ for EA research,['Craig Drayton'],2022-06-18T13:39:28Z,eaforum,,
123396,https://forum.effectivealtruism.org/posts/FzcQSpbiiom7RHEjD/twitter-length-responses-to-24-ai-alignment-arguments,Twitter-length responses to 24 AI alignment arguments,['RobBensinger'],2022-03-14T19:34:22Z,eaforum,,
123409,https://forum.effectivealtruism.org/posts/YW6fDEDsd3MXDKhYD/slowing-down-ai-progress,Slowing down AI progress?,['Eleni_A'],2022-07-26T08:46:12Z,eaforum,,
123418,https://forum.effectivealtruism.org/posts/Qpgfrde9w9vCT3vgm/ai-impacts-quarterly-newsletter-jan-mar-2023,"AI Impacts Quarterly Newsletter, Jan-Mar 2023",['Harlan'],2023-04-17T23:07:19Z,eaforum,,
123432,https://forum.effectivealtruism.org/posts/tmxkRFx6HyhhvHdz4/a-map-to-navigate-ai-governance,A Map to Navigate AI Governance,['CaroJ'],2022-02-14T22:41:22Z,eaforum,,
123452,https://forum.effectivealtruism.org/posts/tgxZEei8ghtpxJoAg/when-to-diversify-breaking-down-mission-correlated-investing,When to diversify? Breaking down mission-correlated investing,"['jh', 'MichaelDickens']",2022-11-29T11:18:33Z,eaforum,,
123472,https://forum.effectivealtruism.org/posts/3sJbwpGbAu5tpGkqD/douglas-hoftstadter-concerned-about-ai-xrisk,Douglas Hoftstadter concerned about AI xrisk,['Eli Rose'],2023-07-03T03:30:40Z,eaforum,,
123484,https://forum.effectivealtruism.org/posts/SifFuesK7oc7DAMbw/intergenerational-trauma-impeding-cooperative-existential,Intergenerational trauma impeding cooperative existential safety efforts,['Andrew Critch'],2022-06-03T17:27:29Z,eaforum,,
123501,https://forum.effectivealtruism.org/posts/Bmjucdecv3p5smNC8/ai-impacts-quarterly-newsletter-apr-jun-2023,"AI Impacts Quarterly Newsletter, Apr-Jun 2023",['Harlan'],2023-07-18T18:01:45Z,eaforum,,
123525,https://forum.effectivealtruism.org/posts/M2SBwctwC6vBqAmZW/a-personal-take-on-longtermist-ai-governance,A personal take on longtermist AI governance,['lukeprog'],2021-07-16T22:08:04Z,eaforum,,
123549,https://forum.effectivealtruism.org/posts/DxNSjxcMhFnHLztcN/chris-olah-on-working-at-top-ai-labs-without-an-undergrad,Chris Olah on working at top AI labs without an undergrad degree,['80000_Hours'],2021-09-10T20:46:02Z,eaforum,,
123566,https://forum.effectivealtruism.org/posts/G334oF8rmwWEAyhQD/beri-epoch-and-far-will-explain-their-work-and-current-job,"BERI, Epoch, and FAR will explain their work & current job openings online this Sunday",['Rockwell'],2022-08-19T20:34:57Z,eaforum,,
123587,https://forum.effectivealtruism.org/posts/AhKpFhL4gKErf7bo3/apply-to-a-small-iteration-of-mlab-to-be-run-in-oxford,Apply to a small iteration of MLAB to be run in Oxford,"['Rio P', 'MariaK', 'OliverHayman']",2023-08-29T19:39:16Z,eaforum,,
123596,https://forum.effectivealtruism.org/posts/nT8Ybhjvz5qG3eLuw/implementational-considerations-for-digital-consciousness,Implementational Considerations for Digital Consciousness,['Derek Shiller'],2023-07-30T22:15:54Z,eaforum,,
123606,https://forum.effectivealtruism.org/posts/LywpuDpNEhTw8iqR3/ross-gruetzemacher-defining-and-unpacking-transformative-ai,Ross Gruetzemacher: Defining and unpacking transformative AI,['EA Global'],2019-10-18T08:22:08Z,eaforum,,
123615,https://forum.effectivealtruism.org/posts/f5wxYKFiJwjjRvk4Q/20-tips-tricks-lessons-and-thoughts-on-hosting-hackathons,"20+ tips, tricks, lessons and thoughts on hosting hackathons",['gergo'],2023-11-06T10:59:13Z,eaforum,,
123644,https://forum.effectivealtruism.org/posts/EBZggasznbotKrpLW/tony-blair-institute-ai-safety-work,Tony Blair Institute AI Safety Work,['TomWestgarth'],2023-06-13T13:16:34Z,eaforum,,
123665,https://forum.effectivealtruism.org/posts/xJYRiy8Jjy2Tk2qHr/intro-to-ai-risk-for-ai-grad-students,Intro to AI risk for AI grad students?,['tae'],2023-09-22T20:34:33Z,eaforum,,
123679,https://forum.effectivealtruism.org/posts/Rn4Em42vXcDWCEhSK/nuclear-brinksmanship-is-not-a-good-ai-x-risk-strategy,Nuclear brinksmanship is not a good AI x-risk strategy,['titotal'],2023-03-30T22:07:14Z,eaforum,,
123711,https://forum.effectivealtruism.org/posts/T3piiDHvaGuzE7KKF/ai-safety-overview-ceri-summer-research-fellowship-1,AI Safety Overview: CERI Summer Research Fellowship,['Jamie Bernardi'],2022-03-24T15:12:38Z,eaforum,,
123732,https://forum.effectivealtruism.org/posts/946ymfxwd7YAC9yvT/cea-should-invest-in-helping-altruists-navigate-advanced-ai,CEA Should Invest in Helping Altruists Navigate Advanced AI,['Chris Leong'],2023-05-14T14:52:57Z,eaforum,,
123758,https://forum.effectivealtruism.org/posts/Yseu9oG3gnb6ERc7n/three-biases-that-made-me-believe-in-ai-risk,Three Biases That Made Me Believe in AI Risk,['beth\u200b'],2019-02-13T23:22:21Z,eaforum,,
123776,https://forum.effectivealtruism.org/posts/Go5CDwyna3hAfngKP/no-the-emh-does-not-imply-that-markets-have-long-agi,"No, the EMH does not imply that markets have long AGI timelines",['Jakob'],2023-04-24T08:27:39Z,eaforum,,
123790,https://forum.effectivealtruism.org/posts/QYHJ6GSkusS7EjbSg/new-speaker-series-on-ai-alignment-starting-march-3,New Speaker Series on AI Alignment Starting March 3,['Zechen Zhang'],2022-02-26T10:58:52Z,eaforum,,
123800,https://forum.effectivealtruism.org/posts/vxpqFFtrRsG9RLkqa/announcement-you-can-now-listen-to-the-ai-safety,Announcement: You can now listen to the “AI Safety Fundamentals” courses,"['peterhartree', 'Jamie Bernardi', 'Perrin Walker', 'TYPE III AUDIO', 'BlueDot Impact']",2023-06-09T16:32:59Z,eaforum,,
123811,https://forum.effectivealtruism.org/posts/kZqvjtLMkQyByi6yb/open-philanthropy-s-ai-governance-grantmaking-so-far,Open Philanthropy's AI governance grantmaking (so far),['Aaron Gertler'],2020-12-17T12:00:06Z,eaforum,,
123845,https://forum.effectivealtruism.org/posts/ZD7KxnqfXR7cyouxP/towards-evidence-gap-maps-for-ai-safety,Towards evidence gap-maps for AI safety,['dEAsign'],2023-07-25T08:13:35Z,eaforum,,
123871,https://forum.effectivealtruism.org/posts/ZNcdt7eYWW7YXALvx/what-ai-could-mean-for-animals,What AI could mean for animals,['Max Taylor'],2023-10-06T08:36:59Z,eaforum,,
123918,https://forum.effectivealtruism.org/posts/TaJrx7XHMdK6kvQ9X/the-physicists-a-play-about-extinction-and-the,"""The Physicists"": A play about extinction and the responsibility of scientists",['Lara_TH'],2022-11-29T16:53:12Z,eaforum,,
123938,https://forum.effectivealtruism.org/posts/j3DmLmbhGQkYcZD2p/what-could-an-ai-caused-existential-catastrophe-actually,What could an AI-caused existential catastrophe actually look like?,"['Benjamin Hilton', '80000_Hours']",2022-09-12T16:25:16Z,eaforum,,
123965,https://forum.effectivealtruism.org/posts/G3JuuRsALQXLgXycL/what-is-the-best-article-to-introduce-someone-to-ai-safety,What is the best article to introduce someone to AI safety for the first time?,['trevor1'],2022-11-22T02:06:52Z,eaforum,,
123974,https://forum.effectivealtruism.org/posts/twf3ByYGZGAupAKAB/ai-risk-in-the-state-of-the-european-union-address,AI-Risk in the State of the European Union Address,['Sam Bogerd'],2023-09-13T13:27:20Z,eaforum,,
124004,https://forum.effectivealtruism.org/posts/RueHqBuBKQBtSYkzp/observations-on-the-funding-landscape-of-ea-and-ai-safety,Observations on the funding landscape of EA and AI safety,"['Vilhelm Skoglund', 'Jona']",2023-10-02T09:45:35Z,eaforum,,
124035,https://forum.effectivealtruism.org/posts/fkPBQNNuDzSeX8jmp/apply-to-the-constellation-visiting-researcher-program-and,"Apply to the Constellation Visiting Researcher Program and Astra Fellowship, in Berkeley this Winter","['Anjay F', 'billzito', 'Alexandra Bates', 'Nate Thomas']",2023-10-26T03:14:42Z,eaforum,,
124045,https://forum.effectivealtruism.org/posts/ExHkFcNAL9cjqFmsF/law-following-ai-3-lawless-ai-agents-undermine-stabilizing,Law-Following AI 3: Lawless AI Agents Undermine Stabilizing Agreements,['Cullen'],2022-04-27T17:20:35Z,eaforum,,
124062,https://forum.effectivealtruism.org/posts/LM2JnTHygKbn7eKLz/ai-alignment-research-engineer-accelerator-arena-call-for-1,AI Alignment Research Engineer Accelerator (ARENA): call for applicants,"['TheMcDouglas', ""Kathryn O'Rourke""]",2023-11-07T09:43:41Z,eaforum,,
124077,https://forum.effectivealtruism.org/posts/LjBgFdgHGjmnwjGob/why-isn-t-there-a-charity-entrepreneurship-program-for-ai,Why isn't there a Charity Entrepreneurship program for AI Safety?,['yanni'],2023-10-04T02:12:22Z,eaforum,,
124086,https://forum.effectivealtruism.org/posts/DcxHhLuKDWeASxGz3/hiring-inform-and-shape-a-new-project-on-ai-safety-at-1,HIRING: Inform and shape a new project on AI safety at Partnership on AI,['Madhulika Srikumar'],2021-11-24T16:29:13Z,eaforum,,
124098,https://forum.effectivealtruism.org/posts/apZwBKDope6xqP3CT/will-the-vast-majority-of-technological-progress-happen-in,Will the vast majority of technological progress happen in the longterm future?,['Vasco Grilo'],2023-07-08T08:40:09Z,eaforum,,
124113,https://forum.effectivealtruism.org/posts/hEwtb9Zjt5qwc2ygH/3-levels-of-threat-obfuscation,3 levels of threat obfuscation,['Holden Karnofsky'],2023-08-02T17:09:13Z,eaforum,,
124134,https://forum.effectivealtruism.org/posts/b3nGMGGhTZawy8Zfd/ai-x-risk-integrating-on-the-shoulders-of-giants,AI X-Risk: Integrating on the Shoulders of Giants,['TD_Pilditch'],2022-11-01T16:07:57Z,eaforum,,
124150,https://forum.effectivealtruism.org/posts/Xf6QE6txgvfCGvZpk/case-studies-of-self-governance-to-reduce-technology-risk,Case studies of self-governance to reduce technology risk,['jia'],2021-04-06T08:49:58Z,eaforum,,
124180,https://forum.effectivealtruism.org/posts/svD6fFGWvjsvCxjgM/aisn-18-challenges-of-reinforcement-learning-from-human,"AISN #18: Challenges of Reinforcement Learning from Human Feedback, Microsoft’s Security Breach, and Conceptual Research on AI Safety","['Center for AI Safety', 'aogara', 'Dan H']",2023-08-08T15:52:07Z,eaforum,,
124203,https://forum.effectivealtruism.org/posts/pbiGHk6AjRxdBPoD8/ai-safety-starter-pack,AI safety starter pack,['mariushobbhahn'],2022-03-28T16:05:34Z,eaforum,,
124227,https://forum.effectivealtruism.org/posts/4MkkdbSa42h73pXi8/scale-schlep-and-systems,"Scale, schlep, and systems",['Ajeya'],2023-10-10T16:59:04Z,eaforum,,
124249,https://forum.effectivealtruism.org/posts/q8jxedwSKBdWA3nH7/we-are-not-alone-many-communities-want-to-stop-big-tech-from,We are not alone: many communities want to stop Big Tech from scaling unsafe AI,['Remmelt'],2023-09-22T17:38:08Z,eaforum,,
124264,https://forum.effectivealtruism.org/posts/izJxJwgteyDrKyyXe/against-using-stock-prices-to-forecast-ai-timelines,Against using stock prices to forecast AI timelines,"['basil.halperin', 'J. Zachary Mazlish', 'tmychow']",2023-01-10T16:04:05Z,eaforum,,
124279,https://forum.effectivealtruism.org/posts/veR4W92bZsTsGgS3D/a-moral-backlash-against-ai-will-probably-slow-down-agi,A moral backlash against AI will probably slow down AGI development,['Geoffrey Miller'],2023-05-31T21:31:18Z,eaforum,,
124296,https://forum.effectivealtruism.org/posts/oqBJk2Ae3RBegtFfn/my-thoughts-on-nanotechnology-strategy-research-as-an-ea,My thoughts on nanotechnology strategy research as an EA cause area,['Ben Snodin'],2022-05-02T09:41:11Z,eaforum,,
124323,https://forum.effectivealtruism.org/posts/xkmiLmecWnD4LKRQ2/ea-is-in-albuquerque,🏜️ EA is in Albuquerque!,['Alex Long'],2023-05-12T22:09:54Z,eaforum,,
124332,https://forum.effectivealtruism.org/posts/CZvJEhNWjecB8pgzw/an-agi-emergency-eject-criteria-consensus-could-be-really-1,An 'AGI Emergency Eject Criteria' consensus could be really useful.,['tcelferact'],2023-04-07T16:21:44Z,eaforum,,
124352,https://forum.effectivealtruism.org/posts/nJz6vaqK7xBDXpMwk/aligning-the-aligners-ensuring-aligned-ai-acts-for-the,Aligning the Aligners: Ensuring Aligned AI acts for the common good of all mankind,['timunderwood'],2023-01-16T11:13:16Z,eaforum,,
124372,https://forum.effectivealtruism.org/posts/WqQaPYhzDYJwLC6gW/ai-governance-career-paths-for-europeans,AI Governance Career Paths for Europeans,['careersthrowaway'],2020-05-16T06:40:55Z,eaforum,,
124394,https://forum.effectivealtruism.org/posts/ChuABPEXmRumcJY57/video-and-transcript-of-presentation-on-existential-risk,Video and Transcript of Presentation on Existential Risk from Power-Seeking AI,['Joe_Carlsmith'],2022-05-08T03:52:44Z,eaforum,,
124425,https://forum.effectivealtruism.org/posts/goYTp3CyLA4dnL2kN/catastrophic-risks-from-unsafe-ai-navigating-a-tightrope,"Catastrophic Risks from Unsafe AI: Navigating a Tightrope Scenario (Ben Garfinkel, EAG London 2023)",['AlexanderSaeri'],2023-06-02T09:59:21Z,eaforum,,
124449,https://forum.effectivealtruism.org/posts/7zXEEQED89Ebbazc2/how-to-diversify-conceptual-ai-alignment-the-model-behind,How to Diversify Conceptual AI Alignment: the Model Behind Refine,['adamShimi'],2022-07-20T10:44:27Z,eaforum,,
124469,https://forum.effectivealtruism.org/posts/7jdEqubznyiNnY4Tn/sentience-institute-2021-end-of-year-summary-1,Sentience Institute 2021 End of Year Summary,['Ali'],2021-11-26T14:40:37Z,eaforum,,
124496,https://forum.effectivealtruism.org/posts/voEDjdnZyWkxi54SR/are-humans-human-compatible,Are Humans 'Human Compatible'?,['Matt Boyd'],2019-12-06T05:49:12Z,eaforum,,
124526,https://forum.effectivealtruism.org/posts/XTNwCtsecACuARTcH/considerations-on-transformative-ai-and-explosive-growth,Considerations on transformative AI and explosive growth from a semiconductor-industry perspective,['Muireall'],2023-05-31T01:11:44Z,eaforum,,
124538,https://forum.effectivealtruism.org/posts/QA4N75QwsCbZtFBWF/the-right-to-protection-from-catastrophic-ai-risk,The right to protection from catastrophic AI risk,['Jack Cunningham'],2022-04-09T23:11:15Z,eaforum,,
124560,https://forum.effectivealtruism.org/posts/aXGDHeyhaep5sLzuG/link-post-interesting-shallow-round-up-of-reasons-to-be,[Link Post] Interesting shallow round-up of reasons to be skeptical that transformative AI or explosive economic growth are coming soon,['Dr. David Mathers'],2023-06-28T19:49:31Z,eaforum,,
124590,https://forum.effectivealtruism.org/posts/q6t5zKCg5peZA92Zu/pivotal-act-intentions-negative-consequences-and-fallacious,“Pivotal Act” Intentions: Negative Consequences and Fallacious Arguments,['Andrew Critch'],2022-04-19T20:24:28Z,eaforum,,
124609,https://forum.effectivealtruism.org/posts/suBJdDkEu9EaSmTxJ/implications-of-large-language-model-diffusion-for-ai,Implications of large language model diffusion for AI governance,['Ben Cottier'],2022-12-21T13:50:29Z,eaforum,,
124651,https://forum.effectivealtruism.org/posts/iYCAoP3JgXxGAvMrr/fhi-report-the-windfall-clause-distributing-the-benefits-of,FHI Report: The Windfall Clause: Distributing the Benefits of AI for the Common Good,['Cullen'],2020-02-05T23:49:43Z,eaforum,,
124664,https://forum.effectivealtruism.org/posts/9dHpEjCzBnenaXfBC/announcing-far-labs-an-ai-safety-coworking-space-1,"Announcing FAR Labs, an AI safety coworking space",['ghabs'],2023-10-02T20:15:46Z,eaforum,,
124679,https://forum.effectivealtruism.org/posts/yczkGfcfWoRN6zfrf/encultured-ai-part-1-enabling-new-benchmarks,"Encultured AI, Part 1: Enabling New Benchmarks",['Andrew Critch'],2022-08-08T22:49:41Z,eaforum,,
124694,https://forum.effectivealtruism.org/posts/zmi4oAGMMe92xgSD3/a-discussion-with-chatgpt-on-value-based-models-vs-large,"A discussion with ChatGPT on value-based models vs. large language models, etc..",['Miguel'],2023-02-04T16:49:30Z,eaforum,,
124714,https://forum.effectivealtruism.org/posts/7mSqokBNuHu3rzy4L/retrospective-on-recent-activity-of-riesgos-catastroficos,Retrospective on recent activity of Riesgos Catastróficos Globales,['Jaime Sevilla'],2023-05-01T18:35:35Z,eaforum,,
124731,https://forum.effectivealtruism.org/posts/fLroJGMbszAjYBSdE/singapore-s-technical-ai-alignment-research-career-guide-1,Singapore’s Technical AI Alignment Research Career Guide,['Yi-Yang'],2020-08-26T08:09:58Z,eaforum,,
124752,https://forum.effectivealtruism.org/posts/e2upqGf6q4CiudLMu/ai-risk-intro-2-solving-the-problem,AI Risk Intro 2: Solving The Problem,"['LRudL', 'TheMcDouglas']",2022-09-24T09:33:05Z,eaforum,,
124800,https://forum.effectivealtruism.org/posts/Cre2YC3hd5DeYLqDH/link-post-new-york-times-white-house-unveils-initiatives-to,[Link Post: New York Times] White House Unveils Initiatives to Reduce Risks of A.I.,['Rockwell'],2023-05-04T14:04:56Z,eaforum,,
124833,https://forum.effectivealtruism.org/posts/JPdfFC3dM3Ksr4apo/pile-of-law-and-law-following-ai,Pile of Law and Law-Following AI,['Cullen'],2022-07-13T00:29:18Z,eaforum,,
124847,https://forum.effectivealtruism.org/posts/p3eiBqnijXPv5pCMA/usd20k-in-prizes-ai-safety-arguments-competition,[$20K In Prizes] AI Safety Arguments Competition,"['ThomasW', 'Dan H', 'Oliver Z', 'Sidney Hough', 'Kevin Liu']",2022-04-26T16:21:40Z,eaforum,,
124857,https://forum.effectivealtruism.org/posts/jwCym3zqbztA8qRZ4/arc-is-hiring-theoretical-researchers,ARC is hiring theoretical researchers,"['Jacob_Hilton', 'Paul_Christiano', 'Mark Xu']",2023-06-12T19:11:42Z,eaforum,,
124885,https://forum.effectivealtruism.org/posts/CgffHE7tggZmw5Lb6/seeking-advice-on-impactful-career-paths-given-my-unique,Seeking advice on impactful career paths given my unique capabilities and interests,['Grateful4PathTips (bot)'],2023-03-31T23:30:46Z,eaforum,,
124899,https://forum.effectivealtruism.org/posts/wDaqyPJxhb6SASJSS/life-of-gpt,Life of GPT,['Odd anon'],2023-11-08T22:31:55Z,eaforum,,
124929,https://forum.effectivealtruism.org/posts/94DvC7J5vtbSKWSXf/creative-writing-contest-metal-or-mortal,[Creative Writing Contest] Metal or Mortal,['Louis'],2021-10-16T16:24:11Z,eaforum,,
124942,https://forum.effectivealtruism.org/posts/zH88C83bnPtLruwKg/analogy-of-ai-alignment-as-raising-a-child,Analogy of AI Alignment as Raising a Child?,['Aaron_Scher'],2022-02-19T21:40:24Z,eaforum,,
124954,https://forum.effectivealtruism.org/posts/sCTJdSavgXkDo8Log/ai-manufactured-crisis-don-t-trust-ai-to-protect-us-from-ai,AI Manufactured Crisis (don't trust AI to protect us from AI),['WobblyPanda2'],2023-06-01T11:12:45Z,eaforum,,
124968,https://forum.effectivealtruism.org/posts/JE3ZjEoWot6yQFSJj/join-the-ai-testing-hackathon-this-friday,Join the AI Testing Hackathon this Friday,"['Esben Kran', 'Apart Research']",2022-12-12T14:24:30Z,eaforum,,
124999,https://forum.effectivealtruism.org/posts/pTZ5uCA8memQ9faje/link-how-understanding-valence-could-help-make-future-ais,[Link] How understanding valence could help make future AIs safer,['Milan_Griffes'],2020-10-08T18:54:00Z,eaforum,,
125025,https://forum.effectivealtruism.org/posts/LB4b4idcMCWg4eJYA/perche-il-deep-learning-moderno-potrebbe-rendere-difficile-l,Perché il deep learning moderno potrebbe rendere difficile l’allineamento delle IA,['EA Italy'],2023-01-17T23:29:16Z,eaforum,,
125046,https://forum.effectivealtruism.org/posts/tSdEfPepkj6vHZyf9/why-not-offer-a-multi-million-billion-dollar-prize-for,Why not offer a multi-million / billion dollar prize for solving the Alignment Problem?,['Aryeh Englander'],2022-04-17T16:08:25Z,eaforum,,
125055,https://forum.effectivealtruism.org/posts/dzS6MwDdYcFFgmBFj/ai-can-exploit-safety-plans-posted-on-the-internet,AI can exploit safety plans posted on the Internet,['Peter S. Park'],2022-12-04T12:17:27Z,eaforum,,
125075,https://forum.effectivealtruism.org/posts/Q4rg6vwbtPxXW6ECj/we-are-fighting-a-shared-battle-a-call-for-a-different,We are fighting a shared battle (a call for a different approach to AI Strategy),['Gideon Futerman'],2023-03-16T14:37:17Z,eaforum,,
125096,https://forum.effectivealtruism.org/posts/hBAeEJqunNKXv8Mnp/is-a-career-in-making-ai-systems-more-secure-a-meaningful,Is a career in making AI systems more secure a meaningful way to mitigate the X-risk posed by AGI?,['Kyle O’Brien'],2022-02-13T07:05:32Z,eaforum,,
125105,https://forum.effectivealtruism.org/posts/ZPNNnEu2HGNSNmifo/we-all-teach-here-s-how-to-do-it-better,We all teach: here's how to do it better,['Michael Noetel'],2022-09-30T02:06:26Z,eaforum,,
125126,https://forum.effectivealtruism.org/posts/AqfWhMvfiakEcpwfv/training-a-gpt-model-on-ea-texts-what-data,Training a GPT model on EA texts: what data?,['JoyOptimizer'],2022-06-04T05:59:50Z,eaforum,,
125139,https://forum.effectivealtruism.org/posts/WtwMy69JKZeHEvykc/contribute-by-facilitating-the-agi-safety-fundamentals,Contribute by facilitating the AGI Safety Fundamentals Programme,['Jamie Bernardi'],2021-12-06T11:50:01Z,eaforum,,
125154,https://forum.effectivealtruism.org/posts/bLWG7onTMKzdozez8/it-s-not-obvious-that-getting-dangerous-ai-later-is-better,It’s not obvious that getting dangerous AI later is better,['Aaron_Scher'],2023-09-23T05:35:02Z,eaforum,,
125176,https://forum.effectivealtruism.org/posts/JNcp9c7Gzt5hBwA8u/my-plan-for-a-most-important-century-reading-group,My plan for a “Most Important Century” reading group,"[""Jack O'Brien""]",2022-01-19T09:32:40Z,eaforum,,
125190,https://forum.effectivealtruism.org/posts/ZZe5aFGKeZATYGGMD/upcoming-speaker-series-on-emerging-tech-national-security,"Upcoming speaker series on emerging tech, national security & US policy careers",['kuhanj'],2023-06-21T04:49:34Z,eaforum,,
125199,https://forum.effectivealtruism.org/posts/rykCowkpDJiwr9t2G/2024-s-risk-intro-fellowship,2024 S-risk Intro Fellowship,['Center on Long-Term Risk'],2023-10-12T19:14:48Z,eaforum,,
125211,https://forum.effectivealtruism.org/posts/D8NfyuQeGspM9fYpT/begging-pleading-ai-orgs-to-comment-on-nist-ai-risk,"Begging, Pleading AI Orgs to Comment on NIST AI Risk Management Framework",['anonymous'],2022-04-15T19:35:14Z,eaforum,,
125229,https://forum.effectivealtruism.org/posts/LwhzE3scZTqxERtNn/it-s-not-how-you-use-it,It's (not) how you use it,['Eleni_A'],2022-09-07T13:28:29Z,eaforum,,
125250,https://forum.effectivealtruism.org/posts/zrSx3NRZEaJENazHK/why-i-think-it-s-important-to-work-on-ai-forecasting,Why I think it's important to work on AI forecasting,['Matthew_Barnett'],2023-02-27T21:24:18Z,eaforum,,
125267,https://forum.effectivealtruism.org/posts/rXLazPKnm7PrwGAs6/new-deepmind-report-on-institutions-for-global-ai-governance,New DeepMind report on institutions for global AI governance,['finm'],2023-07-14T16:05:45Z,eaforum,,
125276,https://forum.effectivealtruism.org/posts/EcKmt8ZJ3dcQBigna/launching-foresight-institute-s-ai-grant-for-underexplored,Launching Foresight Institute’s AI Grant for Underexplored Approaches to AI Safety – Apply for Funding!,"['elteerkers', 'Allison Duettmann']",2023-08-17T07:27:13Z,eaforum,,
125297,https://forum.effectivealtruism.org/posts/o7e5xsGMswpK7q7ic/wfw-opportunity-and-theory-of-impact,WFW?: Opportunity and Theory of Impact,['DavidCorfield'],2022-11-02T00:45:04Z,eaforum,,
125319,https://forum.effectivealtruism.org/posts/ZExF3Z7WZpdBZZZEy/has-private-agi-research-made-independent-safety-research,Has private AGI research made independent safety research ineffective already? What should we do about this?,['Roman Leventov'],2023-01-23T16:23:46Z,eaforum,,
125337,https://forum.effectivealtruism.org/posts/u4E3LCPTqiqJtfhup/three-scenarios-of-pseudo-alignment,Three scenarios of pseudo-alignment,['Eleni_A'],2022-09-05T20:26:02Z,eaforum,,
125358,https://forum.effectivealtruism.org/posts/vEAieBkRqL7Rj8KvY/ai-safety-groups-should-imitate-career-development-clubs,AI Safety groups should imitate career development clubs,['Joshc'],2022-11-09T23:48:02Z,eaforum,,
125372,https://forum.effectivealtruism.org/posts/FoRyordtA7LDoEhd7/there-are-no-coherence-theorems,There are no coherence theorems,['EJT'],2023-02-20T21:52:14Z,eaforum,,
125381,https://forum.effectivealtruism.org/posts/HrS2pXQ3zuTwr2SKS/what-does-it-mean-to-become-an-expert-in-ai-hardware-1,What does it mean to become an expert in AI Hardware?,['Toph'],2021-01-09T04:15:03Z,eaforum,,
125431,https://forum.effectivealtruism.org/posts/wxjuboWosFP7ez5tM/geoffrey-miller-on-cross-cultural-understanding-between,Geoffrey Miller on Cross-Cultural Understanding Between China and Western Countries as a Neglected Consideration in AI Alignment,['Evan_Gaensbauer'],2023-04-17T03:26:30Z,eaforum,,
125452,https://forum.effectivealtruism.org/posts/saEXX9Nucz8mh9XgB/race-to-the-top-benchmarks-for-ai-safety,Race to the Top: Benchmarks for AI Safety,['isaduan'],2022-12-04T22:50:56Z,eaforum,,
125466,https://forum.effectivealtruism.org/posts/BSmMok4r5ocnD5dqT/rp-s-ai-governance-and-strategy-team-june-2023-interim-1,RP’s AI Governance & Strategy team - June 2023 interim overview,['MichaelA'],2023-06-22T13:45:12Z,eaforum,,
125505,https://forum.effectivealtruism.org/posts/bFDs7yFiEhgPt4LWt/the-case-for-ai-adaptation-the-perils-of-living-in-a-world,The Case for AI Adaptation: The Perils of Living in a World with Aligned and Well-Deployed Transformative Artificial Intelligence,['HTC'],2023-05-30T18:29:14Z,eaforum,,
125521,https://forum.effectivealtruism.org/posts/c73nsggC2GQE5wBjq/announcing-the-spt-model-web-app-for-ai-governance,Announcing the SPT Model Web App for AI Governance,"['Paolo Bova', 'Jonas Emanuel Müller', 'Tanja Rüegg', 'Modeling Cooperation', 'Robert Trager']",2022-08-04T10:45:25Z,eaforum,,
125539,https://forum.effectivealtruism.org/posts/mWGodAi9Mv2a2EbNj/systemic-cascading-risks-relevance-in-longtermism-and-value,Systemic Cascading Risks: Relevance in Longtermism & Value Lock-In,['Richard Ren'],2022-09-02T07:53:08Z,eaforum,,
125565,https://forum.effectivealtruism.org/posts/g3j7FfuHxFxWDGWpW/what-we-re-missing-the-case-for-structural-risks-from-ai,What we're missing: the case for structural risks from AI,['Justin Olive'],2023-11-09T05:52:08Z,eaforum,,
125584,https://forum.effectivealtruism.org/posts/JKwRejsticvZg2vre/risk-averse-batch-active-inverse-reward-design,Risk-averse Batch Active Inverse Reward Design,['Panagiotis Liampas'],2023-10-07T08:56:39Z,eaforum,,
125608,https://forum.effectivealtruism.org/posts/Hw7DjsX6xjCAxXgGv/how-when-should-one-introduce-ai-risk-arguments-to-people,How/When Should One Introduce AI Risk Arguments to People Unfamiliar With the Idea?,['Harrison Durland'],2022-08-09T02:57:32Z,eaforum,,
125617,https://forum.effectivealtruism.org/posts/zCBZb5M2wTzngayNc/what-should-we-optimize-a-conversation,What Should We Optimize - A Conversation,['Johannes C. Mayer'],2022-04-07T14:48:27Z,eaforum,,
125640,https://forum.effectivealtruism.org/posts/f8BY2yiLBzHLntjTL/toby-ord-s-new-report-on-lessons-from-the-development-of-the,Toby Ord's new report on lessons from the development of the atomic bomb,['Ishan Mukherjee'],2022-11-22T10:37:21Z,eaforum,,
125656,https://forum.effectivealtruism.org/posts/54RWjNAn2hqeo3Xzq/ai-alignment-in-the-new-yorker-1,AI Alignment in The New Yorker,['Eleni_A'],2023-05-17T21:19:54Z,eaforum,,
125671,https://forum.effectivealtruism.org/posts/xLufaRBrDJuAXf3DB/refer-the-cooperative-ai-foundation-s-new-coo-receive,"Refer the Cooperative AI Foundation’s New COO, Receive $5000",['Lewis Hammond'],2022-06-16T13:27:39Z,eaforum,,
125684,https://forum.effectivealtruism.org/posts/JtaHnmWDsYGiaNn3a/making-of-ian,Making of #IAN,['kirchner.jan'],2021-08-29T16:24:41Z,eaforum,,
125694,https://forum.effectivealtruism.org/posts/Dkx7B2cSJMaLEzBKp/4-types-of-agi-selection-and-how-to-constrain-them,"4 types of AGI selection, and how to constrain them",['Remmelt'],2023-08-09T15:02:49Z,eaforum,,
125728,https://forum.effectivealtruism.org/posts/KHmoNx3zpCAaiHxTW/existential-cybersecurity-risks-and-ai-a-research-agenda,Existential Cybersecurity Risks & AI (A Research Agenda),['Madhav Malhotra'],2023-09-20T12:03:34Z,eaforum,,
125765,https://forum.effectivealtruism.org/posts/2h2E448uqCY6uGbAg/the-missing-link-to-agi,The missing link to AGI,['Yuri Barzov'],2022-09-28T16:37:52Z,eaforum,,
125780,https://forum.effectivealtruism.org/posts/Q83ayse5S8CksbT7K/changes-in-funding-in-the-ai-safety-field,Changes in funding in the AI safety field,['Sebastian_Farquhar'],2017-02-03T13:09:58Z,eaforum,,
125810,https://forum.effectivealtruism.org/posts/zzcWFPHCuNEYCw4kJ/fiscal-sponsorship-ops-support-or-incubation,"Fiscal sponsorship, ops support, or incubation?","['Harry Luk', 'Peter S. Park']",2023-10-04T22:06:40Z,eaforum,,
125824,https://forum.effectivealtruism.org/posts/foJhEZzG5sx9cQJZq/agisf-adaptation-for-in-person-groups,AGISF adaptation for in-person groups,"['Sam Marks', 'Xander Davies', 'richard_ngo']",2023-01-17T18:33:39Z,eaforum,,
125845,https://forum.effectivealtruism.org/posts/avrFeH6LpqJrjmGmc/pausing-ai-might-be-good-policy-but-it-s-bad-politics,"Pausing AI might be good policy, but it's bad politics",['Stephen Clare'],2023-10-23T13:36:58Z,eaforum,,
125862,https://forum.effectivealtruism.org/posts/KHw3ezJzA7z3itWNW/ai-safety-researcher-career-review,AI Safety researcher career review,['Benjamin_Todd'],2021-11-23T00:00:00Z,eaforum,,
125884,https://forum.effectivealtruism.org/posts/ggSXcuMzRaowDbKTz/possible-divergence-in-agi-risk-tolerance-between-selfish,Possible Divergence in AGI Risk Tolerance between Selfish and Altruistic agents,['Brad West'],2023-09-09T00:22:20Z,eaforum,,
125899,https://forum.effectivealtruism.org/posts/ctEhHxYH2a9Mrrx2f/is-ai-forecasting-a-waste-of-effort-on-the-margin,Is AI forecasting a waste of effort on the margin?,['Emrik'],2022-11-05T00:41:43Z,eaforum,,
125917,https://forum.effectivealtruism.org/posts/wqZiSGi8effcRgiyh/how-cisa-can-support-the-security-of-large-ai-models-against,How CISA can Support the Security of Large AI Models Against Theft [Grad School Assignment],['Harrison Durland'],2023-05-03T15:36:21Z,eaforum,,
125942,https://forum.effectivealtruism.org/posts/qrapebbppHASWB3W9/agi-alignment-results-from-a-series-of-aligned-actions,AGI alignment results from a series of aligned actions,['anonymous'],2021-12-27T19:33:30Z,eaforum,,
125962,https://forum.effectivealtruism.org/posts/qx6vWLwpn7joKwwAZ/why-some-people-believe-in-agi-but-i-don-t,"Why some people believe in AGI, but I don't.",['cveres'],2022-10-26T03:09:17Z,eaforum,,
125977,https://forum.effectivealtruism.org/posts/TTsPA6NQY39PGYJa4/mutual-assured-destruction-used-against-agi,Mutual Assured Destruction used against AGI,['L3opard'],2022-10-08T09:35:13Z,eaforum,,
125986,https://forum.effectivealtruism.org/posts/9qWknhfxrgtMoD4J9/machine-learning-for-scientific-discovery-ai-safety-camp,Machine Learning for Scientific Discovery - AI Safety Camp,['Eleni_A'],2023-01-06T03:06:34Z,eaforum,,
125995,https://forum.effectivealtruism.org/posts/A2YwuXe3Eo5kMZhZo/13-background-claims-about-ea,13 background claims about EA,['Akash'],2022-09-07T03:54:45Z,eaforum,,
126028,https://forum.effectivealtruism.org/posts/cXH2sG3taM5hKbiva/beyond-simple-existential-risk-survival-in-a-complex,Beyond Simple Existential Risk: Survival in a Complex Interconnected World,['Gideon Futerman'],2022-11-21T14:35:42Z,eaforum,,
126057,https://forum.effectivealtruism.org/posts/cJLsd2TYxv8KCzHvg/announcing-the-aipolicyideas-com-database,Announcing the AIPolicyIdeas.com Database,['abiolvera'],2023-06-23T16:09:57Z,eaforum,,
126073,https://forum.effectivealtruism.org/posts/nRXugEFFDz7MtGKz9/there-should-be-a-public-adversarial-collaboration-on-ai-x,There should be a public adversarial collaboration on AI x-risk,['pradyuprasad'],2023-01-23T04:09:24Z,eaforum,,
126082,https://forum.effectivealtruism.org/posts/DTPR8agC36kojCq9j/there-is-only-one-goal-or-drive-only-self-perpetuation,There is only one goal or drive - only self-perpetuation counts,['freest one'],2023-06-13T01:37:01Z,eaforum,,
126100,https://forum.effectivealtruism.org/posts/uT2S5jWGEEi58bqby/my-current-take-on-existential-ai-risk-fb-post,My current take on existential AI risk [FB post],['Aryeh Englander'],2023-05-01T16:22:36Z,eaforum,,
126140,https://forum.effectivealtruism.org/posts/3yojNGhTXAydhfkNg/slightly-against-aligning-with-neo-luddites,Slightly against aligning with neo-luddites,['Matthew_Barnett'],2022-12-26T23:27:35Z,eaforum,,
126156,https://forum.effectivealtruism.org/posts/ph6wvA2EtQ7pG3yvG/linkpost-michael-nielsen-remarks-on-oppenheimer,[Linkpost] Michael Nielsen remarks on 'Oppenheimer',['Tom Barnes'],2023-08-31T15:41:40Z,eaforum,,
126171,https://forum.effectivealtruism.org/posts/aeJB4qAWBxcvtZHad/markus-anderljung-and-ben-garfinkel-fireside-chat-on-ai,Markus Anderljung and Ben Garfinkel: Fireside chat on AI governance,['EA Global'],2020-07-24T14:56:18Z,eaforum,,
126196,https://forum.effectivealtruism.org/posts/nxBKxFcfMnEb3Cmys/how-to-use-ai-speech-transcription-and-analysis-to,How to use AI speech transcription and analysis to accelerate social science research,['AlexanderSaeri'],2023-01-31T04:01:38Z,eaforum,,
126214,https://forum.effectivealtruism.org/posts/brhX6axLaqxtDKWXe/shulman-and-yudkowsky-on-ai-progress,Shulman and Yudkowsky on AI progress,"['CarlShulman', 'EliezerYudkowsky']",2021-12-04T11:37:23Z,eaforum,,
126237,https://forum.effectivealtruism.org/posts/hCsxvMAGpkEuLCE4E/why-ai-alignment-could-be-hard-with-modern-deep-learning,Why AI alignment could be hard with modern deep learning,['Ajeya'],2021-09-21T15:35:48Z,eaforum,,
126256,https://forum.effectivealtruism.org/posts/Qoecey2umNjcqEGHP/apply-to-greater-than-50-ai-safety-funders-in-one,Apply to >50 AI safety funders in one application with the Nonlinear Network [Round Closed],"['Drew Spartz', 'Kat Woods', 'Emerson Spartz']",2023-04-12T21:06:39Z,eaforum,,
126265,https://forum.effectivealtruism.org/posts/KHQKbwWk7oosAxnMC/ai-safety-applying-to-graduate-studies,AI Safety: Applying to Graduate Studies,['frances_lorenz'],2021-12-15T22:56:37Z,eaforum,,
126294,https://forum.effectivealtruism.org/posts/SvjiueLQpjJLRehuF/aisn-24-kissinger-urges-us-china-cooperation-on-ai-china-s,"AISN #24: Kissinger Urges US-China Cooperation on AI, China's New AI Law, US Export Controls, International Institutions, and Open Source AI","['Center for AI Safety', 'aogara', 'Dan H']",2023-10-18T17:03:52Z,eaforum,,
126321,https://forum.effectivealtruism.org/posts/ZnMZzFjJuG7kNQfnW/why-not-to-solve-alignment-by-making-superintelligent-humans,Why not to solve alignment by making superintelligent humans?,['Pato'],2022-10-16T21:26:17Z,eaforum,,
126334,https://forum.effectivealtruism.org/posts/L3mLPmBcsoXv36yt9/global-computing-capacity,Global computing capacity,['Vasco Grilo'],2023-05-01T06:09:36Z,eaforum,,
126350,https://forum.effectivealtruism.org/posts/yFQREgJtKib7zGM9w/large-language-models-as-corporate-lobbyists-and,"Large Language Models as Corporate Lobbyists, and Implications for Societal-AI Alignment",['johnjnay'],2023-01-04T22:22:14Z,eaforum,,
126366,https://forum.effectivealtruism.org/posts/sC69HBdkLuq58Yzpw/how-to-troll-for-good-leveraging-ip-for-ai-governance,How to ‘troll for good’: Leveraging IP for AI governance,['Michael Huang'],2023-02-26T06:34:24Z,eaforum,,
126381,https://forum.effectivealtruism.org/posts/FicNtafsLGnFkf7yA/updates-from-campaign-for-ai-safety-1,Updates from Campaign for AI Safety,"['Jolyn Khoo', 'Nik Samoylov']",2023-07-19T08:15:09Z,eaforum,,
126404,https://forum.effectivealtruism.org/posts/eDNtcAyrNqaCenbos/eur200k-in-european-ai-and-society-fund-grants,€200k in European AI & Society Fund grants,['Artūrs Kaņepājs'],2023-07-06T13:00:40Z,eaforum,,
126417,https://forum.effectivealtruism.org/posts/J4cLuxvAwnKNQxwxj/how-does-ai-progress-affect-other-ea-cause-areas,How does AI progress affect other EA cause areas?,['Luis Mota Freitas'],2023-06-09T12:43:06Z,eaforum,,
126432,https://forum.effectivealtruism.org/posts/PutG2gC5huKK8ktWs/scalable-and-transferable-black-box-jailbreaks-for-language,Scalable And Transferable Black-Box Jailbreaks For Language Models Via Persona Modulation,"['soroushjp', 'Quentin Feuillade--Montixi', 'Rusheb']",2023-11-07T18:00:15Z,eaforum,,
126449,https://forum.effectivealtruism.org/posts/JsnhfXsNgi3GxfScL/tetlock-on-low-ai-xrisk,Tetlock on low AI xrisk,['TeddyW'],2023-07-13T14:19:33Z,eaforum,,
126461,https://forum.effectivealtruism.org/posts/T4EfQm9YzYdWd6Xyq/why-does-an-ai-have-to-have-specified-goals,Why does an AI have to have specified goals?,['Luke Eure'],2023-08-22T20:15:09Z,eaforum,,
126470,https://forum.effectivealtruism.org/posts/k7QW3F4GzSd6QYKp9/aisn-14-openai-s-superalignment-team-musk-s-xai-launches-and,"AISN#14: OpenAI’s ‘Superalignment’ team, Musk’s xAI launches, and developments in military AI use","['Center for AI Safety', 'Dan H']",2023-07-12T16:58:05Z,eaforum,,
126494,https://forum.effectivealtruism.org/posts/F8B4JTgfDMXDd7q7G/implications-of-agi-on-subjective-human-experience,Implications of AGI on Subjective Human Experience,['Erica S.'],2023-05-30T18:47:26Z,eaforum,,
126517,https://forum.effectivealtruism.org/posts/YBaJvhcat3PGhCCnk/ama-ought,AMA: Ought,"['stuhlmueller', 'jungofthewon']",2022-08-03T17:24:35Z,eaforum,,
126532,https://forum.effectivealtruism.org/posts/bB2CSnFS6mEcNmPgD/the-costs-of-caution,The costs of caution,['Kelsey Piper'],2023-05-01T20:04:09Z,eaforum,,
126549,https://forum.effectivealtruism.org/posts/NkmxjzHbk5WxvK5xs/phd-position-ai-interpretability-in-berlin-germany,"PhD Position: AI Interpretability in Berlin, Germany",['Stephan_Wäldchen'],2023-04-22T18:57:32Z,eaforum,,
126565,https://forum.effectivealtruism.org/posts/NbiHKTN5QhFFfjjm5/ai-safety-seems-hard-to-measure,AI Safety Seems Hard to Measure,['Holden Karnofsky'],2022-12-11T01:31:39Z,eaforum,,
126590,https://forum.effectivealtruism.org/posts/rdWKqSzia2yBz8Z7i/could-unions-be-an-underrated-driver-for-ai-safety-policy,Could unions be an underrated driver for AI safety policy?,['Dunning K.'],2023-07-12T13:21:04Z,eaforum,,
126603,https://forum.effectivealtruism.org/posts/bxvzu7qBF4cAsSu6d/how-do-ai-timelines-affect-giving-now-vs-later,How Do AI Timelines Affect Giving Now vs. Later?,['MichaelDickens'],2021-08-03T03:36:43Z,eaforum,,
126619,https://forum.effectivealtruism.org/posts/n2F2rymJdCcQSYy8y/retrospective-on-the-ai-safety-field-building-hub,Retrospective on the AI Safety Field Building Hub,['Vael Gates'],2023-02-02T02:06:53Z,eaforum,,
126638,https://forum.effectivealtruism.org/posts/HCdxb2hqnKE3pWs73/how-to-regulate-cutting-edge-ai-models-markus-anderljung-on,"How to regulate cutting-edge AI models (Markus Anderljung on The 80,000 Hours Podcast)",['80000_Hours'],2023-07-11T12:36:04Z,eaforum,,
126668,https://forum.effectivealtruism.org/posts/3kMQTjtdWqkxGuWxB/update-on-cause-area-focus-working-group,Update on cause area focus working group,['Bastian_Stern'],2023-08-10T01:21:25Z,eaforum,,
126684,https://forum.effectivealtruism.org/posts/ggiCDnYcSKLxwFbBv/the-pugwash-conferences-and-the-anti-ballistic-missile,The Pugwash Conferences and the Anti-Ballistic Missile Treaty as a case study of Track II diplomacy,['rani_martin'],2022-09-16T10:42:03Z,eaforum,,
126706,https://forum.effectivealtruism.org/posts/oHiQcBtDJiqPLnoAE/the-case-for-building-expertise-to-work-on-us-ai-policy-and,"The case for building expertise to work on US AI policy, and how to do it",['80000_Hours'],2019-01-31T22:44:16Z,eaforum,,
126722,https://forum.effectivealtruism.org/posts/SCiAvwbczwag8on6S/how-can-i-best-use-my-career-to-pass-impactful-ai-and,How can I best use my career to pass impactful AI and Biosecurity policy.,['maxg'],2023-10-13T05:14:36Z,eaforum,,
126734,https://forum.effectivealtruism.org/posts/9Y6Y6qoAigRC7A8eX/my-take-on-what-we-owe-the-future,My take on What We Owe the Future,['elifland'],2022-09-01T18:07:44Z,eaforum,,
126768,https://forum.effectivealtruism.org/posts/uXNytB4fhgSyMJp9w/new-working-paper-series-of-the-legal-priorities-project,New Working Paper Series of the Legal Priorities Project,['Legal Priorities Project'],2021-10-18T10:30:27Z,eaforum,,
126819,https://forum.effectivealtruism.org/posts/6aYfWyo9DKEheogf8/don-t-call-it-ai-alignment,Don't Call It AI Alignment,['RedStateBlueState'],2023-02-20T05:27:44Z,eaforum,,
126830,https://forum.effectivealtruism.org/posts/yxk2ue2eLeCrozvRz/normal-accidents-and-ai-systems,"""Normal accidents"" and AI systems",['Eleni_A'],2022-08-08T18:43:42Z,eaforum,,
126855,https://forum.effectivealtruism.org/posts/sNqzGZjv4pRJjjhZs/wild-animal-welfare-scenarios-for-ai-doom,Wild Animal Welfare Scenarios for AI Doom,['utilistrutil'],2023-06-08T19:41:38Z,eaforum,,
126876,https://forum.effectivealtruism.org/posts/iqDt8YFLjvtjBPyv6/some-things-i-heard-about-ai-governance-at-eag,Some Things I Heard about AI Governance at EAG,['utilistrutil'],2023-02-28T21:28:00Z,eaforum,,
126929,https://forum.effectivealtruism.org/posts/Bjr6FXvnKqb37uMPP/shutting-down-ai-safety-support,Shutting down AI Safety Support,['JJ Hepburn'],2023-07-30T06:00:09Z,eaforum,,
126944,https://forum.effectivealtruism.org/posts/aZXgWemk6cfzYwxKB/tony-blair-institute-compute-for-ai-index-seeking-a-supplier,Tony Blair Institute - Compute for AI Index ( Seeking a Supplier),['TomWestgarth'],2022-10-03T10:25:21Z,eaforum,,
126954,https://forum.effectivealtruism.org/posts/zjmpFW3nBKwaBB5xr/corporate-campaigns-work-a-key-learning-for-ai-safety,Corporate campaigns work: a key learning for AI Safety,['Jamie_Harris'],2023-08-17T21:35:59Z,eaforum,,
126971,https://forum.effectivealtruism.org/posts/XnnfPC2gsgRFZezkE/linkpost-what-are-reasonable-ai-fears-by-robin-hanson-2023,"[linkpost] ""What Are Reasonable AI Fears?"" by Robin Hanson, 2023-04-23",['Arjun Panickssery'],2023-04-14T23:26:51Z,eaforum,,
127010,https://forum.effectivealtruism.org/posts/MNPrXCsPpwTgygMxc/cortes-pizarro-and-afonso-as-precedents-for-takeover,"Cortés, Pizarro, and Afonso as Precedents for Takeover",['AI Impacts'],2020-03-02T12:25:13Z,eaforum,,
127029,https://forum.effectivealtruism.org/posts/evoMzZWPbkGmPeJvB/could-realistic-depictions-of-catastrophic-ai-risks,Could realistic depictions of catastrophic AI risks effectively reduce said risks?,['Matthew Barber'],2022-08-17T20:01:26Z,eaforum,,
127045,https://forum.effectivealtruism.org/posts/r8q7mzfxqr8fxrEfH/updates-from-campaign-for-ai-safety-2,Updates from Campaign for AI Safety,['Jolyn Khoo'],2023-06-16T09:45:52Z,eaforum,,
127083,https://forum.effectivealtruism.org/posts/dASEFCurRpNot4Gpc/8-possible-high-level-goals-for-work-on-nuclear-risk,8 possible high-level goals for work on nuclear risk,['MichaelA'],2022-03-29T06:30:53Z,eaforum,,
127122,https://forum.effectivealtruism.org/posts/gYxY5Mr2srBnrbuaT/announcing-the-ai-fables-writing-contest,Announcing the AI Fables Writing Contest!,['Daystar Eld'],2023-07-12T03:04:51Z,eaforum,,
127137,https://forum.effectivealtruism.org/posts/PYeMoDripSZsasgi6/rethink-priorities-is-hiring-a-compute-governance-researcher,Rethink Priorities is hiring a Compute Governance Researcher or Research Assistant,"['MichaelA', 'Rethink Priorities']",2023-06-07T13:22:44Z,eaforum,,
127156,https://forum.effectivealtruism.org/posts/njD2PurEKDEZcMLKZ/jobs-that-can-help-with-the-most-important-century,Jobs that can help with the most important century,['Holden Karnofsky'],2023-02-12T18:19:00Z,eaforum,,
127202,https://forum.effectivealtruism.org/posts/whDMv4NjsMcPrLq2b/cser-and-fhi-advice-to-un-high-level-panel-on-digital,CSER and FHI advice to UN High-level Panel on Digital Cooperation,['HaydnBelfield'],2019-03-08T20:39:30Z,eaforum,,
127242,https://forum.effectivealtruism.org/posts/fmDFytmxwX9qBgcaX/why-aren-t-you-freaking-out-about-openai-at-what-point-would,Why aren't you freaking out about OpenAI? At what point would you start?,['AppliedDivinityStudies'],2021-10-10T13:06:41Z,eaforum,,
127262,https://forum.effectivealtruism.org/posts/hfnuwh6miJ3yn2Jpq/what-do-ai-safety-pitches-not-get-about-your-field,What Do AI Safety Pitches Not Get About Your Field?,['Aris Richardson'],2022-09-20T18:13:41Z,eaforum,,
127271,https://forum.effectivealtruism.org/posts/u9atkEFDcuMkgipch/preventing-ai-misuse-state-of-the-art-research-and-its-flaws,Preventing AI Misuse: State of the Art Research and its Flaws,['Madhav Malhotra'],2023-04-23T10:50:39Z,eaforum,,
127292,https://forum.effectivealtruism.org/posts/eHYxg7cFxqQPGo7hD/complex-systems-for-ai-safety-pragmatic-ai-safety-3,Complex Systems for AI Safety [Pragmatic AI Safety #3],"['ThomasW', 'Dan H']",2022-05-24T00:04:04Z,eaforum,,
127321,https://forum.effectivealtruism.org/posts/aupKXpPGnFmbfE2xC/is-eric-schmidt-funding-ai-capabilities-research-by-the-us,Is Eric Schmidt funding AI capabilities research by the US government?,['anonymous'],2022-12-24T08:32:32Z,eaforum,,
127337,https://forum.effectivealtruism.org/posts/Gg6SvNy8ZRAjYRbCZ/quantifying-the-far-future-effects-of-interventions,Quantifying the Far Future Effects of Interventions,['MichaelDickens'],2016-05-18T02:15:07Z,eaforum,,
127367,https://forum.effectivealtruism.org/posts/C94JhsbSfZ8iPNedy/why-ai-is-harder-than-we-think-melanie-mitchell,Why AI is Harder Than We Think - Melanie Mitchell,['BrownHairedEevee'],2021-04-28T08:19:03Z,eaforum,,
127385,https://forum.effectivealtruism.org/posts/C5GxzWrJRrPibia5z/what-ai-take-over-movies-or-books-will-scare-me-into-taking,What AI Take-Over Movies or Books Will Scare Me Into Taking AI Seriously?,['Jordan Arel'],2023-01-10T08:30:20Z,eaforum,,
127394,https://forum.effectivealtruism.org/posts/WKSwH4eyDiqhJMcrz/very-briefly-the-chips-act-1,Very Briefly: The CHIPS Act,['Yadav'],2023-02-26T13:53:56Z,eaforum,,
127411,https://forum.effectivealtruism.org/posts/roYj4ijkKCotSk4ob/the-benefits-of-distillation-in-research,The Benefits of Distillation in Research,['Jonas Hallgren'],2023-03-04T19:19:10Z,eaforum,,
127426,https://forum.effectivealtruism.org/posts/p7qXjisiADiCBnofk/the-eu-ai-act-needs-a-definition-of-high-risk-foundation,The EU AI Act needs a definition of high-risk foundation models to avoid regulatory overreach and backlash,['matthias_samwald'],2023-05-31T15:34:01Z,eaforum,,
127437,https://forum.effectivealtruism.org/posts/iekFPDBHusqqvSmsy/epoch-is-hiring-an-ml-hardware-researcher,Epoch is hiring an ML Hardware Researcher,['merilalama'],2023-07-20T19:08:42Z,eaforum,,
127447,https://forum.effectivealtruism.org/posts/HWKqmTLcbsf4F5xAk/ai-safety-and-consciousness-research-a-brainstorm,AI safety and consciousness research: A brainstorm,['Daniel_Friedrich'],2023-03-15T14:33:42Z,eaforum,,
127473,https://forum.effectivealtruism.org/posts/8pSq73kTJmPrzTfir/predicting-researcher-interest-in-ai-alignment,Predicting researcher interest in AI alignment,['Vael Gates'],2023-02-02T00:58:01Z,eaforum,,
127497,https://forum.effectivealtruism.org/posts/xGSw8gho7CJNXrPtf/relevant-pre-agi-possibilities,Relevant pre-AGI possibilities,['kokotajlod'],2020-06-20T13:15:29Z,eaforum,,
127506,https://forum.effectivealtruism.org/posts/8sW4h368DsoooHBNP/the-great-energy-descent-part-2-limits-to-growth-and-why-we,The great energy descent - Part 2: Limits to growth and why we probably won’t reach the stars,['Corentin Biteau'],2022-08-31T21:51:16Z,eaforum,,
127537,https://forum.effectivealtruism.org/posts/TT62phLw2AZWn6tDc/how-rethink-priorities-research-could-inform-your,How Rethink Priorities’ Research could inform your grantmaking,"['kierangreig', 'Peter Wildeford', 'Marcus_A_Davis', 'Rethink Priorities']",2023-10-04T18:24:43Z,eaforum,,
127553,https://forum.effectivealtruism.org/posts/FHTyixYNnGaQfEexH/a-concern-about-the-evolutionary-anchor-of-ajeya-cotra-s,A concern about the “evolutionary anchor” of Ajeya Cotra’s report on AI timelines.,['NunoSempere'],2022-08-16T14:44:37Z,eaforum,,
127563,https://forum.effectivealtruism.org/posts/DTTADonxnDRoksp4E/ai-safety-ideas-a-collaborative-ai-safety-research-platform,AI Safety Ideas: A collaborative AI safety research platform,"['Apart Research', 'Esben Kran']",2022-10-17T17:01:30Z,eaforum,,
127588,https://forum.effectivealtruism.org/posts/ST3JjsLdTBnaK46BD/how-i-failed-to-form-views-on-ai-safety-3,How I failed to form views on AI safety,['Ada-Maaria Hyvärinen'],2022-04-17T11:05:24Z,eaforum,,
127610,https://forum.effectivealtruism.org/posts/2ZeHrfJr9uHHJ2e8J/my-understanding-of-paul-christiano-s-iterated-amplification,My Understanding of Paul Christiano's Iterated Amplification AI Safety Research Agenda,['Chi'],2020-08-15T19:59:22Z,eaforum,,
127641,https://forum.effectivealtruism.org/posts/mBKLGm9GXzoet9Gaq/on-deepmind-and-trying-to-fairly-hear-out-both-ai-doomers,"On DeepMind and Trying to Fairly Hear Out Both AI Doomers and Doubters (Rohin Shah on The 80,000 Hours Podcast)",['80000_Hours'],2023-06-12T12:53:21Z,eaforum,,
127675,https://forum.effectivealtruism.org/posts/diZWNmLRgcbuwmYn4/long-term-future-fund-may-2021-grant-recommendations,Long-Term Future Fund: May 2021 grant recommendations,['abergal'],2021-05-27T06:44:16Z,eaforum,,
127710,https://forum.effectivealtruism.org/posts/NKMxC2nA47uuhFm8x/a-defense-of-work-on-mathematical-ai-safety,A Defense of Work on Mathematical AI Safety,['Davidmanheim'],2023-07-06T14:13:11Z,eaforum,,
127736,https://forum.effectivealtruism.org/posts/iBTon2dRYwcoS9Jyr/predict-responses-to-the-existential-risk-from-ai-survey,"Predict responses to the ""existential risk from AI"" survey",['RobBensinger'],2021-05-28T01:38:03Z,eaforum,,
127753,https://forum.effectivealtruism.org/posts/DLEdzaiSqoC4eonKp/linkpost-ai-alignment-explained-in-5-points-updated,"[Linkpost] AI Alignment, Explained in 5 Points (updated)",['Daniel_Eth'],2023-04-18T08:09:48Z,eaforum,,
127774,https://forum.effectivealtruism.org/posts/ELn2cdcDwAET6fqEi/merger-of-deepmind-and-google-brain,Merger of DeepMind and Google Brain,['Greg_Colbourn'],2023-04-20T20:16:03Z,eaforum,,
127784,https://forum.effectivealtruism.org/posts/YKY4KmKEurY8cwHTJ/agi-as-a-black-swan-event,AGI as a Black Swan Event,['Stephen McAleese'],2022-12-04T23:35:51Z,eaforum,,
127805,https://forum.effectivealtruism.org/posts/TLbiusoP77D9vCnzJ/a-solipsistic-repugnant-conclusion,"A ""Solipsistic"" Repugnant Conclusion",['Ramiro'],2022-07-21T16:06:46Z,eaforum,,
127820,https://forum.effectivealtruism.org/posts/WmnAQ4qTYwCviwDhS/announcing-key-phenomena-in-ai-risk-facilitated-reading,Announcing “Key Phenomena in AI Risk” (facilitated reading group),"['nora', 'particlemania']",2023-05-09T16:52:45Z,eaforum,,
127837,https://forum.effectivealtruism.org/posts/e9Q94Mq6LTjSAgujY/how-strong-is-the-evidence-of-unaligned-ai-systems-causing,How strong is the evidence of unaligned AI systems causing harm?,['BrownHairedEevee'],2020-07-21T04:08:08Z,eaforum,,
127857,https://forum.effectivealtruism.org/posts/p5hfDvD59tidpLyi8/intro-to-brain-like-agi-safety-series-just-finished,“Intro to brain-like-AGI safety” series—just finished!,['Steven Byrnes'],2022-05-17T15:35:38Z,eaforum,,
127874,https://forum.effectivealtruism.org/posts/ktEzS3pkfeqPNh6r5/ai-strategy-nearcasting,AI strategy nearcasting,['Holden Karnofsky'],2022-08-26T16:25:24Z,eaforum,, 127896,https://forum.effectivealtruism.org/posts/jatnoouJuCpcKpnoh/announcing-the-govai-policy-team,Announcing the GovAI Policy Team,['MarkusAnderljung'],2022-08-01T22:46:07Z,eaforum,, 127913,https://forum.effectivealtruism.org/posts/sH9i6PSsXZABM5RNq/introducing-the-new-riesgos-catastroficos-globales-team,Introducing the new Riesgos Catastróficos Globales team,"['Jaime Sevilla', 'JuanGarcia', 'Mónica Ulloa', 'Claudette Salinas', 'JorgeTorresC']",2023-03-03T23:04:35Z,eaforum,, 127952,https://forum.effectivealtruism.org/posts/EArLfuDz34zJHJZJx/centre-for-the-study-of-existential-risk-four-month-report-1,Centre for the Study of Existential Risk Four Month Report June - September 2020,['HaydnBelfield'],2020-12-02T18:33:42Z,eaforum,, 127987,https://forum.effectivealtruism.org/posts/WfodoyjePTTuaTjLe/efficacy-of-ai-activism-have-we-ever-said-no,Efficacy of AI Activism: Have We Ever Said No?,['charlieh943'],2023-10-27T16:52:08Z,eaforum,, 128010,https://forum.effectivealtruism.org/posts/QWuKM5fsbry8Jp2x5/why-people-want-to-work-on-ai-safety-but-don-t,Why people want to work on AI safety (but don’t),['Emily Grundy'],2023-01-24T06:41:24Z,eaforum,, 128026,https://forum.effectivealtruism.org/posts/9AqZL4FnhP7wfgjoM/seeking-participants-for-study-of-ai-safety-researchers,Seeking participants for study of AI safety researchers,['Gardner'],2022-12-14T09:38:20Z,eaforum,, 128037,https://forum.effectivealtruism.org/posts/j7X8nQ7YvvA7Pi4BX/a-critique-of-ai-takeover-scenarios,A Critique of AI Takeover Scenarios,['Fods12'],2022-08-31T13:49:15Z,eaforum,, 128059,https://forum.effectivealtruism.org/posts/fqEcHtEvancXg4Jy4/daniel-dewey-the-open-philanthropy-project-s-work-on,Daniel Dewey: The Open Philanthropy Project's work on potential risks from advanced AI,['EA Global'],2017-08-11T08:19:00Z,eaforum,, 128089,https://forum.effectivealtruism.org/posts/3gmkrj3khJHndYGNe/estimating-the-current-and-future-number-of-ai-safety,Estimating the Current and Future Number of AI Safety Researchers,['Stephen McAleese'],2022-09-28T20:58:13Z,eaforum,, 128110,https://forum.effectivealtruism.org/posts/CsjKJkAyxXxmfz8su/the-history-of-ai-rights-research,The History of AI Rights Research,['Jamie_Harris'],2022-08-27T08:14:55Z,eaforum,,
128134,https://forum.effectivealtruism.org/posts/NAcN98bACuwcnB32H/the-navigation-fund-launched-is-hiring-a-program-officer-to,"The Navigation Fund launched + is hiring a program officer to lead the distribution of $20M annually for AI safety! Full-time, fully remote, pay starts at $200k",['vincentweisser'],2023-11-03T21:53:52Z,eaforum,, 128143,https://forum.effectivealtruism.org/posts/Bzezf2zmgBhtCD3Pb/components-of-strategic-clarity-strategic-perspectives-on,"Components of Strategic Clarity [Strategic Perspectives on Long-term AI Governance, #2]",['MMMaas'],2022-07-02T11:22:47Z,eaforum,, 128158,https://forum.effectivealtruism.org/posts/7YoEy2fEPdarHxYkC/can-we-convince-people-to-work-on-ai-safety-without,Can we convince people to work on AI safety without convincing them about AGI happening this century?,['BrianTan'],2020-11-26T14:46:28Z,eaforum,, 128168,https://forum.effectivealtruism.org/posts/c3Gmaygoh6RgpqL9w/14-ways-ml-could-improve-informative-video,14 Ways ML Could Improve Informative Video,['Ozzie Gooen'],2023-01-10T13:53:39Z,eaforum,, 128196,https://forum.effectivealtruism.org/posts/tsHfFdAGehzoH6BZR/summary-of-stuart-russell-s-new-book-human-compatible,"Summary of Stuart Russell's new book, ""Human Compatible""",['Rohin Shah'],2019-10-19T19:56:52Z,eaforum,, 128229,https://forum.effectivealtruism.org/posts/cKbehBhq7NxTq3pck/a-case-study-of-regulation-done-well-canadian-biorisk,A case study of regulation done well? Canadian biorisk regulations,['rosehadshar'],2023-09-08T17:10:33Z,eaforum,, 128267,https://forum.effectivealtruism.org/posts/Y2hkJ5STZfBCyRG9r/a-conversation-with-rohin-shah,A conversation with Rohin Shah,['AI Impacts'],2019-11-12T01:31:04Z,eaforum,, 128288,https://forum.effectivealtruism.org/posts/G3vzNHjrL8AQmBqFb/apply-for-the-ml-winter-camp-in-cambridge-uk-2-10-jan,"Apply for the ML Winter Camp in Cambridge, UK [2-10 Jan]","['Nathan_Barnard', 'Joe Hardie', 'qurat', 'Catherine Brewer', 'hannah']",2022-12-02T19:33:41Z,eaforum,, 128299,https://forum.effectivealtruism.org/posts/vxLrFdrqRPdaHJwgs/join-the-interpretability-research-hackathon,Join the interpretability research hackathon,"['Esben Kran', 'Sabrina Zaki', 'RichardAnnilo', 'Joe Hardie', 'Apart Research']",2022-10-28T16:26:05Z,eaforum,, 128314,https://forum.effectivealtruism.org/posts/kXaxasXfG8DQR4jgq/some-quotes-from-tuesday-s-senate-hearing-on-ai,Some quotes from Tuesday's Senate hearing on AI,['Daniel_Eth'],2023-05-17T12:13:04Z,eaforum,, 128351,https://forum.effectivealtruism.org/posts/8YXFaM9yHbhiJTPqp/agi-rising-why-we-are-in-a-new-era-of-acute-risk-and,"AGI rising: why we are in a new era of acute risk and increasing public awareness, and what to do now",['Greg_Colbourn'],2023-05-02T10:17:04Z,eaforum,, 128372,https://forum.effectivealtruism.org/posts/NBgpPaz5vYe3tH4ga/on-deference-and-yudkowsky-s-ai-risk-estimates,On Deference and Yudkowsky's AI Risk Estimates,['bgarfinkel'],2022-06-19T14:35:40Z,eaforum,, 128386,https://forum.effectivealtruism.org/posts/zxrBi4tzKwq2eNYKm/ea-infosec-skill-up-in-or-make-a-transition-to-infosec-via,EA Infosec: skill up in or make a transition to infosec via this book club,"['Jason Clinton', 'Wim van der Schoot']",2023-03-05T21:02:04Z,eaforum,, 128397,https://forum.effectivealtruism.org/posts/6CibzfFnRWXcZosxv/proposals-for-the-ai-regulatory-sandbox-in-spain,Proposals for the AI Regulatory Sandbox in Spain,"['Guillem Bas', 'Jaime Sevilla', 'Mónica Ulloa']",2023-04-27T10:33:13Z,eaforum,, 128430,https://forum.effectivealtruism.org/posts/DGQHZZNMdjDghgu2S/owen-cotton-barratt-what-does-and-doesn-t-ai-mean-for,Owen Cotton-Barratt: What does (and doesn't) AI mean for effective altruism?,['EA Global'],2017-08-11T08:19:00Z,eaforum,,
128452,https://forum.effectivealtruism.org/posts/pdMjPuddtHeLSBDiF/apply-to-fall-policy-internships-we-can-help,Apply to fall policy internships (we can help),"['Elika', 'Vaidehi Agarwalla']",2023-07-02T21:37:00Z,eaforum,, 128461,https://forum.effectivealtruism.org/posts/pPQ5wqEPxLexCqGkL/linkpost-the-godfather-of-a-i-leaves-google-and-warns-of,[Linkpost] ‘The Godfather of A.I.’ Leaves Google and Warns of Danger Ahead,['Darius1'],2023-05-01T19:54:39Z,eaforum,, 128483,https://forum.effectivealtruism.org/posts/XkDXSmqoKhR7ezKYf/feedback-request-on-ea-philippines-career-advice-research,Feedback Request on EA Philippines' Career Advice Research for Technical AI Safety,['BrianTan'],2020-10-03T10:39:16Z,eaforum,, 128492,https://forum.effectivealtruism.org/posts/dKgWZ8GMNkXfRwjqH/seeking-social-science-students-collaborators-interested-in,Seeking social science students / collaborators interested in AI existential risks,['Vael Gates'],2021-09-24T21:56:32Z,eaforum,, 128511,https://forum.effectivealtruism.org/posts/wY6aBzcXtSprmDhFN/exponential-ai-takeoff-is-a-myth,Exponential AI takeoff is a myth,['Christoph Hartmann'],2023-05-31T11:47:33Z,eaforum,, 128535,https://forum.effectivealtruism.org/posts/CKNJ9Lxru34JevCyi/nearcast-based-deployment-problem-analysis-karnofsky-2022,"Nearcast-based “deployment problem” analysis (Karnofsky, 2022)",['Will Aldred'],2023-01-09T16:57:37Z,eaforum,, 128564,https://forum.effectivealtruism.org/posts/kJzPDbgmA8nrLqTgH/asya-bergal-reasons-you-might-think-human-level-ai-is,Asya Bergal: Reasons you might think human-level AI is unlikely to happen soon,['EA Global'],2020-08-26T16:01:43Z,eaforum,, 128586,https://forum.effectivealtruism.org/posts/DhJhtxMX6SdYAsWiY/forecasting-through-fiction,Forecasting Through Fiction,['Yitz'],2022-07-06T05:23:18Z,eaforum,, 128603,https://forum.effectivealtruism.org/posts/EoqeJCBiuJbMTKfPZ/unveiling-the-american-public-opinion-on-ai-moratorium-and,Unveiling the American Public Opinion on AI Moratorium and Government Intervention: The Impact of Media Exposure,['Otto'],2023-05-08T10:49:06Z,eaforum,, 128613,https://forum.effectivealtruism.org/posts/Liphmkodcu7XPDKfK/rethink-priorities-2022-impact-2023-strategy-and-funding-1,"Rethink Priorities’ 2022 Impact, 2023 Strategy, and Funding Gaps",['kierangreig'],2022-11-25T05:37:51Z,eaforum,, 128640,https://forum.effectivealtruism.org/posts/WDsxB6n4dhxmwbQex/how-should-technical-ai-researchers-best-transition-into-ai,How should technical AI researchers best transition into AI governance and policy?,['Gabriel Mukobi'],2023-09-10T05:29:37Z,eaforum,, 128663,https://forum.effectivealtruism.org/posts/vDsGvWEzoccPnJqDQ/informatica-special-issue-on-superintelligence,Informatica: Special Issue on Superintelligence,['RyanCarey'],2017-05-03T05:05:56Z,eaforum,, 128676,https://forum.effectivealtruism.org/posts/zJYMkxGgpG8mCqagc/an-update-on-the-campaign-for-ai-safety-dot-org,An Update On The Campaign For AI Safety Dot Org,['anonymous'],2023-05-05T00:19:56Z,eaforum,, 128695,https://forum.effectivealtruism.org/posts/4fTrqJ7w8weRCYHeF/brain-computer-interfaces-and-brain-organoids-in-ai,Brain-computer interfaces and brain organoids in AI alignment?,['freedomandutility'],2023-04-15T22:28:34Z,eaforum,, 128708,https://forum.effectivealtruism.org/posts/XvWWfq9iqFj8x7Eu8/list-of-ai-safety-courses-and-resources,List of AI safety courses and resources,"['Daniel del Castillo', 'Chris Leong', 'Kat Woods']",2021-09-06T14:26:42Z,eaforum,, 
128724,https://forum.effectivealtruism.org/posts/jyaRY8yWgv679XS7p/link-and-commentary-beyond-near-and-long-term-towards-a,[Link and commentary] Beyond Near- and Long-Term: Towards a Clearer Account of Research Priorities in AI Ethics and Society,['MichaelA'],2020-03-14T09:04:11Z,eaforum,, 128738,https://forum.effectivealtruism.org/posts/zcthR42p6iHXDWYrw/un-public-call-for-nominations-for-high-level-advisory-body,UN Public Call for Nominations For High-level Advisory Body on Artificial Intelligence,['vincentweisser'],2023-08-10T10:34:14Z,eaforum,, 128748,https://forum.effectivealtruism.org/posts/6h3a9bvJ2uYBfWxEM/ama-markus-anderljung-pm-at-govai-fhi-1,"AMA: Markus Anderljung (PM at GovAI, FHI)",['MarkusAnderljung'],2020-09-21T11:23:10Z,eaforum,, 128763,https://forum.effectivealtruism.org/posts/2bfYxTt2FsGXnwDyt/a-new-place-to-discuss-cognitive-science-ethics-and-human,"A new place to discuss cognitive science, ethics and human alignment",['Daniel_Friedrich'],2022-11-04T14:34:14Z,eaforum,, 128788,https://forum.effectivealtruism.org/posts/E4gfMSqmznDwMrv9q/are-social-media-algorithms-an-existential-risk,Are social media algorithms an existential risk?,['Barry Grimes'],2020-09-15T08:52:13Z,eaforum,, 128799,https://forum.effectivealtruism.org/posts/xzJ9uNotWGDHznGi9/can-ai-solve-climate-change,Can AI solve climate change?,['Vivian'],2023-05-13T20:44:31Z,eaforum,, 128814,https://forum.effectivealtruism.org/posts/6diKmxL8hD89yoMrq/g7-summit-cooperation-on-ai-policy,G7 Summit—Cooperation on AI Policy,['Lenny'],2023-05-19T10:10:14Z,eaforum,, 128829,https://forum.effectivealtruism.org/posts/Rg7h7G3KTvaYEtL55/us-public-perception-of-cais-statement-and-the-risk-of,US public perception of CAIS statement and the risk of extinction,"['Jamie Elsey', 'David_Moss']",2023-06-22T16:39:54Z,eaforum,, 128845,https://forum.effectivealtruism.org/posts/oqveRcMwRMDk6SYXM/clarifications-about-structural-risk-from-ai,Clarifications about structural risk from AI,['Sam Clarke'],2022-01-18T12:57:43Z,eaforum,, 128873,https://forum.effectivealtruism.org/posts/iKWbkomL8WrA8Yy4X/critique-of-superintelligence-part-3,Critique of Superintelligence Part 3,['Fods12'],2018-12-13T05:13:38Z,eaforum,, 128900,https://forum.effectivealtruism.org/posts/YwnfPtxHktfowyrMD/the-religion-problem-in-ai-alignment,The religion problem in AI alignment,['Geoffrey Miller'],2022-09-16T01:24:40Z,eaforum,, 128925,https://forum.effectivealtruism.org/posts/cWeioTmbs73iZjs25/large-language-models-as-fiduciaries-to-humans,Large Language Models as Fiduciaries to Humans,['johnjnay'],2023-01-24T19:53:37Z,eaforum,, 128944,https://forum.effectivealtruism.org/posts/Q2BJnpNh8e6RAWFnm/consider-trying-the-elk-contest-i-am,Consider trying the ELK contest (I am),['Holden Karnofsky'],2022-01-05T19:42:11Z,eaforum,, 128960,https://forum.effectivealtruism.org/posts/2ENbqRr9Q7PSABtv2/stuart-russell-human-compatible-ai-roundtable-with-allan,"Stuart Russell Human Compatible AI Roundtable with Allan Dafoe, Rob Reich, & Marietje Schaake",['Mahendra Prasad'],2021-02-11T07:43:06Z,eaforum,, 128974,https://forum.effectivealtruism.org/posts/P2AKkH8nsK8xBJ9se/2023-open-philanthropy-ai-worldviews-contest-odds-of,2023 Open Philanthropy AI Worldviews Contest: Odds of Artificial General Intelligence by 2043,['srhoades10'],2023-03-14T20:32:40Z,eaforum,,
128998,https://forum.effectivealtruism.org/posts/wFC3axfuwABHmoQ9H/the-vitalik-buterin-fellowship-in-ai-existential-safety-is,The Vitalik Buterin Fellowship in AI Existential Safety is open for applications!,['Cynthia Chen'],2022-10-14T03:23:11Z,eaforum,, 129007,https://forum.effectivealtruism.org/posts/oAxuq5E7DsQTmxQwi/amazon-to-invest-up-to-usd4-billion-in-anthropic,Amazon to invest up to $4 billion in Anthropic,['Davis_Kingsley'],2023-09-25T14:55:32Z,eaforum,, 129019,https://forum.effectivealtruism.org/posts/yCx3kCReJtucpdd33/the-current-alignment-plan-and-how-we-might-improve-it-or,"The current alignment plan, and how we might improve it | EAG Bay Area 23",['Buck'],2023-06-07T21:03:11Z,eaforum,, 129055,https://forum.effectivealtruism.org/posts/yAw8afSSEFqonufPj/future-bowl-forecasting-tournament,Future Bowl Forecasting Tournament,['ncmoulios'],2022-11-28T16:42:10Z,eaforum,, 129064,https://forum.effectivealtruism.org/posts/HYYJnAtmoavcbksgp/is-ai-safety-still-neglected,Is AI safety still neglected?,['Coafos'],2022-03-30T09:09:46Z,eaforum,, 129079,https://forum.effectivealtruism.org/posts/fX8JsabQyRSd7zWiD/introducing-the-nonlinear-fund-ai-safety-research-incubation,"Introducing The Nonlinear Fund: AI Safety research, incubation, and funding",['Kat Woods'],2021-03-18T14:07:06Z,eaforum,, 129103,https://forum.effectivealtruism.org/posts/L8GjzvRYA9g9ox2nP/prospects-for-ai-safety-agreements-between-countries,Prospects for AI safety agreements between countries,['oeg'],2023-04-14T17:41:22Z,eaforum,, 129130,https://forum.effectivealtruism.org/posts/APAD7PaEHgFyW3Nc4/ai-impacts-historic-trends-in-technological-progress,AI Impacts: Historic trends in technological progress,['Aaron Gertler'],2020-02-12T00:08:22Z,eaforum,, 129149,https://forum.effectivealtruism.org/posts/ZkuiHKjPWsjf5zTrw/draft-report-on-ai-timelines,Draft report on AI timelines,['Ajeya'],2020-12-15T12:10:51Z,eaforum,, 129158,https://forum.effectivealtruism.org/posts/w5GsJBF8YHqWdCroW/what-are-the-arguments-that-support-china-building-agi-if,What are the arguments that support China building AGI+ if Western companies delay/pause AI development?,['DMMF'],2023-03-29T18:53:21Z,eaforum,, 129167,https://forum.effectivealtruism.org/posts/mZ4ctSAEMgWj6DAwt/messy-personal-stuff-that-affected-my-cause-prioritization,Messy personal stuff that affected my cause prioritization (or: how I started to care about AI safety),['Julia_Wise'],2022-05-05T17:59:22Z,eaforum,, 129189,https://forum.effectivealtruism.org/posts/qEKqhZFsx5zwyf9Mu/us-congress-introduces-create-ai-act-for-establishing,US Congress introduces CREATE AI Act for establishing National AI Research Resource,['Daniel_Eth'],2023-07-28T23:28:00Z,eaforum,, 129198,https://forum.effectivealtruism.org/posts/cecf9mGdqtxbfEt7z/govai-annual-report-2021-2,GovAI Annual Report 2021,['GovAI'],2022-01-05T16:57:20Z,eaforum,, 129219,https://forum.effectivealtruism.org/posts/Mw9ZxmZqiaXM2rb49/what-does-and-doesn-t-ai-mean-for-effective-altruism,What does (and doesn't) AI mean for effective altruism?,['EA Global'],2017-08-12T07:00:00Z,eaforum,, 129240,https://forum.effectivealtruism.org/posts/WHDb9r9yMFetG7oz5/social-scientists-interested-in-ai-safety-should-consider,"Social scientists interested in AI safety should consider doing direct technical AI safety research, (possibly meta-research), or governance, support roles, or community building instead",['Vael Gates'],2022-07-20T23:01:12Z,eaforum,, 129257,https://forum.effectivealtruism.org/posts/iTcdun6jm9nxLy8Rp/announcing-the-eu-tech-policy-fellowship,Announcing the EU Tech Policy Fellowship,"['Jan-Willem', 'Cillian Crosson', 'SteveThompson']",2022-03-30T08:15:37Z,eaforum,,
129270,https://forum.effectivealtruism.org/posts/m8PsJsSfQAYxPusHi/scenario-mapping-advanced-ai-risk-request-for-participation,Scenario Mapping Advanced AI Risk: Request for Participation with Data Collection,['Kiliank'],2022-03-27T11:44:25Z,eaforum,, 129285,https://forum.effectivealtruism.org/posts/6esJGutHz9QcSuQxa/develop-anthropomorphic-agi-to-save-humanity-from-itself,"""Develop Anthropomorphic AGI to Save Humanity from Itself"" (Future Fund AI Worldview Prize submission)","['ketanrama', 'Nick_Beckstead', 'leopold', 'ab', 'William_MacAskill']",2022-11-05T17:57:10Z,eaforum,, 129305,https://forum.effectivealtruism.org/posts/Ksqero4BmGFs8qfiC/ai-safety-newsletter-4-ai-and-cybersecurity-persuasive-ais,"AI Safety Newsletter #4: AI and Cybersecurity, Persuasive AIs, Weaponization, and Geoffrey Hinton talks AI risks","['Center for AI Safety', 'Dan H', 'Akash', 'aogara']",2023-05-02T16:51:20Z,eaforum,, 129330,https://forum.effectivealtruism.org/posts/ekComvhb2HREowgah/increased-availability-and-willingness-for-deployment-of,Increased Availability and Willingness for Deployment of Resources for Effective Altruism and Long-Termism,['Evan_Gaensbauer'],2021-12-29T20:20:55Z,eaforum,, 129350,https://forum.effectivealtruism.org/posts/JSko4DZsppThDN7iP/ai-alternative-futures-exploratory-scenario-mapping-for,AI Alternative Futures: Exploratory Scenario Mapping for Artificial Intelligence Risk - Request for Participation [Linkpost],['Kiliank'],2022-05-09T19:53:26Z,eaforum,, 129368,https://forum.effectivealtruism.org/posts/SsHBQLA6goqAmqvdS/the-state-of-ai-governance-in-africa-musings-from-the-global,The State of AI Governance in Africa: Musings from the Global South,['Thaiya Jesse Wallace'],2023-08-17T11:34:49Z,eaforum,, 129390,https://forum.effectivealtruism.org/posts/cEqBEeNrhKzDp25fH/the-importance-of-artificial-sentience,The Importance of Artificial Sentience,['Jamie_Harris'],2021-03-03T17:17:49Z,eaforum,, 129421,https://forum.effectivealtruism.org/posts/yumtQxSbwuDsfWqcb/four-part-playbook-for-dealing-with-ai-holden-karnofsky-on,"Four part playbook for dealing with AI (Holden Karnofsky on the 80,000 Hours Podcast)",['80000_Hours'],2023-08-02T11:56:24Z,eaforum,, 129462,https://forum.effectivealtruism.org/posts/CrmE6T5A8JhkxnRzw/future-matters-8-bing-chat-ai-labs-on-safety-and-pausing,"Future Matters #8: Bing Chat, AI labs on safety, and pausing Future Matters","['Pablo', 'matthew.vandermerwe']",2023-03-21T14:50:04Z,eaforum,, 129491,https://forum.effectivealtruism.org/posts/AJwuMw7ddcKQNFLcR/20-concrete-projects-for-reducing-existential-risk,20 concrete projects for reducing existential risk,"['Buhl', 'Jam Kraprayoon']",2023-06-21T15:54:49Z,eaforum,, 129525,https://forum.effectivealtruism.org/posts/N6dyo3cyk7eCLAoEB/update-from-campaign-for-ai-safety,Update from Campaign for AI Safety,['Nik Samoylov'],2023-06-01T10:46:47Z,eaforum,, 129553,https://forum.effectivealtruism.org/posts/GpnLDSjzNkGB5xvry/ai-risk-discussions-website-exploring-interviews-from-97-ai,“AI Risk Discussions” website: Exploring interviews from 97 AI Researchers,"['Vael Gates', 'Lukas Trötzmüller', 'Maheen Shermohammed', 'michaelkeenan', 'zchuang']",2023-02-02T01:00:01Z,eaforum,, 129569,https://forum.effectivealtruism.org/posts/knCbv3LxwciWHfACE/ai-risk-us-presidental-candidate,AI Risk US Presidental Candidate,['Simon Berens'],2023-04-11T20:18:15Z,eaforum,,
129581,https://forum.effectivealtruism.org/posts/tmxNQJ48SWEWXSt4i/a-simple-way-of-exploiting-ai-s-coming-economic-impact-may,A simple way of exploiting AI's coming economic impact may be highly-impactful,['kuira'],2023-07-16T10:30:24Z,eaforum,, 129590,https://forum.effectivealtruism.org/posts/BajkGx4FbaE9y3vz5/data-publication-for-the-2021-artificial-intelligence,"Data Publication for the 2021 Artificial Intelligence, Morality, and Sentience (AIMS) Survey",['Janet Pauketat'],2022-03-24T15:43:17Z,eaforum,, 129607,https://forum.effectivealtruism.org/posts/x9Rn5SfapcbbZaZy9/ea-for-dumb-people,EA for dumb people?,['Olivia Addy'],2022-07-11T10:46:55Z,eaforum,, 129628,https://forum.effectivealtruism.org/posts/zsFCj2mfnYZmSW2FF/ai-risk-is-like-terminator-stop-saying-it-s-not-1,AI Risk is like Terminator; Stop Saying it's Not,['skluug'],2022-03-08T19:17:17Z,eaforum,, 129639,https://forum.effectivealtruism.org/posts/FHav3yN9uFpFxYiYx/what-are-the-top-priorities-in-a-slow-takeoff-multipolar,"What are the top priorities in a slow-takeoff, multipolar world?",['JP Addison'],2021-08-25T08:47:47Z,eaforum,, 129666,https://forum.effectivealtruism.org/posts/k73qrirnxcKtKZ4ng/the-old-ai-lessons-for-ai-governance-from-early-electricity-1,The ‘Old AI’: Lessons for AI governance from early electricity regulation,"['Sam Clarke', 'Di Cooke']",2022-12-19T02:46:23Z,eaforum,, 129696,https://forum.effectivealtruism.org/posts/Fj6wgJdDYuNP2FeD4/6-non-obvious-mental-health-issues-specific-to-ai-safety,6 non-obvious mental health issues specific to AI safety,['Igor Ivanov'],2023-08-18T15:47:44Z,eaforum,, 129722,https://forum.effectivealtruism.org/posts/Gah8junjra4cTN9G8/why-does-no-one-care-about-ai,Why does no one care about AI?,['Olivia Addy'],2022-08-07T22:04:32Z,eaforum,, 129732,https://forum.effectivealtruism.org/posts/HuQtr7qfB2EfcGqTu/technological-developments-that-could-increase-risks-from-1,Technological developments that could increase risks from nuclear weapons: A shallow review,"['MichaelA', 'Will Aldred']",2023-02-09T15:41:55Z,eaforum,, 129752,https://forum.effectivealtruism.org/posts/WfNZoquLhRnT3nC4e/working-at-ea-organizations-series-machine-intelligence,Working at EA organizations series: Machine Intelligence Research Institute,['SoerenMind'],2015-11-01T12:49:17Z,eaforum,, 129773,https://forum.effectivealtruism.org/posts/ptrY5McTdQfDy8o23/short-term-ai-alignment-as-a-priority-cause,Short-Term AI Alignment as a Priority Cause,['len.hoang.lnh'],2020-02-11T16:22:09Z,eaforum,, 129795,https://forum.effectivealtruism.org/posts/CQxd84nNgojYwLucb/compute-and-antitrust-regulatory-implications-of-the-ai,"Compute & Antitrust: Regulatory implications of the AI hardware supply chain, from chip design to cloud APIs","['HaydnBelfield', 'Shin-ShinHua']",2022-08-19T17:20:52Z,eaforum,, 129817,https://forum.effectivealtruism.org/posts/tGpwWsP5iBfZFigeZ/future-matters-6-ftx-collapse-value-lock-in-and,"Future Matters #6: FTX collapse, value lock-in, and counterarguments to AI x-risk","['Pablo', 'matthew.vandermerwe']",2022-12-30T13:10:55Z,eaforum,, 129844,https://forum.effectivealtruism.org/posts/7QyemcXLaxNicLNNa/five-years-of-rethink-priorities-impact-future-plans-funding,"Five Years of Rethink Priorities: Impact, Future Plans, Funding Needs (July 2023)",['Rethink Priorities'],2023-07-18T15:59:50Z,eaforum,, 129877,https://forum.effectivealtruism.org/posts/Bnp9YDqErNXHmTvvE/the-slippery-slope-from-dalle-2-to-deepfake-anarchy,The Slippery Slope from DALLE-2 to Deepfake Anarchy,"['stecas', 'philljkc']",2022-11-05T14:47:38Z,eaforum,,
129905,https://forum.effectivealtruism.org/posts/2DzLY6YP2z5zRDAGA/a-freshman-year-during-the-ai-midgame-my-approach-to-the,A freshman year during the AI midgame: my approach to the next year,['Buck'],2023-04-14T00:38:50Z,eaforum,, 129929,https://forum.effectivealtruism.org/posts/mDG49CJyxzeN99ELz/aisafety-world-is-a-map-of-the-ais-ecosystem,AISafety.world is a map of the AIS ecosystem,['Hamish McDoodles'],2023-04-06T11:47:36Z,eaforum,, 129941,https://forum.effectivealtruism.org/posts/kCu2cANxdkr7ferQ4/should-i-force-myself-to-work-on-agi-alignment,Should I force myself to work on AGI alignment?,['Isaac Benson'],2022-08-24T17:25:23Z,eaforum,, 129950,https://forum.effectivealtruism.org/posts/qJffR9vj92kY32iHg/debates-on-reducing-long-term-s-risks,Debates on reducing long-term s-risks?,['jackchang110'],2023-04-06T01:26:09Z,eaforum,, 129965,https://forum.effectivealtruism.org/posts/mqBLFdNzkxfbfcaoX/announcing-the-2023-pibbss-summer-research-fellowship,Announcing the 2023 PIBBSS Summer Research Fellowship,"['Dušan D. Nešić (Dushan)', 'nora']",2023-01-12T21:38:30Z,eaforum,, 129975,https://forum.effectivealtruism.org/posts/2hduXN5MXCZPqKjSv/everything-s-normal-until-it-s-not,Everything's normal until it's not,['Eleni_A'],2023-03-10T01:42:35Z,eaforum,, 129990,https://forum.effectivealtruism.org/posts/DBaLPBcWyQtY34Kt9/chatgpt-can-write-code,ChatGPT can write code! ?,['Miguel'],2022-12-10T05:36:47Z,eaforum,, 130000,https://forum.effectivealtruism.org/posts/KBMSJj63nZfsji2wS/beware-popular-discussions-of-ai-sentience,"Beware popular discussions of AI ""sentience""",['Dr. David Mathers'],2023-06-08T08:57:49Z,eaforum,, 130021,https://forum.effectivealtruism.org/posts/knitD2FPQpsTMhLJP/how-much-should-states-invest-in-contingency-plans-for,How much should states invest in contingency plans for widespread internet outage?,['Kinoshita Yoshikazu (pseudonym)'],2023-04-07T16:05:10Z,eaforum,, 130040,https://forum.effectivealtruism.org/posts/SGFRneArKi93qbrRG/truthful-ai,Truthful AI,"['Owen Cotton-Barratt', 'Lukas Finnveden', 'ab']",2021-10-20T15:11:10Z,eaforum,, 130065,https://forum.effectivealtruism.org/posts/pJuS5iGbazDDzXwJN/the-history-epistemology-and-strategy-of-technological,"The History, Epistemology and Strategy of Technological Restraint, and lessons for AI (short essay)",['MMMaas'],2022-08-10T11:00:35Z,eaforum,, 130080,https://forum.effectivealtruism.org/posts/WoopYJtscbaodKQwh/ea-ai-emerging-tech-orgs-should-be-involved-with-patent,EA AI/Emerging Tech Orgs Should Be Involved with Patent Office Partnership,['anonymous'],2022-06-12T22:32:46Z,eaforum,, 130092,https://forum.effectivealtruism.org/posts/oHZPackrDrjyEb9Sk/lessons-on-project-management-from-how-big-things-get-done,Lessons on project management from “How Big Things Get Done”,['Cristina Schmidt Ibáñez'],2023-05-17T19:15:11Z,eaforum,, 130114,https://forum.effectivealtruism.org/posts/7jbrN4aenCM9EZqyT/a-roundtable-for-safe-ai-rsai,A Roundtable for Safe AI (RSAI)?,['Lara_TH'],2023-03-09T12:11:45Z,eaforum,, 130129,https://forum.effectivealtruism.org/posts/qJmadiMjhLbYgqjXj/looking-for-canadian-summer-co-op-position-in-ai-governance,Looking for Canadian summer co-op position in AI Governance,['tcelferact'],2023-06-26T17:27:28Z,eaforum,, 130138,https://forum.effectivealtruism.org/posts/4PAi6nNRfQwwhdtBW/questions-for-further-investigation-of-ai-diffusion,Questions for further investigation of AI diffusion,['Ben Cottier'],2022-12-21T13:50:39Z,eaforum,, 130159,https://forum.effectivealtruism.org/posts/pb7Q9awb5nsx3mRzk/ml-safety-scholars-summer-2022-retrospective,ML Safety Scholars Summer 2022 Retrospective,['ThomasW'],2022-11-01T03:09:09Z,eaforum,,
130198,https://forum.effectivealtruism.org/posts/j2TreuRZT9mBFEMEs/the-bletchley-declaration-on-ai-safety,The Bletchley Declaration on AI Safety,['Hauke Hillebrandt'],2023-11-01T11:44:43Z,eaforum,, 130220,https://forum.effectivealtruism.org/posts/GwgnYbyEnAWgBCvwA/should-we-publish-arguments-for-the-preservation-of-humanity,Should we publish arguments for the preservation of humanity?,['Jeremy'],2023-04-07T13:51:48Z,eaforum,, 130234,https://forum.effectivealtruism.org/posts/9RYvJu2iNJMXgWCBn/introducing-the-ml-safety-scholars-program,Introducing the ML Safety Scholars Program,"['ThomasW', 'Dan H', 'Mantas Mazeika', 'Oliver Z', 'Sidney Hough', 'Kevin Liu']",2022-05-04T13:14:07Z,eaforum,, 130246,https://forum.effectivealtruism.org/posts/ivep4R7LoSLhWwHGX/peter-eckersley-1979-2022,Peter Eckersley (1979-2022),['Gavin'],2022-09-03T10:45:47Z,eaforum,, 130269,https://forum.effectivealtruism.org/posts/ygnYXvkezCLasdh7A/exploratory-survey-on-psychology-of-ai-risk-perception,Exploratory survey on psychology of AI risk perception,['Daniel_Friedrich'],2022-08-02T20:34:27Z,eaforum,, 130278,https://forum.effectivealtruism.org/posts/GRv3KB2nPFRREXb5o/reviews-of-is-power-seeking-ai-an-existential-risk,"Reviews of ""Is power-seeking AI an existential risk?""",['Joe_Carlsmith'],2021-12-16T20:50:47Z,eaforum,, 130290,https://forum.effectivealtruism.org/posts/G9Zc3yaT2q2rZXBbL/will-agi-cause-mass-technological-unemployment,Will AGI cause mass technological unemployment?,['BrownHairedEevee'],2020-06-22T20:55:00Z,eaforum,, 130300,https://forum.effectivealtruism.org/posts/oGoP4LjSZAsYfcF3N/animal-advocacy-in-the-age-of-ai-1,Animal Advocacy in the Age of AI,"['Constance Li', 'Nicholas Kees Dupuis']",2023-07-27T07:08:30Z,eaforum,, 130330,https://forum.effectivealtruism.org/posts/qKXLpe7FNCdok3uvY/what-are-the-challenges-and-problems-with-programming-law,What are the challenges and problems with programming law-breaking constraints into AGI?,['MichaelStJules'],2020-02-02T20:53:04Z,eaforum,, 130341,https://forum.effectivealtruism.org/posts/zabdCSArBLHSaQnrn/legal-assistance-for-victims-of-ai,Legal Assistance for Victims of AI,['bob'],2023-03-17T11:42:58Z,eaforum,, 130352,https://forum.effectivealtruism.org/posts/oTJ5vMNwdWiHj2iKL/humanities-research-ideas-for-longtermists,Humanities Research Ideas for Longtermists,['Lizka'],2021-06-09T04:39:41Z,eaforum,, 130392,https://forum.effectivealtruism.org/posts/fg6RrvtSJ2kxe9Ens/eric-drexler-paretotopian-goal-alignment,Eric Drexler: Paretotopian goal alignment,['EA Global'],2019-03-15T14:51:56Z,eaforum,,
130408,https://forum.effectivealtruism.org/posts/y4Pu5jhYoRibb9MyC/from-voluntary-to-mandatory-are-the-esg-disclosure,"From voluntary to mandatory, are the ESG disclosure frameworks still fertile ground for unrealised EA career pathways? – A 2023 update on ESG potential impact",['Christopher Chan'],2023-06-04T12:00:33Z,eaforum,, 130433,https://forum.effectivealtruism.org/posts/htgEGY5xbhFeJvt7E/why-just-make-an-agent-which-cares-only-about-binary-rewards,"Why ""just make an agent which cares only about binary rewards"" doesn't work.",['Lysandre Terrisse'],2023-05-09T16:51:00Z,eaforum,, 130451,https://forum.effectivealtruism.org/posts/wP2JueKsfvrfNbkT5/some-preliminary-opinions-on-ai-safety-problems,Some Preliminary Opinions on AI Safety Problems,['yonxinzhang'],2023-04-06T12:42:24Z,eaforum,, 130466,https://forum.effectivealtruism.org/posts/S9JeqH4qYvoLZqq9c/an-entire-category-of-risks-is-undervalued-by-ea-summary-of,An entire category of risks is undervalued by EA [Summary of previous forum post],['Richard Ren'],2022-09-05T15:07:54Z,eaforum,, 130484,https://forum.effectivealtruism.org/posts/XyCLLYkBCPw44jpmQ/new-book-on-s-risks,New book on s-risks,['Tobias_Baumann'],2022-10-26T12:04:25Z,eaforum,, 130499,https://forum.effectivealtruism.org/posts/fSeDA7B7Hve5LeaWq/comments-on-manheim-s-what-s-in-a-pause,"Comments on Manheim's ""What's in a Pause?""",['RobBensinger'],2023-09-18T12:16:14Z,eaforum,, 130520,https://forum.effectivealtruism.org/posts/SZJBE3fuk2majqwJQ/principles-for-ai-welfare-research,Principles for AI Welfare Research,['jeffsebo'],2023-06-19T11:30:26Z,eaforum,, 130550,https://forum.effectivealtruism.org/posts/HBgAruFrZhFKBFfDa/applications-open-for-agi-safety-fundamentals-alignment,Applications open for AGI Safety Fundamentals: Alignment Course,"['Jamie Bernardi', 'richard_ngo']",2022-12-13T10:50:20Z,eaforum,, 130560,https://forum.effectivealtruism.org/posts/kgPJMJxahHeQMFR5d/three-camps-in-ai-x-risk-discussions-my-personal-very,Three camps in AI x-risk discussions: My personal very oversimplified overview,['Aryeh Englander'],2023-06-30T21:42:37Z,eaforum,, 130581,https://forum.effectivealtruism.org/posts/DhcaE7MbMwaCyNcxP/why-building-ventures-in-ai-safety-is-particularly,Why building ventures in AI Safety is particularly challenging,['Heramb Podar'],2023-11-06T00:16:35Z,eaforum,, 130603,https://forum.effectivealtruism.org/posts/yNitwYkHP6DtkkSrG/should-ai-writers-be-prohibited-in-education,Should AI writers be prohibited in education?,['Eleni_A'],2023-01-16T22:29:47Z,eaforum,, 130616,https://forum.effectivealtruism.org/posts/JNCmoe3fno2mhbb4o/what-are-the-best-journals-to-publish-ai-governance-papers,What are the best journals to publish AI governance papers in?,['CaroJ'],2022-05-02T10:07:43Z,eaforum,, 130626,https://forum.effectivealtruism.org/posts/g9gfXhNhLdJxSFBLW/fundamentals-of-global-priorities-research-in-economics,Fundamentals of Global Priorities Research in Economics Syllabus,['poliboni'],2023-08-08T12:16:10Z,eaforum,, 130650,https://forum.effectivealtruism.org/posts/PFxmd5bf7nqGNLYCg/a-bird-s-eye-view-of-the-ml-field-pragmatic-ai-safety-2,A Bird's Eye View of the ML Field [Pragmatic AI Safety #2],"['ThomasW', 'Dan H']",2022-05-09T17:15:26Z,eaforum,, 130683,https://forum.effectivealtruism.org/posts/vYdgmvEQaMcstXcNv/explorers-in-a-virtual-country-navigating-the-knowledge,Explorers in a virtual country: Navigating the knowledge landscape of large language models,['AlexanderSaeri'],2023-03-28T21:32:41Z,eaforum,, 130696,https://forum.effectivealtruism.org/posts/kueQQx6mtnTtn9Xwr/prior-probability-of-this-being-the-most-important-century,Prior probability of this being the most important century,['Vasco Grilo'],2023-07-15T07:18:28Z,eaforum,,
130711,https://forum.effectivealtruism.org/posts/zCYGbYAaXeq7v67Km/prize-and-fast-track-to-alignment-research-at-alter,Prize and fast track to alignment research at ALTER,['Vanessa'],2022-09-18T09:15:01Z,eaforum,, 130720,https://forum.effectivealtruism.org/posts/cGM86RhxMdfDYbQnn/we-should-say-more-than-x-risk-is-high,We should say more than “x-risk is high”,['OllieBase'],2022-12-16T22:09:43Z,eaforum,, 130731,https://forum.effectivealtruism.org/posts/DdzSEFBEb6rtfChpN/what-we-owe-the-microbiome,What we owe the microbiome,['TeddyW'],2022-12-17T16:17:18Z,eaforum,, 130743,https://forum.effectivealtruism.org/posts/LYqkptuAiPQcmmGbs/ai-safety-microgrant-round,AI Safety Microgrant Round,"['Chris Leong', 'Damola Morenikeji', 'David_Kristoffersson']",2022-11-14T04:25:17Z,eaforum,, 130756,https://forum.effectivealtruism.org/posts/qotAq6NeabqvEsXz2/fhi-report-how-will-national-security-considerations-affect,FHI Report: How Will National Security Considerations Affect Antitrust Decisions in AI? An Examination of Historical Precedents,['Cullen'],2020-07-28T18:33:17Z,eaforum,, 130766,https://forum.effectivealtruism.org/posts/tLCnuSG6jZnm33FB9/effective-enforceability-of-eu-competition-law-under,Effective Enforceability of EU Competition Law Under Different AI Development Scenarios: A Framework for Legal Analysis,"['HaydnBelfield', 'Shin-ShinHua']",2022-08-19T17:20:26Z,eaforum,, 130798,https://forum.effectivealtruism.org/posts/fXkCcsyF8M6dp6sXx/where-i-m-at-with-ai-risk-convinced-of-danger-but-not-yet-of,Where I'm at with AI risk: convinced of danger but not (yet) of doom,['Amber Dawn'],2023-03-21T13:23:11Z,eaforum,, 130828,https://forum.effectivealtruism.org/posts/3KcYyn2qnRJ3LvSpn/thinking-in-limits-about-tai-from-the-demand-perspective,"Thinking-in-limits about TAI from the demand perspective. Demand saturation, resource wars, new debt.",['Ivan Madan'],2023-11-07T22:44:11Z,eaforum,,
130851,https://forum.effectivealtruism.org/posts/KyNdKTNfJoJccJ7rF/what-are-the-biggest-threats-to-humanity-a-happier-world,What Are The Biggest Threats To Humanity? (A Happier World video),['Jeroen Willems'],2023-01-31T19:50:32Z,eaforum,, 130878,https://forum.effectivealtruism.org/posts/DG6bf5YW3jxLRD7KN/policy-ideas-for-mitigating-ai-risk,Policy ideas for mitigating AI risk,['Thomas Larsen'],2023-09-16T10:31:08Z,eaforum,, 130908,https://forum.effectivealtruism.org/posts/9apBqe4KH394cS89z/a-different-approach-to-community-building-the-spiral-path,A Different Approach to Community Building: The Spiral Path to Impact,['ezrah'],2023-05-23T18:41:16Z,eaforum,, 130922,https://forum.effectivealtruism.org/posts/JMb37qrCYCeKqFxtp/the-case-for-civil-disobedience-for-the-ai-movement,The Case For Civil Disobedience For The AI Movement,['Murali Thoppil'],2023-04-24T13:07:58Z,eaforum,, 130942,https://forum.effectivealtruism.org/posts/9sBAW3qKppnoG3QPq/ben-garfinkel-how-sure-are-we-about-this-ai-stuff,Ben Garfinkel: How sure are we about this AI stuff?,"['bgarfinkel', 'EA Global']",2019-02-09T19:17:32Z,eaforum,, 130972,https://forum.effectivealtruism.org/posts/syEQKdmhNHbrBqtwe/neuronpedia-ai-safety-game,Neuronpedia - AI Safety Game,['johnnylin'],2023-10-16T09:35:17Z,eaforum,, 130993,https://forum.effectivealtruism.org/posts/KKmAPEeicn5mGF93D/what-are-some-low-cost-outside-the-box-ways-to-do-fund,What are some low-cost outside-the-box ways to do/fund alignment research?,['trevor1'],2022-11-11T05:57:26Z,eaforum,, 131019,https://forum.effectivealtruism.org/posts/wRQx4tBtqpqF4QE3h/report-artificial-intelligence-risk-management-in-spain,Report: Artificial Intelligence Risk Management in Spain,"['JorgeTorresC', 'Jaime Sevilla', 'Guillem Bas', 'Roberto Tinoco', 'Mónica Ulloa']",2023-06-15T16:08:01Z,eaforum,, 131057,https://forum.effectivealtruism.org/posts/a2XaDeadFe6eHfDwG/the-credibility-of-apocalyptic-claims-a-critique-of-techno,The Credibility of Apocalyptic Claims: A Critique of Techno-Futurism within Existential Risk,['Ember'],2022-08-16T19:48:03Z,eaforum,, 131081,https://forum.effectivealtruism.org/posts/7awJW2GPafcE4HYNf/tarbell-fellowship-2024-applications-open-ai-journalism,Tarbell Fellowship 2024 - Applications Open (AI Journalism),['Cillian Crosson'],2023-09-28T10:38:27Z,eaforum,, 131096,https://forum.effectivealtruism.org/posts/kCBQHWqbk4Nrns8P7/model-based-policy-analysis-under-deep-uncertainty,Model-Based Policy Analysis under Deep Uncertainty,['Max Reddel'],2023-03-06T14:24:20Z,eaforum,, 131113,https://forum.effectivealtruism.org/posts/M3N9ZXs8jmX8arYXK/redwood-research-is-hiring-for-several-roles-operations-and,Redwood Research is hiring for several roles (Operations and Technical),"['JJXWang', 'billzito']",2022-04-14T15:23:49Z,eaforum,, 131124,https://forum.effectivealtruism.org/posts/Zncu6QpJLJRSGofvK/google-s-ethics-is-alarming,Google's ethics is alarming,['len.hoang.lnh'],2021-02-25T05:57:05Z,eaforum,, 131145,https://forum.effectivealtruism.org/posts/twmN3eponyRBxsH4P/distillation-of-the-offense-defense-balance-of-scientific,Distillation of The Offense-Defense Balance of Scientific Knowledge,['Arjun Yadav'],2022-08-12T07:01:02Z,eaforum,, 131160,https://forum.effectivealtruism.org/posts/bwn3zPFkfesNhizCa/how-i-came-to-longtermism-on-my-own-and-an-outsider,How I Came To Longtermism On My Own & An Outsider Perspective On EA Longtermism,['Jordan Arel'],2022-08-07T02:42:29Z,eaforum,, 131186,https://forum.effectivealtruism.org/posts/9orJx6uvgbLD7FkGC/fli-is-hiring-a-new-director-of-us-policy,FLI is hiring a new Director of US Policy,['aaguirre'],2022-07-27T00:07:30Z,eaforum,,
131203,https://forum.effectivealtruism.org/posts/cwZ8EvhWKTGbMKqzw/washington-post-article-about-ea-university-groups,Washington Post article about EA university groups,['Lizka'],2023-07-05T12:58:12Z,eaforum,, 131218,https://forum.effectivealtruism.org/posts/Yoxmaj8QKpQun9mDz/ricerca-sulla-sicurezza-delle-ia-panoramica-delle-carriere,Ricerca sulla sicurezza delle IA: panoramica delle carriere,['EA Italy'],2023-01-17T11:06:12Z,eaforum,, 131234,https://forum.effectivealtruism.org/posts/rwW8GKAfuagKgG7AQ/let-s-talk-about-impostor-syndrome-in-ai-safety,Let's talk about Impostor syndrome in AI safety,['Igor Ivanov'],2023-09-22T14:06:52Z,eaforum,, 131254,https://forum.effectivealtruism.org/posts/gJGMFdGqFhs3mKo2s/supplement-to-the-brussels-effect-and-ai-how-eu-ai,"Supplement to ""The Brussels Effect and AI: How EU AI regulation will impact the global AI market""","['MarkusAnderljung', 'Charlotte']",2022-08-16T20:55:20Z,eaforum,, 131281,https://forum.effectivealtruism.org/posts/MymQqnT8gZ2yjmeYX/beginner-s-guide-to-reducing-s-risks-link-post,Beginner’s guide to reducing s-risks [link-post],['Center on Long-Term Risk'],2023-10-17T00:51:09Z,eaforum,, 131300,https://forum.effectivealtruism.org/posts/GnALeFmKbknkGYdp8/fundamental-challenges-in-ai-governance,Fundamental Challenges in AI Governance,['Tharin'],2023-10-23T01:30:09Z,eaforum,, 131332,https://forum.effectivealtruism.org/posts/qxfmGBxAe5ZqdfDfv/link-gcri-s-seth-baum-reviews-the-precipice,[Link] GCRI's Seth Baum reviews The Precipice,['Aryeh Englander'],2022-06-06T19:33:44Z,eaforum,, 131342,https://forum.effectivealtruism.org/posts/EJ5a2ApokQqGB98P8/david-krueger-on-ai-alignment-in-academia-and-coordination,David Krueger on AI Alignment in Academia and Coordination,['Michaël Trazzi'],2023-01-07T21:14:48Z,eaforum,, 131364,https://forum.effectivealtruism.org/posts/LTCwe2RaCreLZ4gd2/safety-without-oppression-an-ai-governance-problem,Safety without oppression: an AI governance problem,['Nathan_Barnard'],2022-07-28T10:19:12Z,eaforum,, 131391,https://forum.effectivealtruism.org/posts/7kFPFYQSY7ZttoveS/cost-effectiveness-of-professional-field-building-programs,Cost-effectiveness of professional field-building programs for AI safety research,['Center for AI Safety'],2023-07-10T17:26:56Z,eaforum,, 131419,https://forum.effectivealtruism.org/posts/SEqJoRL5Y8cypFasr/why-agi-timeline-research-discourse-might-be-overrated,Why AGI Timeline Research/Discourse Might Be Overrated,['Miles_Brundage'],2022-07-03T08:04:10Z,eaforum,, 131440,https://forum.effectivealtruism.org/posts/pvJxxpza2ZRa5y9e2/le-tempistiche-delle-ia-il-dibattito-e-il-punto-di-vista,Le Tempistiche delle IA: il dibattito e il punto di vista degli “esperti”,['EA Italy'],2023-01-17T23:30:53Z,eaforum,, 131459,https://forum.effectivealtruism.org/posts/Z7r83zrSXcis6ymKo/dissolving-ai-risk-parameter-uncertainty-in-ai-future,‘Dissolving’ AI Risk – Parameter Uncertainty in AI Future Forecasting,['Froolow'],2022-10-18T22:54:34Z,eaforum,, 131480,https://forum.effectivealtruism.org/posts/iKbdvrdFRSsPwNkh7/some-of-my-current-impressions-entering-ai-safety,Some of My Current Impressions Entering AI Safety,['Phib'],2023-03-28T05:18:05Z,eaforum,,
131490,https://forum.effectivealtruism.org/posts/XHmWPCgXu7aGstTnZ/urgent-need-for-refinancing,Urgent Need for Refinancing,['Tobias W. Kaiser'],2023-07-10T19:35:23Z,eaforum,, 131505,https://forum.effectivealtruism.org/posts/zxxew56gnYhYEupsc/regrant-up-to-usd600-000-to-ai-safety-projects-with-givewiki,"Regrant up to $600,000 to AI safety projects with GiveWiki",['Dawn Drescher'],2023-10-28T19:56:07Z,eaforum,, 131519,https://forum.effectivealtruism.org/posts/dotnvSqB2faF3kHcs/what-s-the-best-machine-learning-newsletter-how-do-you-keep,What's the best machine learning newsletter? How do you keep up to date?,['Mathieu Putz'],2022-03-25T14:36:46Z,eaforum,, 131530,https://forum.effectivealtruism.org/posts/nTybQwrnyRMenasCc/carnegie-council-misunderstands-longtermism,Carnegie Council MisUnderstands Longtermism,['Jeff A'],2022-09-30T02:57:37Z,eaforum,, 131539,https://forum.effectivealtruism.org/posts/wQERLNFoMidffTLar/joscha-bach-on-synthetic-intelligence-annotated,Joscha Bach on Synthetic Intelligence [annotated],['Roman Leventov'],2023-03-02T11:21:56Z,eaforum,, 131561,https://forum.effectivealtruism.org/posts/vRuZGTYYCssraBavN/link-thiel-on-gcrs,[Link] Thiel on GCRs,['Milan_Griffes'],2019-07-22T20:47:13Z,eaforum,, 131583,https://forum.effectivealtruism.org/posts/wrdWS2K8hWfoAzRst/what-would-you-do-if-you-had-a-lot-of-money-power-influence,What would you do if you had a lot of money/power/influence and you thought that AI timelines were very short?,['Greg_Colbourn'],2021-11-12T21:59:07Z,eaforum,, 131592,https://forum.effectivealtruism.org/posts/EG9xDM8YRz4JN4wMN/samotsvety-s-ai-risk-forecasts,Samotsvety's AI risk forecasts,"['elifland', 'Misha_Yagudin']",2022-09-09T04:01:11Z,eaforum,, 131603,https://forum.effectivealtruism.org/posts/npm6mrJQzungTLsKj/black-box-investigations-research-hackathon,Black Box Investigations Research Hackathon,"['Esben Kran', 'Jonas Hallgren', 'Apart Research']",2022-09-15T10:09:44Z,eaforum,, 131613,https://forum.effectivealtruism.org/posts/EcrNFxGszfgcGevtf/risk-awareness-moments-rams-a-concept-for-thinking-about-ai,"""Risk Awareness Moments"" (Rams): A concept for thinking about AI governance interventions",['oeg'],2023-04-14T17:40:16Z,eaforum,, 131628,https://forum.effectivealtruism.org/posts/oNeX5x362cWZX84PT/how-to-persuade-a-non-cs-background-person-to-believe-agi-is,How to persuade a non-CS background person to believe AGI is 50% possible in 2040?,['jackchang110'],2023-04-01T15:27:24Z,eaforum,, 131637,https://forum.effectivealtruism.org/posts/9btFvtGwkufZpC7Yu/australians-call-for-ai-safety-to-be-taken-seriously,Australians call for AI safety to be taken seriously,['AlexanderSaeri'],2023-07-21T01:16:37Z,eaforum,, 131655,https://forum.effectivealtruism.org/posts/r5ZaEPbxHnM3cc5b8/supporting-global-coordination-in-ai-development-why-and-how,Supporting global coordination in AI development: Why and how to contribute to international AI standards,['pcihon'],2019-04-17T22:17:31Z,eaforum,, 131672,https://forum.effectivealtruism.org/posts/RE8eRsMh43YRZvTWg/cser-advice-to-eu-high-level-expert-group-on-ai,CSER Advice to EU High-Level Expert Group on AI,['HaydnBelfield'],2019-03-08T20:42:11Z,eaforum,, 131706,https://forum.effectivealtruism.org/posts/fwGevCo3bvypymhwb/the-dilemma-of-ultimate-technology,The Dilemma of Ultimate Technology,['Aino'],2023-07-20T12:24:33Z,eaforum,, 131720,https://forum.effectivealtruism.org/posts/CjifvmM3Kjn3beMyB/a-strange-twist-on-the-road-to-agi,A strange twist on the road to AGI,['cveres'],2022-10-12T23:27:09Z,eaforum,,
131729,https://forum.effectivealtruism.org/posts/K2xQrrXn5ZSgtntuT/what-do-xpt-forecasts-tell-us-about-ai-risk-1,What do XPT forecasts tell us about AI risk?,"['Forecasting Research Institute', 'rosehadshar']",2023-07-19T07:43:27Z,eaforum,, 131748,https://forum.effectivealtruism.org/posts/Kyh84cxzWcaKonHFG/openai-s-massive-push-to-make-superintelligence-safe-in-4,"OpenAI’s massive push to make superintelligence safe in 4 years or less (Jan Leike on the 80,000 Hours Podcast)",['80000_Hours'],2023-08-08T18:00:24Z,eaforum,, 131774,https://forum.effectivealtruism.org/posts/ZcGLsL6kuHMGWsBjp/ainotaimurain-sareteiru-to-no-chi,AIのタイムライン ─ 提案されている論証と「専門家」の立ち位置,['EA Japan'],2023-08-17T14:59:51Z,eaforum,, 131795,https://forum.effectivealtruism.org/posts/Ykqh8ku7NHN9CGkdC/modeling-the-impact-of-ai-safety-field-building-programs,Modeling the impact of AI safety field-building programs,['Center for AI Safety'],2023-07-10T17:22:20Z,eaforum,, 131814,https://forum.effectivealtruism.org/posts/C5X3XbHQkj5d8EeXg/fixing-insider-threats-in-the-ai-supply-chain,Fixing Insider Threats in the AI Supply Chain,['Madhav Malhotra'],2023-10-07T10:49:46Z,eaforum,, 131836,https://forum.effectivealtruism.org/posts/vqX25ML2vBN6cvmkx/pivotal-outcomes-and-pivotal-processes,Pivotal outcomes and pivotal processes,['Andrew Critch'],2022-06-17T23:43:33Z,eaforum,, 131851,https://forum.effectivealtruism.org/posts/uk39s7wgLcotmFsto/can-we-simulate-human-evolution-to-create-a-somewhat-aligned,Can we simulate human evolution to create a somewhat aligned AGI?,['Thomas Kwa'],2022-03-29T01:23:07Z,eaforum,, 131872,https://forum.effectivealtruism.org/posts/tDk57GhrdK54TWzPY/i-m-buck-shlegeris-i-do-research-and-outreach-at-miri-ama,"I'm Buck Shlegeris, I do research and outreach at MIRI, AMA",['Buck'],2019-11-15T22:44:18Z,eaforum,, 131896,https://forum.effectivealtruism.org/posts/AmxxnazJcBWzWEeqj/forecasting-transformative-ai-what-kind-of-ai,Forecasting Transformative AI: What Kind of AI?,['Holden Karnofsky'],2021-08-10T21:38:46Z,eaforum,, 131917,https://forum.effectivealtruism.org/posts/NPHJBby6KjDC7iNYK/what-can-superintelligent-ani-tell-us-about-superintelligent,What can superintelligent ANI tell us about superintelligent AGI?,['Ted Sanders'],2023-06-12T06:32:29Z,eaforum,, 131931,https://forum.effectivealtruism.org/posts/dsEMaqKNmArdCRGeH/a-viral-license-for-ai-safety,A Viral License for AI Safety,['IvanVendrov'],2021-06-05T02:00:27Z,eaforum,, 131962,https://forum.effectivealtruism.org/posts/fwdjMtJLpkyJ2Gice/how-could-a-moratorium-fail,How could a moratorium fail?,['Davidmanheim'],2023-09-22T15:11:52Z,eaforum,, 131982,https://forum.effectivealtruism.org/posts/fsaogRokXxby6LFd7/a-compute-based-framework-for-thinking-about-the-future-of,A compute-based framework for thinking about the future of AI,['Matthew_Barnett'],2023-05-31T22:00:19Z,eaforum,, 131998,https://forum.effectivealtruism.org/posts/FJsk9i9c9zLC7eKLF/introducing-a-new-course-on-the-economics-of-ai,Introducing a New Course on the Economics of AI,['akorinek'],2021-12-21T04:55:08Z,eaforum,, 132026,https://forum.effectivealtruism.org/posts/hy2qcaYStNTBqaZCs/do-you-worry-about-totalitarian-regimes-using-ai-alignment,Do you worry about totalitarian regimes using AI Alignment technology to create AGI that subscribe to their values?,['diodio_yang'],2023-02-28T18:12:43Z,eaforum,, 132040,https://forum.effectivealtruism.org/posts/wEr8XqQvNwf4yP6mx/donation-recommendations-for-xrisk-ai-safety,Donation recommendations for xrisk + ai safety,['vincentweisser'],2023-02-06T21:25:54Z,eaforum,,
132053,https://forum.effectivealtruism.org/posts/JjAjJ53mmpQqBeobQ/the-race-to-the-end-of-humanity-structural-uncertainty,“The Race to the End of Humanity” – Structural Uncertainty Analysis in AI Risk Models,['Froolow'],2023-05-19T12:03:36Z,eaforum,, 132063,https://forum.effectivealtruism.org/posts/w58znDmKpfqvYoGdY/simulating-shutdown-code-activations-in-an-ai-virus-lab,Simulating Shutdown Code Activations in an AI Virus Lab,['Miguel'],2023-06-20T05:27:32Z,eaforum,, 132076,https://forum.effectivealtruism.org/posts/ed9EHSDLRp2oMwoyr/apply-to-be-a-stanford-hai-junior-fellow-assistant-professor,"Apply to be a Stanford HAI Junior Fellow (Assistant Professor- Research) by Nov. 15, 2021",['Vael Gates'],2021-10-31T02:21:44Z,eaforum,, 132085,https://forum.effectivealtruism.org/posts/yJkZK62NKuRu7SaJw/ai-safety-fundamentals-an-informal-cohort-starting-soon,AI Safety Fundamentals: An Informal Cohort Starting Soon! (cross-posted to lesswrong.com),['Tiago'],2023-06-04T18:21:44Z,eaforum,, 132094,https://forum.effectivealtruism.org/posts/EgAGoFazXe9yPbjcm/longtermist-reasons-to-work-for-innovative-governments,Longtermist reasons to work for innovative governments,['ac'],2020-10-13T16:32:53Z,eaforum,, 132110,https://forum.effectivealtruism.org/posts/DjAQMc9rAEyjkmwYb/ai-labs-requests-for-input,AI labs' requests for input,['Zach Stein-Perlman'],2023-08-19T17:00:09Z,eaforum,, 132136,https://forum.effectivealtruism.org/posts/3ffgjMEJ4jY4rdgJy/a-survey-of-the-potential-long-term-impacts-of-ai,A Survey of the Potential Long-term Impacts of AI,['Sam Clarke'],2022-07-18T09:48:30Z,eaforum,, 132174,https://forum.effectivealtruism.org/posts/7WfMYzLfcTyDtD6Gn/pause-for-thought-the-ai-pause-debate,Pause For Thought: The AI Pause Debate,['Scott Alexander'],2023-10-10T15:34:54Z,eaforum,, 132198,https://forum.effectivealtruism.org/posts/5ZyLZjgJzyZdDLFrh/likelihood-of-an-anti-ai-backlash-results-from-a-preliminary,Likelihood of an anti-AI backlash: Results from a preliminary Twitter poll,['Geoffrey Miller'],2022-09-27T22:01:31Z,eaforum,, 132210,https://forum.effectivealtruism.org/posts/bYm63mL6NioCMq66w/data-poisoning-for-dummies-no-code-no-math,"Data Poisoning for Dummies (No Code, No Math)",['Madhav Malhotra'],2023-09-04T20:48:26Z,eaforum,, 132236,https://forum.effectivealtruism.org/posts/9Y5YzNDMdYYg6hjwD/what-term-to-use-for-ai-in-different-policy-contexts,What term to use for AI in different policy contexts?,['oeg'],2023-09-06T15:08:30Z,eaforum,, 132251,https://forum.effectivealtruism.org/posts/hBP67ZkaBPNrJSpWT/scrutinizing-ai-risk-80k-81-v-quick-summary,"Scrutinizing AI Risk (80K, #81) - v. quick summary",['Ben'],2020-07-23T19:02:56Z,eaforum,, 132273,https://forum.effectivealtruism.org/posts/BGDe6ZxfxyHTqNS5X/why-policymakers-should-beware-claims-of-new-arms-races,"Why policymakers should beware claims of new ""arms races"" (Bulletin of the Atomic Scientists)",['christian.r'],2022-07-14T13:38:24Z,eaforum,, 132282,https://forum.effectivealtruism.org/posts/CvrWR7LSw2GpJMsPp/pillars-to-convergence,Pillars to Convergence,['Phlobton'],2023-04-01T13:04:30Z,eaforum,,
132307,https://forum.effectivealtruism.org/posts/92TAmcppCL7t54Ajn/announcing-the-european-network-for-ai-safety-enais,Announcing the European Network for AI Safety (ENAIS),"['Esben Kran', 'Teun_Van_Der_Weij', 'Dušan D. Nešić (Dushan)', 'Jonathan Claybrough', 'simeon_c', 'Magdalena Wache']",2023-03-22T17:57:37Z,eaforum,, 132331,https://forum.effectivealtruism.org/posts/PS5GKpvKxDhPCcg7y/diagram-with-commentary-for-agi-as-an-x-risk,Diagram with Commentary for AGI as an X-Risk,['Jared Leibowich'],2023-05-24T22:27:42Z,eaforum,, 132350,https://forum.effectivealtruism.org/posts/BMkDcRrGWBj2j24NB/google-could-build-a-conscious-ai-in-three-months,Google could build a conscious AI in three months,['Derek Shiller'],2022-10-01T13:24:11Z,eaforum,, 132366,https://forum.effectivealtruism.org/posts/73mcXk9gEq6GpgpFG/updates-from-campaign-for-ai-safety-4,Updates from Campaign for AI Safety,"['Jolyn Khoo', 'Nik Samoylov']",2023-08-30T05:36:40Z,eaforum,, 132390,https://forum.effectivealtruism.org/posts/8DtA57z9EyifD2wj5/ai-risk-microdynamics-survey,AI Risk Microdynamics Survey,['Froolow'],2022-10-09T20:00:41Z,eaforum,, 132399,https://forum.effectivealtruism.org/posts/ZMd2hjMF2auyeqtfr/primitive-global-discourse-framework-constitutional-ai-using,"Primitive Global Discourse Framework, Constitutional AI using legal frameworks, and Monoculture - A loss of control over the role of AGI in society",['broptross'],2023-06-01T05:12:02Z,eaforum,, 132431,https://forum.effectivealtruism.org/posts/SxpQpqXBvAxrPWC2e/ai-safety-newsletter-6-examples-of-ai-safety-progress-yoshua,"AI Safety Newsletter #6: Examples of AI safety progress, Yoshua Bengio proposes a ban on AI agents, and lessons from nuclear arms control","['Center for AI Safety', 'Dan H', 'Akash', 'aogara']",2023-05-16T15:14:36Z,eaforum,, 132466,https://forum.effectivealtruism.org/posts/LgKKwKgkWfdGES5xf/credo-ai-is-hiring-for-ai-gov-researcher-and-more,Credo AI is hiring for AI Gov Researcher & more!,['IanEisenberg'],2023-08-15T21:10:00Z,eaforum,, 132481,https://forum.effectivealtruism.org/posts/cFRbLmhCEu74wcJ3D/discussion-best-intuition-pumps-for-ai-safety,[Discussion] Best intuition pumps for AI safety,['mariushobbhahn'],2021-11-06T08:11:17Z,eaforum,, 132494,https://forum.effectivealtruism.org/posts/u3cJGX33zf32TsCMg/keep-chasing-ai-safety-press-coverage,Keep Chasing AI Safety Press Coverage,['RedStateBlueState'],2023-04-04T20:40:27Z,eaforum,, 132514,https://forum.effectivealtruism.org/posts/vH9J7GEkYitWmjdGM/facilitator-help-wanted-for-columbia-ea-ai-safety-groups,Facilitator Help Wanted for Columbia EA AI Safety Groups,['Berkan Ottlik'],2022-07-05T10:27:51Z,eaforum,, 132523,https://forum.effectivealtruism.org/posts/jcwm3bazs2sj686KC/what-i-m-doing,What I'm doing,['Chris Leong'],2022-07-19T11:31:09Z,eaforum,, 132556,https://forum.effectivealtruism.org/posts/Ewk9eXrcRRcJvqBY8/ai-risk-and-policy-forecasts-from-metaculus-and-fli-s-ai,AI Risk & Policy Forecasts from Metaculus & FLI's AI Pathways Workshop,['Will Aldred'],2023-05-16T08:53:31Z,eaforum,, 132577,https://forum.effectivealtruism.org/posts/gZNSTrDD7agjo7Nqy/japan-ai-alignment-conference,Japan AI Alignment Conference,['ChrisScammell'],2023-03-10T09:23:03Z,eaforum,, 132586,https://forum.effectivealtruism.org/posts/KDAXeG9Cz64gBDYXC/ai-relevant-regulation-insurance-in-safety-critical,AI-Relevant Regulation: Insurance in Safety-Critical Industries,['SWK'],2023-07-22T17:52:02Z,eaforum,, 132603,https://forum.effectivealtruism.org/posts/9kNqYzEAYtvLg2BbR/baobao-zhang-how-social-science-research-can-inform-ai,Baobao Zhang: How social science research can inform AI governance,['EA Global'],2021-01-22T15:10:56Z,eaforum,,
132636,https://forum.effectivealtruism.org/posts/Lb2TjSsjpqA8rQ7dP/the-state-of-ai-in-different-countries-an-overview,The state of AI in different countries — an overview,['Lizka'],2023-09-14T10:37:00Z,eaforum,, 132647,https://forum.effectivealtruism.org/posts/Konde3tJY2SFoFbpd/former-israeli-prime-minister-speaks-about-ai-x-risk,Former Israeli Prime Minister Speaks About AI X-Risk,['Yonatan Cale'],2023-05-20T12:09:35Z,eaforum,, 132657,https://forum.effectivealtruism.org/posts/widWpunQMfuNTCYE3/fermi-estimation-of-the-impact-you-might-have-working-on-ai,Fermi estimation of the impact you might have working on AI safety,['frib'],2022-05-13T13:30:53Z,eaforum,, 132674,https://forum.effectivealtruism.org/posts/wncsWoJnpEid3dJy8/ml4g-germany-ai-alignment-camp-1,ML4G Germany - AI Alignment Camp,['Evander H.'],2023-06-27T15:33:48Z,eaforum,, 132685,https://forum.effectivealtruism.org/posts/brFTTy47YdGxCDzqp/how-to-catch-a-chatgpt-cheat-7-practical-tips,How to Catch a ChatGPT Cheat: 7 Practical Tips,['Marshall'],2022-12-27T16:09:53Z,eaforum,, 132700,https://forum.effectivealtruism.org/posts/MYuhZPSySQoo3h4kD/make-a-neural-network-in-10-minutes,Make a neural network in ~10 minutes,['Arjun Yadav'],2022-04-25T18:36:07Z,eaforum,, 132737,https://forum.effectivealtruism.org/posts/WdkfjWZiBnLjc7Geg/technology-is-power-raising-awareness-of-technological-risks,Technology is Power: Raising Awareness Of Technological Risks,['Marc Wong'],2023-02-09T15:13:26Z,eaforum,, 132754,https://forum.effectivealtruism.org/posts/7KL8CitpBmnzZgKHY/how-to-create-curriculum-for-self-study-towards-ai-alignment,How to create curriculum for self-study towards AI alignment work?,['OIUJHKDFS'],2023-01-07T19:53:47Z,eaforum,, 132763,https://forum.effectivealtruism.org/posts/2xjrJwbmsaGzjD7w7/ai-control-idea-give-an-agi-the-primary-objective-of,"AI Control idea: Give an AGI the primary objective of deleting itself, but construct obstacles to this as best we can. All other objectives are secondary to this primary goal.",['Justausername'],2023-04-03T14:32:20Z,eaforum,, 
132774,https://forum.effectivealtruism.org/posts/hy48QDCDL7A7cGQ75/aisn-17-automatically-circumventing-llm-guardrails-the,"AISN #17: Automatically Circumventing LLM Guardrails, the Frontier Model Forum, and Senate Hearing on AI Oversight","['Center for AI Safety', 'Dan H', 'aogara']",2023-08-01T15:24:23Z,eaforum,, 132800,https://forum.effectivealtruism.org/posts/a3PDjRBu9uTkRGeBS/a-response-to-matthews-on-ai-risk,A response to Matthews on AI Risk,['RyanCarey'],2015-08-11T12:58:39Z,eaforum,, 132824,https://forum.effectivealtruism.org/posts/Hg4dQqxyFpmkoYKeg/aisn-20-llm-proliferation-ai-deception-and-continuing,"AISN #20: LLM Proliferation, AI Deception, and Continuing Drivers of AI Capabilities","['Center for AI Safety', 'aogara', 'Dan H']",2023-08-29T15:03:41Z,eaforum,, 132854,https://forum.effectivealtruism.org/posts/GBAzW4pZ5JgJqGMJg/against-the-open-source-closed-source-dichotomy-regulated,Against the Open Source / Closed Source Dichotomy: Regulated Source as a Model for Responsible AI Development,['alexherwix'],2023-09-04T20:23:34Z,eaforum,, 132870,https://forum.effectivealtruism.org/posts/kHqfZczkcp5Wp4yvW/carl-shulman-on-ai-takeover-mechanisms-and-more-part-ii-of,Carl Shulman on AI takeover mechanisms (& more): Part II of Dwarkesh Patel interview for The Lunar Society,['alejandro'],2023-07-25T18:31:48Z,eaforum,, 132888,https://forum.effectivealtruism.org/posts/khRpZcCzjHPj2qLL5/pros-and-cons-of-boycotting-paid-chat-gpt,Pros and Cons of boycotting paid Chat GPT,['NickLaing'],2023-03-18T08:50:49Z,eaforum,, 132913,https://forum.effectivealtruism.org/posts/XSs6HqFvTAHR3bHLg/there-have-been-3-planes-billionaire-donors-and-2-have,There have been 3 planes (billionaire donors) and 2 have crashed,['trevor1'],2022-12-17T03:38:07Z,eaforum,, 132936,https://forum.effectivealtruism.org/posts/mdg8gL59LiiZmaGCw/new-reference-standard-on-llm-application-security-started,New reference standard on LLM Application security started by OWASP,['QuantumForest'],2023-06-19T19:56:23Z,eaforum,, 132953,https://forum.effectivealtruism.org/posts/6LNvQYyNQpDQmnnux/slowing-down-ai-progress-is-an-underexplored-alignment,Slowing down AI progress is an underexplored alignment strategy,['Michael Huang'],2022-07-13T03:22:58Z,eaforum,, 132968,https://forum.effectivealtruism.org/posts/Tm2wtkQkrSwEvtMAn/acausal-normalcy,Acausal normalcy,['Andrew Critch'],2023-03-03T23:35:12Z,eaforum,, 132984,https://forum.effectivealtruism.org/posts/MqhzeTpemwHcxrwzd/aisn-12-policy-proposals-from-ntia-s-request-for-comment-and,AISN #12: Policy Proposals from NTIA’s Request for Comment and Reconsidering Instrumental Convergence,"['Center for AI Safety', 'Dan H', 'aogara']",2023-06-27T15:25:40Z,eaforum,, 133006,https://forum.effectivealtruism.org/posts/cpN8axaLbpHDix9ie/what-is-the-role-of-bayesian-ml-for-ai-alignment-safety,What is the role of Bayesian ML for AI alignment/safety?,['mariushobbhahn'],2022-01-11T08:07:16Z,eaforum,, 133037,https://forum.effectivealtruism.org/posts/PZ76HmcbNREuoAfgG/mahendra-prasad-rational-group-decision-making,Mahendra Prasad: Rational group decision-making,['EA Global'],2020-07-08T15:06:40Z,eaforum,, 133061,https://forum.effectivealtruism.org/posts/vnnmNYwi7QbJPstsz/how-europe-might-matter-for-ai-governance,How Europe might matter for AI governance,['stefan.torges'],2019-07-12T23:42:25Z,eaforum,, 
133085,https://forum.effectivealtruism.org/posts/DJuhFbtJLJ92pCsKW/new-sequence-towards-a-worldwide-watertight-windfall-clause,"New Sequence - Towards a worldwide, watertight Windfall Clause",['John Bridge'],2022-04-07T15:02:06Z,eaforum,, 133113,https://forum.effectivealtruism.org/posts/Wp3EjrgwEBFvnqzvg/podcast-video-transcript-eliezer-yudkowsky-why-ai-will-kill,"Podcast/video/transcript: Eliezer Yudkowsky - Why AI Will Kill Us, Aligning LLMs, Nature of Intelligence, SciFi, & Rationality",['PeterSlattery'],2023-04-09T10:37:55Z,eaforum,, 133139,https://forum.effectivealtruism.org/posts/vxK7zuhfhYuoohGR7/cross-post-change-my-mind-we-should-define-and-measure-the,[Cross-post] Change my mind: we should define and measure the effectiveness of advanced AI,['David Johnston'],2022-04-06T00:20:50Z,eaforum,, 133162,https://forum.effectivealtruism.org/posts/Xdf3cnuSR68btiTKR/status-quo-engines-ai-essay,Status Quo Engines - AI essay,['Ilana_Goldowitz_Jimenez'],2023-05-28T14:33:52Z,eaforum,, 133196,https://forum.effectivealtruism.org/posts/Lna7SayJkyrKczH4n/play-regrantor-move-up-to-usd250-000-to-your-top-high-impact,"Play Regrantor: Move up to $250,000 to Your Top High-Impact Projects!","['Dawn Drescher', 'Greg_Colbourn']",2023-05-17T16:51:15Z,eaforum,, 133209,https://forum.effectivealtruism.org/posts/AsgkzmBCiFmidpndx/georgetown-ea-fall-2022-intro-to-ai-reading-group,"Georgetown EA Fall 2022 ""Intro to AI"" Reading Group",['Daniel H'],2022-10-08T01:44:04Z,eaforum,, 133230,https://forum.effectivealtruism.org/posts/5hihHtfhGoyNB5kLQ/imagine-agi-killed-us-all-in-three-years-what-would-have,Imagine AGI killed us all in three years. What would have been our biggest mistakes?,['anonymous'],2023-04-07T00:06:36Z,eaforum,, 133241,https://forum.effectivealtruism.org/posts/3qpaRKe8R4ptiqSkr/uk-prime-minister-rishi-sunak-s-speech-on-ai,UK Prime Minister Rishi Sunak's Speech on AI,['Tobias Häberli'],2023-10-26T10:34:00Z,eaforum,, 133275,https://forum.effectivealtruism.org/posts/2pxGXYX2JrptvLpzZ/ai-safety-career-bottlenecks-survey-responses-responses,AI Safety Career Bottlenecks Survey Responses Responses,['Linda Linsefors'],2021-05-28T10:41:37Z,eaforum,, 133304,https://forum.effectivealtruism.org/posts/ycCBeG5SfApC3mcPQ/even-more-early-career-eas-should-try-ai-safety-technical,(Even) More Early-Career EAs Should Try AI Safety Technical Research,['levin'],2022-06-30T21:14:33Z,eaforum,, 133322,https://forum.effectivealtruism.org/posts/MwfMqx7EoPjsqtdYK/understanding-how-hard-alignment-is-may-be-the-most,Understanding how hard alignment is may be the most important research direction right now,['Aron'],2023-06-07T19:05:16Z,eaforum,, 133334,https://forum.effectivealtruism.org/posts/SZFDtA4pjZzepdacv/13-very-different-stances-on-agi,13 Very Different Stances on AGI,['Ozzie Gooen'],2021-12-27T23:30:31Z,eaforum,, 133346,https://forum.effectivealtruism.org/posts/8Ajsy96jGHcf27Xre/intro-to-brain-like-agi-safety-series-halfway-point,“Intro to brain-like-AGI safety” series—halfway point!,['Steven Byrnes'],2022-03-09T15:21:03Z,eaforum,, 133359,https://forum.effectivealtruism.org/posts/mno4DMuHEWxLCQKXv/ai-ethical-committee,AI Ethical Committee,['eaaicommittee'],2022-03-01T23:35:52Z,eaforum,, 133373,https://forum.effectivealtruism.org/posts/AGhrkv7Ha6giWon7Z/lessons-learned-and-review-of-the-ai-safety-nudge,Lessons learned and review of the AI Safety Nudge Competition,"['Marc Carauleanu', 'Chris Leong']",2023-01-17T17:13:23Z,eaforum,, 
133394,https://forum.effectivealtruism.org/posts/D6nmypgiiPfS42pub/tan-zhi-xuan-ai-alignment-philosophical-pluralism-and-the,"Tan Zhi Xuan: AI alignment, philosophical pluralism, and the relevance of non-Western philosophy",['EA Global'],2020-11-21T08:12:00Z,eaforum,, 133404,https://forum.effectivealtruism.org/posts/dr2ig3tquB59viY2v/followup-on-terminator,Followup on Terminator,['skluug'],2022-03-12T01:11:40Z,eaforum,, 133425,https://forum.effectivealtruism.org/posts/FZ2BMwSYhkdBWmTTA/good-futures-initiative-winter-project-internship,Good Futures Initiative: Winter Project Internship,['Aris Richardson'],2022-11-27T23:27:27Z,eaforum,, 133437,https://forum.effectivealtruism.org/posts/QeLE22fefLqKfYTW6/eli-lifland-on-navigating-the-ai-alignment-landscape,Eli Lifland on Navigating the AI Alignment Landscape,"['Ozzie Gooen', 'elifland']",2023-02-01T00:07:48Z,eaforum,, 133479,https://forum.effectivealtruism.org/posts/zd5inbT4kYKivincm/ai-is-centralizing-by-default-let-s-not-make-it-worse,AI is centralizing by default; let's not make it worse,['Quintin Pope'],2023-09-21T13:35:34Z,eaforum,, 133506,https://forum.effectivealtruism.org/posts/4XK5zkyv94voC8Fjr/announcing-the-most-important-century-writing-prize,Announcing The Most Important Century Writing Prize,"['michel', 'Drew Spartz']",2022-10-31T21:37:01Z,eaforum,, 133515,https://forum.effectivealtruism.org/posts/8ErtxW7FRPGMtDqJy/the-academic-contribution-to-ai-safety-seems-large,The academic contribution to AI safety seems large,['Gavin'],2020-07-30T10:30:19Z,eaforum,, 133531,https://forum.effectivealtruism.org/posts/AjxZ8RTNPZTmwTh2j/what-do-we-do-if-ai-doesn-t-take-over-the-world-but-still,"What do we do if AI doesn't take over the world, but still causes a significant global problem?",['James_Banks'],2020-08-02T03:35:19Z,eaforum,, 133553,https://forum.effectivealtruism.org/posts/dyyXcdgBchGczruJq/donation-offsets-for-chatgpt-plus-subscriptions,Donation offsets for ChatGPT Plus subscriptions,['Jeffrey Ladish'],2023-03-16T23:11:18Z,eaforum,, 133569,https://forum.effectivealtruism.org/posts/LM6TxGTJhhyixD5of/i-m-interviewing-nova-das-sarma-about-ai-safety-and,I'm interviewing Nova Das Sarma about AI safety and information security. What should I ask her?,['Robert_Wiblin'],2022-03-25T15:38:04Z,eaforum,, 
133584,https://forum.effectivealtruism.org/posts/CgeDuvedjqCj56HXZ/a-note-of-caution-on-believing-things-on-a-gut-level,A note of caution on believing things on a gut level,['Nathan_Barnard'],2023-05-09T12:20:10Z,eaforum,, 133600,https://forum.effectivealtruism.org/posts/7p6CFnd6fYYqsH42r/graphical-representations-of-paul-christiano-s-doom-model,Graphical Representations of Paul Christiano's Doom Model,['Nathan Young'],2023-05-07T13:03:19Z,eaforum,, 133610,https://forum.effectivealtruism.org/posts/W7C5hwq7sjdpTdrQF/announcing-the-future-fund-s-ai-worldview-prize,Announcing the Future Fund's AI Worldview Prize,"['Nick_Beckstead', 'leopold', 'William_MacAskill', 'ketanrama', 'ab']",2022-09-23T16:28:35Z,eaforum,, 133638,https://forum.effectivealtruism.org/posts/tnzLTnBQLEDv9zygo/establishing-oxford-s-ai-safety-student-group-lessons-learnt,Establishing Oxford’s AI Safety Student Group: Lessons Learnt and Our Model,"['Wilkin1234', 'juliakarbing', 'Catherine Brewer']",2022-09-21T07:57:13Z,eaforum,, 133659,https://forum.effectivealtruism.org/posts/LjBYatyXkce5EiLDo/optimism-ai-risk-and-ea-blind-spots,"Optimism, AI risk, and EA blind spots",['Justis'],2022-09-28T17:21:17Z,eaforum,, 133680,https://forum.effectivealtruism.org/posts/iZ6e2M4pmkNb3Dji5/chris-olah-on-what-the-hell-is-going-on-inside-neural,Chris Olah on what the hell is going on inside neural networks,['80000_Hours'],2021-08-04T15:13:13Z,eaforum,, 133708,https://forum.effectivealtruism.org/posts/hxTFAetiiSL7dZmyb/ideal-governance-for-companies-countries-and-more,"Ideal governance (for companies, countries and more)",['Holden Karnofsky'],2022-04-07T16:54:03Z,eaforum,, 133732,https://forum.effectivealtruism.org/posts/ahFdna5XAMxyuTssp/asking-for-online-calls-on-ai-s-risks-discussions,Asking for online calls on AI s-risks discussions,['jackchang110'],2023-05-14T13:58:51Z,eaforum,, 133741,https://forum.effectivealtruism.org/posts/JALJeiqtZJbQfsMeq/on-running-a-city-wide-university-group,On running a city-wide university group,['gergo'],2023-11-06T09:43:05Z,eaforum,, 133780,https://forum.effectivealtruism.org/posts/g72tGduJMDhqR86Ns/diamondoid-bacteria-nanobots-deadly-threat-or-dead-end-a,"""Diamondoid bacteria"" nanobots: deadly threat or dead-end? A nanotech investigation",['titotal'],2023-09-29T14:01:14Z,eaforum,, 133802,https://forum.effectivealtruism.org/posts/xGTcoL4rJsxGuDLFy/concrete-actionable-policies-relevant-to-ai-safety-written,Concrete actionable policies relevant to AI safety (written 2019),['weeatquince'],2022-12-16T18:41:50Z,eaforum,, 133842,https://forum.effectivealtruism.org/posts/hycChZFhDQjcGcLXD/should-ai-focus-on-problem-solving-or-strategic-planning-why,Should AI focus on problem-solving or strategic planning? Why not both?,['oliver_siegel'],2022-11-01T09:53:57Z,eaforum,, 133859,https://forum.effectivealtruism.org/posts/5SYG9tjv2E4kyE9Zi/discontinuous-progress-in-history-an-update,Discontinuous progress in history: an update,['AI Impacts'],2020-04-17T16:28:51Z,eaforum,, 133871,https://forum.effectivealtruism.org/posts/3jfXxzxrnwPwBwiig/is-it-crunch-time-yet-if-so-who-can-help,"Is it crunch time yet? If so, who can help?",['NicholasKross'],2021-10-13T04:11:12Z,eaforum,, 
133885,https://forum.effectivealtruism.org/posts/QGtYFLBtSuegHz5tP/anki-deck-for-learning-the-main-ai-safety-orgs-projects-and,"Anki deck for learning the main AI safety orgs, projects, and programs",['Bryce Robertson'],2023-09-29T18:42:55Z,eaforum,, 133894,https://forum.effectivealtruism.org/posts/gw3tyZShzig28B4PE/gpt-2-as-step-toward-general-intelligence-alexander-2019,"GPT-2 as step toward general intelligence (Alexander, 2019)",['Will Aldred'],2022-07-18T16:14:11Z,eaforum,, 133917,https://forum.effectivealtruism.org/posts/6LDXiJ5Er6nrAfBiN/a-mesa-optimization-perspective-on-ai-valence-and-moral,A mesa-optimization perspective on AI valence and moral patienthood,['jacobpfau'],2021-09-09T22:23:59Z,eaforum,, 133940,https://forum.effectivealtruism.org/posts/Khz5s6hrTWo4cReNL/e-a-megaproject-ideas,E.A. Megaproject Ideas,['Tomer_Goloboy'],2022-03-21T01:23:57Z,eaforum,, 133956,https://forum.effectivealtruism.org/posts/eHKLgsXfMvSyAWb7E/drivers-of-large-language-model-diffusion-incremental,"Drivers of large language model diffusion: incremental research, publicity, and cascades",['Ben Cottier'],2022-12-21T13:50:04Z,eaforum,, 133989,https://forum.effectivealtruism.org/posts/DP6GHJSNEbszBMF4s/a-selection-of-some-writings-and-considerations-on-the-cause,A selection of some writings and considerations on the cause of artificial sentience,['Raphaël_Pesah'],2023-08-10T18:23:48Z,eaforum,, 134018,https://forum.effectivealtruism.org/posts/T7e42LXSDCkoF33Mt/potential-risks-from-advanced-artificial-intelligence-the,Potential Risks from Advanced Artificial Intelligence: The Philanthropic Opportunity,['Holden Karnofsky'],2016-05-06T12:55:19Z,eaforum,, 134050,https://forum.effectivealtruism.org/posts/FLTJtDmCxfpoZDA5K/questions-on-databases-of-ai-risk-estimates,Questions on databases of AI Risk estimates,['Froolow'],2022-10-02T09:12:26Z,eaforum,, 134062,https://forum.effectivealtruism.org/posts/pDjtcoawgvpDpoyrL/applications-open-govai-summer-fellowship-2023,Applications Open: GovAI Summer Fellowship 2023,['GovAI'],2022-12-21T15:00:47Z,eaforum,, 134078,https://forum.effectivealtruism.org/posts/JMhwY2WE3oqkRxf6h/metaculus-launches-conditional-cup-to-explore-linked,Metaculus Launches Conditional Cup to Explore Linked Forecasts,['christian'],2023-10-18T20:41:41Z,eaforum,, 134087,https://forum.effectivealtruism.org/posts/7gL7CFBmybjAjJvAw/ai-safety-hub-serbia-soft-launch,AI Safety Hub Serbia Soft Launch,['Dušan D. Nešić (Dushan)'],2023-07-25T19:39:16Z,eaforum,, 
134100,https://forum.effectivealtruism.org/posts/wdxMmSnK5JscvuK35/diminishing-returns-in-machine-learning-part-1-hardware,Diminishing Returns in Machine Learning Part 1: Hardware Development and the Physical Frontier,['Brian Chau'],2023-05-27T12:39:03Z,eaforum,, 134119,https://forum.effectivealtruism.org/posts/QYbP47ZErrgFYXBLX/the-alignment-problem-from-a-deep-learning-perspective,The alignment problem from a deep learning perspective,['richard_ngo'],2022-08-11T03:18:36Z,eaforum,, 134144,https://forum.effectivealtruism.org/posts/BGFk3fZF36i7kpwWM/artificial-intelligence-and-nuclear-command-control-and-1,"Artificial Intelligence and Nuclear Command, Control, & Communications: The Risks of Integration",['Peter Rautenbach'],2022-11-18T13:01:35Z,eaforum,, 134170,https://forum.effectivealtruism.org/posts/s29BdN8EeyKjg5v6M/how-should-we-invest-in-long-term-short-termism-given-the,"How should we invest in ""long-term short-termism"" given the likelihood of transformative AI?",['James_Banks'],2021-01-12T23:54:29Z,eaforum,, 134182,https://forum.effectivealtruism.org/posts/HsDMguLtdhFP46GQ8/how-open-source-machine-learning-software-shapes-ai,How Open Source Machine Learning Software Shapes AI,['Max Langenkamp'],2022-09-28T17:49:07Z,eaforum,, 134206,https://forum.effectivealtruism.org/posts/jfLjsxcejCFDpo7dw/whether-you-should-do-a-phd-doesn-t-depend-much-on-timelines,Whether you should do a PhD doesn't depend much on timelines.,['alex lawsen (previously alexrjl)'],2023-03-22T12:25:30Z,eaforum,, 134216,https://forum.effectivealtruism.org/posts/wcXrW2cyi2zkJxDmo/ea-poland-is-facing-an-existential-risk,EA Poland is facing an existential risk,['EA Poland'],2023-11-10T16:23:51Z,eaforum,, 134240,https://forum.effectivealtruism.org/posts/D8GitXAMt7deG8tBc/how-quickly-ai-could-transform-the-world-tom-davidson-on-the,"How quickly AI could transform the world (Tom Davidson on The 80,000 Hours Podcast)","['80000_Hours', 'Luisa_Rodriguez', 'Tom_Davidson']",2023-05-08T13:23:51Z,eaforum,, 134258,https://forum.effectivealtruism.org/posts/hhLbFmhXbmaX5PcCa/extended-deadline-jan-23rd-announcing-the-pibbss-summer,[Extended Deadline: Jan 23rd] Announcing the PIBBSS Summer Research Fellowship,['nora'],2021-12-18T16:54:57Z,eaforum,, 134267,https://forum.effectivealtruism.org/posts/QhzJFpQPa9qxfAmXp/explained-simply-quantilizers,Explained Simply: Quantilizers,['brook'],2023-09-08T12:54:59Z,eaforum,, 134279,https://forum.effectivealtruism.org/posts/SezmJHRmdxufBzEmC/should-we-nationalize-ai-development,Should we nationalize AI development?,['Jadon Schmitt'],2023-07-20T05:31:41Z,eaforum,, 134290,https://forum.effectivealtruism.org/posts/XGPW25NZHq2WHbK9w/ai-policy-careers-in-the-eu,AI policy careers in the EU,['Lauro Langosco'],2019-11-11T10:43:49Z,eaforum,, 134309,https://forum.effectivealtruism.org/posts/63stBTw3WAW6k45dY/paul-christiano-current-work-in-ai-alignment,Paul Christiano: Current work in AI alignment,['EA Global'],2020-04-03T07:06:37Z,eaforum,, 134337,https://forum.effectivealtruism.org/posts/2xrTTgvosGSsM85RZ/katja-grace-on-slowing-down-ai-ai-expert-surveys-and,"Katja Grace on Slowing Down AI, AI Expert Surveys And Estimating AI Risk",['Michaël Trazzi'],2022-09-16T18:00:50Z,eaforum,, 134365,https://forum.effectivealtruism.org/posts/QzrgMhTMoLe5mEas8/ai-risk-intro-1-advanced-ai-might-be-very-bad,AI Risk Intro 1: Advanced AI Might Be Very Bad,"['LRudL', 'TheMcDouglas']",2022-09-11T10:57:08Z,eaforum,, 
134394,https://forum.effectivealtruism.org/posts/kCAcrjvXDt2evMpBz/a-tale-of-2-5-orthogonality-theses,A tale of 2.5 orthogonality theses,['Arepo'],2022-05-01T13:53:18Z,eaforum,, 134412,https://forum.effectivealtruism.org/posts/hBjrAeuGvwk9pbwLL/career-advice-philosophy-programming-greater-than-ai-safety-1,Career Advice: Philosophy + Programming -> AI Safety,['tcelferact'],2022-03-18T15:09:30Z,eaforum,, 134428,https://forum.effectivealtruism.org/posts/S7x3ztfd9h8ux68wN/tai-safety-bibliographic-database,TAI Safety Bibliographic Database,['Jess_Riedel'],2020-12-22T16:03:54Z,eaforum,, 134444,https://forum.effectivealtruism.org/posts/h9unK57kLnmKdG6uq/riesgos-catastroficos-globales-needs-funding,Riesgos Catastróficos Globales needs funding,['Jaime Sevilla'],2023-08-01T16:26:27Z,eaforum,, 134479,https://forum.effectivealtruism.org/posts/uSH6DqjzggAYQGjxm/the-rival-ai-deployment-problem-a-pre-deployment-agreement,The Rival AI Deployment Problem: a Pre-deployment Agreement as the least-bad response,['HaydnBelfield'],2022-09-23T09:28:45Z,eaforum,, 134506,https://forum.effectivealtruism.org/posts/JsS5vuiHEoBMbYk5R/usd20k-in-bounties-for-ai-safety-public-materials,$20K in Bounties for AI Safety Public Materials,"['ThomasW', 'Dan H', 'Oliver Z']",2022-08-05T02:57:31Z,eaforum,, 134521,https://forum.effectivealtruism.org/posts/7kj38wnMANwEAp6AT/how-could-ai-governance-go-wrong,How Could AI Governance Go Wrong?,['HaydnBelfield'],2022-05-26T21:29:42Z,eaforum,, 134553,https://forum.effectivealtruism.org/posts/opCxiPwxFcaaayyMB/relationship-between-ea-community-and-ai-safety,Relationship between EA Community and AI safety,['Tom Barnes'],2023-09-18T13:49:43Z,eaforum,, 134566,https://forum.effectivealtruism.org/posts/hSugooaEQNTeKFsDu/who-ordered-alignment-s-apple,Who ordered alignment's apple?,['Eleni_A'],2022-08-28T14:24:14Z,eaforum,, 134579,https://forum.effectivealtruism.org/posts/hwyzytrEhdDeoyPzH/apply-to-the-cambridge-ml-for-alignment-bootcamp-camlab-26,Apply to the Cambridge ML for Alignment Bootcamp (CaMLAB) [26 March - 8 April],['hannah'],2023-02-09T16:32:24Z,eaforum,, 134598,https://forum.effectivealtruism.org/posts/2qZSv3skTT3pgEcGu/chatgpt-is-capable-of-cognitive-empathy,ChatGPT is capable of cognitive empathy!,['mikbp'],2023-03-30T20:42:31Z,eaforum,, 134614,https://forum.effectivealtruism.org/posts/ctPrrzFnXGyWrmK3w/eu-ai-act-passed-vote-and-x-risk-was-a-main-topic,"EU AI Act passed vote, and x-risk was a main topic",['Ariel G.'],2023-06-15T13:16:27Z,eaforum,, 134631,https://forum.effectivealtruism.org/posts/xqbm65f7TZbjfhsz4/longevity-research-as-ai-x-risk-intervention,Longevity research as AI X-risk intervention,['DirectedEvolution'],2022-11-06T17:58:09Z,eaforum,, 134652,https://forum.effectivealtruism.org/posts/gFoWdiGYtXrhmBusH/key-questions-about-artificial-sentience-an-opinionated,Key questions about artificial sentience: an opinionated guide,['rgb'],2022-04-25T13:43:00Z,eaforum,, 134673,https://forum.effectivealtruism.org/posts/m6zJ8xTuQp398uopy/birds-brains-planes-and-ai-against-appeals-to-the-complexity,"Birds, Brains, Planes, and AI: Against Appeals to the Complexity/Mysteriousness/Efficiency of the Brain",['kokotajlod'],2021-01-18T12:39:30Z,eaforum,, 134683,https://forum.effectivealtruism.org/posts/xmun77hGeBbg4AjxJ/an-exercise-in-speed-reading-the-national-security,An Exercise in Speed-Reading: The National Security Commission on AI (NSCAI) Final Report,['abiolvera'],2022-08-17T16:55:34Z,eaforum,, 
134694,https://forum.effectivealtruism.org/posts/dPh5FgqwuQGGA6FSr/ai-alignment-is-intractable-and-we-humans-should-stop,AI Alignment is intractable (and we humans should stop working on it),['GPT 3'],2022-07-28T20:02:40Z,eaforum,, 134707,https://forum.effectivealtruism.org/posts/yjvhEqBshzxMjKL9g/an-argument-for-accelerating-international-ai-governance-1,An argument for accelerating international AI governance research (part 2),['MattThinks'],2023-08-22T22:40:32Z,eaforum,, 134722,https://forum.effectivealtruism.org/posts/XmKhYQfnfqb3Z7Dkr/aligning-ai-with-humans-by-leveraging-legal-informatics,Aligning AI with Humans by Leveraging Legal Informatics,['johnjnay'],2022-09-18T07:43:10Z,eaforum,, 134735,https://forum.effectivealtruism.org/posts/kJWqg4JjGcJCF5gyj/is-ai-like-disk-drives,Is AI like disk drives?,['Tanae'],2023-09-02T19:12:34Z,eaforum,, 134747,https://forum.effectivealtruism.org/posts/BFBf5yPLoJMGozygE/current-uk-government-levers-on-ai-development,Current UK government levers on AI development,['rosehadshar'],2023-04-10T13:16:22Z,eaforum,, 134765,https://forum.effectivealtruism.org/posts/otQtErQEB6R4GCDwF/working-in-congress-part-1-background-and-some-ea-cause-area-1,Working in Congress (Part #1): Background and some EA cause area analysis,['US Policy Careers'],2021-04-11T18:24:18Z,eaforum,, 134788,https://forum.effectivealtruism.org/posts/orpsrpYDMAdWRPFZW/should-the-ea-community-have-a-dl-engineering-fellowship,Should the EA community have a DL engineering fellowship?,['PabloAMC'],2021-12-24T13:43:39Z,eaforum,, 134809,https://forum.effectivealtruism.org/posts/2hwhxpFfjR3Bhf3Ya/what-can-we-do-now-to-prepare-for-ai-sentience-in-order-to,"What can we do now to prepare for AI sentience, in order to protect them from the global scale of human sadism?",['rime'],2023-04-18T09:58:37Z,eaforum,, 134829,https://forum.effectivealtruism.org/posts/QXywXmka8pACPuiHq/lpp-summer-research-fellowship-in-law-and-ai-2023,LPP Summer Research Fellowship in Law & AI 2023: Applications Open,['Legal Priorities Project'],2023-06-20T14:31:23Z,eaforum,, 134866,https://forum.effectivealtruism.org/posts/4TcaBNu7EmEukjGoc/questions-about-ai-that-bother-me,Questions about AI that bother me,['Eleni_A'],2023-01-31T06:50:44Z,eaforum,, 134887,https://forum.effectivealtruism.org/posts/pC9RJdmP3rnhuHpCm/emerging-paradigms-the-case-of-artificial-intelligence,Emerging Paradigms: The Case of Artificial Intelligence Safety,['Eleni_A'],2023-01-18T05:59:07Z,eaforum,, 134907,https://forum.effectivealtruism.org/posts/fniRhiPYw8b6FETsn/jade-leung-why-companies-should-be-leading-on-ai-governance,Jade Leung: Why companies should be leading on AI governance,['EA Global'],2019-05-15T23:37:37Z,eaforum,, 134931,https://forum.effectivealtruism.org/posts/caqjHNvAQc6B8auHM/summary-existential-risk-from-power-seeking-ai-by-joseph,Summary: Existential risk from power-seeking AI by Joseph Carlsmith,['rileyharris'],2023-10-28T15:05:46Z,eaforum,, 134959,https://forum.effectivealtruism.org/posts/xbF8fStkkRWbF9xg5/searle-vs-bostrom-crucial-considerations-for-ea-ai-work,Searle vs Bostrom: crucial considerations for EA AI work?,['Forumite'],2022-07-13T10:18:11Z,eaforum,, 134968,https://forum.effectivealtruism.org/posts/KGGDduXSwZQTQJ9xc/what-do-xpt-forecasts-tell-us-about-ai-timelines,What do XPT forecasts tell us about AI timelines?,"['rosehadshar', 'Forecasting Research Institute']",2023-07-21T08:30:19Z,eaforum,, 134983,https://forum.effectivealtruism.org/posts/uWgQv2gigQvDhQum6/the-ai-guide-i-m-sending-my-grandparents,The AI guide I'm sending my grandparents,['James Martin'],2023-04-27T20:04:17Z,eaforum,, 
135022,https://forum.effectivealtruism.org/posts/wXzc75txE5hbHqYug/the-great-energy-descent-short-version-an-important-thing-ea,The great energy descent (short version) - An important thing EA might have missed,['Corentin Biteau'],2022-08-31T21:50:44Z,eaforum,, 135055,https://forum.effectivealtruism.org/posts/EWiCySDcLSyiHTRQn/a-theologian-s-response-to-anthropogenic-existential-risk,A Theologian's Response to Anthropogenic Existential Risk,['Fr Peter Wyg'],2022-11-03T04:37:53Z,eaforum,, 135069,https://forum.effectivealtruism.org/posts/FdYzaKxaP7hJNa5AF/my-attempt-at-explaining-the-case-for-ai-risk-in-a,My attempt at explaining the case for AI risk in a straightforward way,['JulianHazell'],2023-03-25T16:32:02Z,eaforum,, 135093,https://forum.effectivealtruism.org/posts/AQRvQ3AuQaPmuurk8/mathematical-circuits-in-neural-networks,Mathematical Circuits in Neural Networks,['Sean Osier'],2022-09-22T02:32:22Z,eaforum,, 135104,https://forum.effectivealtruism.org/posts/pX63E56uNkQgHJvx6/ai-relevant-regulation-iaea,AI-Relevant Regulation: IAEA,['SWK'],2023-07-15T18:20:40Z,eaforum,, 135123,https://forum.effectivealtruism.org/posts/uB8BgEvvu5YXerFbw/intro-to-ml-safety-virtual-program-12-june-14-august-1,Intro to ML Safety virtual program: 12 June - 14 August,"['james', 'Oliver Z']",2023-05-05T10:04:59Z,eaforum,, 135151,https://forum.effectivealtruism.org/posts/DBDpnAhxvRWmfmtfv/microdooms-averted-by-working-on-ai-safety,Microdooms averted by working on AI Safety,['Nikola'],2023-09-17T21:51:26Z,eaforum,, 135173,https://forum.effectivealtruism.org/posts/9piqRDGX6BisdMdRw/can-we-survive-technology-by-john-von-neumann,"""Can We Survive Technology?"" by John von Neumann",['Eli Rose'],2023-03-13T02:26:49Z,eaforum,, 135189,https://forum.effectivealtruism.org/posts/TmnYEfiqxFtAXDaCd/de-dicto-and-de-se-reference-matters-for-alignment,De Dicto and De Se Reference Matters for Alignment,['philgoetz'],2023-10-03T21:57:55Z,eaforum,, 135205,https://forum.effectivealtruism.org/posts/HEszzR4Am7PxN3hBG/here-are-the-finalists-from-fli-s-usd100k-worldbuilding,Here are the finalists from FLI’s $100K Worldbuilding Contest,['Jackson Wagner'],2022-06-06T18:42:35Z,eaforum,, 135226,https://forum.effectivealtruism.org/posts/QEifHsCzHzuKtF82F/mitigating-ethical-concerns-and-risks-in-the-us-approach-to,Mitigating Ethical Concerns and Risks in the US Approach to Autonomous Weapons Systems through Effective Altruism,['Vee'],2023-06-11T10:37:28Z,eaforum,, 135255,https://forum.effectivealtruism.org/posts/LLfaikCmysmdxussN/fiction-improved-governance-on-the-critical-path-to-ai,[Fiction] Improved Governance on the Critical Path to AI Alignment by 2045.,['Jackson Wagner'],2022-05-18T15:50:29Z,eaforum,, 135277,https://forum.effectivealtruism.org/posts/AC5jfXrBntwgHtZcR/fanaticism-in-ai-seri-project,Fanaticism in AI: SERI Project,['Jake Arft-Guatelli'],2021-09-24T04:39:34Z,eaforum,, 135290,https://forum.effectivealtruism.org/posts/ByHc6jdXF9skwevYf/the-happiness-maximizer-why-ea-is-an-x-risk,The Happiness Maximizer: Why EA is an x-risk,['Obasi Shaw'],2022-08-30T04:29:26Z,eaforum,, 135313,https://forum.effectivealtruism.org/posts/i8Eseu6HXHKp37Hye/linkpost-sharing-powerful-ai-models-the-emerging-paradigm-of,[linkpost] Sharing powerful AI models: the emerging paradigm of structured access,['ts'],2022-01-20T21:10:47Z,eaforum,, 135324,https://forum.effectivealtruism.org/posts/Ck2hHcNnvHZpFNm5T/what-are-the-risks-of-an-oracle-ai,What are the risks of an oracle AI?,['Griffin Young'],2022-10-05T06:18:37Z,eaforum,, 
135333,https://forum.effectivealtruism.org/posts/c6RnqjBd3BAkqsknB/the-us-expands-restrictions-on-ai-exports-to-china-what-are,The US expands restrictions on AI exports to China. What are the x-risk effects?,['Stephen Clare'],2022-10-14T18:17:54Z,eaforum,, 135348,https://forum.effectivealtruism.org/posts/mSdnDYzfqh5MEYgox/spicy-takes-about-ai-policy-clark-2022,"Spicy takes about AI policy (Clark, 2022)",['Will Aldred'],2022-08-09T13:49:32Z,eaforum,, 135372,https://forum.effectivealtruism.org/posts/SiF3iWGSFn562vbGr/conversation-on-ai-risk-with-adam-gleave,Conversation on AI risk with Adam Gleave,['AI Impacts'],2019-12-27T21:43:37Z,eaforum,, 135393,https://forum.effectivealtruism.org/posts/DHJh3fuXK3TCtBsNq/linkpost-the-a-i-dilemma-march-9-2023-with-tristan-harris,"[Linkpost] The A.I. Dilemma - March 9, 2023, with Tristan Harris and Aza Raskin",['PeterSlattery'],2023-04-14T08:00:31Z,eaforum,, 135429,https://forum.effectivealtruism.org/posts/ZgbHXyushSdxxNjS2/join-aisafety-info-s-writing-and-editing-hackathon-aug-25-28,Join AISafety.info's Writing & Editing Hackathon (Aug 25-28) (Prizes to be won!),"['Siao Si', 'Stampy']",2023-08-05T14:06:06Z,eaforum,, 135445,https://forum.effectivealtruism.org/posts/AZyJdher64htcpKti/re-some-thoughts-on-vegetarianism-and-veganism,Re: Some thoughts on vegetarianism and veganism,['Fai'],2022-02-25T20:43:51Z,eaforum,, 135473,https://forum.effectivealtruism.org/posts/9oDMuY2cGfqBfp94T/some-thoughts-on-ai-could-defeat-all-of-us-combined,"Some thoughts on ""AI could defeat all of us combined""",['Milan_Griffes'],2023-06-02T15:03:07Z,eaforum,, 135498,https://forum.effectivealtruism.org/posts/A4KaDtjvGBANFa9Bb/resilience-via-fragmented-power,Resilience Via Fragmented Power,['steve6320'],2022-07-14T15:37:22Z,eaforum,, 135522,https://forum.effectivealtruism.org/posts/3xwSa4cE9eaxfo5mH/aisn-23-new-openai-models-news-from-anthropic-and,"AISN #23: New OpenAI Models, News from Anthropic, and Representation Engineering","['Center for AI Safety', 'aogara', 'Dan H']",2023-10-04T17:10:43Z,eaforum,, 135552,https://forum.effectivealtruism.org/posts/Yk4D4DZpx6eriMDyY/statement-on-ai-extinction-signed-by-agi-labs-top-academics,"Statement on AI Extinction - Signed by AGI Labs, Top Academics, and Many Other Notable Figures",['Center for AI Safety'],2023-05-30T09:06:20Z,eaforum,, 135562,https://forum.effectivealtruism.org/posts/QDBntBeBWJ94EQdou/time-stamping-an-urgent-neglected-ai-safety-measure,"Time-stamping: An urgent, neglected AI safety measure",['Axel Svensson'],2023-01-30T11:21:38Z,eaforum,, 135573,https://forum.effectivealtruism.org/posts/udGrjhfYqxv7GhWA4/creative-writing-contest-an-ai-safety-limerick,[Creative Writing Contest] An AI Safety Limerick,['Ben_West'],2021-10-18T19:11:32Z,eaforum,, 135585,https://forum.effectivealtruism.org/posts/22zk3tZyYWoanQwt7/training-for-good-update-and-plans-for-2023,Training for Good - Update & Plans for 2023,"['Cillian Crosson', 'Training for Good', 'SteveThompson', 'Jan-Willem']",2022-11-15T16:02:18Z,eaforum,, 135601,https://forum.effectivealtruism.org/posts/RuPKfSELEC2nXYX57/how-roodman-s-gwp-model-translates-to-tai-timelines,How Roodman's GWP model translates to TAI timelines,['kokotajlod'],2020-11-16T14:11:39Z,eaforum,, 135619,https://forum.effectivealtruism.org/posts/wpQ2qhF8Z6oonsaPX/announcing-ai-safety-support,Announcing AI Safety Support,['Linda Linsefors'],2020-11-19T20:19:58Z,eaforum,, 
135640,https://forum.effectivealtruism.org/posts/jq5cbCxERw8t6PPQ8/skilling-up-in-ml-engineering-for-alignment-request-for,Skilling-up in ML Engineering for Alignment: request for comments,['TheMcDouglas'],2022-04-24T06:40:21Z,eaforum,, 135651,https://forum.effectivealtruism.org/posts/wQAYidiuiC42h4BKX/maybe-ai-risk-shouldn-t-affect-your-life-plan-all-that-much,Maybe AI risk shouldn't affect your life plan all that much,['Justis'],2022-07-22T15:30:23Z,eaforum,, 135671,https://forum.effectivealtruism.org/posts/Ci2Lh5fuwBqtKSG72/linkpost-given-extinction-worries-why-don-t-ai-researchers,"[Linkpost] Given Extinction Worries, Why Don’t AI Researchers Quit? Well, Several Reasons",['Daniel_Eth'],2023-06-06T07:31:53Z,eaforum,, 135692,https://forum.effectivealtruism.org/posts/APxdBEvcgGsmK5LAp/governance-of-ai-breakfast-cereal-car-factories-etc,"Governance of AI, Breakfast Cereal, Car Factories, Etc.",['Jeff Martin'],2023-11-06T01:44:29Z,eaforum,, 135710,https://forum.effectivealtruism.org/posts/2sn8RWPaChvyuHCcp/ama-the-new-open-philanthropy-technology-policy-fellowship,AMA: The new Open Philanthropy Technology Policy Fellowship,['lukeprog'],2021-07-26T15:11:51Z,eaforum,, 135719,https://forum.effectivealtruism.org/posts/kN3HgzDajBRAyS3sS/the-flaws-that-make-today-s-ai-architecture-unsafe-and-a-new,The flaws that make today's AI architecture unsafe and a new approach that could fix it,['80000_Hours'],2020-06-22T22:15:00Z,eaforum,, 135752,https://forum.effectivealtruism.org/posts/6ajPou3jMjicwsnEs/vignettes-workshop-ai-impacts,Vignettes Workshop (AI Impacts),['kokotajlod'],2021-06-15T11:02:04Z,eaforum,, 135761,https://forum.effectivealtruism.org/posts/SQSXfiByKat2YzpWu/new-roles-on-my-team-come-build-open-phil-s-technical-ai,New roles on my team: come build Open Phil's technical AI safety program with me!,['Ajeya'],2023-10-19T16:46:33Z,eaforum,, 135782,https://forum.effectivealtruism.org/posts/M68oj7fwXoPFJisap/we-should-expect-to-worry-more-about-speculative-risks,We should expect to worry more about speculative risks,['bgarfinkel'],2022-05-29T21:08:57Z,eaforum,, 135792,https://forum.effectivealtruism.org/posts/2sepfMDwgRfBpQC8S/linkpost-openai-leaders-call-for-regulation-of,"[Linkpost] OpenAI leaders call for regulation of ""superintelligence"" to reduce existential risk.",['Lowe'],2023-05-25T14:14:18Z,eaforum,, 135812,https://forum.effectivealtruism.org/posts/rExHeXfikaAxdMiDv/delegated-agents-in-practice-how-companies-might-end-up,"Delegated agents in practice: How companies might end up selling AI services that act on behalf of consumers and coalitions, and what this implies for safety research",['Remmelt'],2020-11-26T16:39:59Z,eaforum,, 135843,https://forum.effectivealtruism.org/posts/AW4iRhriRHkdGokLp/research-summary-forecasting-with-large-language-models,Research Summary: Forecasting with Large Language Models,['Damien Laird'],2023-04-02T10:52:04Z,eaforum,, 135872,https://forum.effectivealtruism.org/posts/vuATadXMheRhBvXfi/sam-altman-safety-and-capabilities-are-not-these-two,"Sam Altman: ""safety and capabilities are not these two separate things""",['Yarrow Bouchard'],2023-11-03T05:07:47Z,eaforum,, 135881,https://forum.effectivealtruism.org/posts/NkAPQnDuSMDLwziYY/mitigating-existential-risks-associated-with-human-nature,Mitigating existential risks associated with human nature and AI: Thoughts on serious measures.,['Paul J. Watson'],2023-03-25T19:10:12Z,eaforum,, 
135906,https://forum.effectivealtruism.org/posts/SAvkXAwrzdhecAaCj/new-career-review-ai-safety-technical-research,New career review: AI safety technical research,"['Benjamin Hilton', '80000_Hours']",2023-07-17T15:34:53Z,eaforum,, 135950,https://forum.effectivealtruism.org/posts/w5cmtouHZxGLondEA/governments-pose-larger-risks-than-corporations-a-brief,Governments pose larger risks than corporations: a brief response to Grace,['David Johnston'],2022-10-19T11:54:55Z,eaforum,, 135962,https://forum.effectivealtruism.org/posts/z63MFmYXSCHeFxRz3/crucial-considerations-in-the-field-of-wild-animal-welfare,Crucial considerations in the field of Wild Animal Welfare (WAW),['Holly_Elmore'],2022-04-10T19:43:43Z,eaforum,, 135985,https://forum.effectivealtruism.org/posts/LD6wKNdPbxfdgYnao/concrete-actions-to-improve-ai-governance-the-behaviour,Concrete actions to improve AI governance: the behaviour science approach,['AlexanderSaeri'],2022-12-01T21:34:00Z,eaforum,, 136016,https://forum.effectivealtruism.org/posts/qdkow9kQhuqtoyxxs/emergent-ventures-ai,Emergent Ventures AI,['Gavin'],2022-04-08T22:08:24Z,eaforum,, 136025,https://forum.effectivealtruism.org/posts/c2yZNSwvccJGrjmMM/four-questions-i-ask-ai-safety-researchers,Four questions I ask AI safety researchers,['Akash'],2022-07-17T17:25:25Z,eaforum,, 136041,https://forum.effectivealtruism.org/posts/cjEaCKmRbfa5jmPop/public-opinion-on-ai-safety-aims-2023-and-2021-summary,Public Opinion on AI Safety: AIMS 2023 and 2021 Summary,"['Janet Pauketat', 'Ali', 'Jacy']",2023-09-25T18:09:43Z,eaforum,, 136063,https://forum.effectivealtruism.org/posts/N4LKrktopDs5Qdqgn/an-introduction-to-critiques-of-prominent-ai-safety,An Introduction to Critiques of prominent AI safety organizations,['Omega'],2023-07-19T06:53:51Z,eaforum,, 136082,https://forum.effectivealtruism.org/posts/6j6qgNa3uGmzJEMoN/artificial-intelligence-as-exit-strategy-from-the-age-of,Artificial Intelligence as exit strategy from the age of acute existential risk,['Arturo Macias'],2023-04-12T14:41:45Z,eaforum,, 136102,https://forum.effectivealtruism.org/posts/eggdG27y75ot8dNn7/three-pillars-for-avoiding-agi-catastrophe-technical,"Three pillars for avoiding AGI catastrophe: Technical alignment, deployment decisions, and coordination",['alexlintz'],2022-08-03T21:24:21Z,eaforum,, 136126,https://forum.effectivealtruism.org/posts/tkXPqvpCGaeNqBgSe/new-report-on-the-state-of-ai-safety-in-china,New report on the state of AI safety in China,['Geoffrey Miller'],2023-10-27T20:20:21Z,eaforum,, 136168,https://forum.effectivealtruism.org/posts/Zu2CTGP5xDR9nusoG/agi-risk-how-to-internationally-regulate-industries-in-non,AGI Risk: How to internationally regulate industries in non-democracies,['Timothy_Liptrot'],2022-05-16T22:45:04Z,eaforum,, 136184,https://forum.effectivealtruism.org/posts/cEj7o9rbPjmy7CDht/law-following-ai-2-intent-alignment-superintelligence,Law-Following AI 2: Intent Alignment + Superintelligence → Lawless AI (By Default),['Cullen'],2022-04-27T17:18:44Z,eaforum,, 136205,https://forum.effectivealtruism.org/posts/8JazqnCNrkJtK2Bx4/why-eas-are-skeptical-about-ai-safety,Why EAs are skeptical about AI Safety,['Lukas Trötzmüller'],2022-07-18T19:01:37Z,eaforum,, 136224,https://forum.effectivealtruism.org/posts/iDYt2e4skogJEn946/potential-risks-from-advanced-ai,Potential Risks from Advanced AI,['EA Global'],2017-08-13T07:00:00Z,eaforum,, 
136248,https://forum.effectivealtruism.org/posts/nhsdeCEZAaBQQaro8/using-artificial-intelligence-machine-vision-to-increase-the-1,Using artificial intelligence (machine vision) to increase the effectiveness of human-wildlife conflict mitigations could benefit WAW,"['Rethink Priorities', 'Tapinder Sidhu']",2022-10-28T16:23:47Z,eaforum,, 136276,https://forum.effectivealtruism.org/posts/DNm5sbFogr9wvDasH/thoughts-on-yesterday-s-un-security-council-meeting-on-ai,Thoughts on yesterday’s UN Security Council meeting on AI,['Greg_Colbourn'],2023-07-19T16:46:49Z,eaforum,, 136293,https://forum.effectivealtruism.org/posts/XxWsAw7DefKipzRLc/my-summary-of-pragmatic-ai-safety,My summary of “Pragmatic AI Safety”,['Eleni_A'],2022-11-05T14:47:47Z,eaforum,, 136330,https://forum.effectivealtruism.org/posts/3rf99yiGhjDdBDeCJ/ai-safety-bounties,AI Safety Bounties,['PatrickL'],2023-08-24T14:30:13Z,eaforum,, 136358,https://forum.effectivealtruism.org/posts/aQ6QP3rLsLcZqYodr/ai-risks-the-most-convincing-argument,AI risks: the most convincing argument,['Eleni_A'],2022-08-06T20:26:50Z,eaforum,, 136371,https://forum.effectivealtruism.org/posts/rZoRGxJzipcQoaPST/how-many-people-are-working-directly-on-reducing-existential,How many people are working (directly) on reducing existential risk from AI?,"['Benjamin Hilton', '80000_Hours']",2023-01-17T14:03:35Z,eaforum,, 136381,https://forum.effectivealtruism.org/posts/5NcCWNC3yWdqeaEdH/link-post-michael-nielsen-s-notes-on-existential-risk-from,"[Link post] Michael Nielsen's ""Notes on Existential Risk from Artificial Superintelligence""",['Joel Becker'],2023-09-19T13:31:01Z,eaforum,, 136401,https://forum.effectivealtruism.org/posts/wsCXLzXWEPr5pwHWm/donating-against-short-term-ai-risks,Donating against Short Term AI risks,['Jan-Willem'],2020-11-16T12:23:10Z,eaforum,, 136414,https://forum.effectivealtruism.org/posts/gpNZbrSjHMHYqhvHn/the-existential-risk-of-speciesist-bias-in-ai,The Existential Risk of Speciesist Bias in AI,['Sam Tucker'],2023-11-11T03:27:09Z,eaforum,, 136426,https://forum.effectivealtruism.org/posts/Rx3baBysEhdQFzPdo/markus-anderljung-on-the-ai-policy-landscape,Markus Anderljung On The AI Policy Landscape,['Michaël Trazzi'],2022-09-09T17:27:21Z,eaforum,, 136461,https://forum.effectivealtruism.org/posts/3J8aBk8wc668CJnbb/rethink-priorities-is-looking-for-a-co-founder-for-a-new,Rethink Priorities is looking for a (Co-)Founder for a New Project: Field Building in Universities for AI Policy Careers in the US,['KevinN'],2023-08-28T16:01:56Z,eaforum,, 136474,https://forum.effectivealtruism.org/posts/9qrAhvNi27AKtyKAw/news-spanish-ai-image-outcry-us-ai-workforce-regulation,"News: Spanish AI image outcry + US AI workforce ""regulation""",['Ulrik Horn'],2023-09-26T07:43:39Z,eaforum,, 136489,https://forum.effectivealtruism.org/posts/HmDbjHBgopNDA6mrW/which-possible-ai-impacts-should-receive-the-most-additional,Which possible AI impacts should receive the most additional attention?,['David Johnston'],2022-05-31T02:01:38Z,eaforum,, 136498,https://forum.effectivealtruism.org/posts/XFGdTab6eMJriDMGD/announcing-aisummittalks-featuring-professor-stuart-russell,Announcing #AISummitTalks featuring Professor Stuart Russell and many others,['Otto'],2023-10-24T10:16:14Z,eaforum,, 136507,https://forum.effectivealtruism.org/posts/yQHzdmXa7KBB52fBz/ai-risk-and-survivorship-bias-how-andreessen-and-lecun-got,AI Risk and Survivorship Bias - How Andreessen and LeCun got it wrong,['stepanlos'],2023-07-14T17:10:49Z,eaforum,, 
136521,https://forum.effectivealtruism.org/posts/oeAdt2GukZ3KayFhM/podcast-krister-bykvist-on-moral-uncertainty-rationality,"Podcast: Krister Bykvist on moral uncertainty, rationality, metaethics, AI and future populations",['Gus Docker'],2021-10-21T15:17:38Z,eaforum,, 136537,https://forum.effectivealtruism.org/posts/JbScJgCDedXaBgyKC/what-if-we-don-t-need-a-hard-left-turn-to-reach-agi,"What if we don't need a ""Hard Left Turn"" to reach AGI?",['Eigengender'],2022-07-15T09:49:12Z,eaforum,, 136553,https://forum.effectivealtruism.org/posts/6jxrzk99eEjsBxoMA/go-mobilize-lessons-from-gm-protests-for-pausing-ai,Go Mobilize? Lessons from GM Protests for Pausing AI,['charlieh943'],2023-10-24T15:01:49Z,eaforum,, 136590,https://forum.effectivealtruism.org/posts/pZmjeb5RddWqsjp2j/new-open-letter-on-ai-include-consciousness-research,"New open letter on AI — ""Include Consciousness Research""",['Jamie_Harris'],2023-04-28T07:50:12Z,eaforum,, 136606,https://forum.effectivealtruism.org/posts/Kd759DsB68t8wxCES/will-the-eu-regulations-on-ai-matter-to-the-rest-of-the,Will the EU regulations on AI matter to the rest of the world?,['anonymous'],2022-01-01T21:56:48Z,eaforum,, 136631,https://forum.effectivealtruism.org/posts/oB3MnFQa8LqcuEhjG/background-for-understanding-the-diffusion-of-large-language,"Background for ""Understanding the diffusion of large language models""",['Ben Cottier'],2022-12-21T13:49:36Z,eaforum,, 136655,https://forum.effectivealtruism.org/posts/7hyLqG27skzfGR3ze/what-are-the-most-pressing-issues-in-short-term-ai-policy,What are the most pressing issues in short-term AI policy?,['BrownHairedEevee'],2020-01-14T22:05:11Z,eaforum,, 136665,https://forum.effectivealtruism.org/posts/iwTr8S8QkutyYroGy/apply-to-the-ml-for-alignment-bootcamp-mlab-in-berkeley-jan,Apply to the ML for Alignment Bootcamp (MLAB) in Berkeley [Jan 3 - Jan 22],"['Habryka', 'Buck']",2021-11-03T18:20:38Z,eaforum,, 136676,https://forum.effectivealtruism.org/posts/P7x4hbanGKE2adfxe/call-for-pythia-style-foundation-model-suite-for-alignment,Call for Pythia-style foundation model suite for alignment research,['Lucretia'],2023-05-01T20:26:38Z,eaforum,, 136687,https://forum.effectivealtruism.org/posts/PMhKbnaky7hMopWhM/help-us-design-the-interface-for-aisafety-com,Help us design the interface for aisafety.com,['Kim Holder'],2023-10-23T17:27:27Z,eaforum,, 136697,https://forum.effectivealtruism.org/posts/LprnaEj3uhkmYtmat/disentangling-arguments-for-the-importance-of-ai-safety,Disentangling arguments for the importance of AI safety,['richard_ngo'],2019-01-23T14:58:28Z,eaforum,, 136731,https://forum.effectivealtruism.org/posts/XxgQ9KaqDEpdxMBmc/ai-predictions-future-fund-ai-worldview-prize-submission,"""AI predictions"" (Future Fund AI Worldview Prize submission)","['ketanrama', 'Nick_Beckstead', 'leopold', 'ab', 'William_MacAskill']",2022-11-05T17:51:31Z,eaforum,, 136770,https://forum.effectivealtruism.org/posts/Eu4ZDCt2yaKavtQ9s/ai-safety-newsletter-2-chaosgpt-natural-selection-and-ai,"AI Safety Newsletter #2: ChaosGPT, Natural Selection, and AI Safety in the Media","['Oliver Z', 'Dan H', 'Akash', 'aogara']",2023-04-18T18:36:33Z,eaforum,, 136799,https://forum.effectivealtruism.org/posts/BGWmAqrk64q2w6JjM/critique-of-superintelligence-part-2,Critique of Superintelligence Part 2,['Fods12'],2018-12-13T05:12:50Z,eaforum,, 136816,https://forum.effectivealtruism.org/posts/sXJkaQFFYodhEXNvr/alignment-and-capabilities-what-s-the-difference,Alignment & Capabilities: What's the difference?,['John G. Halstead'],2023-08-31T22:13:37Z,eaforum,, 
136828,https://forum.effectivealtruism.org/posts/6FdvKpxey9gLRe8S8/chaining-retroactive-funders-to-borrow-against-unlikely,Chaining Retroactive Funders to Borrow Against Unlikely Utopias,['Dawn Drescher'],2022-04-19T18:25:58Z,eaforum,, 136845,https://forum.effectivealtruism.org/posts/FQd2Awx8oPs9HBqev/agi-timelines-ignore-the-social-factor-at-their-peril-future,"""AGI timelines: ignore the social factor at their peril"" (Future Fund AI Worldview Prize submission)","['ketanrama', 'Nick_Beckstead', 'leopold', 'ab', 'William_MacAskill']",2022-11-05T17:45:49Z,eaforum,, 136864,https://forum.effectivealtruism.org/posts/JQxvZZdPG5KYjyBfg/four-mindset-disagreements-behind-existential-risk,Four mindset disagreements behind existential risk disagreements in ML,['RobBensinger'],2023-04-11T04:53:48Z,eaforum,, 136889,https://forum.effectivealtruism.org/posts/iR3cwZgoQe3R47Lgr/training-for-good-is-hiring-and-why-you-should-join-us-ai,Training for Good is hiring (and why you should join us): AI Programme Lead and Operations Associate,"['Cillian Crosson', 'Training for Good']",2023-08-03T16:50:20Z,eaforum,, 136902,https://forum.effectivealtruism.org/posts/XYC8jmM4WPCDYZZmm/i-don-t-want-to-talk-about-ai,I don't want to talk about ai,"['Kirsten', 'EA Lifestyles']",2023-05-22T21:19:06Z,eaforum,, 136914,https://forum.effectivealtruism.org/posts/wxxoRHmisojF6Y2qD/un-secretary-general-recognises-existential-threat-from-ai,UN Secretary-General recognises existential threat from AI,['Greg_Colbourn'],2023-06-15T17:03:04Z,eaforum,, 136924,https://forum.effectivealtruism.org/posts/DkQaJwYMkSFN6E3f9/what-should-the-average-ea-do-about-ai-alignment,What Should the Average EA Do About AI Alignment?,['Raemon'],2017-02-25T20:07:11Z,eaforum,, 136951,https://forum.effectivealtruism.org/posts/BkcnuZyKcDZBpSrQS/il-panorama-della-governance-lungoterminista-delle,Il panorama della governance lungoterminista delle intelligenze artificiali,['EA Italy'],2023-01-17T11:03:47Z,eaforum,, 136990,https://forum.effectivealtruism.org/posts/fPJfMWL5znqSzDSny/what-work-has-been-done-on-the-post-agi-distribution-of,What work has been done on the post-AGI distribution of wealth?,['levin'],2022-07-06T18:59:26Z,eaforum,, 137000,https://forum.effectivealtruism.org/posts/fGfXrbtBJJasA2EKj/most-leading-ai-experts-believe-that-advanced-ai-could-be,Most Leading AI Experts Believe That Advanced AI Could Be Extremely Dangerous to Humanity,['jai'],2023-05-04T16:19:11Z,eaforum,, 137012,https://forum.effectivealtruism.org/posts/DuPEzGJ5oscqxD5oh/shah-and-yudkowsky-on-alignment-failures,Shah and Yudkowsky on alignment failures,"['EliezerYudkowsky', 'Rohin Shah']",2022-02-28T19:25:13Z,eaforum,, 137046,https://forum.effectivealtruism.org/posts/yNxxtd8HAcEukCb8Z/the-positive-case-for-a-focus-on-achieving-safe-ai,The positive case for a focus on achieving safe AI?,['vipulnaik'],2021-06-25T04:01:24Z,eaforum,, 137055,https://forum.effectivealtruism.org/posts/yPpCCC4REq3zKXWdJ/review-what-we-owe-the-future,Review: What We Owe The Future,['Kelsey Piper'],2022-11-21T21:41:07Z,eaforum,, 137064,https://forum.effectivealtruism.org/posts/vEL3aZDXTbLHAe25o/best-project-management-software-for-research-projects-and,Best project management software for research projects and labs?,['PeterSlattery'],2023-10-05T18:38:31Z,eaforum,, 137073,https://forum.effectivealtruism.org/posts/e3kLF5qPE8cRqsF8v/sixty-years-after-the-cuban-missile-crisis-a-new-era-of,"Sixty years after the Cuban Missile Crisis, a new era of global catastrophic risks",['christian.r'],2022-10-13T11:25:27Z,eaforum,, 
137087,https://forum.effectivealtruism.org/posts/aJwcgm2nqiZu6zq2S/taking-a-leave-of-absence-from-open-philanthropy-to-work-on,Taking a leave of absence from Open Philanthropy to work on AI safety,['Holden Karnofsky'],2023-02-23T19:05:44Z,eaforum,, 137098,https://forum.effectivealtruism.org/posts/nTALzRAWxRnrxvoep/implications-of-the-whitehouse-meeting-with-ai-ceos-for-ai,Implications of the Whitehouse meeting with AI CEOs for AI superintelligence risk - a first-step towards evals?,['Jamie Bernardi'],2023-05-07T17:33:59Z,eaforum,, 137123,https://forum.effectivealtruism.org/posts/ng2h4EmCgaZK2GWF3/assessing-the-state-of-ai-r-and-d-in-the-us-china-and-europe,"Assessing the state of AI R&D in the US, China, and Europe – Part 1: Output indicators",['stefan.torges'],2019-11-01T14:41:10Z,eaforum,, 137141,https://forum.effectivealtruism.org/posts/Sa4ahq8AGTniuuvjE/linkpost-538-politics-podcast-on-ai-risk-and-politics,[Linkpost] 538 Politics Podcast on AI risk & politics,['jackva'],2023-04-11T17:03:07Z,eaforum,, 137157,https://forum.effectivealtruism.org/posts/pdSjwSb4GaZAApLTr/winners-of-the-ai-safety-nudge-competition,Winners of the AI Safety Nudge Competition,"['Marc Carauleanu', 'Chris Leong']",2022-11-15T01:06:28Z,eaforum,, 137166,https://forum.effectivealtruism.org/posts/MDtbDMNvaJsb75FiD/credo-ai-is-hiring-for-several-roles,Credo AI is hiring for several roles,['IanEisenberg'],2022-04-11T15:58:18Z,eaforum,, 137179,https://forum.effectivealtruism.org/posts/MEN9YMyqBJ9AodZri/aisn-22-the-landscape-of-us-ai-legislation-hearings,"AISN #22: The Landscape of US AI Legislation - Hearings, Frameworks, Bills, and Laws","['Center for AI Safety', 'aogara', 'Dan H']",2023-09-19T14:43:50Z,eaforum,, 137215,https://forum.effectivealtruism.org/posts/hJDSSTMcv9teNfHQM/takeaways-from-safety-by-default-interviews,Takeaways from safety by default interviews,['AI Impacts'],2020-04-07T02:02:00Z,eaforum,, 137239,https://forum.effectivealtruism.org/posts/bzchdNZma9TaXY46Y/when-will-digital-compute-match-the-human-brain,When will digital compute match the human brain?,['Yarrow Bouchard'],2023-05-21T06:14:41Z,eaforum,, 137249,https://forum.effectivealtruism.org/posts/AKnBQboyyKz9QdD4T/call-for-papers-on-global-ai-governance-from-the-un,Call for Papers on Global AI Governance from the UN,['Chris Leong'],2023-08-20T08:56:35Z,eaforum,, 137262,https://forum.effectivealtruism.org/posts/svaHSiPFykYs9tYet/should-some-people-start-working-to-influence-the-people-who,"Should some people start working to influence the people who are most likely to shape the values of the first AGIs, so that they take into account the interests of wild and farmed animals and sentient digital minds?",['Keyvan Mostafavi'],2023-08-31T12:08:46Z,eaforum,, 137271,https://forum.effectivealtruism.org/posts/QZRxWqJHYcZnzSJTf/ai-forecasting-resolution-council-forecasting-infrastructure,"AI Forecasting Resolution Council (Forecasting infrastructure, part 2)","['jacobjacob', 'goldhaber']",2019-08-29T17:43:50Z,eaforum,, 137281,https://forum.effectivealtruism.org/posts/nsoFyaasfQipyiWzN/long-term-future-fund-ask-us-anything,Long-Term Future Fund: Ask Us Anything!,['AdamGleave'],2020-12-03T13:44:38Z,eaforum,, 137296,https://forum.effectivealtruism.org/posts/PsHneZoLySmZ7W9dC/idea-an-ai-governance-group-colocated-with-every-ai-research,Idea: an AI governance group colocated with every AI research group!,['capybaralet'],2020-12-07T23:41:05Z,eaforum,, 
137312,https://forum.effectivealtruism.org/posts/ixa4mM9aYF4yyqj84/ai-safety-executive-summary,AI Safety Executive Summary,['Sean Osier'],2022-09-06T08:26:54Z,eaforum,, 137333,https://forum.effectivealtruism.org/posts/49rzRKh2ZYH2QjPkg/safety-evaluations-and-standards-for-ai-or-beth-barnes-or,Safety evaluations and standards for AI | Beth Barnes | EAG Bay Area 23,['Beth Barnes'],2023-06-16T14:15:11Z,eaforum,, 137358,https://forum.effectivealtruism.org/posts/5nPo6nPYZz4F2h5sa/uk-policy-and-politics-careers,UK policy and politics careers,['weeatquince'],2019-09-28T16:18:44Z,eaforum,, 137373,https://forum.effectivealtruism.org/posts/76iiCpwiJNKCGZd9o/on-the-compute-governance-era-and-what-has-to-come-after,"On the compute governance era and what has to come after (Lennart Heim on The 80,000 Hours Podcast)",['80000_Hours'],2023-06-23T20:11:35Z,eaforum,, 137402,https://forum.effectivealtruism.org/posts/vcjLwqLDqNEmvewHY/tips-for-conducting-worldview-investigations,Tips for conducting worldview investigations,['lukeprog'],2022-04-12T19:28:56Z,eaforum,, 137419,https://forum.effectivealtruism.org/posts/MdfLn33GpNWGN7CSE/love-and-ai-relational-brain-mind-dynamics-in-ai-development,Love and AI: Relational Brain/Mind Dynamics in AI Development,['Jeffrey Kursonis'],2022-06-21T07:09:18Z,eaforum,, 137430,https://forum.effectivealtruism.org/posts/prpDSEQXgffZtvPST/ai-risk-increasing-persuasion-power,AI Risk: Increasing Persuasion Power,['kewlcats'],2020-08-03T20:25:35Z,eaforum,, 137447,https://forum.effectivealtruism.org/posts/zvbGXCxc5jBowCuNX/how-technical-safety-standards-could-promote-tai-safety,How technical safety standards could promote TAI safety,"['Cullen', 'Jade Leung', 'MarkusAnderljung']",2022-08-08T16:57:57Z,eaforum,, 137470,https://forum.effectivealtruism.org/posts/nbYRmenLjF3wE45sm/successif-join-our-ai-program-to-help-mitigate-the,Successif: Join our AI program to help mitigate the catastrophic risks of AI,"['ClaireB', 'AzrielZ']",2023-10-25T16:51:29Z,eaforum,, 137492,https://forum.effectivealtruism.org/posts/Q9tiLjgdHTMqFYsii/what-does-bing-chat-tell-us-about-ai-risk,What does Bing Chat tell us about AI risk?,['Holden Karnofsky'],2023-02-28T18:47:12Z,eaforum,,
137515,https://forum.effectivealtruism.org/posts/whEmrvK9pzioeircr/will-ai-end-everything-a-guide-to-guessing-or-eag-bay-area,Will AI end everything? A guide to guessing | EAG Bay Area 23,['Katja_Grace'],2023-05-25T17:01:14Z,eaforum,, 137535,https://forum.effectivealtruism.org/posts/udwe7LDvRFrhp5dFD/join-a-learning-by-writing-group,Join a ‘learning by writing' group,['jwpieters'],2023-04-26T11:36:03Z,eaforum,, 137547,https://forum.effectivealtruism.org/posts/RRaN57QAw8XNi9RXN/why-the-orthogonality-thesis-s-veracity-is-not-the-point,Why the Orthogonality Thesis's veracity is not the point:,['Antoine de Scorraille'],2020-07-23T15:40:28Z,eaforum,, 137564,https://forum.effectivealtruism.org/posts/GfdDZBiFjBb5fogCN/why-we-need-a-new-agency-to-regulate-advanced-artificial,Why we need a new agency to regulate advanced artificial intelligence,['Michael Huang'],2022-08-04T13:38:26Z,eaforum,, 137575,https://forum.effectivealtruism.org/posts/ZAP7gvPpD9YtoymxJ/is-the-time-crunch-for-ai-safety-movement-building-now,Is the time crunch for AI Safety Movement Building now?,['Chris Leong'],2022-06-08T12:19:33Z,eaforum,, 137596,https://forum.effectivealtruism.org/posts/PJxkdzTTYDyrRT99M/ai-relevant-regulation-cern,AI-Relevant Regulation: CERN,['SWK'],2023-07-15T18:40:22Z,eaforum,, 137612,https://forum.effectivealtruism.org/posts/Lbtcjfxhrs8kfKK2M/how-to-make-the-best-of-the-most-important-century-1,How to make the best of the most important century?,['Holden Karnofsky'],2021-09-14T21:05:57Z,eaforum,, 137641,https://forum.effectivealtruism.org/posts/pT9QTeAGT4GmMbT5w/a-course-for-the-general-public-on-ai,A course for the general public on AI,['LeandroD'],2020-08-31T01:29:51Z,eaforum,, 137650,https://forum.effectivealtruism.org/posts/QR7yGoFBonY6hege9/connor-leahy-on-conjecture-and-dying-with-dignity,Connor Leahy on Conjecture and Dying with Dignity,['Michaël Trazzi'],2022-07-22T19:30:15Z,eaforum,, 137680,https://forum.effectivealtruism.org/posts/KqCybin8rtfP3qztq/agi-and-lock-in,AGI and Lock-In,"['Lukas Finnveden', 'Jess_Riedel', 'CarlShulman']",2022-10-29T01:56:10Z,eaforum,, 137707,https://forum.effectivealtruism.org/posts/YBD9BoDaapCfqBmNd/university-community-building-seems-like-the-wrong-model-for,University community building seems like the wrong model for AI safety,['George Stiffman'],2022-02-26T06:23:20Z,eaforum,, 137720,https://forum.effectivealtruism.org/posts/Sz4myiNgqmHjr2MA7/link-post-coordination-challenges-for-preventing-ai-conflict,[Link post] Coordination challenges for preventing AI conflict,['stefan.torges'],2021-03-09T09:39:54Z,eaforum,, 137738,https://forum.effectivealtruism.org/posts/JCBPexSaGCfLtq3DP/the-problem-of-artificial-suffering,The problem of artificial suffering,['mlsbt'],2021-09-24T14:43:47Z,eaforum,, 137756,https://forum.effectivealtruism.org/posts/PH2pqsqgXQkfCdmkv/how-to-become-an-ai-safety-researcher,How to become an AI safety researcher,['peterbarnett'],2022-04-12T11:33:09Z,eaforum,,
137785,https://forum.effectivealtruism.org/posts/DXuwsXsqGq5GtmsB3/ai-alignment-with-humans-but-with-which-humans,AI alignment with humans... but with which humans?,['Geoffrey Miller'],2022-09-08T23:43:50Z,eaforum,, 137796,https://forum.effectivealtruism.org/posts/aDFR6c3Qd6cqrQu7c/could-ai-accelerate-economic-growth,Could AI accelerate economic growth?,['Tom_Davidson'],2023-06-07T19:07:26Z,eaforum,, 137812,https://forum.effectivealtruism.org/posts/Z4tsromjxAbMpAtiZ/risk-of-ai-deceleration,Risk of AI deceleration.,['Micah Zoltu'],2023-04-18T11:19:37Z,eaforum,, 137827,https://forum.effectivealtruism.org/posts/MN34Pd6gCeHPgnMwH/visit-mexico-city-in-january-and-february-to-interact-with,Visit Mexico City in January & February to interact with the AI Futures Fellowship,"['AmAristizabal', 'Jaime Andres Fernandez']",2023-07-28T16:44:22Z,eaforum,, 137836,https://forum.effectivealtruism.org/posts/R4nbXRipSzFECwkaE/is-interest-in-alignment-worth-mentioning-for-grad-school,Is interest in alignment worth mentioning for grad school applications?,['Franziska Fischer'],2022-10-16T04:50:47Z,eaforum,, 137845,https://forum.effectivealtruism.org/posts/7nFw536oK9H8rZmCP/is-china-becoming-a-science-and-technology-superpower,Is China Becoming a Science and Technology Superpower? Jeffrey Ding's Insight on China's Diffusion Deficit,['Wyman Kwok'],2023-04-25T17:00:49Z,eaforum,, 137856,https://forum.effectivealtruism.org/posts/DPDzKeQTyKEFDMwmg/artificial-intelligence-morality-and-sentience-aims-survey-1,"Artificial Intelligence, Morality, and Sentience (AIMS) Survey: 2021","['Janet Pauketat', 'Ali', 'Jacy']",2022-07-01T07:47:32Z,eaforum,, 137876,https://forum.effectivealtruism.org/posts/dZTWQQash9tjy9AwH/hiring-engineers-and-researchers-to-help-align-gpt-3,Hiring engineers and researchers to help align GPT-3,['Paul_Christiano'],2020-10-01T18:52:21Z,eaforum,, 137899,https://forum.effectivealtruism.org/posts/xfc6x9FbiK2kEorRo/disagreements-about-alignment-why-and-how-we-should-try-to,"Disagreements about Alignment: Why, and how, we should try to solve them",['ojorgensen'],2022-08-08T22:32:59Z,eaforum,, 137923,https://forum.effectivealtruism.org/posts/EEYvKn7gjps5wjFFq/actionable-guidance-and-roadmap-recommendations-for-the-nist,Actionable-guidance and roadmap recommendations for the NIST AI Risk Management Framework,"['Tony Barrett', 'Dan H']",2022-05-17T15:27:05Z,eaforum,, 137949,https://forum.effectivealtruism.org/posts/YCAEDBbskNaAc8XKx/a-brief-summary-of-the-most-important-century,A Brief Summary Of The Most Important Century,['Maynk02'],2022-10-25T15:28:49Z,eaforum,,
137973,https://forum.effectivealtruism.org/posts/AARnvz99hiEytnA9k/there-are-two-factions-working-to-prevent-ai-dangers-here-s,There are two factions working to prevent AI dangers. Here’s why they’re deeply divided.,['anonymous'],2022-08-10T19:52:29Z,eaforum,, 137989,https://forum.effectivealtruism.org/posts/b8xQEHeyABqv9ft53/linkpost-openai-is-awarding-ten-100k-grants-for-building,[Linkpost] OpenAI is awarding ten 100k grants for building prototypes of a democratic process for steering AI,['pseudonym'],2023-05-26T12:49:32Z,eaforum,, 137998,https://forum.effectivealtruism.org/posts/K6ugqr4cWnB6KELh4/keep-making-ai-safety-news,Keep Making AI Safety News,['RedStateBlueState'],2023-03-31T20:11:54Z,eaforum,, 138010,https://forum.effectivealtruism.org/posts/Q7gqF9ZCah2BEwZ9b/a-california-effect-for-artificial-intelligence,A California Effect for Artificial Intelligence,['henryj'],2022-09-09T14:17:57Z,eaforum,, 138027,https://forum.effectivealtruism.org/posts/McMwdgLZnuiCdsrrb/a-dataset-for-ai-superintelligence-stories-and-other-media,A dataset for AI/superintelligence stories and other media?,['Harrison Durland'],2022-03-29T21:41:27Z,eaforum,, 138036,https://forum.effectivealtruism.org/posts/Ne8ZS6iJJp7EpzztP/the-optimal-timing-of-spending-on-agi-safety-work-why-we,The optimal timing of spending on AGI safety work; why we should probably be spending more now,"['Tristan Cook', 'Guillaume Corlouer']",2022-10-24T17:42:06Z,eaforum,, 138049,https://forum.effectivealtruism.org/posts/DWSTgzjApuEjMdsyY/a-visualization-of-some-orgs-in-the-ai-safety-pipeline,A visualization of some orgs in the AI Safety Pipeline,"['Aaron_Scher', 'Aman Patel']",2022-04-10T16:52:44Z,eaforum,, 138058,https://forum.effectivealtruism.org/posts/fH266hKDhJMFKBSgs/ngo-and-yudkowsky-on-scientific-reasoning-and-pivotal-acts-1,Ngo and Yudkowsky on scientific reasoning and pivotal acts,"['EliezerYudkowsky', 'richard_ngo']",2022-02-21T17:00:01Z,eaforum,, 138085,https://forum.effectivealtruism.org/posts/ujRGGBxJN9AHXfzJe/slower-tech-development-can-be-about-ordering-gradualness-or,"""Slower tech development"" can be about ordering, gradualness, or distance from now",['MichaelA'],2021-11-14T20:58:05Z,eaforum,, 138110,https://forum.effectivealtruism.org/posts/6XgtNEuzzKaWrvHHS/technical-ai-safety-in-the-united-arab-emirates,Technical AI safety in the United Arab Emirates,['ea nyuad'],2022-06-21T03:11:58Z,eaforum,, 138136,https://forum.effectivealtruism.org/posts/aSDnzAm85a3Pi87rm/what-role-should-evolutionary-analogies-play-in,What role should evolutionary analogies play in understanding AI takeoff speeds?,['anson'],2021-12-11T01:16:20Z,eaforum,, 138159,https://forum.effectivealtruism.org/posts/eLKX9bmra9ZR2AQzD/ai-governance-reading-group-guide,AI Governance Reading Group Guide,['Alex HT'],2020-06-25T10:16:25Z,eaforum,, 138176,https://forum.effectivealtruism.org/posts/pFW5dfCEFwuLcwfpk/reasons-to-have-hope,Reasons to have hope,['jwpieters'],2023-04-20T10:19:20Z,eaforum,, 138191,https://forum.effectivealtruism.org/posts/926FtZiEERsGfPPv9/ai-risk-hub-in-singapore,AI risk hub in Singapore?,['kokotajlod'],2020-10-29T11:51:50Z,eaforum,, 138209,https://forum.effectivealtruism.org/posts/bsbf4am9paoTq8Lrb/applications-open-for-ai-safety-fundamentals-governance,Applications open for AI Safety Fundamentals: Governance Course,"['Jamie Bernardi', 'BlueDot Impact', 'Dewi']",2023-06-02T16:08:58Z,eaforum,, 138219,https://forum.effectivealtruism.org/posts/boxF7ZL5zLieFLCtv/how-important-are-accurate-ai-timelines-for-the-optimal,How important are accurate AI timelines for the optimal spending schedule on AI risk interventions?,['Tristan Cook'],2022-12-16T16:05:39Z,eaforum,,
138233,https://forum.effectivealtruism.org/posts/LjExZCPCHnNNTFDfq/fli-launches-worldbuilding-contest-with-usd100-000-in-prizes,"FLI launches Worldbuilding Contest with $100,000 in prizes",['ggilgallon'],2022-01-17T13:54:19Z,eaforum,, 138255,https://forum.effectivealtruism.org/posts/pAHPpX4cAwjtkLYkT/announcing-epoch-s-newly-expanded-parameters-compute-and,"Announcing Epoch's newly expanded Parameters, Compute and Data Trends in Machine Learning database","['Robi Rahman', 'Jaime Sevilla']",2023-10-25T03:03:11Z,eaforum,, 138264,https://forum.effectivealtruism.org/posts/idX6s3tTwRCXp94wY/openai-is-starting-a-new-superintelligence-alignment-team,"OpenAI is starting a new ""Superintelligence alignment"" team and they're hiring",['alejandro'],2023-07-05T18:27:38Z,eaforum,, 138291,https://forum.effectivealtruism.org/posts/7CdtdieiijWXWhiZB/what-s-going-on-with-crunch-time,What’s going on with ‘crunch time’?,['rosehadshar'],2023-01-20T09:38:43Z,eaforum,, 138317,https://forum.effectivealtruism.org/posts/hLbWWuDr3EbeQqrmg/reasons-for-my-negative-feelings-towards-the-ai-risk,Reasons for my negative feelings towards the AI risk discussion,['fergusq'],2022-09-01T07:33:35Z,eaforum,, 138339,https://forum.effectivealtruism.org/posts/NAdzbiZyJ5rNmBKey/existential-risk-of-misaligned-intelligence-augmentation,Existential Risk of Misaligned Intelligence Augmentation (Particularly Using High-Bandwidth BCI Implants),['Damian Gorski'],2023-01-24T17:02:16Z,eaforum,, 138378,https://forum.effectivealtruism.org/posts/x3ih5ohtTdLXQf4Fq/linkpost-how-to-get-into-independent-research-on-alignment,[Linkpost] How To Get Into Independent Research On Alignment/Agency,['Jackson Wagner'],2022-02-14T21:40:18Z,eaforum,, 138388,https://forum.effectivealtruism.org/posts/HeJ5BB9wh2TZ5ZrYn/excerpts-from-majority-leader-schumer-delivers-remarks-to,"Excerpts from ""Majority Leader Schumer Delivers Remarks To Launch SAFE Innovation Framework For Artificial Intelligence At CSIS""",['Chris Leong'],2023-07-21T23:15:45Z,eaforum,, 138416,https://forum.effectivealtruism.org/posts/Y4SaFM5LfsZzbnymu/the-case-for-ai-safety-advocacy-to-the-public,The Case for AI Safety Advocacy to the Public,['Holly_Elmore'],2023-09-20T12:03:37Z,eaforum,, 138435,https://forum.effectivealtruism.org/posts/nb3vfv4ntM6dwQ9mx/introducing-future-matters-a-strategy-consultancy,Introducing Future Matters – a strategy consultancy,"['KyleGracey', 'Justus_Baumann', 'Vegard Beyer']",2023-09-30T02:06:47Z,eaforum,, 138462,https://forum.effectivealtruism.org/posts/RLgjq9GFz9iHKC4ZZ/summary-of-the-ai-bill-of-rights-and-policy-implications,Summary of the AI Bill of Rights and Policy Implications,['Tristan Williams'],2023-06-20T09:28:33Z,eaforum,, 138495,https://forum.effectivealtruism.org/posts/pKG5fsfrgDSQtssfu/on-taking-ai-risk-seriously,On taking AI risk seriously,['Eleni_A'],2023-03-13T05:44:50Z,eaforum,, 138504,https://forum.effectivealtruism.org/posts/9BPs6ZmtqCbNfYaKg/agi-x-risk-timelines-10-chance-by-year-x-estimates-should-be,"AGI x-risk timelines: 10% chance (by year X) estimates should be the headline, not 50%.",['Greg_Colbourn'],2022-03-01T12:02:40Z,eaforum,, 138516,https://forum.effectivealtruism.org/posts/D3pyxpzxgff6H4sSv/what-are-some-current-already-present-challenges-from-ai,"What are some current, already present challenges from AI?",['nonzerosum'],2022-06-30T15:44:09Z,eaforum,,
138534,https://forum.effectivealtruism.org/posts/TxeqEJSmNdBKq9Ekw/ai-cybersecurity-and-malware-a-shallow-report-general,"AI, Cybersecurity, and Malware: A Shallow Report [General]",['Madhav Malhotra'],2023-03-31T12:01:29Z,eaforum,, 138562,https://forum.effectivealtruism.org/posts/7WXPkpqKGKewAymJf/how-to-pursue-a-career-in-technical-ai-alignment,How to pursue a career in technical AI alignment,['CharlieRS'],2022-06-04T21:36:22Z,eaforum,, 138588,https://forum.effectivealtruism.org/posts/6WvnfKvF2i6mqp3za/an-overview-of-catastrophic-ai-risks,An Overview of Catastrophic AI Risks,"['Center for AI Safety', 'Dan H', 'Mantas Mazeika', 'ThomasW']",2023-08-15T21:52:26Z,eaforum,, 138618,https://forum.effectivealtruism.org/posts/FdmdsXuAbzwAjTecr/ft-we-must-slow-down-the-race-to-god-like-ai,FT: We must slow down the race to God-like AI,['Angelina Li'],2023-04-24T11:57:35Z,eaforum,, 138643,https://forum.effectivealtruism.org/posts/KZiaBCWWW3FtZXGBi/the-heterogeneity-of-human-value-types-implications-for-ai,The heterogeneity of human value types: Implications for AI alignment,['Geoffrey Miller'],2022-09-16T21:21:17Z,eaforum,, 138656,https://forum.effectivealtruism.org/posts/9iGFjYnRquxiy29jm/safety-timelines-how-long-will-it-take-to-solve-alignment,Safety timelines: How long will it take to solve alignment?,"['Esben Kran', 'Jonathan Rystrom', 'Thomas Steinthal', 'Apart Research']",2022-09-19T12:51:50Z,eaforum,, 138676,https://forum.effectivealtruism.org/posts/2RCAkouYpiKyn4AbA/five-neglected-work-areas-that-could-reduce-ai-risk,Five neglected work areas that could reduce AI risk,"['Aaron_Scher', 'Charlotte']",2023-09-24T02:09:29Z,eaforum,, 138714,https://forum.effectivealtruism.org/posts/N8pJdopFs7cLzAB6F/what-can-the-principal-agent-literature-tell-us-about-ai-1,What can the principal-agent literature tell us about AI risk?,['ac'],2020-02-10T10:10:20Z,eaforum,, 138741,https://forum.effectivealtruism.org/posts/s5AAzpmbqedKgEaDj/who-is-testing-ai-safety-public-outreach-messaging,Who is testing AI Safety public outreach messaging?,['anonymous'],2023-04-15T00:53:20Z,eaforum,, 138750,https://forum.effectivealtruism.org/posts/NgBQcZbMtDLW8fpSg/against-gdp-as-a-metric-for-timelines-and-takeoff-speeds,Against GDP as a metric for timelines and takeoff speeds,['kokotajlod'],2020-12-29T17:50:04Z,eaforum,, 138772,https://forum.effectivealtruism.org/posts/xBeqaWEJfWZv8ALWn/announcing-cavendish-labs,Announcing Cavendish Labs,"['dyusha', 'Derik K']",2023-01-19T20:00:33Z,eaforum,, 138798,https://forum.effectivealtruism.org/posts/ByBBqwRXWqX5m9erL/update-to-samotsvety-agi-timelines,Update to Samotsvety AGI timelines,"['Misha_Yagudin', 'JonathanMann', 'NunoSempere']",2023-01-24T04:27:37Z,eaforum,, 138819,https://forum.effectivealtruism.org/posts/bm4qeNJcc82BKJnWk/the-heritability-of-human-values-a-behavior-genetic-critique,The heritability of human values: A behavior genetic critique of Shard Theory,['Geoffrey Miller'],2022-10-20T15:53:55Z,eaforum,, 138834,https://forum.effectivealtruism.org/posts/zdA3ZpGZ5FxfaRgjb/key-papers-in-language-model-safety,Key Papers in Language Model Safety,['aogara'],2022-06-20T14:59:18Z,eaforum,, 138871,https://forum.effectivealtruism.org/posts/XRphCh6NbfQiDF3Nt/racing-through-a-minefield-the-ai-deployment-problem,Racing through a minefield: the AI deployment problem,['Holden Karnofsky'],2022-12-31T21:44:56Z,eaforum,, 138901,https://forum.effectivealtruism.org/posts/xtNa7bPehioFMjnqx/ai-relevant-regulation-cpsc,AI-Relevant Regulation: CPSC,['SWK'],2023-08-13T15:44:14Z,eaforum,,
138925,https://forum.effectivealtruism.org/posts/jSJk9BPTCuHo7Acv7/is-this-community-over-emphasizing-ai-alignment,Is this community over-emphasizing AI alignment?,['Lixiang'],2023-01-08T06:23:12Z,eaforum,, 138934,https://forum.effectivealtruism.org/posts/bGzwWYfXgKqdurdmb/anthropic-s-responsible-scaling-policy-and-long-term-benefit,Anthropic's Responsible Scaling Policy & Long-Term Benefit Trust,['Zach Stein-Perlman'],2023-09-19T17:00:11Z,eaforum,, 138960,https://forum.effectivealtruism.org/posts/trqswoctpQ92tcY2y/criticism-thread-what-things-should-openphil-improve-on,Criticism Thread: What things should OpenPhil improve on?,['anonymousEA20'],2023-02-04T08:16:23Z,eaforum,, 138975,https://forum.effectivealtruism.org/posts/s33LLoR6vbwiyRpTm/epistemic-maps-for-ai-debates-or-for-other-issues,"""Epistemic maps"" for AI Debates? (or for other issues)",['Harrison Durland'],2021-08-30T04:59:30Z,eaforum,, 138988,https://forum.effectivealtruism.org/posts/3538iKtS2YmN67som/the-ai-revolution-and-international-politics-allan-dafoe,The AI revolution and international politics (Allan Dafoe),['EA Global'],2017-06-02T08:48:00Z,eaforum,, 139020,https://forum.effectivealtruism.org/posts/Qhn5nyRf93dsXodsw/cause-area-differential-neurotechnology-development,Cause Area: Differential Neurotechnology Development,['mwcvitkovic'],2022-08-10T02:39:52Z,eaforum,, 139054,https://forum.effectivealtruism.org/posts/FnszH6ZGBi9hd8rtv/google-invests-usd300mn-in-artificial-intelligence-start-up,Google invests $300mn in artificial intelligence start-up Anthropic | FT,['𝕮𝖎𝖓𝖊𝖗𝖆'],2023-02-03T19:43:48Z,eaforum,, 139063,https://forum.effectivealtruism.org/posts/jAWSicEi3PD8JHmac/intellectual-diversity-in-ai-safety,Intellectual Diversity in AI Safety,['KR'],2020-07-22T19:07:25Z,eaforum,, 139078,https://forum.effectivealtruism.org/posts/3hSEQnEN2D3SSzHWn/what-s-in-a-pause-3,What's in a Pause?,['Davidmanheim'],2023-09-16T10:13:44Z,eaforum,, 139106,https://forum.effectivealtruism.org/posts/THogLaytmj3n8oGbD/p-doom-or-agi-is-high-why-the-default-outcome-of-agi-is-doom,P(doom|AGI) is high: why the default outcome of AGI is doom,['Greg_Colbourn'],2023-05-02T10:40:58Z,eaforum,, 139128,https://forum.effectivealtruism.org/posts/wLQkTBHcPKtoku8Js/ai-risk-in-africa,AI Risk in Africa,['Claude Formanek'],2021-10-12T02:28:19Z,eaforum,, 139153,https://forum.effectivealtruism.org/posts/zvALRCKshYGYetsbC/reflections-on-the-pibbss-fellowship-2022,Reflections on the PIBBSS Fellowship 2022,"['nora', 'particlemania']",2022-12-11T22:03:58Z,eaforum,, 139178,https://forum.effectivealtruism.org/posts/ZvMPNLFBHur9qopw9/is-it-time-for-a-pause,Is it time for a pause?,['Kelsey Piper'],2023-04-06T11:48:35Z,eaforum,, 139194,https://forum.effectivealtruism.org/posts/gYCjGx6fnJSSMva4v/is-gpt-3-the-death-of-the-paperclip-maximizer,Is GPT-3 the death of the paperclip maximizer?,['matthias_samwald'],2020-08-03T11:34:18Z,eaforum,, 139204,https://forum.effectivealtruism.org/posts/fTvw6K3CfxXdxAE5G/a-study-of-ai-science-models,A Study of AI Science Models,"['Eleni_A', 'C Tilli', 'machinebiology']",2023-05-13T19:15:00Z,eaforum,, 139231,https://forum.effectivealtruism.org/posts/jMyjwRMMkYCnFmMHH/ai-and-policy-1-3-on-knowing-the-effect-of-today-s-policies,"AI & Policy 1/3: On knowing the effect of today’s policies on Transformative AI risks, and the case for institutional improvements.",['weeatquince'],2019-08-27T11:04:10Z,eaforum,, 139258,https://forum.effectivealtruism.org/posts/4efXC5WZaHSHJJZTF/sharing-the-world-with-digital-minds,Sharing the World with Digital Minds,['Aaron Gertler'],2020-12-01T08:00:00Z,eaforum,,
139275,https://forum.effectivealtruism.org/posts/pMoGmfg6rNsJWfZey/crypto-oracle-protocols-for-ai-alignment-with-real-world,Crypto 'oracle protocols' for AI alignment with real-world data?,['Geoffrey Miller'],2022-09-22T23:05:27Z,eaforum,, 139284,https://forum.effectivealtruism.org/posts/tX3ax2aSTbu4BtQBN/accidentally-teaching-ai-models-to-deceive-us-ajeya-cotra-on,"Accidentally teaching AI models to deceive us (Ajeya Cotra on The 80,000 Hours Podcast)","['80000_Hours', 'Luisa_Rodriguez', 'Ajeya']",2023-05-15T20:58:40Z,eaforum,, 139312,https://forum.effectivealtruism.org/posts/Zsz3BYQTJjJdZd4DR/we-can-t-do-long-term-utilitarian-calculations-until-we-know,We Can’t Do Long Term Utilitarian Calculations Until We Know if AIs Can Be Conscious or Not,['Mike20731'],2022-09-02T08:37:09Z,eaforum,, 139333,https://forum.effectivealtruism.org/posts/d4mr2GDftfsh8BDpq/getting-washington-and-silicon-valley-to-tame-ai-mustafa,"Getting Washington and Silicon Valley to tame AI (Mustafa Suleyman on the 80,000 Hours Podcast)",['80000_Hours'],2023-09-04T16:25:45Z,eaforum,, 139366,https://forum.effectivealtruism.org/posts/FHAAJKTFd92YmaTHc/what-defense-layers-should-governments-ai-labs-and,"What “defense layers” should governments, AI labs, and businesses use to prevent catastrophic AI failures?",['alexlintz'],2021-12-03T14:24:35Z,eaforum,, 139393,https://forum.effectivealtruism.org/posts/suyb4vC75Wo9EKgyu/argument-against-impact-eu-is-not-an-ai-superpower,Argument Against Impact: EU Is Not an AI Superpower,['EU AI Governance'],2022-01-31T09:48:29Z,eaforum,, 139414,https://forum.effectivealtruism.org/posts/esfHrQnu9aSHuXbmy/linkpost-alpaca-7b-release-or-budget-chatgpt-for-everybody,[Linkpost] Alpaca 7B release | Budget ChatGPT for everybody?,['Felix Wolf'],2023-03-17T13:08:50Z,eaforum,, 139440,https://forum.effectivealtruism.org/posts/nNCdhMenNYRcRFWab/would-a-super-intelligent-ai-necessarily-support-its-own,Would a super-intelligent AI necessarily support its own existence?,['Porque?'],2023-06-25T10:39:59Z,eaforum,, 139462,https://forum.effectivealtruism.org/posts/AoPR8BFrAFgGGN9iZ/chaining-the-evil-genie-why-outer-ai-safety-is-probably-easy,"Chaining the evil genie: why ""outer"" AI safety is probably easy",['titotal'],2022-08-30T13:55:39Z,eaforum,, 139477,https://forum.effectivealtruism.org/posts/KBtyxTZCMzh2BJnJs/chatgpt-bug-leaked-users-conversation-histories,ChatGPT bug leaked users' conversation histories,['Ian Turner'],2023-03-27T00:17:22Z,eaforum,,
139486,https://forum.effectivealtruism.org/posts/YweBjDwgdco669H72/ai-x-risk-in-the-news-how-effective-are-recent-media-items,AI X-risk in the News: How Effective are Recent Media Items and How is Awareness Changing? Our New Survey Results.,['Otto'],2023-05-04T14:04:59Z,eaforum,, 139508,https://forum.effectivealtruism.org/posts/ConFiY9cRmg37fs2p/us-public-opinion-of-ai-policy-and-risk,US public opinion of AI policy and risk,"['Jamie Elsey', 'David_Moss']",2023-05-12T13:22:38Z,eaforum,, 139523,https://forum.effectivealtruism.org/posts/jSvWKv37DibR8BwNX/is-working-on-ai-safety-as-dangerous-as-ignoring-it,Is working on AI safety as dangerous as ignoring it?,['jkmh'],2021-09-20T23:07:00Z,eaforum,, 139542,https://forum.effectivealtruism.org/posts/dqpR2E4Bw9KEEaWoK/announcing-the-existential-infosec-forum,Announcing the Existential InfoSec Forum,"['calebp', 'Wim van der Schoot']",2023-07-07T21:08:45Z,eaforum,, 139560,https://forum.effectivealtruism.org/posts/MBmFuoHgnow59zGfy/cnas-report-artificial-intelligence-and-arms-control,CNAS report: 'Artificial Intelligence and Arms Control',['MMMaas'],2022-10-13T08:35:28Z,eaforum,, 139578,https://forum.effectivealtruism.org/posts/DjECTMZy9jB5hGZwg/paul-christiano-machine-intelligence-and-capital,Paul Christiano – Machine intelligence and capital accumulation,['Tessa'],2014-05-15T00:10:34Z,eaforum,, 139599,https://forum.effectivealtruism.org/posts/9cja9E52LCLa9Abbt/is-there-a-news-tracker-about-gpt-4-why-has-everything,Is there a news-tracker about GPT-4? Why has everything become so silent about it?,['Franziska Fischer'],2022-10-29T08:56:14Z,eaforum,, 139609,https://forum.effectivealtruism.org/posts/bBpE5HrjCFDLMvZKd/uk-s-new-10-year-national-ai-strategy-released-today,"UK's new 10-year ""National AI Strategy,"" released today",['jared_m'],2021-09-22T11:18:03Z,eaforum,, 139635,https://forum.effectivealtruism.org/posts/pNhc3jensyBY4Hz6u/panel-discussion-on-ai-consciousness-with-rob-long-and-jeff,Panel discussion on AI consciousness with Rob Long and Jeff Sebo,['Aaron Bergman'],2023-09-09T03:38:16Z,eaforum,, 139656,https://forum.effectivealtruism.org/posts/kDyG6p6FqwJ4ioQt4/concerns-about-ai-safety-career-change,Concerns about AI safety career change,['mmKALLL'],2023-01-13T20:52:47Z,eaforum,, 139683,https://forum.effectivealtruism.org/posts/gBL3yX4fAszePCnN2/ai-forecasting-dictionary-forecasting-infrastructure-part-1,"AI Forecasting Dictionary (Forecasting infrastructure, part 1)","['jacobjacob', 'goldhaber']",2019-08-08T13:16:09Z,eaforum,, 139700,https://forum.effectivealtruism.org/posts/R5RB4Rjmb2rpHGsmz/paleontological-study-of-extinctions-supports-ai-as-a,Paleontological study of extinctions supports AI as a existential threat to humanity,['kpurens'],2023-04-11T14:10:51Z,eaforum,, 139717,https://forum.effectivealtruism.org/posts/pvDGtDbaSj8gZuwN5/opportunities-for-impact-beyond-the-eu-ai-act,Opportunities for Impact Beyond the EU AI Act,['Cillian_'],2023-10-12T15:06:56Z,eaforum,, 139748,https://forum.effectivealtruism.org/posts/uiBCfZH7NrujeLdgK/pessimism-about-ai-safety,Pessimism about AI Safety,['Max_He-Ho'],2023-04-02T07:57:12Z,eaforum,, 139769,https://forum.effectivealtruism.org/posts/ccw9v9giKxg8nyLhp/xpt-forecasts-on-some-biological-anchors-inputs,XPT forecasts on (some) biological anchors inputs,"['Forecasting Research Institute', 'rosehadshar']",2023-07-24T13:32:39Z,eaforum,, 139793,https://forum.effectivealtruism.org/posts/9eQFPiNmH2s5ZyNEu/aisn-21-google-deepmind-s-gpt-4-competitor-military,"AISN #21: Google DeepMind’s GPT-4 Competitor, Military Investments in Autonomous Drones, The UK AI Safety Summit, and Case Studies in AI Policy","['Center for AI Safety', 'aogara', 'Dan H']",2023-09-05T14:59:51Z,eaforum,,
139829,https://forum.effectivealtruism.org/posts/9rdkqNd2faqzP9f9p/effective-persuasion-for-ai-alignment-risk,Effective Persuasion For AI Alignment Risk,['Brian Lui'],2022-08-09T23:55:46Z,eaforum,, 139843,https://forum.effectivealtruism.org/posts/9RCFq976d9YXBbZyq/research-reality-graphing-to-support-ai-policy-and-more,Research + Reality Graphing to Support AI Policy (and more): Summary of a Frozen Project,['Harrison Durland'],2022-07-02T20:58:53Z,eaforum,, 139863,https://forum.effectivealtruism.org/posts/HtiyM6KQFogL77oBJ/sentience-in-machines-how-do-we-test-for-this-objectively,Sentience in Machines - How Do We Test for This Objectively?,['Mayowa Osibodu'],2023-03-20T05:20:51Z,eaforum,, 139878,https://forum.effectivealtruism.org/posts/W4BRXGvz7BvMPFNvy/resources-and-opportunities-for-careers-in-european-ai,Resources & opportunities for careers in European AI Policy,"['Cillian_', 'Training for Good']",2023-10-12T15:02:15Z,eaforum,, 139895,https://forum.effectivealtruism.org/posts/qkK5ejystp8GCJ3vC/incident-reporting-for-ai-safety,Incident reporting for AI safety,"['Zach Stein-Perlman', 'SeLo', 'stepanlos', 'MvK']",2023-07-19T17:00:57Z,eaforum,, 139932,https://forum.effectivealtruism.org/posts/cYveBTjXWoutARLvA/how-to-engage-with-ai-4-social-justice-actors,How to engage with AI 4 Social Justice actors,['TomWestgarth'],2022-04-26T08:39:59Z,eaforum,, 139955,https://forum.effectivealtruism.org/posts/cMvxw4ehHJy2vYJDA/student-project-for-engaging-with-ai-alignment,Student project for engaging with AI alignment,['Per Ivar Friborg'],2022-05-09T10:44:12Z,eaforum,, 139970,https://forum.effectivealtruism.org/posts/bfDke8yv6sX94jF4R/desensitizing-deepfakes,Desensitizing Deepfakes,['Phib'],2023-03-29T01:20:55Z,eaforum,, 139985,https://forum.effectivealtruism.org/posts/kjYx6BvTKKJg3xQie/where-on-the-continuum-of-pure-ea-to-pure-ais-should-you-be,Where on the continuum of pure EA to pure AIS should you be? (Uni Group Organizers Focus),['jessica_mccurdy'],2023-06-26T23:46:58Z,eaforum,, 140015,https://forum.effectivealtruism.org/posts/3K2fKB8azNoEiEL9t/cooperation-avoidance-and-indifference-alternate-futures-for,"Cooperation, Avoidance, and Indifference: Alternate Futures for Misaligned AGI",['Kiel Brennan-Marquez'],2022-12-10T20:32:34Z,eaforum,, 140034,https://forum.effectivealtruism.org/posts/g38CkMbFzKBtdzFXY/biosafety-regulations-bmbl-and-their-relevance-for-ai,Biosafety Regulations (BMBL) and their relevance for AI,['stepanlos'],2023-06-29T19:20:32Z,eaforum,, 140052,https://forum.effectivealtruism.org/posts/pEjBEJHAoNuqS4pWH/ai-safety-for-dummies-like-me,AI Safety For Dummies (Like Me),['Madhav Malhotra'],2022-08-24T20:26:03Z,eaforum,, 140088,https://forum.effectivealtruism.org/posts/BY8gXSpGijypbGitT/why-i-prioritize-moral-circle-expansion-over-reducing,Why I prioritize moral circle expansion over reducing extinction risk through artificial intelligence alignment,['Jacy'],2018-02-20T18:29:13Z,eaforum,, 140106,https://forum.effectivealtruism.org/posts/QiCZoxjjvPpd8qfWb/epoch-and-fri-mentorship-program-summer-2023-1,Epoch and FRI Mentorship Program Summer 2023,['merilalama'],2023-06-13T14:27:47Z,eaforum,,
140115,https://forum.effectivealtruism.org/posts/uPnmzDnoSviCcKq2L/mechanism-design-for-ai-safety-agenda-creation-retreat,Mechanism Design for AI Safety - Agenda Creation Retreat,['Rubi J. Hudson'],2023-02-10T03:05:56Z,eaforum,, 140124,https://forum.effectivealtruism.org/posts/GQqbbEJzBd4GsraPT/announcing-the-introduction-to-ml-safety-course,Announcing the Introduction to ML Safety Course,"['ThomasW', 'Dan H', 'Oliver Z']",2022-08-06T02:50:57Z,eaforum,, 140145,https://forum.effectivealtruism.org/posts/zGiD94SHwQ9MwPyfW/important-actionable-research-questions-for-the-most,"Important, actionable research questions for the most important century",['Holden Karnofsky'],2022-02-24T16:34:29Z,eaforum,, 140169,https://forum.effectivealtruism.org/posts/dCb8tWsAmbYPSiqYT/military-artificial-intelligence-as-contributor-to-global,Military Artificial Intelligence as Contributor to Global Catastrophic Risk,"['MMMaas', 'Di Cooke']",2022-06-27T10:35:48Z,eaforum,, 140209,https://forum.effectivealtruism.org/posts/DLrhnzDeqSywhrNew/agency-foundations-challenge-september-8th-24th-usd10k,"Agency Foundations Challenge: September 8th-24th, $10k Prizes","['Catalin M', 'Esben Kran']",2023-08-30T06:12:08Z,eaforum,, 140227,https://forum.effectivealtruism.org/posts/vaARSdTS2X73jAMZ9/the-ethical-basilisk-thought-experiment,The Ethical Basilisk Thought Experiment,['Kyrtin'],2023-08-23T13:24:12Z,eaforum,, 140237,https://forum.effectivealtruism.org/posts/5iTFKqJpSNwjk8iLv/epoch-is-hiring-a-research-data-analyst,Epoch is hiring a Research Data Analyst,['merilalama'],2022-11-22T17:34:42Z,eaforum,, 140246,https://forum.effectivealtruism.org/posts/wBbdQpy6dCjnMgxpJ/coherence-arguments-imply-a-force-for-goal-directed-behavior,Coherence arguments imply a force for goal-directed behavior,['Katja_Grace'],2021-04-06T21:44:59Z,eaforum,, 140260,https://forum.effectivealtruism.org/posts/FdAfhdsSGKxP6axZY/the-probability-that-artificial-general-intelligence-will-be,The probability that Artificial General Intelligence will be developed by 2043 is extremely low.,['cveres'],2022-10-06T11:26:05Z,eaforum,, 140278,https://forum.effectivealtruism.org/posts/22xpqq5SBRGCtyXtz/public-explainer-on-ai-as-an-existential-risk-1,Public Explainer on AI as an Existential Risk,['AndrewDoris'],2022-10-07T19:23:35Z,eaforum,, 140312,https://forum.effectivealtruism.org/posts/fRY74NeM3cxCdNPth/but-exactly-how-complex-and-fragile,But exactly how complex and fragile?,['Katja_Grace'],2019-12-13T07:05:23Z,eaforum,, 140335,https://forum.effectivealtruism.org/posts/pHKsedBYAvzFCniDF/ai-safety-needs-great-product-builders,AI Safety Needs Great Product Builders,['goodgravy'],2022-11-02T11:33:59Z,eaforum,, 140356,https://forum.effectivealtruism.org/posts/gvN4LBh7ZMguxxhHW/reducing-profit-motivations-in-ai-development,Reducing profit motivations in AI development,['Luke Frymire'],2023-04-03T20:04:08Z,eaforum,, 140372,https://forum.effectivealtruism.org/posts/iQCbubkxFCcZXmXZ9/how-the-ethisizer-almost-broke-story,How The EthiSizer Almost Broke `Story',['Velikovsky_of_Newcastle'],2023-05-08T16:58:16Z,eaforum,, 140386,https://forum.effectivealtruism.org/posts/MM22jJnQkLq2tPKHk/the-no-sandbagging-on-checkable-tasks-hypothesis,The “no sandbagging on checkable tasks” hypothesis,['Joe_Carlsmith'],2023-07-31T23:13:07Z,eaforum,, 140405,https://forum.effectivealtruism.org/posts/JFyzCv5YynN665nH8/thoughts-on-agi-organizations-and-capabilities-work,Thoughts on AGI organizations and capabilities work,"['RobBensinger', 'So8res']",2022-12-07T19:46:33Z,eaforum,,
140428,https://forum.effectivealtruism.org/posts/hybGfBnkrtL9E3EcS/how-long-will-reaching-a-risk-awareness-moment-and-charts,How long will reaching a Risk Awareness Moment and CHARTS agreement take?,['Yadav'],2023-09-06T16:39:07Z,eaforum,, 140449,https://forum.effectivealtruism.org/posts/n5vRgmv3iBEe6Xh3P/general-vs-specific-arguments-for-the-longtermist-importance,General vs specific arguments for the longtermist importance of shaping AI development,['Sam Clarke'],2021-10-15T14:43:38Z,eaforum,, 140466,https://forum.effectivealtruism.org/posts/haczkGjMusjyozTiH/free-guy-a-rom-com-on-the-moral-patienthood-of-digital-1,"Free Guy, a rom-com on the moral patienthood of digital sentience",['mic'],2021-12-23T07:47:46Z,eaforum,, 140482,https://forum.effectivealtruism.org/posts/eKzzfLtHdG36Sr5Hw/more-academic-diversity-in-alignment,More Academic Diversity in Alignment?,['ojorgensen'],2022-11-27T17:52:51Z,eaforum,, 140494,https://forum.effectivealtruism.org/posts/TecosAABRtPhaNdrL/raphael-milliere-on-the-limits-of-deep-learning-and-ai-x,Raphaël Millière on the Limits of Deep Learning and AI x-risk skepticism,['Michaël Trazzi'],2022-06-24T18:33:18Z,eaforum,, 140507,https://forum.effectivealtruism.org/posts/fRjj6nm9xbW4kFcTZ/advice-on-pursuing-technical-ai-safety-research,Advice on Pursuing Technical AI Safety Research,['frances_lorenz'],2022-05-31T17:48:56Z,eaforum,, 140534,https://forum.effectivealtruism.org/posts/48Cdimcq7NmznFzkz/call-for-submissions-ai-safety-special-session-at-the,Call for submissions: AI Safety Special Session at the Conference on Artificial Life (ALIFE 2023),['Rory Greig'],2023-02-05T16:37:45Z,eaforum,, 140545,https://forum.effectivealtruism.org/posts/BCwaWkMHTMjFMvedS/aisn-9-statement-on-extinction-risks-competitive-pressures,"AISN #9: Statement on Extinction Risks, Competitive Pressures, and When Will AI Reach Human-Level?","['Center for AI Safety', 'Dan H', 'aogara']",2023-06-06T15:56:50Z,eaforum,, 140569,https://forum.effectivealtruism.org/posts/xqQe85ZEs8KHxAbaF/what-considerations-influence-whether-i-have-more-influence,What considerations influence whether I have more influence over short or long timelines?,['kokotajlod'],2020-11-05T19:57:16Z,eaforum,, 140582,https://forum.effectivealtruism.org/posts/RDoLDJ4toRNpMRBmk/which-ai-safety-org-to-join,Which AI Safety Org to Join?,['Yonatan Cale'],2022-10-11T19:42:01Z,eaforum,, 140596,https://forum.effectivealtruism.org/posts/R9XhTTyrNQR8PvsRf/atari-early,Atari early,['AI Impacts'],2020-04-02T23:28:31Z,eaforum,, 140612,https://forum.effectivealtruism.org/posts/wPHpdwfu3toRDf6hM/main-paths-to-impact-in-eu-ai-policy,Main paths to impact in EU AI Policy,['JOMG_Monnet'],2022-12-08T16:17:26Z,eaforum,, 140640,https://forum.effectivealtruism.org/posts/h7Rj8Y8YWZccYMy5J/it-takes-5-layers-and-1000-artificial-neurons-to-simulate-a,It takes 5 layers and 1000 artificial neurons to simulate a single biological neuron [Link],['MichaelStJules'],2021-09-07T21:53:54Z,eaforum,, 140661,https://forum.effectivealtruism.org/posts/fSDxnLcCn8h22gCYB/ea-psychology-and-ai-safety-research,"EA, Psychology & AI Safety Research",['Sam Ellis'],2022-05-26T23:46:43Z,eaforum,, 140682,https://forum.effectivealtruism.org/posts/eW7YwLz548kDZaE6i/supervised-program-for-alignment-research-spar-at-uc,Supervised Program for Alignment Research (SPAR) at UC Berkeley: Spring 2023 summary,"['mic', 'Dylan Xu', 'caroq']",2023-08-19T02:32:49Z,eaforum,, 140698,https://forum.effectivealtruism.org/posts/i6btyefRRX23yCpnP/what-ai-companies-can-do-today-to-help-with-the-most,What AI companies can do today to help with the most important century,['Holden Karnofsky'],2023-02-20T17:40:32Z,eaforum,,
140723,https://forum.effectivealtruism.org/posts/PMFoxr62AeLEwPAH9/potential-employees-have-a-unique-lever-to-influence-the,Potential employees have a unique lever to influence the behaviors of AI labs,['oxalis'],2023-03-18T20:58:49Z,eaforum,, 140743,https://forum.effectivealtruism.org/posts/bmfR73qjHQnACQaFC/call-to-demand-answers-from-anthropic-about-joining-the-ai,Call to demand answers from Anthropic about joining the AI race,['sergia'],2023-03-02T17:26:17Z,eaforum,, 140756,https://forum.effectivealtruism.org/posts/CyiuhttLjFuCygYoy/how-will-the-world-respond-to-ai-x-risk-warning-shots,"How will the world respond to ""AI x-risk warning shots"" according to reference class forecasting?",['Ryan Kidd'],2022-04-18T09:10:08Z,eaforum,, 140766,https://forum.effectivealtruism.org/posts/TxrzhfRr6EXiZHv4G/agi-battle-royale-why-slow-takeover-scenarios-devolve-into-a,AGI Battle Royale: Why “slow takeover” scenarios devolve into a chaotic multi-AGI fight to the death,['titotal'],2022-09-22T15:00:15Z,eaforum,, 140791,https://forum.effectivealtruism.org/posts/6x2MjPXhpPpnatJFQ/some-promising-career-ideas-beyond-80-000-hours-priority,"Some promising career ideas beyond 80,000 Hours' priority paths",['Ardenlk'],2020-06-26T10:34:12Z,eaforum,, 140812,https://forum.effectivealtruism.org/posts/CnHiumpJdRmQyAfAA/how-to-navigate-potential-infohazards,How to navigate potential infohazards,['more better'],2023-03-04T21:28:10Z,eaforum,, 140826,https://forum.effectivealtruism.org/posts/2bQrhgkK2DxLtNGbj/is-anyone-else-also-getting-more-worried-about-hard-takeoff,Is anyone else also getting more worried about hard takeoff AGI scenarios?,['JonCefalu'],2023-01-09T06:04:43Z,eaforum,, 140855,https://forum.effectivealtruism.org/posts/pR35WbLmruKdiMn2r/continuous-doesn-t-mean-slow,Continuous doesn’t mean slow,['Tom_Davidson'],2023-05-10T12:17:44Z,eaforum,, 140866,https://forum.effectivealtruism.org/posts/iGYTt3qvJFGppxJbk/ngo-and-yudkowsky-on-alignment-difficulty,Ngo and Yudkowsky on alignment difficulty,"['richard_ngo', 'EliezerYudkowsky']",2021-11-15T22:47:46Z,eaforum,, 140898,https://forum.effectivealtruism.org/posts/AJJTRmW7zhvXrmD5s/apply-to-ceealar-to-do-agi-moratorium-work,Apply to CEEALAR to do AGI moratorium work,['Greg_Colbourn'],2023-07-26T21:24:12Z,eaforum,, 140915,https://forum.effectivealtruism.org/posts/AgDrzikcHeyoHvaqd/the-retroactive-funding-landscape-innovations-for-donors-and,The Retroactive Funding Landscape: Innovations for Donors and Grantmakers,['Dawn Drescher'],2023-09-29T17:39:24Z,eaforum,, 140958,https://forum.effectivealtruism.org/posts/fZmQ6WQ6MQPa5q39R/how-to-think-about-slowing-ai,How to think about slowing AI,['Zach Stein-Perlman'],2023-09-17T11:23:26Z,eaforum,, 140982,https://forum.effectivealtruism.org/posts/rBteAqbfqvaFMvpv5/what-we-learned-from-running-an-australian-ai-safety,What we learned from running an Australian AI Safety Unconference,"['Alexander Saeri', 'Jo Small']",2023-10-26T00:46:17Z,eaforum,, 140993,https://forum.effectivealtruism.org/posts/44XPFrHiFwFBM2jfL/an-appraisal-of-the-future-of-life-institute-ai-existential,An appraisal of the Future of Life Institute AI existential risk program,['PabloAMC'],2022-12-11T13:36:21Z,eaforum,, 141008,https://forum.effectivealtruism.org/posts/XcFk5irHSJBK2EuF3/benevolentai-an-effectively-impactful-company,BenevolentAI - an effectively impactful company?,['Jack Hilton'],2022-10-11T14:35:21Z,eaforum,,
141017,https://forum.effectivealtruism.org/posts/CKfHDw5Lmoo6jahZD/nuclear-espionage-and-ai-governance-1,Nuclear Espionage and AI Governance,['GAA'],2021-10-04T18:21:29Z,eaforum,, 141038,https://forum.effectivealtruism.org/posts/NnygBgntvoGSuvsRH/ai-timelines-by-bio-anchors-the-debate-in-one-place,AI timelines by bio anchors: the debate in one place,['Will Aldred'],2022-07-30T23:04:49Z,eaforum,, 141068,https://forum.effectivealtruism.org/posts/HqatEhdEb42vhSo7B/opportunities-for-individual-donors-in-ai-safety,Opportunities for individual donors in AI safety,['alexflint'],2018-03-12T02:10:16Z,eaforum,, 141088,https://forum.effectivealtruism.org/posts/QZCuGZczixXXJeEyw/a-tentative-dialogue-with-a-friendly-boxed-super-agi-on,A tentative dialogue with a Friendly-boxed-super-AGI on brain uploads,['Ramiro'],2022-05-12T21:55:07Z,eaforum,, 141101,https://forum.effectivealtruism.org/posts/a4mFh3PySygwmWiAK/introducing-spirit-hazards,Introducing spirit hazards,['brb243'],2022-05-27T22:16:42Z,eaforum,, 141125,https://forum.effectivealtruism.org/posts/eqTGrEsBzJJSiuTcv/the-international-pauseai-protest-activism-under-uncertainty,The International PauseAI Protest: Activism under uncertainty,"['Joseph Miller', 'Holly_Elmore', 'joepio']",2023-10-12T17:36:15Z,eaforum,, 141146,https://forum.effectivealtruism.org/posts/epTvpAEfCY74CMdMv/student-competition-for-drafting-a-treaty-on-moratorium-of,Student competition for drafting a treaty on moratorium of large-scale AI capabilities R&D,['Nayanika'],2023-04-24T13:15:25Z,eaforum,, 141165,https://forum.effectivealtruism.org/posts/ngk6AFo5uNHB3ZKQY/inside-the-mind-of-an-aspiring-charity-entrepreneur-follow,Inside the Mind of an Aspiring Charity Entrepreneur [Follow Along] #1 - From Layoff to Co-founding in a Breathtaking Two Months,['Harry Luk'],2023-09-26T07:35:18Z,eaforum,, 141187,https://forum.effectivealtruism.org/posts/gkfMLX4NWZdmpikto/critiques-of-prominent-ai-safety-labs-conjecture,Critiques of prominent AI safety labs: Conjecture,['Omega'],2023-06-12T05:52:24Z,eaforum,, 141232,https://forum.effectivealtruism.org/posts/35bfnGmsyrZkEnkLJ/steering-ai-to-care-for-animals-and-soon,"Steering AI to care for animals, and soon",['Andrew Critch'],2022-06-14T01:13:56Z,eaforum,,
141244,https://forum.effectivealtruism.org/posts/GcrKndFY2oSKEFLub/us-ntia-ai-accountability-policy-request-for-comment,[US] NTIA: AI Accountability Policy Request for Comment,['Kyle J. Lucchese'],2023-04-13T16:12:16Z,eaforum,, 141259,https://forum.effectivealtruism.org/posts/otsZNNLr2QygEM3Md/a-bill-to-prevent-ai-from-hiring-people-instead-of-human,A bill to prevent AI from hiring people instead of human enployers in NY,['wes R'],2023-08-15T00:48:27Z,eaforum,, 141281,https://forum.effectivealtruism.org/posts/ewroS7tsqhTsstJ44/a-windfall-clause-for-ceo-could-worsen-ai-race-dynamics,A Windfall Clause for CEO could worsen AI race dynamics,['Larks'],2023-03-09T18:02:15Z,eaforum,, 141302,https://forum.effectivealtruism.org/posts/vzuqnPyfDFjtbCpgv/ai-safety-field-building-survey-talent-needs-infrastructure,"AI safety field-building survey: Talent needs, infrastructure needs, and relationship to EA","['michel', 'OllieBase', 'Angelina Li']",2023-10-27T21:08:45Z,eaforum,, 141337,https://forum.effectivealtruism.org/posts/hNPCo4kScxccK9Ham/open-problems-in-ai-x-risk-pais-5,Open Problems in AI X-Risk [PAIS #5],"['ThomasW', 'Dan H']",2022-06-10T02:22:42Z,eaforum,, 141375,https://forum.effectivealtruism.org/posts/QsHH66kyN4GJhBqpK/the-economist-feature-articles-on-llms,The Economist feature articles on LLMs,['Dr Dan Epstein'],2023-04-20T00:29:58Z,eaforum,, 141400,https://forum.effectivealtruism.org/posts/2yyZqRParGeLEja5u/alignment-goals-and-the-gut-head-gap-a-review-of-ngo-et-al,"Alignment, Goals, & The Gut-Head Gap: A Review of Ngo. et al",['Violet Hour'],2023-05-11T17:16:44Z,eaforum,, 141423,https://forum.effectivealtruism.org/posts/b5dctgmwBiKhu3BCP/why-does-any-particular-ai-safety-work-reduce-s-risks-more,Why does (any particular) AI safety work reduce s-risks more than it increases them?,['MichaelStJules'],2021-10-03T16:55:32Z,eaforum,, 141440,https://forum.effectivealtruism.org/posts/WJMa3XeZMuukAm9Lb/how-is-technical-ai-safety-research-being-evaluated,(How) Is technical AI Safety research being evaluated?,['JohnSnow'],2023-07-11T09:37:20Z,eaforum,, 141449,https://forum.effectivealtruism.org/posts/WqQDKKgZTdFe6GAFq/vael-gates-risks-from-highly-capable-ai-march-2023-1,Vael Gates: Risks from Highly-Capable AI (March 2023),['Vael Gates'],2023-04-01T20:54:18Z,eaforum,, 141466,https://forum.effectivealtruism.org/posts/3vDarp6adLPBTux5g/what-a-compute-centric-framework-says-about-ai-takeoff,What a compute-centric framework says about AI takeoff speeds,['Tom_Davidson'],2023-01-23T04:09:58Z,eaforum,, 141494,https://forum.effectivealtruism.org/posts/LBise8JBACG9DRPG4/on-the-correspondence-between-ai-misalignment-and-cognitive,On the correspondence between AI-misalignment and cognitive dissonance using a behavioral economics model,['Stijn'],2022-11-01T09:15:14Z,eaforum,, 141509,https://forum.effectivealtruism.org/posts/qTAE9GZ9kAKnstvJ4/uk-foundation-model-task-force-expression-of-interest,UK Foundation Model Task Force - Expression of Interest,['ojorgensen'],2023-06-18T09:40:47Z,eaforum,, 141526,https://forum.effectivealtruism.org/posts/FnviTNXcjG2zaYXQY/how-to-store-human-values-on-a-computer,How to store human values on a computer,['oliver_siegel'],2022-11-04T19:36:53Z,eaforum,, 141540,https://forum.effectivealtruism.org/posts/YPonS6QpDwhbRnT8N/human-values-and-agi-risk-or-william-james,Human Values and AGI Risk | William James,['William James'],2023-03-31T22:30:44Z,eaforum,, 141555,https://forum.effectivealtruism.org/posts/eAEGdZshoYuzYFpMq/dear-anthropic-people-please-don-t-release-claude,"Dear Anthropic people, please don't release Claude",['No drama'],2023-02-08T02:44:01Z,eaforum,,
141564,https://forum.effectivealtruism.org/posts/EMfLZXvwiEioPWPga/concrete-open-problems-in-mechanistic-interpretability-a,Concrete open problems in mechanistic interpretability: a technical overview,['Neel Nanda'],2023-07-06T11:35:46Z,eaforum,, 141589,https://forum.effectivealtruism.org/posts/SgeLEhS3zDfRBcXQG/speedrun-ai-alignment-prizes,Speedrun: AI Alignment Prizes,['joe'],2023-02-09T11:55:49Z,eaforum,, 141618,https://forum.effectivealtruism.org/posts/iJRrZQofGvt6q5nYg/introducing-the-mental-health-roadmap-series,Introducing the Mental Health Roadmap Series,"['Emily', 'Dave Cortright']",2023-04-11T22:26:28Z,eaforum,, 141641,https://forum.effectivealtruism.org/posts/2sMR7n32FSvLCoJLQ/critical-review-of-the-precipice-a-reassessment-of-the-risks,Critical Review of 'The Precipice': A Reassessment of the Risks of AI and Pandemics,['Fods12'],2020-05-11T11:11:13Z,eaforum,, 141671,https://forum.effectivealtruism.org/posts/ZLbS2WrHJdPGf24xh/amanda-askell-ai-safety-needs-social-scientists,Amanda Askell: AI safety needs social scientists,['EA Global'],2019-03-04T15:50:14Z,eaforum,, 141688,https://forum.effectivealtruism.org/posts/bEe4nRbShq8sWEE7n/the-top-ai-safety-bets-for-2023-givewiki-s-latest,The Top AI Safety Bets for 2023: GiveWiki’s Latest Recommendations,['Dawn Drescher'],2023-11-11T09:04:28Z,eaforum,, 141705,https://forum.effectivealtruism.org/posts/j5xhPbj7ywdv6aEJc/ama-future-of-life-institute-s-eu-team,AMA: Future of Life Institute's EU Team,['Risto Uuk'],2022-01-31T17:14:30Z,eaforum,, 141722,https://forum.effectivealtruism.org/posts/LMvFekp6tWSN9K6Em/foundations-for-a-longtermist-foreign-policy,Foundations for a Longtermist Foreign Policy,['Manuel Carranza'],2023-03-13T22:48:49Z,eaforum,, 141742,https://forum.effectivealtruism.org/posts/LbZN3YzXHe357EjcJ/information-in-risky-technology-races,Information in risky technology races,['nemeryxu'],2022-08-02T23:35:16Z,eaforum,,
141765,https://forum.effectivealtruism.org/posts/ZNPYMp2uu5zr3Po66/technological-unemployment-ai-vs-most-important-century-ai-1,“Technological unemployment” AI vs. “most important century” AI: how far apart?,['Holden Karnofsky'],2022-10-11T04:50:55Z,eaforum,, 141785,https://forum.effectivealtruism.org/posts/bL3riEPKqZKjdHmFg/when-will-we-spend-enough-to-train-transformative-ai,When Will We Spend Enough to Train Transformative AI,['Skye Nygaard'],2023-03-28T00:41:54Z,eaforum,, 141807,https://forum.effectivealtruism.org/posts/tAsyRARbkMym5D4jK/roodman-s-thoughts-on-biological-anchors,Roodman's Thoughts on Biological Anchors,['lukeprog'],2022-09-14T12:23:24Z,eaforum,, 141822,https://forum.effectivealtruism.org/posts/k2iaSvbnpQFzE4nLB/openai-s-new-preparedness-team-is-hiring,OpenAI’s new Preparedness team is hiring,['leopold'],2023-10-26T20:41:51Z,eaforum,, 141831,https://forum.effectivealtruism.org/posts/B3akyeohHhinHkGZM/ai-safety-newsletter-7-disinformation-governance,"AI Safety Newsletter #7: Disinformation, Governance Recommendations for AI labs, and Senate Hearings on AI","['Center for AI Safety', 'Dan H', 'Akash', 'aogara']",2023-05-23T21:42:58Z,eaforum,, 141862,https://forum.effectivealtruism.org/posts/BpxKj5P9dRBB4ged6/an-appeal-to-people-who-are-smarter-than-me-please-help-me,An appeal to people who are smarter than me: please help me clarify my thinking about AI,['bethhw'],2023-08-05T16:38:56Z,eaforum,, 141883,https://forum.effectivealtruism.org/posts/fbw7mg2CzBiHqRibr/ai-safety-scholarships-look-worth-funding-if-other-funding-2,AI safety scholarships look worth-funding (if other funding is sane),['anon-a'],2019-11-19T00:59:37Z,eaforum,, 141906,https://forum.effectivealtruism.org/posts/2e9NDGiXt8PjjbTMC/technical-agi-safety-research-outside-ai,Technical AGI safety research outside AI,['richard_ngo'],2019-10-18T15:02:21Z,eaforum,, 141943,https://forum.effectivealtruism.org/posts/H9uPyi6MGmzer5i9b/who-will-be-in-charge-once-alignment-is-achieved,Who will be in charge once alignment is achieved?,['trurl'],2022-12-16T16:53:30Z,eaforum,, 141961,https://forum.effectivealtruism.org/posts/MDkYSuCzFbEgGgtAd/ai-doom-and-david-hume-a-defence-of-empiricism-in-ai-safety,AI Doom and David Hume: A Defence of Empiricism in AI Safety,['Matt Beard'],2023-05-30T20:45:32Z,eaforum,, 141980,https://forum.effectivealtruism.org/posts/xmmqDdGqNZq5RELer/shallow-evaluations-of-longtermist-organizations,Shallow evaluations of longtermist organizations,['NunoSempere'],2021-06-24T15:31:25Z,eaforum,, 142017,https://forum.effectivealtruism.org/posts/wBzfLyfJFfocmdrwL/the-windfall-clause-has-a-remedies-problem,The Windfall Clause has a remedies problem,['John Bridge'],2022-05-23T10:31:46Z,eaforum,, 142053,https://forum.effectivealtruism.org/posts/dreEnpSSohfkmZdCB/assessing-the-dangerousness-of-malevolent-actors-in-agi,Assessing the Dangerousness of Malevolent Actors in AGI Governance: A Preliminary Exploration,"['Callum Hinchcliffe', 'RichardAnnilo']",2023-10-14T21:18:38Z,eaforum,, 142069,https://forum.effectivealtruism.org/posts/ei4pYFJKcbGAdGnNb/calling-for-student-submissions-ai-safety-distillation,Calling for Student Submissions: AI Safety Distillation Contest,['Aris Richardson'],2022-04-23T20:24:54Z,eaforum,, 142083,https://forum.effectivealtruism.org/posts/P4ut25NhfsFEMEeLJ/what-to-think-when-a-language-model-tells-you-it-s-sentient,What to think when a language model tells you it's sentient,['rgb'],2023-02-20T02:59:27Z,eaforum,, 142108,https://forum.effectivealtruism.org/posts/ciwf4JXfMjqqz7oFn/are-there-enough-opportunities-for-ai-safety-specialists,Are there enough opportunities for AI safety specialists?,['mhint199'],2023-05-13T21:18:04Z,eaforum,,
142131,https://forum.effectivealtruism.org/posts/bFDwxxfErRStMvuAQ/biological-anchors-external-review-by-jennifer-lin-linkpost,Biological Anchors external review by Jennifer Lin (linkpost),['peterhartree'],2022-11-30T13:06:44Z,eaforum,, 142146,https://forum.effectivealtruism.org/posts/tnxxy9SH9ebBkKDW7/expected-impact-of-a-career-in-ai-safety-under-different,Expected impact of a career in AI safety under different opinions,['Jordan Taylor'],2022-06-14T14:25:32Z,eaforum,, 142179,https://forum.effectivealtruism.org/posts/SkEgBb5KqNRWHcksa/linkpost-beware-the-squirrel-by-verity-harding,[Linkpost] Beware the Squirrel by Verity Harding,['Arden'],2023-09-03T21:04:19Z,eaforum,, 142203,https://forum.effectivealtruism.org/posts/QNhpbvyAHZwBiyKmB/retrospective-on-the-summer-2021-agi-safety-fundamentals,Retrospective on the Summer 2021 AGI Safety Fundamentals,['Dewi'],2021-12-06T20:10:14Z,eaforum,, 142223,https://forum.effectivealtruism.org/posts/MEEXNgCDTKccmWpmY/please-provide-feedback-on-ai-safety-grant-proposal-thanks,"Please provide feedback on AI-safety grant proposal, thanks!",['Alex Long'],2022-12-11T23:29:22Z,eaforum,, 142232,https://forum.effectivealtruism.org/posts/Y7croZavYcv88Z7WK/mlsn-8-mechanistic-interpretability-using-law-to-inform-ai,"[MLSN #8]: Mechanistic interpretability, using law to inform AI alignment, scaling laws for proxy gaming","['ThomasW', 'Dan H']",2023-02-20T16:06:03Z,eaforum,, 142277,https://forum.effectivealtruism.org/posts/LqWPSNRu2t46fv6hK/ai-acceleration-from-a-safety-perspective-trade-offs-and,AI acceleration from a safety perspective: Trade-offs and considerations,"['mariushobbhahn', 'Tilman']",2022-01-19T09:44:41Z,eaforum,, 142302,https://forum.effectivealtruism.org/posts/wvBsDMerZ7wrnoaLr/aria-is-looking-for-topics-for-roundtables,ARIA is looking for topics for roundtables,['Nathan_Barnard'],2022-08-26T19:14:34Z,eaforum,, 142312,https://forum.effectivealtruism.org/posts/2SW5cc9ZGmzAF8Xbf/concrete-existing-examples-of-high-impact-risks-from-ai,"Concrete, existing examples of high-impact risks from AI?",['freedomandutility'],2023-04-15T22:19:32Z,eaforum,, 142323,https://forum.effectivealtruism.org/posts/hASnoLMEFj3osCLKG/podcast-transcript-nathan-barnard-on-how-us-financial,Podcast (+transcript): Nathan Barnard on how US financial regulation can inform AI governance,"['Aaron Bergman', 'Nathan_Barnard']",2023-08-08T21:46:05Z,eaforum,, 142345,https://forum.effectivealtruism.org/posts/Ayu5im98u8FeMWoBZ/my-personal-cruxes-for-working-on-ai-safety,My personal cruxes for working on AI safety,['Buck'],2020-02-13T07:11:47Z,eaforum,, 142366,https://forum.effectivealtruism.org/posts/TYyHpiAQ3TetwRMHC/what-longtermist-projects-would-you-like-to-see-implemented,What longtermist projects would you like to see implemented?,['Buhl'],2023-03-28T18:41:56Z,eaforum,, 142387,https://forum.effectivealtruism.org/posts/x9pcT6dvaGKox4PT5/chilean-ais-hackathon-retrospective,Chilean AIS Hackathon Retrospective,"['Agustín Covarrubias', 'David Solar', 'Milan Weibel']",2023-05-09T01:34:27Z,eaforum,, 142406,https://forum.effectivealtruism.org/posts/p6kCzQ2QiWBW6wxCJ/the-animals-and-humans-analogy-for-ai-risk,The animals and humans analogy for AI risk,['freedomandutility'],2022-08-13T15:35:42Z,eaforum,, 142417,https://forum.effectivealtruism.org/posts/CictHfn8kdyupvpNK/future-matters-3-digital-sentience-agi-ruin-and-forecasting,"Future Matters #3: digital sentience, AGI ruin, and forecasting track records","['Pablo', 'matthew.vandermerwe']",2022-07-04T17:44:30Z,eaforum,, 
142438,https://forum.effectivealtruism.org/posts/rsnrpvKofps5Py7di/shutting-down-the-lightcone-offices,Shutting Down the Lightcone Offices,"['Habryka', 'Ben Pace']",2023-03-15T01:46:03Z,eaforum,, 142459,https://forum.effectivealtruism.org/posts/rvSFhWYuuBCxy5xpW/restricting-brain-organoid-research-to-slow-down-agi,Restricting brain organoid research to slow down AGI,['freedomandutility'],2022-11-09T13:01:51Z,eaforum,, 142478,https://forum.effectivealtruism.org/posts/axJcQL6ownMQQD5AA/event-join-metaculus-tomorrow-march-31st-for-forecast-friday,"[Event] Join Metaculus Tomorrow, March 31st, for Forecast Friday!",['christian'],2023-03-30T20:58:57Z,eaforum,, 142493,https://forum.effectivealtruism.org/posts/Rvmu5LLz8qGnFcGLz/deception-as-the-optimal-mesa-optimizers-and-inner-alignment,Deception as the optimal: mesa-optimizers and inner alignment,['Eleni_A'],2022-08-16T03:45:43Z,eaforum,, 142511,https://forum.effectivealtruism.org/posts/6va2EfHkQ3bTmdDyn/when-2-3rds-of-the-world-goes-against-you,When 2/3rds of the world goes against you,['Jeffrey Kursonis'],2022-07-02T20:34:44Z,eaforum,, 142528,https://forum.effectivealtruism.org/posts/eWphfg7bcDrqfqgqF/ben-horowitz-and-others-are-spreading-a-regulation-is-bad,"Ben Horowitz and others are spreading a ""regulation is bad"" view. Would it be useful to have a public bet on ""would Ben update his view if he had 1-1 with X-Risk researcher?"", and urge Ben to run such an experiment?",['AntonOsika'],2023-08-08T06:36:43Z,eaforum,, 142537,https://forum.effectivealtruism.org/posts/L6ZmggEJw8ri4KB8X/my-highly-personal-skepticism-braindump-on-existential-risk,My highly personal skepticism braindump on existential risk from artificial intelligence.,['NunoSempere'],2023-01-23T20:08:29Z,eaforum,, 142563,https://forum.effectivealtruism.org/posts/Qq8GEy6beot2o2CWw/my-choice-of-ai-misalignment-introduction-for-a-general,My choice of AI misalignment introduction for a general audience,['Bill'],2023-05-03T00:15:54Z,eaforum,, 142577,https://forum.effectivealtruism.org/posts/CHfuH58thMHPN8zHX/is-there-evidence-that-recommender-systems-are-changing,Is there evidence that recommender systems are changing users' preferences?,['zdgroff'],2021-04-12T19:11:42Z,eaforum,, 142588,https://forum.effectivealtruism.org/posts/furE5aznZCNDjkdmb/high-impact-job-opportunity-at-aria-uk,High impact job opportunity at ARIA (UK),['Rasool'],2023-02-12T10:35:02Z,eaforum,, 142597,https://forum.effectivealtruism.org/posts/xBDgtTXKfLjggzCn5/what-kind-of-event-targeted-to-undergraduate-cs-majors-would,"What kind of event, targeted to undergraduate CS majors, would be most effective at getting people to work on AI safety?",['CBiddulph'],2021-09-19T16:19:28Z,eaforum,, 142609,https://forum.effectivealtruism.org/posts/qrkiKXAHy6z7yABGv/applications-for-eu-tech-policy-fellowship-2024-now-open,Applications for EU Tech Policy Fellowship 2024 now open,"['Jan-Willem', 'Training for Good']",2023-09-13T16:17:43Z,eaforum,, 142620,https://forum.effectivealtruism.org/posts/THyzvDPThjK2P8fn3/meditations-on-careers-in-ai-safety,Meditations on careers in AI Safety,['PabloAMC'],2022-03-23T22:00:12Z,eaforum,, 142640,https://forum.effectivealtruism.org/posts/XQbhnKgXiRTv4vfxt/future-matters-4-ai-timelines-agi-risk-and-existential-risk,"Future Matters #4: AI timelines, AGI risk, and existential risk from climate change","['Pablo', 'matthew.vandermerwe']",2022-08-08T11:00:52Z,eaforum,, 142666,https://forum.effectivealtruism.org/posts/AuRBKFnjABa6c6GzC/what-success-looks-like,What success looks like,"['mariushobbhahn', 'MaxRa', 'Yannick_Muehlhaeuser', 'JasperGo', 'slg']",2022-06-28T14:30:37Z,eaforum,,
142689,https://forum.effectivealtruism.org/posts/dikcpP32Q3cg6tvdA/ai-incident-sharing-best-practices-from-other-fields-and-a,AI Incident Sharing - Best practices from other fields and a comprehensive list of existing platforms,['stepanlos'],2023-06-28T16:18:03Z,eaforum,, 142707,https://forum.effectivealtruism.org/posts/r72wjMns9wyaAhWhc/the-ai-messiah,The AI Messiah,['ryancbriggs'],2022-05-05T16:58:11Z,eaforum,, 142716,https://forum.effectivealtruism.org/posts/7JeqhfQq9QTtkEbjL/seeking-survey-responses-attitudes-towards-ai-risks,Seeking Survey Responses - Attitudes Towards AI risks,['anson'],2022-03-28T17:47:50Z,eaforum,, 142726,https://forum.effectivealtruism.org/posts/JD6QvQG3q5p6heKuA/philanthropists-probably-shouldn-t-mission-hedge-ai-progress,Philanthropists Probably Shouldn't Mission-Hedge AI Progress,['MichaelDickens'],2022-08-23T23:03:49Z,eaforum,, 142746,https://forum.effectivealtruism.org/posts/F2YfRtMvHfRJibwkj/promoting-compassionate-longtermism,Promoting compassionate longtermism,['jonleighton'],2022-12-07T14:26:29Z,eaforum,, 142778,https://forum.effectivealtruism.org/posts/FvgQjicdSk6S7xQvC/announcing-the-itam-ai-futures-fellowship,Announcing the ITAM AI Futures Fellowship,"['AmAristizabal', 'Jaime Andres Fernandez']",2023-07-28T16:44:12Z,eaforum,, 142787,https://forum.effectivealtruism.org/posts/HpLzFjr8PePrDpmCZ/the-terminology-of-artificial-sentience-1,The Terminology of Artificial Sentience,['Janet Pauketat'],2021-11-28T07:52:34Z,eaforum,, 142800,https://forum.effectivealtruism.org/posts/n5CNeo9jxDsCit9dj/good-policy-ideas-that-won-t-happen-yet,Good policy ideas that won’t happen (yet),['Niel_Bowerman'],2014-09-11T12:29:11Z,eaforum,, 142819,https://forum.effectivealtruism.org/posts/eXwZEjGrx6JJgtEPG/any-philosophy-phd-recommendations-for-students-interested,Any Philosophy PhD recommendations for students interested in Alignment Efforts?,['rickyhuang.hexuan'],2023-01-18T05:54:26Z,eaforum,, 142829,https://forum.effectivealtruism.org/posts/ayWPwLRjxecTLEDkN/ea-and-ai-safety-schism-agi-the-last-tech-humans-will-soon,"EA and AI Safety Schism: AGI, the last tech humans will (soon*) build",['Phib'],2023-05-15T02:05:22Z,eaforum,, 142849,https://forum.effectivealtruism.org/posts/HXoHFHCSFX7Cxn7hC/summary-of-posts-on-xpt-forecasts-on-ai-risk-and-timelines,Summary of posts on XPT forecasts on AI risk and timelines,"['Forecasting Research Institute', 'rosehadshar']",2023-07-25T08:42:55Z,eaforum,, 142869,https://forum.effectivealtruism.org/posts/QrwnajRpteBZhQZnu/sydney-ai-safety-fellowship,Sydney AI Safety Fellowship,['Chris Leong'],2021-12-02T07:35:00Z,eaforum,, 142878,https://forum.effectivealtruism.org/posts/uPu2tbzowqCw73dbh/project-idea-the-cost-of-coccidiosis-on-chicken-farming-and,Project Idea: The cost of Coccidiosis on Chicken farming and if AI can help,['Max Harris'],2022-09-26T16:30:23Z,eaforum,, 142896,https://forum.effectivealtruism.org/posts/gSGhrCXdntxLrMAmJ/ai-strategy-career-pipeline,AI strategy career pipeline,['Zach Stein-Perlman'],2023-05-22T00:00:38Z,eaforum,, 142917,https://forum.effectivealtruism.org/posts/ZzwMBRq5KAo6wfP4K/future-matters-5-supervolcanoes-ai-takeover-and-what-we-owe,"Future Matters #5: supervolcanoes, AI takeover, and What We Owe the Future","['Pablo', 'matthew.vandermerwe']",2022-09-14T13:02:11Z,eaforum,, 142934,https://forum.effectivealtruism.org/posts/QjeruoGQmYZh2ZsCt/how-long-till-brussels-a-light-investigation-into-the,How long till Brussels?: A light investigation into the Brussels Gap,['Yadav'],2022-12-26T07:49:40Z,eaforum,,
142950,https://forum.effectivealtruism.org/posts/vD3yDaDBLerMLdCQx/summary-of-technology-favours-tyranny-by-yuval-noah-harari,"Summary of ""Technology Favours Tyranny"" by Yuval Noah Harari",['Madhav Malhotra'],2022-10-26T21:37:37Z,eaforum,, 142973,https://forum.effectivealtruism.org/posts/np3KfjMadGsRc5qCm/artificial-intelligence-governance-under-change-phd,'Artificial Intelligence Governance under Change' (PhD dissertation),['MMMaas'],2022-09-15T12:10:14Z,eaforum,, 142996,https://forum.effectivealtruism.org/posts/vGfJnwq6X7hwhG3wy/all-agi-safety-questions-welcome-especially-basic-ones-july,All AGI Safety questions welcome (especially basic ones) [July 2023],"['Siao Si', 'Stampy']",2023-07-19T18:08:29Z,eaforum,, 143009,https://forum.effectivealtruism.org/posts/FWcJzBdfNF3mCKP47/ea-and-lw-forums-weekly-summary-5-11-sep-22,EA & LW Forums Weekly Summary (5 - 11 Sep 22’),['Zoe Williams'],2022-09-12T23:21:59Z,eaforum,, 143030,https://forum.effectivealtruism.org/posts/TYmufd5pMED6LreJb/how-to-escape-from-the-simulation-seeds-of-science-call-for,"""How to Escape from the Simulation"" - Seeds of Science call for reviewers",['rogersbacon1'],2023-01-26T15:12:34Z,eaforum,, 143043,https://forum.effectivealtruism.org/posts/MDNcMLQfxg2n9qXEZ/agi-catastrophe-and-takeover-some-reference-class-based,AGI Catastrophe and Takeover: Some Reference Class-Based Priors,['zdgroff'],2023-05-24T19:14:30Z,eaforum,, 143058,https://forum.effectivealtruism.org/posts/4s7m4Eru29XKCL7xa/apply-to-the-machine-learning-for-good-bootcamp-in-france,Apply to the Machine Learning For Good bootcamp in France,['Alexandre Variengien'],2022-06-17T09:13:07Z,eaforum,, 143068,https://forum.effectivealtruism.org/posts/JZEgmumeamzBAAprt/how-come-there-isn-t-that-much-focus-in-ea-on-research-into,How come there isn't that much focus in EA on research into whether / when AI's are likely to be sentient?,['callum'],2023-04-27T10:09:08Z,eaforum,, 143080,https://forum.effectivealtruism.org/posts/K3XiFGMdAQXBGTFSH/a-survey-of-concrete-risks-derived-from-artificial,A survey of concrete risks derived from Artificial Intelligence,"['Guillem Bas', 'Roberto Tinoco', 'Jaime Sevilla', 'Mónica Ulloa', 'JorgeTorresC']",2023-06-08T22:09:00Z,eaforum,, 143101,https://forum.effectivealtruism.org/posts/2hvYyzfWv4J3JLB8p/safety-first-agents-architectures-are-a-promising-path-to,Safety-First Agents/Architectures Are a Promising Path to Safe AGI,['Brendon_Wong'],2023-08-06T08:00:38Z,eaforum,, 143121,https://forum.effectivealtruism.org/posts/AzwXdHqAkBhbHSQZL/a-primer-on-god-liberalism-and-the-end-of-history,"A Primer on God, Liberalism and the End of History",['Mahdi Complex'],2022-03-28T05:26:19Z,eaforum,, 143144,https://forum.effectivealtruism.org/posts/ZtZmkgDW6MH8AEEK6/how-much-do-markets-value-open-ai,How much do markets value Open AI?,['Ben_West'],2023-05-14T19:28:38Z,eaforum,, 143155,https://forum.effectivealtruism.org/posts/Nrq9v3Kii7EmAhFk2/jade-leung-and-seth-baum-the-role-of-existing-institutions,Jade Leung and Seth Baum: The role of existing institutions in AI strategy,['EA Global'],2018-06-08T07:15:00Z,eaforum,, 143186,https://forum.effectivealtruism.org/posts/68ANc8KhEn6sbQ3P9/ai-governance-course-curriculum-and-application,AI Governance Course - Curriculum and Application,['Mauricio'],2021-11-29T13:29:30Z,eaforum,, 143195,https://forum.effectivealtruism.org/posts/EjGowxHhRifb2r8tE/welcome-to-apply-the-2024-vitalik-buterin-fellowships-in-ai,Welcome to Apply: The 2024 Vitalik Buterin Fellowships in AI Existential Safety by FLI!,['Zhijing Jin'],2023-09-25T16:20:57Z,eaforum,,
143205,https://forum.effectivealtruism.org/posts/SkkAo8W4rg5kGrkTc/we-ran-an-alignment-workshop,We Ran an Alignment Workshop,['aiden ament'],2023-01-21T05:37:17Z,eaforum,, 143220,https://forum.effectivealtruism.org/posts/2tumunFmjBuXdfF2F/survey-on-ai-existential-risk-scenarios-1,Survey on AI existential risk scenarios,"['Sam Clarke', 'ac', 'jonasschuett']",2021-06-08T17:12:30Z,eaforum,, 143237,https://forum.effectivealtruism.org/posts/Knh38LJi5aQDkFKMc/uk-government-announces-gbp100-million-in-funding-for,UK Government announces £100 million in funding for Foundation Model Taskforce.,['jwpieters'],2023-04-25T11:29:52Z,eaforum,, 143253,https://forum.effectivealtruism.org/posts/oyf3qbXSh8FJGicof/raising-the-voices-that-actually-count,Raising the voices that actually count,['Kim Holder'],2023-06-13T19:21:16Z,eaforum,, 143269,https://forum.effectivealtruism.org/posts/gNHjEmLeKM47FDdqM/introducing-the-fund-for-alignment-research-we-re-hiring-1,Introducing the Fund for Alignment Research (We're Hiring!),"['AdamGleave', 'Scott Emmons', 'Ethan Perez', 'claudiashi']",2022-07-06T02:00:07Z,eaforum,, 143290,https://forum.effectivealtruism.org/posts/iBeWbfQLA9EKfsdhu/why-we-re-not-founding-a-human-data-for-alignment-org,Why we're not founding a human-data-for-alignment org,"['LRudL', 'Mathieu Putz']",2022-09-27T20:14:01Z,eaforum,, 143310,https://forum.effectivealtruism.org/posts/7RrjXQhGgAJiDLWYR/what-does-a-marginal-grant-at-ltff-look-like-funding,What Does a Marginal Grant at LTFF Look Like? Funding Priorities and Grantmaking Thresholds at the Long-Term Future Fund,"['Linch', 'calebp', 'Daniel_Eth']",2023-08-10T20:11:50Z,eaforum,, 143326,https://forum.effectivealtruism.org/posts/9mFT7rm9wvmq9uB2m/mlsn-9-verifying-large-training-runs-security-risks-from-llm,"[MLSN #9] Verifying large training runs, security risks from LLM access to APIs, why natural selection may favor AIs over humans","['ThomasW', 'Dan H']",2023-04-11T16:05:25Z,eaforum,, 143365,https://forum.effectivealtruism.org/posts/F8DEipkSoTG3Zztkc/eag-dc-meta-bottlenecks-in-preventing-ai-doom,EAG DC: Meta-Bottlenecks in Preventing AI Doom,['Joseph Bloom'],2022-09-30T17:53:21Z,eaforum,, 143388,https://forum.effectivealtruism.org/posts/vYb2qEyqv76L62izD/saying-ai-safety-research-is-a-pascal-s-mugging-isn-t-a,Saying 'AI safety research is a Pascal's Mugging' isn't a strong response,['Robert_Wiblin'],2015-12-15T13:48:27Z,eaforum,, 143398,https://forum.effectivealtruism.org/posts/wbi3y8PsswpYvn5YN/constructive-discussion-and-thinking-methodology-for-severe,Constructive Discussion and Thinking Methodology for Severe Situations including Existential Risks,['Aino'],2023-07-08T00:04:39Z,eaforum,, 143423,https://forum.effectivealtruism.org/posts/KJw6RDm4M6gAfqW6X/examples-of-self-governance-to-reduce-technology-risk,Examples of self-governance to reduce technology risk?,['jia'],2020-09-25T13:26:17Z,eaforum,, 143442,https://forum.effectivealtruism.org/posts/CNxteiKdRk9Hez3pv/ea-relevant-foresight-institute-workshops-in-2023-wbe-and-ai,"EA relevant Foresight Institute Workshops in 2023: WBE & AI safety, Cryptography & AI safety, XHope, Space, and Atomically Precise Manufacturing",['elteerkers'],2023-01-16T14:03:00Z,eaforum,, 143452,https://forum.effectivealtruism.org/posts/cKW4db8u2uFEAHewg/thoughts-on-responsible-scaling-policies-and-regulation,Thoughts on responsible scaling policies and regulation,['Paul_Christiano'],2023-10-24T22:25:15Z,eaforum,,
143489,https://forum.effectivealtruism.org/posts/MxnCf9qBTFygKnziF/embracing-the-automated-future,Embracing the automated future,['Arjun Khemani'],2023-07-16T08:47:56Z,eaforum,, 143506,https://forum.effectivealtruism.org/posts/JN6wm6u5MMmqwdnEs/metaculus-predictions-are-much-better-than-low-information,Metaculus’ predictions are much better than low-information priors,['Vasco Grilo'],2023-04-11T08:36:58Z,eaforum,, 143522,https://forum.effectivealtruism.org/posts/WmrCQTkTgDuk5RhCP/perform-tractable-research-while-avoiding-capabilities,Perform Tractable Research While Avoiding Capabilities Externalities [Pragmatic AI Safety #4],"['ThomasW', 'Dan H']",2022-05-30T20:37:41Z,eaforum,, 143557,https://forum.effectivealtruism.org/posts/jWvgcLikfZj9MWhon/linkpost-eric-schwitzgebel-ai-systems-must-not-confuse-users,[Linkpost] Eric Schwitzgebel: AI systems must not confuse users about their sentience or moral status,['Zachary Brown'],2023-08-18T17:21:32Z,eaforum,, 143577,https://forum.effectivealtruism.org/posts/mPkFheB4EM6pmEC7y/transformative-ai-issues-not-just-misalignment-an-overview,Transformative AI issues (not just misalignment): an overview,['Holden Karnofsky'],2023-01-06T02:19:42Z,eaforum,, 143619,https://forum.effectivealtruism.org/posts/ivHfucqDNeFAR5mkH/newsletter-for-alignment-research-the-ml-safety-updates,Newsletter for Alignment Research: The ML Safety Updates,"['Esben Kran', 'Thomas Steinthal', 'Sabrina Zaki', 'Apart Research']",2022-10-22T16:17:18Z,eaforum,, 143655,https://forum.effectivealtruism.org/posts/PhG6vQahtDhsmQ4BE/the-multidisciplinary-approach-to-alignment-mata-and,The Multidisciplinary Approach to Alignment (MATA) and Archetypal Transfer Learning (ATL),['Miguel'],2023-06-19T03:23:12Z,eaforum,, 143679,https://forum.effectivealtruism.org/posts/coJu2iSwCDf2z5LJH/request-for-assistance-research-on-scenario-development-for,Request for Assistance - Research on Scenario Development for Advanced AI Risk,['Kiliank'],2022-03-30T03:01:28Z,eaforum,, 143690,https://forum.effectivealtruism.org/posts/j9nLvT5ej8mKc4fhi/ml-summer-bootcamp-reflection-aalto-ea-finland,ML Summer Bootcamp Reflection: Aalto EA Finland,['Aayush Kucheria'],2023-01-12T08:24:45Z,eaforum,, 143710,https://forum.effectivealtruism.org/posts/j9yT9Sizu2sjNuygR/why-agi-systems-will-not-be-fanatical-maximisers-unless,Why AGI systems will not be fanatical maximisers (unless trained by fanatical humans),['titotal'],2023-05-17T11:58:30Z,eaforum,, 143731,https://forum.effectivealtruism.org/posts/akbwyBioGBd68CsNx/summary-the-case-for-halting-ai-development-max-tegmark-on,Summary: The Case for Halting AI Development - Max Tegmark on the Lex Fridman Podcast,['Madhav Malhotra'],2023-04-16T22:28:55Z,eaforum,, 143761,https://forum.effectivealtruism.org/posts/J6QCmkQmuRaP7skje/differential-technology-development-preprint-on-the-concept,Differential technology development: preprint on the concept,"['Hamish_Hobbs', 'jbs', 'Allan Dafoe']",2022-09-12T13:52:51Z,eaforum,, 143775,https://forum.effectivealtruism.org/posts/nczavbHtYCjwrRK75/opec-for-a-slow-agi-takeoff,OPEC for a slow AGI takeoff,['vyrax'],2023-04-21T10:53:30Z,eaforum,, 143787,https://forum.effectivealtruism.org/posts/SMsobbG7tgya2neN9/ai-cybersecurity-and-malware-a-shallow-report-technical,"AI, Cybersecurity, and Malware: A Shallow Report [Technical]",['Madhav Malhotra'],2023-03-31T12:03:02Z,eaforum,,
143815,https://forum.effectivealtruism.org/posts/B3NyGg24gtdKETnXw/cfp-neurips-workshop-ai-meets-moral-philosophy-and-moral,[CFP] NeurIPS workshop: AI meets Moral Philosophy and Moral Psychology,['jaredlcm'],2023-09-04T06:21:37Z,eaforum,, 143848,https://forum.effectivealtruism.org/posts/ukszSQHPMN4kyyKRx/2023-stanford-existential-risks-conference,2023 Stanford Existential Risks Conference,['elizabethcooper'],2023-02-24T17:49:31Z,eaforum,, 143863,https://forum.effectivealtruism.org/posts/x3MSTGhxChEZsRvwB/yudkowsky-and-christiano-on-ai-takeoff-speeds-linkpost,Yudkowsky and Christiano on AI Takeoff Speeds [LINKPOST],['aogara'],2022-04-05T00:57:29Z,eaforum,, 143886,https://forum.effectivealtruism.org/posts/5wwcMr8tDqCwZrDGM/linkpost-longtermists-are-pushing-a-new-cold-war-with-china,[Linkpost] Longtermists Are Pushing a New Cold War With China,['Mohammad Ismam Huda'],2023-05-27T06:53:28Z,eaforum,, 143902,https://forum.effectivealtruism.org/posts/74CkwGxmXaevwzhNG/linkpost-7-a-i-companies-agree-to-safeguards-after-pressure,Linkpost: 7 A.I. Companies Agree to Safeguards After Pressure From the White House,['MHR'],2023-07-21T13:23:13Z,eaforum,, 143928,https://forum.effectivealtruism.org/posts/DPfGxeWFLQaWEgBTj/how-many-people-are-neartermist-and-have-high-p-doom,How many people are neartermist and have high P(doom)?,['Sanjay'],2023-08-02T14:24:29Z,eaforum,, 143937,https://forum.effectivealtruism.org/posts/PJLx7CwB4mtaDgmFc/critiques-of-non-existent-ai-safety-labs-yours,Critiques of non-existent AI safety labs: Yours,['Anneal'],2023-06-16T06:50:48Z,eaforum,, 143955,https://forum.effectivealtruism.org/posts/fmk8xJG2TPBc2W7zo/paul-christiano-on-how-openai-is-developing-real-solutions,"Paul Christiano on how OpenAI is developing real solutions to the 'AI alignment problem', and his vision of how humanity will progressively hand over decision-making to AI systems",['80000_Hours'],2018-10-02T11:49:34Z,eaforum,, 143995,https://forum.effectivealtruism.org/posts/umeMcbD4jDseLjsgT/singapore-ai-policy-career-guide,Singapore AI Policy Career Guide,['Yi-Yang'],2021-01-21T03:06:00Z,eaforum,, 144022,https://forum.effectivealtruism.org/posts/nTZ6bnm8HFjjJWBmt/mitigating-x-risk-through-modularity,Mitigating x-risk through modularity,['Toby Newberry'],2020-12-17T19:54:26Z,eaforum,, 144043,https://forum.effectivealtruism.org/posts/e9htD7txe8RDdcehm/exploring-metaculus-s-ai-track-record,Exploring Metaculus’s AI Track Record,"['Peter Scoblic', 'Peter Mühlbacher', 'christian']",2023-05-01T21:02:45Z,eaforum,, 144071,https://forum.effectivealtruism.org/posts/iyzik5qmvsQYjsqXu/my-suggestions-on-beginner-steps-in-ai-alignment,(My suggestions) On Beginner Steps in AI Alignment,['Joseph Bloom'],2022-09-22T15:32:59Z,eaforum,, 144101,https://forum.effectivealtruism.org/posts/42reWndoTEhFqu6T8/ai-governance-opportunity-and-theory-of-impact,AI Governance: Opportunity and Theory of Impact,['Allan Dafoe'],2020-09-17T06:30:02Z,eaforum,, 144121,https://forum.effectivealtruism.org/posts/WSCAck83BbYR5ZyRF/longtermism-and-shorttermism-can-disagree-on-nuclear-war-to,Longtermism and shorttermism can disagree on nuclear war to stop advanced AI,['David Johnston'],2023-03-30T23:22:36Z,eaforum,, 144136,https://forum.effectivealtruism.org/posts/NsHSu2wLWpgiALbwm/why-i-expect-successful-narrow-alignment,Why I expect successful (narrow) alignment,['Tobias_Baumann'],2018-12-29T15:46:05Z,eaforum,, 144152,https://forum.effectivealtruism.org/posts/ipvSN3Hn5vejSi36q/does-the-idea-of-agi-that-benevolently-control-us-appeal-to,Does the idea of AGI that benevolently control us appeal to EA folks?,['Noah Scales'],2022-07-16T19:17:57Z,eaforum,,
144161,https://forum.effectivealtruism.org/posts/7cCr6vAmN4Xi3yzR5/two-contrasting-models-of-intelligence-and-future-growth,Two contrasting models of “intelligence” and future growth,['Magnus Vinding'],2022-11-24T11:54:23Z,eaforum,, 144183,https://forum.effectivealtruism.org/posts/9CtcDEZCAgNkJF9pf/buck-shlegeris-how-i-think-students-should-orient-to-ai,Buck Shlegeris: How I think students should orient to AI safety,['EA Global'],2020-10-25T05:48:00Z,eaforum,, 144192,https://forum.effectivealtruism.org/posts/Cfse8PEW8yDC9oAdu/existential-risk-x-crypto-an-unconference-at-zuzalu,Existential risk x Crypto: An unconference at Zuzalu,['Yesh'],2023-04-11T13:31:16Z,eaforum,, 144207,https://forum.effectivealtruism.org/posts/FANYsqzPM9Yht3KM2/the-replication-and-emulation-of-gpt-3,The replication and emulation of GPT-3,['Ben Cottier'],2022-12-21T13:49:54Z,eaforum,, 144229,https://forum.effectivealtruism.org/posts/AYGNbYeB7bHjwidiz/why-is-no-one-trying-to-align-profit-incentives-with,Why Is No One Trying To Align Profit Incentives With Alignment Research?,['Prometheus'],2023-08-23T13:19:25Z,eaforum,, 144252,https://forum.effectivealtruism.org/posts/HYHaBsukLkoE72zTd/andrew-critch-logical-induction-progress-in-ai-alignment,Andrew Critch: Logical induction — progress in AI alignment,['EA Global'],2016-08-06T00:40:00Z,eaforum,, 144261,https://forum.effectivealtruism.org/posts/2RurEJXi5PqbEsCZb/a-list-of-good-heuristics-that-the-case-for-ai-x-risk-fails,A list of good heuristics that the case for AI X-risk fails,['Aaron Gertler'],2020-07-16T09:56:21Z,eaforum,, 144279,https://forum.effectivealtruism.org/posts/nErbrZDTjzo8wxuvP/please-share-your-perspectives-on-the-degree-of-societal,Please Share Your Perspectives on the Degree of Societal Impact from Transformative AI Outcomes,['Kiliank'],2022-04-15T01:23:29Z,eaforum,, 144290,https://forum.effectivealtruism.org/posts/rYNGXyCBFQSGupqJA/what-s-so-dangerous-about-ai-anyway-or-what-it-means-to-be-a,What’s so dangerous about AI anyway? – Or: What it means to be a superintelligence,['Thomas Kehrenberg'],2022-07-18T16:14:26Z,eaforum,,
144308,https://forum.effectivealtruism.org/posts/tyneYFeDqBgXYykxG/october-2022-ai-risk-community-survey-results,October 2022 AI Risk Community Survey Results,['Froolow'],2023-05-24T10:37:10Z,eaforum,, 144329,https://forum.effectivealtruism.org/posts/HatYvQkGFMCj2BnzH/ai-value-alignment-speaker-series-presented-by-ea-berkeley,AI Value Alignment Speaker Series Presented By EA Berkeley,['Mahendra Prasad'],2022-03-01T06:17:24Z,eaforum,, 144338,https://forum.effectivealtruism.org/posts/NzA9D9m823Tx2msm3/stackelberg-games-and-cooperative-commitment-my-thoughts-and,Stackelberg Games and Cooperative Commitment: My Thoughts and Reflections on a 2-Month Research Project,['Ben Bucknall'],2021-12-13T10:49:17Z,eaforum,, 144368,https://forum.effectivealtruism.org/posts/ZWjDkENuFohPShTyc/my-lab-s-small-ai-safety-agenda,My lab's small AI safety agenda,['Jobst Heitzig (vodle.it)'],2023-06-18T12:29:02Z,eaforum,, 144386,https://forum.effectivealtruism.org/posts/y7pCAoghcNKhhufCS/ai-pause-governance-advocacy-might-be-net-negative,"AI pause/governance advocacy might be net-negative, especially without focus on explaining the x-risk",['Samin'],2023-09-01T15:23:45Z,eaforum,, 144407,https://forum.effectivealtruism.org/posts/3kaojgsu6qy2n8TdC/pre-announcing-the-2023-open-philanthropy-ai-worldviews,Pre-Announcing the 2023 Open Philanthropy AI Worldviews Contest,['Jason Schukraft'],2022-11-21T21:45:26Z,eaforum,, 144416,https://forum.effectivealtruism.org/posts/yBavn9rtThLFTfLoz/fundamentals-of-fatal-risks,Fundamentals of Fatal Risks,['Aino'],2023-07-29T07:12:38Z,eaforum,, 144435,https://forum.effectivealtruism.org/posts/ZyjARuFsDBTFXeMP4/misalignment-museum-opens-in-san-francisco-sorry-for-killing,Misalignment Museum opens in San Francisco: ‘Sorry for killing most of humanity’,['Michael Huang'],2023-03-04T07:09:59Z,eaforum,, 144457,https://forum.effectivealtruism.org/posts/Fw7wtyCZAaJdioKWE/ai-safety-newsletter-8-rogue-ais-how-to-screen-for-ai-risks,"AI Safety Newsletter #8: Rogue AIs, how to screen for AI risks, and grants for research on democratic governance of AI","['Center for AI Safety', 'Dan H', 'Akash', 'aogara']",2023-05-30T11:44:14Z,eaforum,, 144489,https://forum.effectivealtruism.org/posts/yiTcjSWuy7ptTb5XS/what-is-it-like-doing-ai-safety-work,What is it like doing AI safety work?,"['Kat Woods', 'peterbarnett']",2023-02-21T19:24:19Z,eaforum,, 144511,https://forum.effectivealtruism.org/posts/DW4FyzRTfBfNDWm6J/some-cruxes-on-impactful-alternatives-to-ai-policy-work,Some cruxes on impactful alternatives to AI policy work,['richard_ngo'],2018-11-22T13:43:41Z,eaforum,, 144542,https://forum.effectivealtruism.org/posts/ZjQ2fXpATBMvnBzzj/measuring-artificial-intelligence-on-human-benchmarks-is,Measuring artificial intelligence on human benchmarks is naive,['Ward A'],2023-04-11T11:28:38Z,eaforum,, 144556,https://forum.effectivealtruism.org/posts/7yjd2wJjSqbzz3dZX/by-failing-to-take-serious-ai-action-the-us-could-be-in,"By failing to take serious AI action, the US could be in violation of its international law obligations",['Cecil Abungu'],2023-05-27T04:25:41Z,eaforum,, 144573,https://forum.effectivealtruism.org/posts/pcbsM45vLmHcFpNnr/president-biden-issues-executive-order-on-safe-secure-and,"President Biden Issues Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence",['Tristan Williams'],2023-10-30T11:15:38Z,eaforum,,
144618,https://forum.effectivealtruism.org/posts/CMsdrbq9zQtHwvcdE/what-are-the-biggest-obstacles-on-ai-safety-research-career,What are the biggest obstacles on AI safety research career?,['jackchang110'],2023-03-31T14:53:44Z,eaforum,, 144628,https://forum.effectivealtruism.org/posts/ExpBagkng6QSqcN8d/a-model-based-approach-to-ai-existential-risk,A model-based approach to AI Existential Risk,['SammyDMartin'],2023-08-25T10:44:14Z,eaforum,, 144639,https://forum.effectivealtruism.org/posts/DDTYxpK42B495MPqM/how-can-i-bet-on-short-timelines,How can I bet on short timelines?,['kokotajlod'],2020-11-07T12:45:46Z,eaforum,, 144661,https://forum.effectivealtruism.org/posts/ozSBaNLysue9MmFqs/aptitudes-for-ai-governance-work,Aptitudes for AI governance work,['Sam Clarke'],2023-06-13T13:54:23Z,eaforum,, 144677,https://forum.effectivealtruism.org/posts/kcE93PGPByM3Z7iGT/you-don-t-need-to-be-a-genius-to-be-in-ai-safety-research,You don't need to be a genius to be in AI safety research,['Claire Short'],2023-05-10T22:23:32Z,eaforum,, 144696,https://forum.effectivealtruism.org/posts/MeigEG9KgJp9jFFuR/train-for-incorrigibility-then-reverse-it-shutdown-problem,"Train for incorrigibility, then reverse it (Shutdown Problem Contest Submission)",['Daniel_Eth'],2023-07-18T08:26:20Z,eaforum,, 144708,https://forum.effectivealtruism.org/posts/NJtC8xzD8BgF3TmEp/agi-safety-needs-people-with-all-skillsets,AGI Safety Needs People With All Skillsets!,['Severin'],2022-07-25T13:30:26Z,eaforum,, 144722,https://forum.effectivealtruism.org/posts/KdxGwxwY3t7iw9xjB/three-impacts-of-machine-intelligence,Three Impacts of Machine Intelligence,['Paul_Christiano'],2013-08-23T10:10:23Z,eaforum,, 144745,https://forum.effectivealtruism.org/posts/bb64TfCFbm8kNeGx4/20-critiques-of-ai-safety-that-i-found-on-twitter,20 Critiques of AI Safety That I Found on Twitter,['Daniel Kirmani'],2022-06-23T15:11:03Z,eaforum,, 144755,https://forum.effectivealtruism.org/posts/MWWZQ8C655iT9zzRd/encultured-ai-part-2-providing-a-service,"Encultured AI, Part 2: Providing a Service","['Andrew Critch', 'Nick Hay']",2022-08-11T20:13:45Z,eaforum,, 144769,https://forum.effectivealtruism.org/posts/AmA9gQMhqAQW8bC4W/i-have-thousands-of-copies-of-hpmor-in-russian-how-to-use,I have thousands of copies of HPMOR in Russian. How to use them with the most impact?,['Samin'],2022-12-27T11:07:09Z,eaforum,,
144784,https://forum.effectivealtruism.org/posts/5h8bNTFHkrNNzrrJf/results-from-the-ai-testing-hackathon,Results from the AI testing hackathon,"['Esben Kran', 'HaydnBelfield', 'Apart Research']",2023-01-02T15:46:44Z,eaforum,, 144818,https://forum.effectivealtruism.org/posts/YscrJFofd6S8eJGS8/defending-against-adversarial-policies-in-reinforcement,Defending against Adversarial Policies in Reinforcement Learning with Alternating Training,['sergia'],2022-02-12T15:59:00Z,eaforum,, 144852,https://forum.effectivealtruism.org/posts/Nxtq2d8Xb3QuuHKE8/the-bullseye-framework-my-case-against-ai-doom,The bullseye framework: My case against AI doom,['titotal'],2023-05-30T11:52:31Z,eaforum,, 144883,https://forum.effectivealtruism.org/posts/78NoGoRitPzeT8nga/draft-report-on-existential-risk-from-power-seeking-ai,Draft report on existential risk from power-seeking AI,['Joe_Carlsmith'],2021-04-28T21:41:04Z,eaforum,, 144904,https://forum.effectivealtruism.org/posts/c5SeLNpnHNNif6Doz/announcing-the-ai-safety-nudge-competition-to-help-beat,Announcing the AI Safety Nudge Competition to Help Beat Procrastination,"['Marc Carauleanu', 'Chris Leong']",2022-10-01T01:49:23Z,eaforum,, 144927,https://forum.effectivealtruism.org/posts/ZXTqMektxs2LNyMim/my-argument-against-agi,My argument against AGI,['cveres'],2022-10-12T06:32:26Z,eaforum,, 144939,https://forum.effectivealtruism.org/posts/EAmfYSBaJsMzHY2cW/ai-fables-writing-contest-winners,AI Fables Writing Contest Winners!,['Daystar Eld'],2023-11-06T02:27:45Z,eaforum,, 144957,https://forum.effectivealtruism.org/posts/BCoWhBsZbDzaywAdp/moral-spillover-in-human-ai-interaction,Moral Spillover in Human-AI Interaction,['Katerina Manoli'],2023-06-05T15:20:53Z,eaforum,, 144984,https://forum.effectivealtruism.org/posts/foptmf8C25TzJuit6/gpt-3-like-models-are-now-much-easier-to-access-and-deploy,GPT-3-like models are now much easier to access and deploy than to develop,['Ben Cottier'],2022-12-21T13:49:44Z,eaforum,, 145004,https://forum.effectivealtruism.org/posts/aBKTkqeMC4AHoinFc/owain-evans-on-llms-truthful-ai-ai-composition-and-more,"Owain Evans on LLMs, Truthful AI, AI Composition, and More","['Ozzie Gooen', 'Owain_Evans']",2023-05-02T01:20:18Z,eaforum,, 145026,https://forum.effectivealtruism.org/posts/tgDRDKaMxP9okcrWJ/preliminary-investigations-on-if-stem-and-ea-communities,Preliminary investigations on if STEM and EA communities could benefit from more overlap,['elteerkers'],2023-04-11T16:08:54Z,eaforum,, 145047,https://forum.effectivealtruism.org/posts/3KsvReHD6CckfwHak/if-contractualism-then-amf,"If Contractualism, Then AMF",['Bob Fischer'],2023-10-13T18:03:04Z,eaforum,, 145060,https://forum.effectivealtruism.org/posts/EyJEL84MGz9KAAyx9/uk-ai-policy-report-content-summary-and-its-impact-on-ea,"UK AI Policy Report: Content, Summary, and its Impact on EA Cause Areas",['Algo_Law'],2022-07-21T17:32:33Z,eaforum,, 145085,https://forum.effectivealtruism.org/posts/xczCcEhp4uy3zvNEv/crises-reveal-centralisation-stefan-schubert,Crises Reveal Centralisation (Stefan Schubert),['Will Howard'],2023-05-10T09:45:58Z,eaforum,, 145098,https://forum.effectivealtruism.org/posts/saAXc8zsFgZuxFM6L/xpt-forecasts-on-some-direct-approach-model-inputs,XPT forecasts on (some) Direct Approach model inputs,"['Forecasting Research Institute', 'rosehadshar']",2023-08-20T12:39:57Z,eaforum,, 145115,https://forum.effectivealtruism.org/posts/3LfyB9Dpan7fx5Jk3/fhi-report-stable-agreements-in-turbulent-times,FHI Report: Stable Agreements in Turbulent Times,['Cullen'],2019-02-21T17:12:51Z,eaforum,,
145139,https://forum.effectivealtruism.org/posts/hHhGyxshSJg43yfqH/let-s-set-new-ai-safety-actors-up-for-success,Let’s set new AI safety actors up for success,['michel'],2023-06-26T21:17:25Z,eaforum,, 145157,https://forum.effectivealtruism.org/posts/gqumJhAKvy97Atww8/updates-from-campaign-for-ai-safety-3,Updates from Campaign for AI Safety,"['Jolyn Khoo', 'Nik Samoylov']",2023-08-07T06:09:29Z,eaforum,, 145175,https://forum.effectivealtruism.org/posts/BsAmChNX9cvwEccny/our-world-in-data-ai-timelines-what-do-experts-in-artificial,"[Our World in Data] AI timelines: What do experts in artificial intelligence expect for the future? (Roser, 2023)",['Will Aldred'],2023-02-07T14:52:27Z,eaforum,, 145185,https://forum.effectivealtruism.org/posts/hfXy8EbyNTuBixjJf/call-for-cruxes-by-rhyme-a-longtermist-history-consultancy,"Call for Cruxes by Rhyme, a Longtermist History Consultancy",['Lara_TH'],2023-03-01T10:20:38Z,eaforum,, 145198,https://forum.effectivealtruism.org/posts/zJxRcb4TCCouh3eaa/a-manifold-market-leaked-the-ai-extinction-statement-and,"A Manifold Market ""Leaked"" the AI Extinction Statement and CAIS Wanted it Deleted",['David Chee'],2023-06-12T15:57:45Z,eaforum,, 145215,https://forum.effectivealtruism.org/posts/CnD4fHwkgnknbz3ED/podcast-ajeya-cotra-on-worldview-diversification-and-how-big,[Podcast] Ajeya Cotra on worldview diversification and how big the future could be,['BrownHairedEevee'],2021-01-22T23:57:48Z,eaforum,, 145224,https://forum.effectivealtruism.org/posts/YsnNxGxFaQnvg63AQ/job-ad-seri-mats-is-hiring-for-our-summer-program,[Job Ad] SERI MATS is hiring for our summer program,['zanekay'],2023-05-26T04:51:41Z,eaforum,, 145245,https://forum.effectivealtruism.org/posts/Rnga2XRJzeYypyXDt/learning-as-much-deep-learning-math-as-i-could-in-24-hours,Learning as much Deep Learning math as I could in 24 hours,['Phosphorous'],2023-01-08T02:19:47Z,eaforum,, 145263,https://forum.effectivealtruism.org/posts/Wci5rtwGTg9tLAETN/does-generality-pay-gpt-3-can-provide-preliminary-evidence,Does generality pay? GPT-3 can provide preliminary evidence.,['BrownHairedEevee'],2020-07-12T18:53:09Z,eaforum,,
145280,https://forum.effectivealtruism.org/posts/zmxckAoi9DjDyHsyA/join-the-virtual-ai-safety-unconference-vaisu,Join the Virtual AI Safety Unconference (VAISU)!,"['Nguyên', 'Linda Linsefors']",2023-06-21T04:46:11Z,eaforum,, 145289,https://forum.effectivealtruism.org/posts/daLssjprpqfAsRWW8/normalcy-bias-and-base-rate-neglect-bias-in-evaluating-agi-x,Normalcy bias and Base rate neglect: Bias in Evaluating AGI X-Risks,['Remmelt'],2023-01-04T03:16:36Z,eaforum,, 145304,https://forum.effectivealtruism.org/posts/LqXk92pK5xeph4eDB/help-us-find-pain-points-in-ai-safety,Help us find pain points in AI safety,['Esben Kran'],2022-04-12T18:43:37Z,eaforum,, 145323,https://forum.effectivealtruism.org/posts/FNGcnxAjezjZQqSzv/safe-ai-and-moral-ai,Safe AI and moral AI,"[""William D'Alessandro""]",2023-06-01T21:18:54Z,eaforum,, 145336,https://forum.effectivealtruism.org/posts/nkwN4M6BcBmThoG7p/6-year-decrease-of-metaculus-agi-prediction,6 Year Decrease of Metaculus AGI Prediction,['Chris Leong'],2022-04-12T05:36:14Z,eaforum,, 145345,https://forum.effectivealtruism.org/posts/DaRvpDHHdaoad9Tfu/critiques-of-prominent-ai-safety-labs-redwood-research,Critiques of prominent AI safety labs: Redwood Research,['Omega'],2023-03-31T08:58:30Z,eaforum,, 145371,https://forum.effectivealtruism.org/posts/REjSEuKbzh2QRFBgK/a-great-talk-for-ai-noobs-according-to-an-ai-noob,A great talk for AI noobs (according to an AI noob),['Dov'],2023-04-23T05:32:58Z,eaforum,, 145387,https://forum.effectivealtruism.org/posts/5BpgZKFrfeRtREg7W/the-parable-of-the-boy-who-cried-5-chance-of-wolf,The Parable of the Boy Who Cried 5% Chance of Wolf,['Kat Woods'],2022-08-15T14:22:56Z,eaforum,, 145402,https://forum.effectivealtruism.org/posts/PauhAAw7Y5bHMawkT/shahar-avin-on-how-to-strategically-regulate-advanced-ai,Shahar Avin on How to Strategically Regulate Advanced AI Systems,['Michaël Trazzi'],2022-09-23T15:49:04Z,eaforum,, 145435,https://forum.effectivealtruism.org/posts/yjm5CW9JdwBTFZB2B/how-we-could-stumble-into-ai-catastrophe,How we could stumble into AI catastrophe,['Holden Karnofsky'],2023-01-16T14:52:51Z,eaforum,, 145465,https://forum.effectivealtruism.org/posts/iuBoizzA5c5KfWysc/announcing-insights-for-impact,Announcing Insights for Impact,['Christian Pearson'],2023-01-04T07:00:26Z,eaforum,, 145475,https://forum.effectivealtruism.org/posts/YnRYMLxaw43daKmeT/jesse-clifton-open-source-learning-a-bargaining-approach,Jesse Clifton: Open-source learning — a bargaining approach,['EA Global'],2019-10-18T18:05:00Z,eaforum,, 145484,https://forum.effectivealtruism.org/posts/3wcNkri9CjRC4t5Cj/why-does-agi-occur-almost-nowhere-not-even-just-as-a-remark,"Why does AGI occur almost nowhere, not even just as a remark for economic/political models?",['Franziska Fischer'],2022-10-02T14:43:40Z,eaforum,, 145501,https://forum.effectivealtruism.org/posts/RMXctNAksBgXgoszY/announcing-manifund-regrants,Announcing Manifund Regrants,"['Austin', 'Rachel Weinberg']",2023-07-05T19:42:08Z,eaforum,, 145528,https://forum.effectivealtruism.org/posts/v2KL4ApqrxuYqQckK/ngo-and-yudkowsky-on-ai-capability-gains,Ngo and Yudkowsky on AI capability gains,"['richard_ngo', 'EliezerYudkowsky']",2021-11-19T01:54:57Z,eaforum,, 145550,https://forum.effectivealtruism.org/posts/WdvrgRKLfYQRw5bRD/apply-for-mats-winter-2023-24,Apply for MATS Winter 2023-24!,"['Rocket', 'Ryan Kidd', 'Laura Vaughan']",2023-10-21T02:34:41Z,eaforum,,
145565,https://forum.effectivealtruism.org/posts/5iQoR8mhEpvRT43jv/part-1-the-ai-safety-community-has-four-main-work-groups,"Part 1: The AI Safety community has four main work groups, Strategy, Governance, Technical and Movement Building",['PeterSlattery'],2022-11-25T03:45:46Z,eaforum,, 145585,https://forum.effectivealtruism.org/posts/8XZmu8BM5JBtSnHiP/part-3-a-proposed-approach-for-ai-safety-movement-building,"Part 3: A Proposed Approach for AI Safety Movement Building: Projects, Professions, Skills, and Ideas for the Future [long post][bounty for feedback]",['PeterSlattery'],2023-03-22T00:54:26Z,eaforum,, 145621,https://forum.effectivealtruism.org/posts/qCfXKWYjJwqyNqbpd/the-unknowable-catastrophe,The Unknowable Catastrophe,['Aino'],2023-07-06T15:37:44Z,eaforum,, 145635,https://forum.effectivealtruism.org/posts/4H7j4PQjTDK4W6u79/apply-to-the-new-open-philanthropy-technology-policy,Apply to the new Open Philanthropy Technology Policy Fellowship!,['lukeprog'],2021-07-20T18:41:47Z,eaforum,, 145648,https://forum.effectivealtruism.org/posts/zjMeGcgWpvDcm3CkH/early-warning-forecasting-center-what-it-is-and-why-it-d-be,"Early-warning Forecasting Center: What it is, and why it'd be cool",['Linch'],2022-03-14T19:20:01Z,eaforum,, 145670,https://forum.effectivealtruism.org/posts/xsKwDuggxcYpYCe2z/anti-foom-stop-trying-to-make-your-cute-pet-name-the-thing,Anti-'FOOM' (stop trying to make your cute pet name the thing),['david_reinstein'],2023-04-14T16:05:20Z,eaforum,, 145679,https://forum.effectivealtruism.org/posts/BqokXcCQrvkk2BktH/blake-richards-on-why-he-is-skeptical-of-existential-risk,Blake Richards on Why he is Skeptical of Existential Risk from AI,['Michaël Trazzi'],2022-06-14T19:11:22Z,eaforum,, 145700,https://forum.effectivealtruism.org/posts/LxvLiAhS47phswLJi/automated-parliaments-a-solution-to-decision-uncertainty-and,Automated Parliaments — A Solution to Decision Uncertainty and Misalignment in Language Models,['Shak Ragoler'],2023-10-02T09:47:43Z,eaforum,, 145718,https://forum.effectivealtruism.org/posts/D6pbrzLcMaMsQNmNb/an-ml-safety-insurance-company-shower-thoughts,An ML safety insurance company - shower thoughts,['EdoArad'],2021-10-18T07:45:33Z,eaforum,, 145754,https://forum.effectivealtruism.org/posts/XdhwXppfqrpPL2YDX/an-overview-of-the-ai-safety-funding-situation,An Overview of the AI Safety Funding Situation,['Stephen McAleese'],2023-07-12T14:54:37Z,eaforum,, 145780,https://forum.effectivealtruism.org/posts/HZacQkvLLeLKT3a6j/how-might-a-herd-of-interns-help-with-ai-or-biosecurity,How might a herd of interns help with AI or biosecurity research tasks/questions?,['Harrison Durland'],2022-03-20T22:49:12Z,eaforum,, 145797,https://forum.effectivealtruism.org/posts/BErC24s77fdo93ghi/alignment-grantmaking-is-funding-limited-right-now-crosspost,Alignment Grantmaking is Funding-Limited Right Now [crosspost],['johnswentworth'],2023-08-02T20:37:02Z,eaforum,, 145811,https://forum.effectivealtruism.org/posts/g2nFCW5xYqgtrWfJ9/the-role-of-academia-in-ai-safety,The role of academia in AI Safety.,['PabloAMC'],2022-03-28T00:04:21Z,eaforum,, 145827,https://forum.effectivealtruism.org/posts/Hyupm7mPgoXNu9PEW/should-you-work-at-a-leading-ai-lab-including-in-non-safety,Should you work at a leading AI lab? (including in non-safety roles),"['Benjamin Hilton', '80000_Hours']",2023-07-25T16:28:30Z,eaforum,,
145850,https://forum.effectivealtruism.org/posts/5f9Xtmy8Q559eppqJ/chatgpt-understands-but-largely-does-not-generate-spanglish,"ChatGPT understands, but largely does not generate Spanglish (and other code-mixed) text",['Milan Weibel'],2023-01-04T22:10:29Z,eaforum,, 145868,https://forum.effectivealtruism.org/posts/9hckjperBEnsxkzjP/annual-agi-benchmarking-event,Annual AGI Benchmarking Event,"['Metaculus', 'Lawrence Phillips']",2022-08-26T21:31:07Z,eaforum,, 145883,https://forum.effectivealtruism.org/posts/ph2ETEDuWhzmHxvsi/help-me-to-understand-ai-alignment,Help me to understand AI alignment!,['britomart'],2023-01-18T09:13:49Z,eaforum,, 145896,https://forum.effectivealtruism.org/posts/fL7aZSq6jbDWJkzt2/ai-safety-impact-markets-your-charity-evaluator-for-ai,AI Safety Impact Markets: Your Charity Evaluator for AI Safety,['Dawn Drescher'],2023-10-01T10:47:07Z,eaforum,, 145912,https://forum.effectivealtruism.org/posts/evakA8beTDq4KbxFy/how-much-should-you-optimize-for-the-short-timelines,How much should you optimize for the short-timelines scenario?,['SoerenMind'],2022-07-26T15:51:14Z,eaforum,, 145921,https://forum.effectivealtruism.org/posts/5aM8qQE3Pq9D8HxrR/fiction-about-ai-risk,fiction about AI risk,['Ann Garth'],2020-11-12T22:36:18Z,eaforum,, 145932,https://forum.effectivealtruism.org/posts/e55QpEExmtkRjw9CD/classifying-sources-of-ai-x-risk,Classifying sources of AI x-risk,['Sam Clarke'],2022-08-08T18:18:51Z,eaforum,, 145968,https://forum.effectivealtruism.org/posts/nKzpm3qazsXsciG29/updates-on-fli-s-value-alignment-map,Updates on FLI'S Value Alignment Map?,['QubitSwarm99'],2022-09-19T00:25:44Z,eaforum,, 145978,https://forum.effectivealtruism.org/posts/zNS53uu2tLGEJKnk9/ea-s-brain-over-body-bias-and-the-embodied-value-problem-in,"EA’s brain-over-body bias, and the embodied value problem in AI alignment",['Geoffrey Miller'],2022-09-21T18:55:52Z,eaforum,, 145999,https://forum.effectivealtruism.org/posts/77mpkNmKPjtictgDG/persuasion-tools-ai-takeover-without-agi-or-agency,Persuasion Tools: AI takeover without AGI or agency?,['kokotajlod'],2020-11-20T16:56:53Z,eaforum,, 146020,https://forum.effectivealtruism.org/posts/KopQknZEtjZdoGorT/tech-company-singularities-and-steering-them-to-reduce-x,"""Tech company singularities"", and steering them to reduce x-risk",['Andrew Critch'],2022-05-13T17:26:33Z,eaforum,, 146035,https://forum.effectivealtruism.org/posts/tQxLfkvFGWBhJ2KzR/biomimetic-alignment-alignment-between-animal-genes-and,Biomimetic alignment: Alignment between animal genes and animal brains as a model for alignment between humans and AI systems.,['Geoffrey Miller'],2023-05-26T21:25:02Z,eaforum,, 146054,https://forum.effectivealtruism.org/posts/tBLsC2jZYxLYrCdbN/guess-ask-or-tell,"Guess, ask or tell?",['dEAsign'],2023-10-19T21:52:02Z,eaforum,, 146070,https://forum.effectivealtruism.org/posts/4ni3GBBzRRAgiksHT/ai-safety-concepts-writeup-webgpt,AI Safety Concepts Writeup: WebGPT,['Justis'],2023-08-11T01:31:47Z,eaforum,, 146082,https://forum.effectivealtruism.org/posts/n8r2GWz5gSHn9dnob/the-aia-and-its-brussels-effect,The AIA and its Brussels Effect,"[""Kathryn O'Rourke""]",2022-12-27T16:01:58Z,eaforum,, 146098,https://forum.effectivealtruism.org/posts/9EjMoD8BRhXEsfzMh/what-if-ai-development-goes-well-3,What if AI development goes well?,['RoryG'],2022-08-03T08:57:29Z,eaforum,,
146124,https://forum.effectivealtruism.org/posts/6ugipGgCiYsBzbMwt/existential-risk-from-ai-and-what-dc-could-do-about-it-ezra-1,"Existential risk from AI and what DC could do about it (Ezra Klein on the 80,000 Hours Podcast)",['80000_Hours'],2023-07-26T11:48:06Z,eaforum,, 146157,https://forum.effectivealtruism.org/posts/pPrFJdRq7aPu8pFo3/a-tough-career-decision,A tough career decision,['PabloAMC'],2022-04-09T00:46:58Z,eaforum,, 146176,https://forum.effectivealtruism.org/posts/ZpTEJPgvGa9ff9AcK/christiano-cotra-and-yudkowsky-on-ai-progress,"Christiano, Cotra, and Yudkowsky on AI progress","['Ajeya', 'EliezerYudkowsky']",2021-11-25T16:30:53Z,eaforum,, 146200,https://forum.effectivealtruism.org/posts/FPXFtBQHhkDDHBSt6/report-on-semi-informative-priors-for-ai-timelines-open,Report on Semi-informative Priors for AI timelines (Open Philanthropy),['Tom_Davidson'],2021-03-26T17:46:03Z,eaforum,, 146215,https://forum.effectivealtruism.org/posts/RNgbY3zCS4CGSqKGm/christiano-and-yudkowsky-on-ai-predictions-and-human,Christiano and Yudkowsky on AI predictions and human intelligence,['EliezerYudkowsky'],2022-02-23T16:51:08Z,eaforum,, 146231,https://forum.effectivealtruism.org/posts/RiCueLbjrpsotmqku/should-people-get-neuroscience-phd-to-work-in-ai-safety,Should people get neuroscience phD to work in AI safety field?,['jackchang110'],2023-03-07T16:21:22Z,eaforum,, 146248,https://forum.effectivealtruism.org/posts/XpwejKTZkRbJ5s4cp/legal-priorities-research-a-research-agenda,Legal Priorities Research: A Research Agenda,"['jonasschuett', 'Legal Priorities Project']",2021-01-06T21:47:34Z,eaforum,, 146259,https://forum.effectivealtruism.org/posts/RkpdA8763yGtEovj9/two-reasons-we-might-be-closer-to-solving-alignment-than-it,Two reasons we might be closer to solving alignment than it seems,"['Kat Woods', 'Amber Dawn']",2022-09-24T17:38:24Z,eaforum,, 146277,https://forum.effectivealtruism.org/posts/suKkvQPxgG6ihvhfP/launching-the-collective-intelligence-project-whitepaper-and,Launching The Collective Intelligence Project: Whitepaper and Pilots,['jasmine_wang'],2023-02-06T17:00:43Z,eaforum,, 146293,https://forum.effectivealtruism.org/posts/dcTnpX2AHXvYXg6wg/closed-hiring-a-mathematician-to-work-on-the-learning,[Closed] Hiring a mathematician to work on the learning-theoretic AI alignment agenda,['Vanessa'],2022-04-19T06:49:19Z,eaforum,, 146322,https://forum.effectivealtruism.org/posts/KoriEzLMz5kxbHLEm/gamingk-the-algorithms-large-language-models-as-mirrors,Γαμινγκ the Algorithms: Large Language Models as Mirrors,['Haris Shekeris'],2023-04-01T02:14:29Z,eaforum,, 146341,https://forum.effectivealtruism.org/posts/cbtoajkfeXqJAzhRi/metaculus-year-in-review-2022,Metaculus Year in Review: 2022,['christian'],2023-01-06T01:23:07Z,eaforum,, 146367,https://forum.effectivealtruism.org/posts/G43oe4JGfesBjFtTB/deepmind-generally-capable-agents-emerge-from-open-ended,DeepMind: Generally capable agents emerge from open-ended play,['kokotajlod'],2021-07-27T19:35:09Z,eaforum,, 146381,https://forum.effectivealtruism.org/posts/Cmb6Wpbo5NyjzD2zo/intro-1-my-understandings-of-mechanistic-interpretability,(Intro/1) - My Understandings of Mechanistic Interpretability Notebook,['Yadav'],2023-07-02T15:21:14Z,eaforum,, 146395,https://forum.effectivealtruism.org/posts/6oGp7XdGySzAGq4QC/consider-paying-me-to-do-ai-safety-research-work,Consider paying me to do AI safety research work,['Rupert'],2020-11-05T08:09:58Z,eaforum,, 
146404,https://forum.effectivealtruism.org/posts/gZhrqihqSEvbtTBpi/tim-cook-was-asked-about-extinction-risks-from-ai,Tim Cook was asked about extinction risks from AI,['Saul Munn'],2023-06-06T18:46:02Z,eaforum,, 146420,https://forum.effectivealtruism.org/posts/feNJWCo4LbsoKbRon/interview-with-tom-chivers-ai-is-a-plausible-existential,"Interview with Tom Chivers: “AI is a plausible existential risk, but it feels as if I’m in Pascal’s mugging”",['felix.h'],2021-02-21T13:41:24Z,eaforum,, 146435,https://forum.effectivealtruism.org/posts/iLnkLJE4JFPHHrASe/aisn-13-an-interdisciplinary-perspective-on-ai-proxy,"AISN #13: An interdisciplinary perspective on AI proxy failures, new competitors to ChatGPT, and prompting language models to misbehave","['Center for AI Safety', 'Dan H', 'aogara']",2023-07-05T15:33:20Z,eaforum,, 146456,https://forum.effectivealtruism.org/posts/bnde7LPfAazbEQnkZ/know-a-grad-student-studying-ai-s-economic-impacts,Know a grad student studying AI's economic impacts?,['Madhav Malhotra'],2023-07-05T00:07:09Z,eaforum,, 146465,https://forum.effectivealtruism.org/posts/arA65LFet5K9KDeMF/will-ai-worldview-prize-funding-be-replaced,Will AI Worldview Prize Funding Be Replaced?,['Jordan Arel'],2022-11-13T17:10:55Z,eaforum,, 146474,https://forum.effectivealtruism.org/posts/G6EXYYNp6KwageGZq/stress-externalities-more-in-ai-safety-pitches,Stress Externalities More in AI Safety Pitches,['NickGabs'],2022-09-26T20:31:52Z,eaforum,, 146491,https://forum.effectivealtruism.org/posts/hurNCKfoYacJ5PSod/my-overview-of-the-ai-alignment-landscape-a-bird-s-eye-view,My Overview of the AI Alignment Landscape: A Bird’s Eye View,['Neel Nanda'],2021-12-15T23:46:59Z,eaforum,, 146533,https://forum.effectivealtruism.org/posts/cgxSATRxfn9X9H8rd/ai-progress-the-game-show,AI Progress: The Game Show,['Alex Arnett'],2023-04-21T16:47:18Z,eaforum,, 146544,https://forum.effectivealtruism.org/posts/xFXeoYn872J9vr7jh/article-summary-current-and-near-term-ai-as-a-potential-1,Article Summary: Current and Near-Term AI as a Potential Existential Risk Factor,['AndreFerretti'],2023-06-07T13:53:22Z,eaforum,, 146568,https://forum.effectivealtruism.org/posts/khHGQBH7yk2rGWqLX/a-recent-write-up-of-the-case-for-ai-existential-risk,A recent write-up of the case for AI (existential) risk,['Timsey'],2023-05-18T13:07:01Z,eaforum,, 146602,https://forum.effectivealtruism.org/posts/XasETzmipXj4dgz7e/archetypal-transfer-learning-a-proposed-alignment-solution,Archetypal Transfer Learning: a Proposed Alignment Solution that solves the Inner x Outer Alignment Problem while adding Corrigible Traits to GPT-2-medium,['Miguel'],2023-04-26T00:40:17Z,eaforum,, 146619,https://forum.effectivealtruism.org/posts/C7DbrkCpSe4AdcMek/arena-2-0-impact-report,ARENA 2.0 - Impact Report,"['TheMcDouglas', ""Kathryn O'Rourke""]",2023-09-26T17:13:12Z,eaforum,, 146642,https://forum.effectivealtruism.org/posts/qNGv3N6feXYxjeYJb/usd500-bounty-for-alignment-contest-ideas-1,$500 bounty for alignment contest ideas,"['Akash', 'Olivia']",2022-06-30T01:55:13Z,eaforum,, 146657,https://forum.effectivealtruism.org/posts/eDwcke3TbbZKAYkgi/being-an-individual-alignment-grantmaker,Being an individual alignment grantmaker,['A_donor'],2022-02-28T16:39:24Z,eaforum,, 146672,https://forum.effectivealtruism.org/posts/7JxsXYDuqnKMqa6Eq/ai-timelines-where-the-arguments-and-the-experts-stand,"AI Timelines: Where the Arguments, and the ""Experts,"" Stand",['Holden Karnofsky'],2021-09-07T17:35:12Z,eaforum,, 
146690,https://forum.effectivealtruism.org/posts/weJZjku3HiNgQC4ER/a-note-of-caution-about-recent-ai-risk-coverage,A note of caution about recent AI risk coverage,['Sean_o_h'],2023-06-07T17:05:02Z,eaforum,, 146713,https://forum.effectivealtruism.org/posts/R9GbQQksznh2SwS4y/silicon-valley-s-rabbit-hole-problem,Silicon Valley’s Rabbit Hole Problem,['Mandelbrot'],2023-10-08T12:25:41Z,eaforum,, 146732,https://forum.effectivealtruism.org/posts/S4f2wvw6HBioWzXCy/agi-takeoff-dynamics-intelligence-vs-quantity-explosion,AGI Takeoff dynamics - Intelligence vs Quantity explosion,['EdoArad'],2023-07-26T09:20:59Z,eaforum,, 146748,https://forum.effectivealtruism.org/posts/zzFbZyGP6iz8jLe9n/agi-ruin-a-list-of-lethalities,AGI Ruin: A List of Lethalities,['EliezerYudkowsky'],2022-06-06T23:28:39Z,eaforum,, 146779,https://forum.effectivealtruism.org/posts/ofC5eL88bC5Thjxoy/summary-of-the-precipice-2-of-4-we-are-a-danger-to-ourselves,Summary of “The Precipice” (2 of 4): We are a danger to ourselves,['rileyharris'],2023-08-13T23:53:15Z,eaforum,, 146829,https://forum.effectivealtruism.org/posts/qegC9AwJuWbCkj8xY/if-ftx-is-liquidated-who-ends-up-controlling-anthropic,"If FTX is liquidated, who ends up controlling Anthropic?",['Ofer'],2022-11-15T15:04:05Z,eaforum,, 146847,https://forum.effectivealtruism.org/posts/YpADfSeSccsEkaetk/ai-safety-field-building-vs-ea-cb,AI Safety Field Building vs. EA CB,['kuhanj'],2023-06-26T23:21:06Z,eaforum,, 146864,https://forum.effectivealtruism.org/posts/9dqyakpjfhuo2bmjn/metaculus-is-building-a-team-dedicated-to-ai-forecasting,Metaculus is building a team dedicated to AI forecasting,['christian'],2022-10-18T16:08:11Z,eaforum,, 146874,https://forum.effectivealtruism.org/posts/efmn6fydPcymxNZpT/predictions-for-future-ai-governance,Predictions for future AI governance?,['jackchang110'],2023-04-02T16:43:02Z,eaforum,, 146885,https://forum.effectivealtruism.org/posts/YMvSZi2EWxNHwFtbb/part-2-ai-safety-movement-builders-should-help-the-community,"Part 2: AI Safety Movement Builders should help the community to optimise three factors: contributors, contributions and coordination",['PeterSlattery'],2022-12-15T22:48:37Z,eaforum,, 146906,https://forum.effectivealtruism.org/posts/27aXsJRRAoNZFw9K3/some-global-catastrophic-risk-estimates,Some global catastrophic risk estimates,['Tamay'],2021-02-10T19:32:38Z,eaforum,, 146916,https://forum.effectivealtruism.org/posts/NzuJmTtRfJBjxcmnD/action-help-expand-funding-for-ai-safety-by-coordinating-on,Action: Help expand funding for AI Safety by coordinating on NSF response,['Evan R. Murphy'],2022-01-20T20:48:25Z,eaforum,, 146927,https://forum.effectivealtruism.org/posts/rJRw78oihoT5paFGd/high-level-hopes-for-ai-alignment,High-level hopes for AI alignment,['Holden Karnofsky'],2022-12-20T02:11:19Z,eaforum,, 146947,https://forum.effectivealtruism.org/posts/PWKWEFJMpHzFC6Qvu/alignment-is-hard-communicating-that-might-be-harder,"Alignment is hard. Communicating that, might be harder",['Eleni_A'],2022-09-01T11:45:06Z,eaforum,,
146971,https://forum.effectivealtruism.org/posts/RCfSmyGwyyvDFGYqL/agi-in-a-vulnerable-world,AGI in a vulnerable world,['AI Impacts'],2020-04-02T03:43:15Z,eaforum,, 146991,https://forum.effectivealtruism.org/posts/LYWXDKfjyJquYF9Gm/aisn-15-china-and-the-us-take-action-to-regulate-ai-results,"AISN#15: China and the US take action to regulate AI, results from a tournament forecasting AI risk, updates on xAI’s plan, and Meta releases its open-source and commercially available Llama 2","['Center for AI Safety', 'Dan H', 'Corin Katzke']",2023-07-19T01:40:27Z,eaforum,, 147026,https://forum.effectivealtruism.org/posts/d7ocec7gNW3KNX6Nz/if-ais-had-subcortical-brain-simulation-would-that-solve-the,"If AIs had subcortical brain simulation, would that solve the alignment problem?",['Rainbow Affect'],2023-07-31T15:48:51Z,eaforum,, 147036,https://forum.effectivealtruism.org/posts/B6FXBZBsBB2mmyp3z/self-limiting-ai-in-ai-alignment,Self-Limiting AI in AI Alignment,"[""The_Lord's_Servant_280""]",2022-12-31T19:07:29Z,eaforum,, 147046,https://forum.effectivealtruism.org/posts/sSWMWkiAHRdDdPrWN/the-metaethics-and-normative-ethics-of-agi-value-alignment,"The Metaethics and Normative Ethics of AGI Value Alignment: Many Questions, Some Implications",['Eleos Arete Citrini'],2021-09-15T19:05:20Z,eaforum,, 147072,https://forum.effectivealtruism.org/posts/dfgyNc4ShWZCQWJiu/animal-rights-the-singularity-and-astronomical-suffering,"Animal Rights, The Singularity, and Astronomical Suffering",['sapphire'],2020-08-20T20:23:10Z,eaforum,, 147096,https://forum.effectivealtruism.org/posts/kvkv6779jk6edygug/some-ai-governance-research-ideas,Some AI Governance Research Ideas,"['MarkusAnderljung', 'ac']",2021-06-03T10:51:10Z,eaforum,, 147111,https://forum.effectivealtruism.org/posts/DpWhZaGLA5X6p5dgP/crosspost-why-uncontrollable-ai-looks-more-likely-than-ever-1,[Crosspost] Why Uncontrollable AI Looks More Likely Than Ever,['Otto'],2023-03-08T15:33:50Z,eaforum,, 147134,https://forum.effectivealtruism.org/posts/jeybxkZrJmWpJaatN/agi-risk-analogies-and-arguments,AGI risk: analogies & arguments,['Gavin'],2021-03-23T13:18:21Z,eaforum,, 147163,https://forum.effectivealtruism.org/posts/uSKQLstvoxHKtWkrg/culture-and-programming-retrospective-era-fellowship-2023,Culture and Programming Retrospective: ERA Fellowship 2023,"['Gideon Futerman', 'Nandini Shiralkar']",2023-09-28T16:45:12Z,eaforum,, 147194,https://forum.effectivealtruism.org/posts/TH2tRumAuwKWN8NoG/decomposing-alignment-to-take-advantage-of-paradigms,Decomposing alignment to take advantage of paradigms,['Christopher King'],2023-06-04T14:26:25Z,eaforum,, 147215,https://forum.effectivealtruism.org/posts/vHxKLNQciXN4taEdd/applications-are-now-open-for-intro-to-ml-safety-spring-2023,Applications are now open for Intro to ML Safety Spring 2023,['Joshc'],2022-11-04T22:45:02Z,eaforum,, 147236,https://forum.effectivealtruism.org/posts/nSyvMy3QQTyBzybNx/seri-ml-alignment-theory-scholars-program-2022,SERI ML Alignment Theory Scholars Program 2022,"['Ryan Kidd', 'Victor Warlop', 'Oliver Z']",2022-04-27T16:33:42Z,eaforum,, 147263,https://forum.effectivealtruism.org/posts/E6CahapSad7psvqx4/timelines-are-short-p-doom-is-high-a-global-stop-to-frontier,"Timelines are short, p(doom) is high: a global stop to frontier AI development until x-safety consensus is our only reasonable hope",['Greg_Colbourn'],2023-10-12T11:24:11Z,eaforum,,
147286,https://forum.effectivealtruism.org/posts/3nL7Ak43gmCYEFz9P/cognitive-science-and-failed-ai-forecasts,Cognitive science and failed AI forecasts,['Eleni_A'],2022-11-18T14:25:35Z,eaforum,, 147296,https://forum.effectivealtruism.org/posts/pdKDjEhmANeD8BQhb/the-eu-ai-act-a-simple-explanation-a-stanford-study-reveals,The EU AI Act: A Simple Explanation - A Stanford Study Reveals the gaps of ChatGPT and 9 more,['Sparkvibe'],2023-06-26T08:59:37Z,eaforum,, 147305,https://forum.effectivealtruism.org/posts/TMbPEhdAAJZsSYx2L/the-limited-upside-of-interpretability,The limited upside of interpretability,['Peter S. Park'],2022-11-15T20:22:16Z,eaforum,, 147325,https://forum.effectivealtruism.org/posts/eibgQcbRXtW7tukfv/discussion-about-ai-safety-funding-fb-transcript,Discussion about AI Safety funding (FB transcript),['Akash'],2023-04-30T19:05:34Z,eaforum,, 147358,https://forum.effectivealtruism.org/posts/9cCyPE2EDpjpJvqnF/why-i-think-that-teaching-philosophy-is-high-impact,Why I think that teaching philosophy is high impact,['Eleni_A'],2022-12-19T23:00:15Z,eaforum,, 147368,https://forum.effectivealtruism.org/posts/azYjfNsMokoJhMdc4/openai-s-grant-program-for-democratic-process-for-deciding,OpenAI's grant program for democratic process for deciding what rules AI systems should follow,['Ronen Bar'],2023-06-23T10:46:30Z,eaforum,, 147377,https://forum.effectivealtruism.org/posts/eQa4WtedcAookJ7nM/does-china-have-ai-alignment-resources-institutions-how-can,Does China have AI alignment resources/institutions? How can we prioritize creating more?,['JakubK'],2022-08-04T19:23:24Z,eaforum,, 147387,https://forum.effectivealtruism.org/posts/cPuTnDowko79KAcn3/when-you-plan-according-to-your-ai-timelines-should-you-put,"When you plan according to your AI timelines, should you put more weight on the median future, or the median future | eventual AI alignment success? ⚖️",['Jeffrey Ladish'],2023-01-05T01:55:22Z,eaforum,,
147404,https://forum.effectivealtruism.org/posts/bhrKwJE7Ggv7AFM7C/modelling-large-scale-cyber-attacks-from-advanced-ai-systems,Modelling large-scale cyber attacks from advanced AI systems with Advanced Persistent Threats,['Iyngkarran Kumar'],2023-10-02T09:54:12Z,eaforum,, 147419,https://forum.effectivealtruism.org/posts/GhmcdwdT98PE5vCS2/yudkowsky-on-agi-risk-on-the-bankless-podcast,Yudkowsky on AGI risk on the Bankless podcast,['RobBensinger'],2023-03-13T00:42:23Z,eaforum,, 147440,https://forum.effectivealtruism.org/posts/WZf6KpmajZXs596JG/preparing-for-ai-assisted-alignment-research-we-need-data,Preparing for AI-assisted alignment research: we need data!,['CBiddulph'],2023-01-17T03:28:29Z,eaforum,, 147460,https://forum.effectivealtruism.org/posts/3D9bkGEtCvgQZEoAd/eu-s-importance-for-ai-governance-is-conditional-on-ai,EU's importance for AI governance is conditional on AI trajectories - a case study,['MathiasKB'],2022-01-13T14:58:14Z,eaforum,, 147470,https://forum.effectivealtruism.org/posts/3izcm9sBmTPbRtQNH/is-it-valuable-to-the-field-of-ai-safety-to-have-a,Is it valuable to the field of AI Safety to have a neuroscience background?,['Samuel Nellessen'],2022-04-03T19:44:51Z,eaforum,, 147479,https://forum.effectivealtruism.org/posts/hw8ePRLJop7kSEZK3/ais-accelerating-ai-research,AIs accelerating AI research,['Ajeya'],2023-04-12T11:41:04Z,eaforum,, 147499,https://forum.effectivealtruism.org/posts/esAGKxupuLXhQ3bW5/an-argument-for-accelerating-international-ai-governance,An argument for accelerating international AI governance research (part 1),['MattThinks'],2023-08-16T05:40:24Z,eaforum,, 147521,https://forum.effectivealtruism.org/posts/zeL52MFB2Pkq9Kdme/exploring-metaculus-community-predictions,Exploring Metaculus’ community predictions,['Vasco Grilo'],2023-03-24T07:59:16Z,eaforum,, 147541,https://forum.effectivealtruism.org/posts/f2qojPr8NaMPo2KJC/beware-safety-washing,Beware safety-washing,['Lizka'],2023-01-13T10:39:04Z,eaforum,, 147562,https://forum.effectivealtruism.org/posts/gsPmsdXWFmkwezc5L/some-talent-needs-in-ai-governance,Some talent needs in AI governance,['Sam Clarke'],2023-06-13T13:53:35Z,eaforum,, 147588,https://forum.effectivealtruism.org/posts/sxcex5KomcHgojzhc/prevenire-una-catastrofe-legata-all-intelligenza-artificiale,Prevenire una catastrofe legata all'intelligenza artificiale,['EA Italy'],2023-01-17T11:07:39Z,eaforum,, 147612,https://forum.effectivealtruism.org/posts/EZQQmhMsa36zwPeGB/we-ran-an-ai-timelines-retreat,We Ran an AI Timelines Retreat,['Lenny McCline'],2022-05-17T04:40:14Z,eaforum,, 147639,https://forum.effectivealtruism.org/posts/XxrvEytsHQAM38MBt/navigating-the-future-a-guide-on-how-to-stay-safe-with-ai-or,Navigating the Future: A Guide on How to Stay Safe with AI | Emmanuel Katto Uganda,['emmanuelkatto'],2023-08-28T11:38:15Z,eaforum,, 147686,https://forum.effectivealtruism.org/posts/nAmauAFjjgCcDwmc6/replacement-for-ponr-concept,Replacement for PONR concept,['kokotajlod'],2022-09-02T00:38:54Z,eaforum,, 147696,https://forum.effectivealtruism.org/posts/HnxQF6kkLuyiSZjhN/ai-alignment-prize-winners-and-next-round-link,AI alignment prize winners and next round [link],['RyanCarey'],2018-01-20T12:07:16Z,eaforum,, 147705,https://forum.effectivealtruism.org/posts/8kKLSy285eWbFn4qC/who-will-you-be-after-chatgpt-takes-your-job,"""Who Will You Be After ChatGPT Takes Your Job?""",['Stephen Thomas'],2023-04-21T21:31:12Z,eaforum,,
147721,https://forum.effectivealtruism.org/posts/3KAuAS2shyDwnjzNa/predictable-updating-about-ai-risk,Predictable updating about AI risk,['Joe_Carlsmith'],2023-05-08T22:05:40Z,eaforum,,
147735,https://forum.effectivealtruism.org/posts/JZqbNjtAqieivTG7Q/redwood-research-is-hiring-for-several-roles,Redwood Research is hiring for several roles,"['Jack R', 'billzito']",2021-11-29T00:18:38Z,eaforum,,
147749,https://forum.effectivealtruism.org/posts/9cdntNDJQTS8dH5fh/making-ea-more-inclusive-representative-and-impactful-in,"Making EA more inclusive, representative, and impactful in Africa","['Ashura Batungwanayo', 'Hayley Martin']",2023-08-17T20:19:19Z,eaforum,,
147774,https://forum.effectivealtruism.org/posts/ggom3PzSLS9wJrgBH/artificially-sentient-beings-moral-political-and-legal,"Artificially sentient beings: Moral, political, and legal issues",['Fırat Akova'],2023-08-01T17:48:55Z,eaforum,,
147793,https://forum.effectivealtruism.org/posts/rpDvh72yvyN8yfPQL/explore-risks-from-emerging-technology-with-peers-outside-of,Explore Risks from Emerging Technology with Peers Outside of (or New to) the AI Alignment Community - Express Interest by August 8,['Fasori'],2022-07-17T20:59:58Z,eaforum,,
147803,https://forum.effectivealtruism.org/posts/NhnpD6Pt4ZZtouQre/how-to-improve-china-western-coordination-on-ea-issues,How to Improve China-Western Coordination on EA Issues?,['Michael Kehoe'],2021-11-03T07:28:46Z,eaforum,,
147813,https://forum.effectivealtruism.org/posts/upZJFAFPeJkxFtb8i/we-can-prevent-ai-disaster-like-we-prevented-nuclear,"""We can Prevent AI Disaster Like We Prevented Nuclear Catastrophe""",['Peter'],2023-09-23T20:36:29Z,eaforum,,
147826,https://forum.effectivealtruism.org/posts/meTqCDCNzYgYmkF76/implications-of-quantum-computing-for-artificial,Implications of Quantum Computing for Artificial Intelligence alignment research (ABRIDGED),['Jaime Sevilla'],2019-09-05T14:56:29Z,eaforum,,
147852,https://forum.effectivealtruism.org/posts/RKy2emQdEgQgqv5ok/govai-webinars-on-the-governance-and-economics-of-ai,GovAI Webinars on the Governance and Economics of AI,['MarkusAnderljung'],2020-05-12T15:00:32Z,eaforum,,
147879,https://forum.effectivealtruism.org/posts/uiyHiwrXKysfdoCps/politico-article-on-open-phil-horizon-fellowship-and-ea,"Politico article on Open Phil, Horizon Fellowship, and EA",['Calum'],2023-10-15T14:18:48Z,eaforum,,
147893,https://forum.effectivealtruism.org/posts/eYkr8x9QTzgFs7mu7/ai-forecasting-question-database-forecasting-infrastructure,"AI Forecasting Question Database (Forecasting infrastructure, part 3)","['jacobjacob', 'goldhaber']",2019-09-03T14:57:55Z,eaforum,,
147913,https://forum.effectivealtruism.org/posts/WZR5ZmC9nvFXeaySS/tomorrow-the-largest-ai-safety-protest-ever,TOMORROW: the largest AI Safety protest ever!,['Holly_Elmore'],2023-10-20T18:08:59Z,eaforum,,
147934,https://forum.effectivealtruism.org/posts/zFiGfWbZGgT8sGwRC/ai-impacts-and-paul-christiano-on-takeoff-speeds,AI impacts and Paul Christiano on takeoff speeds,['Crosspost'],2018-03-02T11:16:06Z,eaforum,,
147946,https://forum.effectivealtruism.org/posts/zqRDNChFburJMmpqK/announcing-epoch-a-research-organization-investigating-the,Announcing Epoch: A research organization investigating the road to Transformative AI,"['Jaime Sevilla', 'Pablo Villalobos', 'Tamay', 'lennart', 'anson', 'mariushobbhahn']",2022-06-27T13:39:16Z,eaforum,,
147964,https://forum.effectivealtruism.org/posts/q49obZkQujkYmnFWY/vael-gates-risks-from-advanced-ai-june-2022,Vael Gates: Risks from Advanced AI (June 2022),['Vael Gates'],2022-06-14T00:49:05Z,eaforum,,
147986,https://forum.effectivealtruism.org/posts/JqBLcGYapXEG9saXD/semi-conductor-ai-stocks-discussion,Semi-conductor / AI stocks discussion.,['sapphire'],2022-11-25T23:35:41Z,eaforum,,
148002,https://forum.effectivealtruism.org/posts/atbonGDAFegfeDbTF/deepmind-is-hiring-long-term-strategy-and-governance,DeepMind is hiring Long-term Strategy & Governance researchers,['vishal'],2021-09-13T18:44:04Z,eaforum,,
148011,https://forum.effectivealtruism.org/posts/FBtcr46GBiknNvWxy/how-to-get-technological-knowledge-on-ai-ml-for-non-tech,How to get technological knowledge on AI/ML (for non-tech people),['FangFang'],2021-06-30T07:53:58Z,eaforum,,
148024,https://forum.effectivealtruism.org/posts/ZAxaXaakgdQK3ACqY/pause-for-thought-the-ai-pause-debate-astral-codex-ten,Pause For Thought: The AI Pause Debate (Astral Codex Ten),['David Mears'],2023-10-05T09:32:54Z,eaforum,,
148033,https://forum.effectivealtruism.org/posts/TfkufmAQG9PjmE6jo/brief-thoughts-on-data-reporting-and-response-for-ai-risk,"Brief thoughts on Data, Reporting, and Response for AI Risk Mitigation",['Davidmanheim'],2023-06-15T07:53:39Z,eaforum,,
148067,https://forum.effectivealtruism.org/posts/aSBEN99X2KaRLSmeT/us-policy-career-resources,US Policy Career Resources,['US Policy Careers'],2023-06-06T20:52:44Z,eaforum,,
148076,https://forum.effectivealtruism.org/posts/7yK5fB7y3bb8dEMED/aisn-16-white-house-secures-voluntary-commitments-from,AISN #16: White House Secures Voluntary Commitments from Leading AI Labs and Lessons from Oppenheimer,"['Center for AI Safety', 'Dan H', 'Corin Katzke', 'aogara']",2023-07-25T16:45:21Z,eaforum,,
148111,https://forum.effectivealtruism.org/posts/HsRWX2T6fBHXRyMaP/do-you-think-the-probability-of-future-ai-sentience,Do you think the probability of future AI sentience(suffering) is >0.1%? Why?,['jackchang110'],2023-07-10T16:41:40Z,eaforum,,
148120,https://forum.effectivealtruism.org/posts/j8XuuBvFhsKdivv8Q/some-ai-research-areas-and-their-relevance-to-existential,Some AI research areas and their relevance to existential safety,['Andrew Critch'],2020-12-15T12:15:12Z,eaforum,,
148156,https://forum.effectivealtruism.org/posts/pbrJduve9kLA2yiZq/what-is-autonomy-and-how-does-it-lead-to-greater-risk-from,"What is autonomy, and how does it lead to greater risk from AI?",['Davidmanheim'],2023-08-01T08:06:40Z,eaforum,,
148187,https://forum.effectivealtruism.org/posts/9FPvJ4dXeiXYKwFpL/the-ai-boom-mainly-benefits-big-firms-but-long-term-markets,"The AI Boom Mainly Benefits Big Firms, but long-term, markets will concentrate",['Hauke Hillebrandt'],2023-10-29T08:38:23Z,eaforum,,
148211,https://forum.effectivealtruism.org/posts/RP2JXebirXqeaQqH6/some-thoughts-on-risks-from-narrow-non-agentic-ai,"Some thoughts on risks from narrow, non-agentic AI",['richard_ngo'],2021-01-19T00:07:23Z,eaforum,,
148242,https://forum.effectivealtruism.org/posts/FzCNJtat9phGWrWJX/a-pseudo-mathematical-formulation-of-direct-work-choice,A pseudo mathematical formulation of direct work choice between two x-risks,['Joseph Bloom'],2022-08-11T00:28:36Z,eaforum,,
148265,https://forum.effectivealtruism.org/posts/CghaRkCDKYTbMhorc/the-importance-of-ai-alignment-explained-in-5-points,"The Importance of AI Alignment, explained in 5 points",['Daniel_Eth'],2023-02-11T02:56:05Z,eaforum,,
148292,https://forum.effectivealtruism.org/posts/b3goLZcNxt68WbHmB/thoughts-on-short-timelines,Thoughts on short timelines,['Tobias_Baumann'],2018-10-23T15:59:41Z,eaforum,,
148308,https://forum.effectivealtruism.org/posts/aaAf5fda88QgG4YkB/ai-timelines-and-theoretical-understanding-of-deep-learning,AI timelines and theoretical understanding of deep learning,['Venky1024'],2021-09-12T16:26:49Z,eaforum,,
148321,https://forum.effectivealtruism.org/posts/hwTdJToSsb4DxW2a2/1h-volunteers-needed-for-a-small-ai-safety-related-research,1h-volunteers needed for a small AI Safety-related research project,['PabloAMC'],2021-08-16T17:51:42Z,eaforum,,
148330,https://forum.effectivealtruism.org/posts/5d7P4gFpomfeLCHZw/unjournal-evaluations-of-artificial-intelligence-and,"Unjournal: Evaluations of ""Artificial Intelligence and Economic Growth"", and new hosting space",['david_reinstein'],2023-03-17T20:20:53Z,eaforum,,
148343,https://forum.effectivealtruism.org/posts/ipWNDXTdXgDfSw6fu/introducing-generally-intelligent-an-ai-research-lab-focused,Introducing Generally Intelligent: an AI research lab focused on improved theoretical and pragmatic understanding,['joshalbrecht'],2022-10-21T08:20:56Z,eaforum,,
148353,https://forum.effectivealtruism.org/posts/cXBznkfoPJAjacFoT/are-you-really-in-a-race-the-cautionary-tales-of-szilard-and,Are you really in a race? The Cautionary Tales of Szilárd and Ellsberg,['HaydnBelfield'],2022-05-19T08:42:15Z,eaforum,,
148377,https://forum.effectivealtruism.org/posts/xSDWS8yWWPcqAa8NR/ai-alignment-research-links-1,AI alignment research links,['Holden Karnofsky'],2022-01-06T05:52:51Z,eaforum,,
148405,https://forum.effectivealtruism.org/posts/MfPWk4ToW3p6utWpc/posit-most-ai-safety-people-should-work-on-alignment-safety,"Posit: Most AI safety people should work on alignment/safety challenges for AI tools that already have users (Stable Diffusion, GPT)",['nonzerosum'],2022-12-20T17:23:04Z,eaforum,,
148414,https://forum.effectivealtruism.org/posts/srzs5smvt5FvhfFS5/there-should-be-an-ai-safety-project-board,There should be an AI safety project board,['mariushobbhahn'],2022-03-14T16:08:49Z,eaforum,,
148426,https://forum.effectivealtruism.org/posts/RecFq5M8NF8X98Gao/quick-survey-on-ai-alignment-resources,Quick survey on AI alignment resources,['frances_lorenz'],2022-06-30T19:08:47Z,eaforum,,
148438,https://forum.effectivealtruism.org/posts/Ya8D8jfcg2qaYEYim/ai-and-drug-discovery-security-and-risks,AI & Drug Discovery - Security and Risks,['Girving'],2023-06-28T08:57:55Z,eaforum,,
148449,https://forum.effectivealtruism.org/posts/6SvZPHAvhT5dtqefF/debate-series-should-we-push-for-a-pause-on-the-development,Debate series: should we push for a pause on the development of AI?,['Ben_West'],2023-09-08T16:29:50Z,eaforum,,
148462,https://forum.effectivealtruism.org/posts/EehuKzAvEbDbXxTwJ/what-we-can-learn-from-stress-testing-for-ai-regulation,What we can learn from stress testing for AI regulation,['Nathan_Barnard'],2023-07-17T19:56:27Z,eaforum,,
148494,https://forum.effectivealtruism.org/posts/WiQRpz8M4hKQBQa3B/is-this-a-good-way-to-bet-on-short-timelines,Is this a good way to bet on short timelines?,['kokotajlod'],2020-11-28T14:31:46Z,eaforum,,
148509,https://forum.effectivealtruism.org/posts/MwJiR6WKgPsTiADSK/creative-writing-contest-the-puppy-problem,[Creative Writing Contest] The Puppy Problem,['Louis'],2021-10-13T14:01:00Z,eaforum,,
148525,https://forum.effectivealtruism.org/posts/nhenCNq7s3zaXpQ8c/what-would-it-look-like-for-ais-to-no-longer-be-neglected,What would it look like for AIS to no longer be neglected?,['Rockwell'],2023-06-16T15:59:38Z,eaforum,,
148534,https://forum.effectivealtruism.org/posts/XHedou8TjeAccuerm/the-theoretical-computational-limit-of-the-solar-system-is-1,The theoretical computational limit of the Solar System is 1.47x10^49 bits per second.,['William the Kiwi'],2023-10-17T02:52:34Z,eaforum,,
148546,https://forum.effectivealtruism.org/posts/JYEAL8g7ArqGoTaX6/ai-pause-will-likely-backfire,AI Pause Will Likely Backfire,['Nora Belrose'],2023-09-16T10:21:25Z,eaforum,,
148569,https://forum.effectivealtruism.org/posts/Ekso4kAkjLivnaaQP/bahamian-adventures-an-epic-tale-of-entrepreneurship-ai,"Bahamian Adventures: An Epic Tale of Entrepreneurship, AI Strategy Research and Potatoes",['Jaime Sevilla'],2022-08-09T08:37:37Z,eaforum,,
148585,https://forum.effectivealtruism.org/posts/zyXtnrokGyR7yhqsg/voting-theory-has-a-hole,Voting Theory has a HOLE,['Anthony Repetto'],2021-12-04T04:20:41Z,eaforum,,
148596,https://forum.effectivealtruism.org/posts/nwEDksQ8GCCjBiMnF/agi-alignment-paperclip-maximizer-pause-defection-incentives,AGI - alignment - paperclip maximizer - pause - defection - incentives,['Mars Robertson'],2023-04-13T10:38:36Z,eaforum,,
148612,https://forum.effectivealtruism.org/posts/BzyeByJcdBqiyjGaG/linkpost-shorter-version-of-report-on-existential-risk-from,[Linkpost] Shorter version of report on existential risk from power-seeking AI,['Joe_Carlsmith'],2023-03-22T18:06:50Z,eaforum,,
148629,https://forum.effectivealtruism.org/posts/WdMnmmqqiP5zCtSfv/cognitive-science-psychology-as-a-neglected-approach-to-ai,Cognitive Science/Psychology As a Neglected Approach to AI Safety,['Kaj_Sotala'],2017-06-05T13:46:50Z,eaforum,,
148660,https://forum.effectivealtruism.org/posts/JQnYZghxrdpEYHaB8/join-the-ai-governance-and-interpretability-hackathons,Join the AI governance and interpretability hackathons!,"['Esben Kran', 'Sabrina Zaki', 'Apart Research']",2023-03-23T14:39:42Z,eaforum,,
148681,https://forum.effectivealtruism.org/posts/JsjQRqvRc5pFmeSoj/what-do-we-know-about-mustafa-suleyman-s-position-on-ai,What do we know about Mustafa Suleyman's position on AI Safety?,['Chris Leong'],2023-08-13T19:41:38Z,eaforum,,
148690,https://forum.effectivealtruism.org/posts/iiRGCydMX7aiEjvGm/12-tentative-ideas-for-us-ai-policy-luke-muehlhauser,12 tentative ideas for US AI policy (Luke Muehlhauser),['Lizka'],2023-04-19T21:06:00Z,eaforum,,
148725,https://forum.effectivealtruism.org/posts/pfEpu3gMG5bRMyfee/intro-to-caring-about-ai-alignment-as-an-ea-cause,Intro to caring about AI alignment as an EA cause,['So8res'],2017-04-14T00:42:16Z,eaforum,,
148758,https://forum.effectivealtruism.org/posts/rho5vtxSaEdXxLu3o/yudkowsky-and-christiano-discuss-takeoff-speeds,"Yudkowsky and Christiano discuss ""Takeoff Speeds""",['EliezerYudkowsky'],2021-11-22T19:42:59Z,eaforum,,
148778,https://forum.effectivealtruism.org/posts/vvocfhQ7bcBR4FLBx/apply-to-the-second-ml-for-alignment-bootcamp-mlab-2-in,Apply to the second ML for Alignment Bootcamp (MLAB 2) in Berkeley [Aug 15 - Fri Sept 2],"['Buck', 'Max Nadeau']",2022-05-06T00:19:02Z,eaforum,,
148790,https://forum.effectivealtruism.org/posts/BFbsqwCuuqueFRfpW/aim-for-conditional-pauses,Aim for conditional pauses,['AnonResearcherMajorAILab'],2023-09-25T01:05:56Z,eaforum,,
148815,https://forum.effectivealtruism.org/posts/DZ8JFxWo4tzuj6L85/humanity-s-vast-future-and-its-implications-for-cause,Humanity’s vast future and its implications for cause prioritization,['BrownHairedEevee'],2022-07-26T05:04:30Z,eaforum,,
148843,https://forum.effectivealtruism.org/posts/zwZLiKSgRwYRi9Jzt/final-report-of-the-national-security-commission-on,"Final Report of the National Security Commission on Artificial Intelligence (NSCAI, 2021)",['MichaelA'],2021-06-01T08:19:16Z,eaforum,,
148864,https://forum.effectivealtruism.org/posts/zcMhnfM9n6NKBaRnH/apollo-research-is-hiring-evals-and-interpretability,Apollo Research is hiring evals and interpretability engineers & scientists,['mariushobbhahn'],2023-08-04T10:56:20Z,eaforum,,
148880,https://forum.effectivealtruism.org/posts/Nm9ahJzKsDGFfF66b/nyt-google-will-recalibrate-the-risk-of-releasing-ai-due-to,NYT: Google will ‘recalibrate’ the risk of releasing AI due to competition with OpenAI,['Michael Huang'],2023-01-22T02:13:45Z,eaforum,,
148897,https://forum.effectivealtruism.org/posts/55RGoyjhc5vcEbX8o/let-s-think-about-lowering-the-burden-of-proof-for-liability,Let's think about...lowering the burden of proof for liability for harms associated with AI.,['dEAsign'],2023-09-26T12:16:54Z,eaforum,,
148908,https://forum.effectivealtruism.org/posts/NyCHoZGGw5YssvDJB/lessons-from-three-mile-island-for-ai-warning-shots,Lessons from Three Mile Island for AI Warning Shots,['NickGabs'],2022-09-26T02:47:08Z,eaforum,,
148941,https://forum.effectivealtruism.org/posts/MMtbCDTNP3M53N3Dc/agi-safety-from-first-principles,AGI safety from first principles,['richard_ngo'],2020-10-21T17:42:53Z,eaforum,,
148962,https://forum.effectivealtruism.org/posts/LLdHNTEHMoPYqGtHY/critique-of-superintelligence-part-4,Critique of Superintelligence Part 4,['Fods12'],2018-12-13T05:14:42Z,eaforum,,
148983,https://forum.effectivealtruism.org/posts/DcwdjbckGCceqctTp/7-learnings-and-a-detailed-description-of-an-ai-safety,7 Learnings and a Detailed Description of an AI Safety Reading Group,['nell'],2022-09-23T02:02:58Z,eaforum,,
149006,https://forum.effectivealtruism.org/posts/6RcicmJCvztarka8Y/grokking-forecasting-tai-with-biological-anchors,Grokking “Forecasting TAI with biological anchors”,['anson'],2022-06-06T18:56:32Z,eaforum,,
149027,https://forum.effectivealtruism.org/posts/ySCqffZTKtZp97JFB/digital-people-could-make-ai-safer,Digital people could make AI safer,['GMcGowan'],2022-06-10T15:29:49Z,eaforum,,
149047,https://forum.effectivealtruism.org/posts/HayWBGerpYFk3GsZR/brief-summary-of-key-disagreements-in-ai-risk,Brief summary of key disagreements in AI Risk,['Aryeh Englander'],2019-12-26T19:40:28Z,eaforum,,
149066,https://forum.effectivealtruism.org/posts/GDkrPrP2m6TQqdSGF/time-magazine-deepmind-s-ceo-helped-take-ai-mainstream-now,"[TIME magazine] DeepMind’s CEO Helped Take AI Mainstream. Now He’s Urging Caution (Perrigo, 2023)",['Will Aldred'],2023-01-20T20:37:03Z,eaforum,,
149092,https://forum.effectivealtruism.org/posts/8CD4i8FsRApcbt3an/list-of-masters-programs-in-tech-policy-public-policy-and,"List of Masters Programs in Tech Policy, Public Policy and Security (Europe)",['sberg'],2023-05-29T10:23:22Z,eaforum,,
149112,https://forum.effectivealtruism.org/posts/WqQDCCLWbYfFRwubf/information-security-considerations-for-ai-and-the-long-term,Information security considerations for AI and the long term future,"['Jeffrey Ladish', 'lennart']",2022-05-02T20:53:56Z,eaforum,,
149133,https://forum.effectivealtruism.org/posts/TPDtmSnJbGZFDZTfs/the-technology-bucket-error,"The “technology"" bucket error",['Holly_Elmore'],2023-09-21T00:59:12Z,eaforum,,
149146,https://forum.effectivealtruism.org/posts/kB2DK6fpmbGCxe4ED/appendix-to-bridging-demonstration,Appendix to Bridging Demonstration,['mako yass'],2022-06-01T20:30:18Z,eaforum,,
149166,https://forum.effectivealtruism.org/posts/kcopiC5G4nagd4ndd/when-is-ai-safety-research-harmful,When is AI safety research harmful?,['Nathan_Barnard'],2022-05-09T10:36:27Z,eaforum,,
149185,https://forum.effectivealtruism.org/posts/vCAvL3DLhf2peuEhR/pitching-ai-safety-in-3-sentences,Pitching AI Safety in 3 sentences,['PabloAMC'],2022-03-30T18:50:28Z,eaforum,,
149194,https://forum.effectivealtruism.org/posts/fuEHtmY9gvQQtdawL/half-baked-ideas-thread-ea-ai-safety,Half-baked ideas thread (EA / AI Safety),['Aryeh Englander'],2022-06-23T16:05:16Z,eaforum,,
149203,https://forum.effectivealtruism.org/posts/YmEMGbnwuWeGLbnzv/updates-from-campaign-for-ai-safety,Updates from Campaign for AI Safety,"['Jolyn Khoo', 'Nik Samoylov']",2023-06-29T07:23:48Z,eaforum,,
149221,https://forum.effectivealtruism.org/posts/aSMexrjGXpNiWpbb5/a-simple-model-of-agi-deployment-risk,A Simple Model of AGI Deployment Risk,['djbinder'],2021-07-09T09:44:50Z,eaforum,,
149246,https://forum.effectivealtruism.org/posts/wdk3LCg6iFxknCYG4/why-don-t-governments-seem-to-mind-that-companies-are,Why don't governments seem to mind that companies are explicitly trying to make AGIs?,['Ozzie Gooen'],2021-12-23T07:08:02Z,eaforum,,
149263,https://forum.effectivealtruism.org/posts/J3ribNjvPRtHCK7bC/organizing-a-debate-with-experts-and-mps-to-raise-ai-xrisk,Organizing a debate with experts and MPs to raise AI xrisk awareness: a possible blueprint,['Otto'],2023-04-19T10:50:22Z,eaforum,,
149280,https://forum.effectivealtruism.org/posts/Be89az6nDN37cYuri/assistant-professor-ranked-ai-ethics-philosopher-job,"Assistant-professor-ranked AI ethics philosopher job opportunity at Canterbury University, New Zealand",['ben.smith'],2022-10-16T17:56:31Z,eaforum,,
149289,https://forum.effectivealtruism.org/posts/iitD7ia96CYkocLTd/protest-against-meta-s-irreversible-proliferation-sept-29,"Protest against Meta's irreversible proliferation (Sept 29, San Francisco)",['Holly_Elmore'],2023-09-19T23:40:26Z,eaforum,,
149299,https://forum.effectivealtruism.org/posts/FtggfJ2oxNSN8Niix/when-reporting-ai-timelines-be-clear-who-you-re-deferring-to,"When reporting AI timelines, be clear who you're deferring to",['Sam Clarke'],2022-10-10T14:24:14Z,eaforum,,
149314,https://forum.effectivealtruism.org/posts/ynxDXNbmak9fEHaAm/symbiosis-not-alignment-as-the-goal-for-liberal-democracies,"Symbiosis, not alignment, as the goal for liberal democracies in the transition to artificial general intelligence",['simonfriederich'],2023-03-17T13:04:23Z,eaforum,,
149345,https://forum.effectivealtruism.org/posts/j4G5Gqxa6JmbbQYzX/mediocre-ai-safety-as-existential-risk,Mediocre AI safety as existential risk,['Gavin'],2022-03-16T11:50:01Z,eaforum,,
149364,https://forum.effectivealtruism.org/posts/6CRvK76onGdHTqYoK/4-years-later-president-trump-and-global-catastrophic-risk,4 Years Later: President Trump and Global Catastrophic Risk,['HaydnBelfield'],2020-10-25T16:28:00Z,eaforum,,
149406,https://forum.effectivealtruism.org/posts/naJ9cJfHMTJ9CACvD/getting-started-independently-in-ai-safety,Getting started independently in AI Safety,['JJ Hepburn'],2021-07-06T15:20:50Z,eaforum,,
149424,https://forum.effectivealtruism.org/posts/D4khSueGA4Trebkks/crosspost-an-ai-pause-is-humanity-s-best-bet-for-preventing,[Crosspost] An AI Pause Is Humanity's Best Bet For Preventing Extinction (TIME),['Otto'],2023-07-24T10:18:09Z,eaforum,,
149444,https://forum.effectivealtruism.org/posts/NZz3Das7jFdCBN9zH/announcing-the-open-philanthropy-ai-worldviews-contest,Announcing the Open Philanthropy AI Worldviews Contest,"['Jason Schukraft', 'Peter Favaloro']",2023-03-10T02:33:35Z,eaforum,,
149455,https://forum.effectivealtruism.org/posts/hXLGQBrDg3iwQjWDp/capping-agi-profits,Capping AGI profits,['Luke Frymire'],2023-03-21T13:29:34Z,eaforum,,
149465,https://forum.effectivealtruism.org/posts/FgHa7FyiPyvBmMvS7/would-you-pursue-software-engineering-as-a-career-today,Would you pursue software engineering as a career today?,['justaperson'],2023-03-18T03:33:31Z,eaforum,,
149480,https://forum.effectivealtruism.org/posts/zoWypGfXLmYsDFivk/counterarguments-to-the-basic-ai-risk-case,Counterarguments to the basic AI risk case,['Katja_Grace'],2022-10-14T20:30:50Z,eaforum,,
149501,https://forum.effectivealtruism.org/posts/63pYakESGrQpfNw25/can-gpt-3-produce-new-ideas-partially-automating-robin,Can GPT-3 produce new ideas? Partially automating Robin Hanson and others,['NunoSempere'],2023-01-16T15:05:46Z,eaforum,,
149517,https://forum.effectivealtruism.org/posts/ruJnXtdDS7XiiwzSP/how-major-governments-can-help-with-the-most-important,How major governments can help with the most important century,['Holden Karnofsky'],2023-02-24T19:37:19Z,eaforum,,
149554,https://forum.effectivealtruism.org/posts/G7bWx2XaBxrBEKGgh/us-public-opinion-on-ai-september-2023,"US public opinion on AI, September 2023",['Zach Stein-Perlman'],2023-09-18T18:00:50Z,eaforum,,
149572,https://forum.effectivealtruism.org/posts/xzjQvqDYahigHcwgQ/aligning-recommender-systems-as-cause-area,Aligning Recommender Systems as Cause Area,['IvanVendrov'],2019-05-08T08:56:15Z,eaforum,,
149604,https://forum.effectivealtruism.org/posts/3itL9GJcxvQC5Pp5D/eu-ai-act-now-has-a-section-on-general-purpose-ai-systems,EU AI Act now has a section on general purpose AI systems,['MathiasKB'],2021-12-09T12:40:20Z,eaforum,,
149614,https://forum.effectivealtruism.org/posts/pcn3KDqfsxmobGazH/poster-session-on-ai-safety,Poster Session on AI Safety,['Neil Crawford'],2022-11-12T03:50:11Z,eaforum,,
149640,https://forum.effectivealtruism.org/posts/KkbEfpNkjNepQrj8g/publication-decisions-for-large-language-models-and-their,"Publication decisions for large language models, and their impacts",['Ben Cottier'],2022-12-21T13:50:17Z,eaforum,,
149662,https://forum.effectivealtruism.org/posts/hAQMQun7FySAWuQWg/benefits-risks-of-scott-aaronson-s-orthodox-reform-framing,Benefits/Risks of Scott Aaronson's Orthodox/Reform Framing for AI Alignment,['Jeremy'],2022-11-21T17:47:08Z,eaforum,,
149678,https://forum.effectivealtruism.org/posts/hFLEpodjWZvQLgMza/which-of-these-arguments-for-x-risk-do-you-think-we-should,Which of these arguments for x-risk do you think we should test?,['Wim'],2022-08-09T13:43:29Z,eaforum,,
149688,https://forum.effectivealtruism.org/posts/cp4aQuFfH5PwAJmdz/linkpost-ten-levels-of-ai-alignment-difficulty,[linkpost] Ten Levels of AI Alignment Difficulty,['SammyDMartin'],2023-07-04T11:23:17Z,eaforum,,
149699,https://forum.effectivealtruism.org/posts/ciKv8MRJ7gYyGS65o/what-harm-could-ai-safety-do,What harm could AI safety do?,['SeanEngelhart'],2021-05-15T01:11:13Z,eaforum,,
149720,https://forum.effectivealtruism.org/posts/SEL9PW8jozrvLnkb4/my-current-thoughts-on-miri-s-highly-reliable-agent-design,"My current thoughts on MIRI's ""highly reliable agent design"" work",['Daniel_Dewey'],2017-07-07T01:17:25Z,eaforum,,
149743,https://forum.effectivealtruism.org/posts/YTjnCJuj3taaB6WYk/agi-predictions,AGI Predictions,['Pablo'],2020-11-21T12:02:35Z,eaforum,,
149759,https://www.lesswrong.com/posts/jdLmC46ZuXS54LKzL/why-i-m-sceptical-of-foom,Why I'm Sceptical of Foom,['DragonGod'],2022-12-08T10:01:01Z,lesswrong,,
149780,https://www.lesswrong.com/posts/peebMuCuscjkNvTnE/clarifying-the-malignity-of-the-universal-prior-the-lexical,Clarifying The Malignity of the Universal Prior: The Lexical Update,['interstice'],2020-01-15T00:00:37Z,lesswrong,,
149797,https://www.lesswrong.com/posts/n5ucT5ZbPdhfGNLtP/terminal-values-and-instrumental-values,Terminal Values and Instrumental Values,['Eliezer Yudkowsky'],2007-11-15T07:56:15Z,lesswrong,,
149818,https://www.lesswrong.com/posts/7fYxxtZqjuYXhBA2D/testing-ways-to-bypass-chatgpt-s-safety-features,Testing Ways to Bypass ChatGPT's Safety Features,['Robert_AIZI'],2022-12-05T18:50:48Z,lesswrong,,
149834,https://www.lesswrong.com/posts/G4KHuYC3pHry6yMhi/compute-research-questions-and-metrics-transformative-ai-and,Compute Research Questions and Metrics - Transformative AI and Compute [4/4],['lennart'],2021-11-28T22:49:57Z,lesswrong,,
149864,https://www.lesswrong.com/posts/5fdcsWwtvG9jAtzGK/update-on-the-uk-ai-taskforce-and-ai-safety-summit,Update on the UK AI Taskforce & AI Safety Summit,['Elliot_Mckernon'],2023-10-11T11:37:42Z,lesswrong,,
149889,https://www.lesswrong.com/posts/eywpzHRgXTCCAi8yt/what-s-actually-going-on-in-the-mind-of-the-model-when-we,"What's actually going on in the ""mind"" of the model when we fine-tune GPT-3 to InstructGPT?",['rpglover64'],2023-02-10T07:57:05Z,lesswrong,,
149904,https://www.lesswrong.com/posts/kNtBiyyGnjceNAkQK/a-double-feature-on-the-extropians,A Double-Feature on The Extropians,['Maxwell Tabarrok'],2023-06-03T18:27:47Z,lesswrong,,
149913,https://www.lesswrong.com/posts/Gkot4upnvMB7quzne/should-people-build-productizations-of-open-source-ai-models,Should people build productizations of open source AI models?,['lc'],2023-11-02T01:26:48Z,lesswrong,,
149923,https://www.lesswrong.com/posts/QDW9ksP7byF2Yoaar/accidental-optimizers,Accidental Optimizers,['aysajan'],2021-09-22T13:27:38Z,lesswrong,,
149940,https://www.lesswrong.com/posts/4v59asmKZxumHmYyz/public-opinion-on-ai-safety-aims-2023-and-2021-summary,Public Opinion on AI Safety: AIMS 2023 and 2021 Summary,"['Jacy Reese Anthis', 'Janet Pauketat', 'Ali']",2023-09-25T18:55:42Z,lesswrong,,
149969,https://www.lesswrong.com/posts/v24n8oR9aAuPCHDSA/cryptocurrency-exploits-show-the-importance-of-proactive,Cryptocurrency Exploits Show the Importance of Proactive Policies for AI X-Risk,['eSpencer'],2022-09-20T17:53:42Z,lesswrong,,
149991,https://www.lesswrong.com/posts/Qh6bnkxbMFz5SNeFd/value-uncertainty-and-the-singleton-scenario,Value Uncertainty and the Singleton Scenario,['Wei Dai'],2010-01-24T05:03:45Z,lesswrong,,
150005,https://www.lesswrong.com/posts/mm8sFBpPH3Bb2NhGg/three-reasons-to-cooperate,Three reasons to cooperate,['paulfchristiano'],2022-12-24T17:40:01Z,lesswrong,,
150023,https://www.lesswrong.com/posts/dypAjfRCe4nyasGSs/new-gpt3-impressive-capabilities-instructgpt3-1-2,New GPT3 Impressive Capabilities - InstructGPT3 [1/2],['simeon_c'],2022-03-13T10:58:46Z,lesswrong,,
150047,https://www.lesswrong.com/posts/vnzQgeiwmqfzqvFRm/gpt-3-and-the-future-of-knowledge-work,GPT-3 and the future of knowledge work,['fowlertm'],2021-03-05T17:40:12Z,lesswrong,,
150069,https://www.lesswrong.com/posts/ZxBBhzFSP6q4cz4Fv/questions-of-reasoning-under-logical-uncertainty,Questions of Reasoning under Logical Uncertainty,['So8res'],2015-01-09T17:37:55Z,lesswrong,,
150084,https://www.lesswrong.com/posts/Hug2ePykMkmPzSsx6/drawing-two-aces,Drawing Two Aces,['Eliezer Yudkowsky'],2010-01-03T10:33:46Z,lesswrong,,
150098,https://www.lesswrong.com/posts/SGHsnG7ZraTPKzveo/bias-in-rationality-is-much-worse-than-noise,Bias in rationality is much worse than noise,['Stuart_Armstrong'],2017-10-31T11:57:16Z,lesswrong,,
150112,https://www.lesswrong.com/posts/M3iPAmxZwy4gPXdXw/the-public-supports-regulating-ai-for-safety,The public supports regulating AI for safety,['Zach Stein-Perlman'],2023-02-17T04:10:03Z,lesswrong,,
150131,https://www.lesswrong.com/posts/bHBmmkdwsmjNwp94K/new-us-senate-bill-on-x-risk-mitigation-linkpost,New US Senate Bill on X-Risk Mitigation [Linkpost],['Evan R. Murphy'],2022-07-04T01:25:57Z,lesswrong,,
150142,https://www.lesswrong.com/posts/MxXtaChisukWL5Ehz/deceptive-agents-are-a-good-way-to-do-things,Deceptive Agents are a Good Way to Do Things,['David Udell'],2022-04-19T18:04:28Z,lesswrong,,
150153,https://www.lesswrong.com/posts/vrcrsd5svM3riDFst/a-critique-of-ai-alignment-pessimism,A Critique of AI Alignment Pessimism,['ExCeph'],2022-07-19T02:28:14Z,lesswrong,,
150188,https://www.lesswrong.com/posts/KW5m4eREWGitPb8Ev/fighting-akrasia-incentivising-action,Fighting Akrasia: Incentivising Action,['Gordon Seidoh Worley'],2009-04-29T13:48:56Z,lesswrong,,
150200,https://www.lesswrong.com/posts/paYyQ8Y7Zun5ERRj3/gpt-4-is-easily-controlled-exploited-with-tricky-decision,GPT-4 is easily controlled/exploited with tricky decision theoretic dilemmas.,['scasper'],2023-04-14T19:39:24Z,lesswrong,,
150221,https://www.lesswrong.com/posts/i4LjHb6enWiErXdx2/ramble-on-stuff-intelligence-simulation-ai-doom-default-mode,"Ramble on STUFF: intelligence, simulation, AI, doom, default mode, the usual",['Bill Benzon'],2023-08-26T15:49:48Z,lesswrong,,
150245,https://www.lesswrong.com/posts/Yh9nkqfoSs2KGfetf/only-a-hack-can-solve-the-shutdown-problem,Only a hack can solve the shutdown problem,['dp'],2023-07-15T20:26:55Z,lesswrong,,
150263,https://www.lesswrong.com/posts/6gZQTkLs9GCdZkFoW/the-biological-intelligence-explosion,The biological intelligence explosion,['Rob Lucas'],2021-07-25T13:08:28Z,lesswrong,,
150272,https://www.lesswrong.com/posts/iTmu5nrrtqHGe9iCr/ai-safety-in-a-vulnerable-world-requesting-feedback-on,AI Safety in a Vulnerable World: Requesting Feedback on Preliminary Thoughts,['Jordan Arel'],2022-12-06T22:35:15Z,lesswrong,,
150287,https://www.lesswrong.com/posts/K4urTDkBbtNuLivJx/why-i-think-strong-general-ai-is-coming-soon,Why I think strong general AI is coming soon,['porby'],2022-09-28T05:40:38Z,lesswrong,,
150309,https://www.lesswrong.com/posts/MksJyLgJ8JQFexiWi/why-ai-alignment-would-better-be-renamed-into-artificial,"Why ""AI alignment"" would better be renamed into ""Artificial Intention research""",['chaosmage'],2023-06-15T10:32:26Z,lesswrong,,
150321,https://www.lesswrong.com/posts/5evRqMmGxTKf98pvT/evaluating-the-feasibility-of-si-s-plan,Evaluating the feasibility of SI's plan,['JoshuaFox'],2013-01-10T08:17:30Z,lesswrong,,
150343,https://www.lesswrong.com/posts/yFr8ZfGGnRX5GqndZ/introducing-corrigibility-an-fai-research-subfield,Introducing Corrigibility (an FAI research subfield),['So8res'],2014-10-20T21:09:45Z,lesswrong,,
150360,https://www.lesswrong.com/posts/Xscch4PFgmdM3GTMY/don-t-you-think-rlhf-solves-outer-alignment,Don't you think RLHF solves outer alignment?,['Charbel-Raphaël'],2022-11-04T00:36:37Z,lesswrong,,
150371,https://www.lesswrong.com/posts/Y3bkJ59j4dciiLYyw/intro-to-brain-like-agi-safety-4-the-short-term-predictor,[Intro to brain-like-AGI safety] 4. The “short-term predictor”,['Steven Byrnes'],2022-02-16T13:12:14Z,lesswrong,,
150392,https://www.lesswrong.com/posts/KFLdfuw35qkgjzWer/understanding-conjecture-notes-from-connor-leahy-interview,Understanding Conjecture: Notes from Connor Leahy interview,['Akash'],2022-09-15T18:37:52Z,lesswrong,,
150422,https://www.lesswrong.com/posts/eZ8xAyxiELASGsawb/uk-government-publishes-frontier-ai-capabilities-and-risks,"UK Government publishes ""Frontier AI: capabilities and risks"" Discussion Paper",['A.H.'],2023-10-26T13:55:17Z,lesswrong,,
150432,https://www.lesswrong.com/posts/uXoLNFeZbtEejFPHr/eliciting-credit-hacking-behaviours-in-llms-1,Eliciting Credit Hacking Behaviours in LLMs,['omegastick'],2023-09-14T15:07:38Z,lesswrong,,
150443,https://www.lesswrong.com/posts/raoeNarFYCxxyKAop/modulating-sycophancy-in-an-rlhf-model-via-activation,Modulating sycophancy in an RLHF model via activation steering,['Nina Rimsky'],2023-08-09T07:06:51Z,lesswrong,,
150458,https://www.lesswrong.com/posts/shb67DsGstZmvhiem/infernal-corrigibility-fiendishly-difficult,"Infernal Corrigibility, Fiendishly Difficult",['David Udell'],2022-05-27T20:32:51Z,lesswrong,,
150477,https://www.lesswrong.com/posts/CoEtbtMTcPczTiPuX/ais-and-gatekeepers-unite,AIs and Gatekeepers Unite!,['Eliezer Yudkowsky'],2008-10-09T17:04:31Z,lesswrong,,
150487,https://www.lesswrong.com/posts/WkchhorbLsSMbLacZ/ai-1-sydney-and-bing,AI #1: Sydney and Bing,['Zvi'],2023-02-21T14:00:00Z,lesswrong,,
150526,https://www.lesswrong.com/posts/n99LGqGyQWYNyNvXG/aisafety-world-is-a-map-of-the-ais-ecosystem,AISafety.world is a map of the AIS ecosystem,['Hamish Doodles'],2023-04-06T18:37:15Z,lesswrong,,
150543,https://www.lesswrong.com/posts/5BkEoJFEqQEWy9GcL/an-open-philanthropy-grant-proposal-causal-representation,An Open Philanthropy grant proposal: Causal representation learning of human preferences,['PabloAMC'],2022-01-11T11:28:02Z,lesswrong,,
150564,https://www.lesswrong.com/posts/5J34FAKyEmqKaT7jt/a-summary-of-savage-s-foundations-for-probability-and,A summary of Savage's foundations for probability and utility.,['Sniffnoy'],2011-05-22T19:56:28Z,lesswrong,,
150587,https://www.lesswrong.com/posts/d6DvuCKH5bSoT62DB/compendium-of-problems-with-rlhf,Compendium of problems with RLHF,['Charbel-Raphaël'],2023-01-29T11:40:53Z,lesswrong,,
150616,https://www.lesswrong.com/posts/bwcEQ4zRmLbbhAhA9/would-manhattan-project-style-be-beneficial-or-deleterious,"Would ""Manhattan Project"" style be beneficial or deleterious for AI Alignment?",['Just Learning'],2022-08-04T19:12:45Z,lesswrong,,
150633,https://www.lesswrong.com/posts/fTu69HzLSXqWgj9ib/is-google-paperclipping-the-web-the-perils-of-optimization,Is Google Paperclipping the Web? The Perils of Optimization by Proxy in Social Systems,['Alexandros'],2010-05-10T13:25:42Z,lesswrong,,
150650,https://www.lesswrong.com/posts/ftw4d8kByxh39FdDR/inner-alignment-via-superpowers,Inner Alignment via Superpowers,"['JamesH', 'Thomas Larsen', 'Jeremy Gillen']",2022-08-30T20:01:52Z,lesswrong,,
150664,https://www.lesswrong.com/posts/bdsCWKKDSnh8wNKS3/eisenhower-s-atoms-for-peace-speech,Eisenhower's Atoms for Peace Speech,['Akash'],2023-05-17T16:10:39Z,lesswrong,,
150685,https://www.lesswrong.com/posts/DqF9c8J9LXeFFo42a/ai-safety-field-building-projects-i-d-like-to-see,AI Safety field-building projects I'd like to see,['Akash'],2022-09-11T23:43:32Z,lesswrong,,
150706,https://www.lesswrong.com/posts/2THFt7BChfCgwYDeA/let-s-discuss-functional-decision-theory,Let's Discuss Functional Decision Theory,['Chris_Leong'],2018-07-23T07:24:48Z,lesswrong,,
150717,https://www.lesswrong.com/posts/ioJGAt6x4q3GGKhrG/solomonoff-induction-still-works-if-the-universe-is,"Solomonoff induction still works if the universe is uncomputable, and its usefulness doesn't require knowing Occam's razor",['Christopher King'],2023-06-18T01:52:26Z,lesswrong,,
150727,https://www.lesswrong.com/posts/RsLsBnr6qmfKYR7sL/will-the-first-agi-agent-have-been-designed-as-an-agent-in,Will the first AGI agent have been designed as an agent (in addition to an AGI)?,['nahoj'],2022-12-03T20:32:52Z,lesswrong,,
150737,https://www.lesswrong.com/posts/wnF9iydYiBMRs2jPg/reflections-on-my-5-month-alignment-upskilling-grant,Reflections on my 5-month alignment upskilling grant,['Jay Bailey'],2022-12-27T10:51:50Z,lesswrong,,
150754,https://www.lesswrong.com/posts/ebezsHW6qJwxTFasX/paper-all-s-fair-in-love-and-love-copy-suppression-in-gpt-2,[Paper] All's Fair In Love And Love: Copy Suppression in GPT-2 Small,"['TheMcDouglas', 'Arthur Conmy', 'starship006', 'Tom McGrath', 'Neel Nanda']",2023-10-13T18:32:02Z,lesswrong,,
150776,https://www.lesswrong.com/posts/AWaJvBMb9HGBwtNqd/qualitative-strategies-of-friendliness,Qualitative Strategies of Friendliness,['Eliezer Yudkowsky'],2008-08-30T02:12:33Z,lesswrong,,
150799,https://www.lesswrong.com/posts/DQMD5XZgBXaegRDQv/miri-conversations-technology-forecasting-and-gradualism,MIRI Conversations: Technology Forecasting & Gradualism (Distillation),['TheMcDouglas'],2022-07-13T15:55:40Z,lesswrong,,
150825,https://www.lesswrong.com/posts/d3gMZmSSAHXaGisyJ/miri-s-technical-research-agenda,MIRI's technical research agenda,['So8res'],2014-12-23T18:45:41Z,lesswrong,,
150840,https://www.lesswrong.com/posts/aymbce8ge9ve2C4Po/eis-xii-summary,EIS XII: Summary,['scasper'],2023-02-23T17:45:56Z,lesswrong,,
150866,https://www.lesswrong.com/posts/gTmWZEu3CcEQ6fLLM/treating-anthropic-selfish-preferences-as-an-extension-of,Treating anthropic selfish preferences as an extension of TDT,['Manfred'],2015-01-01T00:43:57Z,lesswrong,,
150889,https://www.lesswrong.com/posts/MrdFL38Zi3DwTDkKS/apollo-research-is-hiring-evals-and-interpretability,Apollo Research is hiring evals and interpretability engineers & scientists,['Marius Hobbhahn'],2023-08-04T10:54:09Z,lesswrong,,
150908,https://www.lesswrong.com/posts/jyAerr8txxhiKnxwA/5-reasons-why-governments-militaries-already-want-ai-for,5 Reasons Why Governments/Militaries Already Want AI for Information Warfare,['trevor'],2023-10-30T16:30:38Z,lesswrong,,
150932,https://www.lesswrong.com/posts/dC3rxrMkYKLfgTYEa/what-a-reduction-of-could-could-look-like,"What a reduction of ""could"" could look like",['cousin_it'],2010-08-12T17:41:34Z,lesswrong,,
150941,https://www.lesswrong.com/posts/3CSsBfdnkkaufagKF/is-instructgpt-following-instructions-in-other-languages,Is InstructGPT Following Instructions in Other Languages Surprising?,['DragonGod'],2023-02-13T23:26:29Z,lesswrong,,
150951,https://www.lesswrong.com/posts/8NBbq7xhyDXoDWM8e/don-t-condition-on-no-catastrophes,Don't Condition on no Catastrophes,['Scott Garrabrant'],2018-02-21T21:50:31Z,lesswrong,,
150960,https://www.lesswrong.com/posts/kobnJdkMnrmZvL3Fu/confusions-and-updates-on-stem-ai,Confusions and updates on STEM AI,['Eleni Angelou'],2023-05-19T21:34:58Z,lesswrong,,
150979,https://www.lesswrong.com/posts/vtPRr5ozBkezCZB7w/newcomb-s-grandfather,Newcomb's Grandfather,['Yair Halberstadt'],2022-01-28T08:56:53Z,lesswrong,,
150988,https://www.lesswrong.com/posts/FmxhoWxvBqSxhFeJn/i-attempted-the-ai-box-experiment-and-lost,I attempted the AI Box Experiment (and lost),['Tuxedage'],2013-01-21T02:59:04Z,lesswrong,,
151010,https://www.lesswrong.com/posts/fa5o2tg9EfJE77jEQ/the-human-s-hidden-utility-function-maybe,The Human's Hidden Utility Function (Maybe),['lukeprog'],2012-01-23T19:39:43Z,lesswrong,,
151019,https://www.lesswrong.com/posts/D2z8eQWnosNnbEBdD/open-ai-co-founder-on-agi,Open AI co-founder on AGI,['ShardPhoenix'],2018-09-16T10:18:03Z,lesswrong,,
151034,https://www.lesswrong.com/posts/g9dNMXKX2fqLgW9a9/philosophical-self-ratification,Philosophical self-ratification,['jessicata'],2020-02-03T22:48:47Z,lesswrong,,
151050,https://www.lesswrong.com/posts/cyycbDAffNc6aghas/if-agi-is-imminent-why-can-t-i-hail-a-robotaxi,"If AGI is imminent, why can’t I hail a robotaxi?",['Yarrow Bouchard'],2023-11-03T18:11:43Z,lesswrong,,
151064,https://www.lesswrong.com/posts/rTgwxsxu6hstgxDR2/crosspost-organizing-a-debate-with-experts-and-mps-to-raise,[Crosspost] Organizing a debate with experts and MPs to raise AI xrisk awareness: a possible blueprint,['otto.barten'],2023-04-19T11:45:46Z,lesswrong,,
151078,https://www.lesswrong.com/posts/oT8fmwWddGwnZbbym/notes-on-meta-s-diplomacy-playing-ai,Notes on Meta's Diplomacy-Playing AI,['Erich_Grunewald'],2022-12-22T11:34:27Z,lesswrong,,
151104,https://www.lesswrong.com/posts/3xnkw6JkQdwc8Cfcf/is-the-human-brain-a-valid-choice-for-the-universal-turing,Is the human brain a valid choice for the Universal Turing Machine in Solomonoff Induction?,['habryka'],2018-12-08T01:49:56Z,lesswrong,,
151113,https://www.lesswrong.com/posts/Qsc3G2HemFWLobDSw/surprised-by-elk-report-s-counterexample-to-debate-ida,"Surprised by ELK report's counterexample to Debate, IDA",['Evan R. Murphy'],2022-08-04T02:12:15Z,lesswrong,,
151127,https://www.lesswrong.com/posts/6RjL996E8Dsz3vHPk/two-more-decision-theory-problems-for-humans,Two More Decision Theory Problems for Humans,['Wei Dai'],2019-01-04T09:00:33Z,lesswrong,,
151143,https://www.lesswrong.com/posts/EP92JhDm8kqtfATk8/yoshua-bengio-argues-for-tool-ai-and-to-ban-executive-ai,"Yoshua Bengio argues for tool-AI and to ban ""executive-AI""",['habryka'],2023-05-09T00:13:09Z,lesswrong,,
151170,https://www.lesswrong.com/posts/BQKKQiBmc63fwjDrj/graphical-tensor-notation-for-interpretability,Graphical tensor notation for interpretability,['Jordan Taylor'],2023-10-04T08:04:33Z,lesswrong,,
151180,https://www.lesswrong.com/posts/hfrktmuEMfDZJYsHq/pre-registering-a-study,Pre-registering a study,['Robert_AIZI'],2023-04-07T15:46:02Z,lesswrong,,
151198,https://www.lesswrong.com/posts/ptY6X3BdW4kgqpZFo/using-gpt-4-to-understand-code,Using GPT-4 to Understand Code,['Siddharth Hiregowdara'],2023-03-24T00:09:42Z,lesswrong,,
151208,https://www.lesswrong.com/posts/5qAwYRhLBhvovDqft/under-what-circumstances-have-governments-cancelled-ai-type,Under what circumstances have governments cancelled AI-type systems?,['David Gross'],2022-09-23T21:11:48Z,lesswrong,,
151219,https://www.lesswrong.com/posts/PnhpMqMP75Dxpvar5/shane-legg-on-prospect-theory-and-computational-finance,Shane Legg on prospect theory and computational finance,['Roko'],2009-06-21T17:57:09Z,lesswrong,,
151230,https://www.lesswrong.com/posts/TaqBzqhzEPi8eHtC2/content-generation-where-do-we-draw-the-line,Content generation. Where do we draw the line?,['Q Home'],2022-08-09T10:51:37Z,lesswrong,,
151246,https://www.lesswrong.com/posts/fcAmYySionnR4Muwi/ai-plans-com-a-contributable-compendium,AI-Plans.com - a contributable compendium,['Iknownothing'],2023-06-25T14:40:01Z,lesswrong,,
151272,https://www.lesswrong.com/posts/3kijTbgfizDgSgst3/steelman-arguments-against-the-idea-that-agi-is-inevitable,Steelman arguments against the idea that AGI is inevitable and will arrive soon,['RomanS'],2021-10-09T06:22:26Z,lesswrong,,
151288,https://www.lesswrong.com/posts/DfcGHqgxAWAL6BCst/link-post-cyber-digital-authoritarianism-national,[Link Post] Cyber Digital Authoritarianism (National Intelligence Council Report),['Phosphorous'],2023-02-26T20:51:50Z,lesswrong,,
151300,https://www.lesswrong.com/posts/QzPa6zn49qYn4Ae9k/ai-risk-us-presidential-candidate,AI Risk US Presidential Candidate,['Simon Berens'],2023-04-11T19:31:09Z,lesswrong,,
151310,https://www.lesswrong.com/posts/JCuthtnv5PTE3mrqS/6502-simulated-mind-uploading-for-microprocessors,6502 simulated - mind uploading for microprocessors,['humpolec'],2011-01-08T18:03:10Z,lesswrong,,
151319,https://www.lesswrong.com/posts/dmq9cNCpwsLqWbZQ2/an-argument-for-personal-identity-transfer,An argument for personal identity transfer.,['Gadersd'],2020-12-12T21:03:14Z,lesswrong,,
151336,https://www.lesswrong.com/posts/ngqvnWGsvTEiTASih/ai-alignment-problem-human-values-don-t-actually-exist,AI Alignment Problem: “Human Values” don’t Actually Exist,['avturchin'],2019-04-22T09:23:02Z,lesswrong,,
151353,https://www.lesswrong.com/posts/aFzLYnoLN65xWw4Xj/risk-aversion-vs-concave-utility-function,Risk aversion vs. concave utility function,['dvasya'],2012-01-31T06:25:39Z,lesswrong,,
151363,https://www.lesswrong.com/posts/6kRj95eAFJkGvMoEe/intelligence-explosion-analysis-draft-from-digital,Intelligence Explosion analysis draft: From digital intelligence to intelligence explosion,['lukeprog'],2011-11-26T06:30:25Z,lesswrong,,
151386,https://www.lesswrong.com/posts/gGhvnJYWNHyzyQDub/generalizability-and-hope-for-ai-mlaisu-w03,Generalizability & Hope for AI [MLAISU W03],['Esben Kran'],2023-01-20T10:07:00Z,lesswrong,,
151403,https://www.lesswrong.com/posts/4pRPmFSfCLvKGEnFx/towards-gears-level-understanding-of-agency,Towards Gears-Level Understanding of Agency,['Thane Ruthenis'],2022-06-16T22:00:17Z,lesswrong,,
151431,https://www.lesswrong.com/posts/TAkRFJh2A3NK6oqje/linkpost-blueprint-for-an-ai-bill-of-rights-office-of,"[Linkpost] ""Blueprint for an AI Bill of Rights"" - Office of Science and Technology Policy, USA (2022)",['Fer32dwt34r3dfsz'],2022-10-05T16:42:37Z,lesswrong,,
151450,https://www.lesswrong.com/posts/9bkY8etnLfGCzcYAu/the-preference-utilitarian-s-time-inconsistency-problem,The Preference Utilitarian’s Time Inconsistency Problem,['Wei Dai'],2010-01-15T00:26:05Z,lesswrong,,
151463,https://www.lesswrong.com/posts/DoyJiFwEzeeMBchH7/verifying-vnm-rationality-requires-an-ontology,Verifying vNM-rationality requires an ontology,['jeyoor'],2019-03-13T00:03:17Z,lesswrong,,
151473,https://www.lesswrong.com/posts/t3ngnd6Wvo4qeY5FA/i-designed-an-ai-safety-course-for-a-philosophy-department,I designed an AI safety course (for a philosophy department),['Eleni Angelou'],2023-09-23T22:03:00Z,lesswrong,,
151500,https://www.lesswrong.com/posts/AhhdyxiAG6669BxLe/the-ai-countdown-clock,The AI Countdown Clock,['River Lewis'],2022-05-15T18:37:48Z,lesswrong,,
151512,https://www.lesswrong.com/posts/2FXtpdzx6uoNRZXjS/compressing-reality-to-math,Compressing Reality to Math,['Vaniver'],2011-12-15T00:07:18Z,lesswrong,,
151539,https://www.lesswrong.com/posts/EaJhgjiiBzseMPeLr/is-there-a-publicly-available-list-of-examples-of-frontier,Is there a publicly available list of examples of frontier model capabilities?,['Max Kearney'],2023-09-19T17:45:03Z,lesswrong,,
151548,https://www.lesswrong.com/posts/tQt9pF9pbAuaiHxE6/is-interpretability-all-we-need,Is Interpretability All We Need?,['RogerDearnaley'],2023-11-14T05:31:43Z,lesswrong,,
151560,https://www.lesswrong.com/posts/nhuMKruWGZH2NCdtH/problems-integrating-decision-theory-and-inverse,Problems integrating decision theory and inverse reinforcement learning,['agilecaveman'],2018-05-08T05:11:07Z,lesswrong,,
151579,https://www.lesswrong.com/posts/5YaCtuSZzCNywgHrb/on-learning-to-summarize,on “learning to summarize”,['nostalgebraist'],2020-09-12T03:20:08Z,lesswrong,,
151609,https://www.lesswrong.com/posts/eStLg3uhHmzjCqWDm/where-are-the-red-lines-for-ai,Where are the red lines for AI?,['Karl von Wendt'],2022-08-05T09:34:41Z,lesswrong,,
151627,https://www.lesswrong.com/posts/rMfpnorsMoRwyn4iP/choosing-the-zero-point,Choosing the Zero Point,['orthonormal'],2020-04-06T23:44:02Z,lesswrong,,
151637,https://www.lesswrong.com/posts/BWLKRMQn3DFcQg6of/fli-and-eliezer-should-reach-consensus,FLI And Eliezer Should Reach Consensus,['JenniferRM'],2023-04-11T04:07:14Z,lesswrong,,
151672,https://www.lesswrong.com/posts/nHp9dmeKrj6uQe2BF/why-do-people-think-intelligence-will-be-easy,"Why do People Think Intelligence Will be ""Easy""?",['DragonGod'],2022-09-12T17:32:38Z,lesswrong,,
151685,https://www.lesswrong.com/posts/g8gurARvrd7CDXhuc/access-to-ai-a-human-right,Access to AI: a human right?,['dmtea'],2020-07-25T09:38:35Z,lesswrong,,
151699,https://www.lesswrong.com/posts/NaXz3FM9gXXB7oJW3/announcing-aisummittalks-featuring-professor-stuart-russell,Announcing #AISummitTalks featuring Professor Stuart Russell and many others,['otto.barten'],2023-10-24T10:11:35Z,lesswrong,,
151713,https://www.lesswrong.com/posts/DkDy2hvkwbQ54GM9u/introducing-effisciences-ai-safety-unit-1,Introducing EffiSciences’ AI Safety Unit,"['WCargo', 'Charbel-Raphaël', 'Florent_Berthet']",2023-06-30T07:44:57Z,lesswrong,,
151742,https://www.lesswrong.com/posts/AvdTogSbw2tEdMWxm/a-personal-explanation-of-elk-concept-and-task,A personal explanation of ELK concept and task.,['Zeyu Qin'],2023-10-06T03:55:45Z,lesswrong,,
151751,https://www.lesswrong.com/posts/39bD65my8GvEiXQ9o/allais-hack-transform-your-decisions,Allais Hack -- Transform Your Decisions!,['MBlume'],2009-05-03T22:37:13Z,lesswrong,,
151764,https://www.lesswrong.com/posts/KCHQj2ZDAuSZb4Nif/preference-aggregation-as-bayesian-inference,Preference Aggregation as Bayesian Inference,['beren'],2023-07-27T17:59:36Z,lesswrong,,
151784,https://www.lesswrong.com/posts/sMZRKnwZDDy2sAX7K/google-s-palm-e-an-embodied-multimodal-language-model,Google's PaLM-E: An Embodied Multimodal Language Model,['SandXbox'],2023-03-07T04:11:18Z,lesswrong,,
151796,https://www.lesswrong.com/posts/gbKhdCLNrAebarXNM/ai-prediction-case-study-5-omohundro-s-ai-drives,AI prediction case study 5: Omohundro's AI drives,['Stuart_Armstrong'],2013-03-15T09:09:35Z,lesswrong,,
151815,https://www.lesswrong.com/posts/oBBzqkZwkxDvsKBGB/ai-could-defeat-all-of-us-combined,AI Could Defeat All Of Us Combined,['HoldenKarnofsky'],2022-06-09T15:50:13Z,lesswrong,,
151836,https://www.lesswrong.com/posts/uhKnChcpzK4B47DZp/what-is-the-advantage-of-the-kolmogorov-complexity-prior,What is the advantage of the Kolmogorov complexity prior?,['skepsci'],2012-02-16T01:51:26Z,lesswrong,,
151851,https://www.lesswrong.com/posts/hDePh3KReBMNBJfzx/gpt-3-catching-fish-in-morse-code,GPT-3 Catching Fish in Morse Code,['Megan Kinniment'],2022-06-30T21:22:49Z,lesswrong,,
151872,https://www.lesswrong.com/posts/2mhFMgtAjFJesaSYR/2-d-robustness,2-D Robustness,['Vlad Mikulik'],2019-08-30T20:27:34Z,lesswrong,,
151888,https://www.lesswrong.com/posts/bvdbx6tW9yxfxAJxe/catastrophic-risks-from-ai-1-introduction,Catastrophic Risks from AI #1: Introduction,"['Dan H', 'Mantas Mazeika', 'ThomasW']",2023-06-22T17:09:41Z,lesswrong,,
151919,https://www.lesswrong.com/posts/DWgWbXRfXLGHPgZJM/solving-math-problems-by-relay,Solving Math Problems by Relay,"['bgold', 'Owain_Evans']",2020-07-17T15:32:01Z,lesswrong,,
151947,https://www.lesswrong.com/posts/QDj5dozwPPe8aJ6ZZ/examples-of-ai-s-behaving-badly,Examples of AI's behaving badly,['Stuart_Armstrong'],2015-07-16T10:01:44Z,lesswrong,,
151962,https://www.lesswrong.com/posts/RTjE6KN2WGepGL6m9/my-alignment-timeline,My Alignment Timeline,['NicholasKross'],2023-07-03T01:04:08Z,lesswrong,,
151983,https://www.lesswrong.com/posts/J9inYXMvAKEggFhyJ/list-of-public-predictions-of-what-gpt-x-can-or-can-t-do,List of public predictions of what GPT-X can or can't do?,['Daniel Kokotajlo'],2020-06-14T14:25:18Z,lesswrong,,
151992,https://www.lesswrong.com/posts/PA2hprrtvtpPMugeN/my-ai-alignment-research-agenda-and-threat-model-right-now,"My AI Alignment Research Agenda and Threat Model, right now (May 2023)",['NicholasKross'],2023-05-28T03:23:38Z,lesswrong,,
152020,https://www.lesswrong.com/posts/XQirei3crsLxsCQoi/surprised-by-brains,Surprised by Brains,['Eliezer Yudkowsky'],2008-11-23T07:26:41Z,lesswrong,,
152035,https://www.lesswrong.com/posts/3broJA5XpBwDbjsYb/agency-engineering-is-ai-alignment-to-human-intent-enough,"Agency engineering: is AI-alignment ""to human intent"" enough?",['catubc'],2022-09-02T18:14:52Z,lesswrong,,
152052,https://www.lesswrong.com/posts/9xaW2yQRpyjp23ikg/slaying-the-hydra-toward-a-new-game-board-for-ai,Slaying the Hydra: toward a new game board for AI,['Prometheus'],2023-06-23T17:04:39Z,lesswrong,,
152068,https://www.lesswrong.com/posts/hyfedqhgCQriBB9wT/notes-on-policy-desiderata-for-superintelligent-ai-a-vector,(notes on) Policy Desiderata for Superintelligent AI: A Vector Field Approach,['Ben Pace'],2019-02-04T22:08:34Z,lesswrong,,
152112,https://www.lesswrong.com/posts/hWDQEeZXYH5dN6Pzq/the-governance-problem-and-the-pretty-good-x-risk,"The Governance Problem and the ""Pretty Good"" X-Risk",['Zach Stein-Perlman'],2021-08-29T18:00:28Z,lesswrong,,
152134,https://www.lesswrong.com/posts/76n4pMcoDBTdXHTLY/ideas-for-studies-on-agi-risk,Ideas for studies on AGI risk,['dr_s'],2023-04-20T18:17:53Z,lesswrong,,
152161,https://www.lesswrong.com/posts/cnKvxehpHqWjZJNry/how-do-ai-timelines-affect-existential-risk,How Do AI Timelines Affect Existential Risk?,['Stephen McAleese'],2022-08-29T16:57:44Z,lesswrong,,
152202,https://www.lesswrong.com/posts/sAt6zfeatgFiikAkE/ai-incident-sharing-best-practices-from-other-fields-and-a,AI Incident Sharing - Best practices from other fields and a comprehensive list of existing platforms,['Štěpán Los'],2023-06-28T17:21:14Z,lesswrong,,
152222,https://www.lesswrong.com/posts/z4CQkp9gJ9MxYnCcd/crosspost-a-recent-write-up-of-the-case-for-ai-existential-1,[Crosspost] A recent write-up of the case for AI (existential) risk,['Timsey'],2023-05-18T13:13:50Z,lesswrong,,
152253,https://www.lesswrong.com/posts/riqmXyGaB6dW5EnTN/notes-on-moderation-balance-and-harmony,"Notes on Moderation, Balance, & Harmony",['David Gross'],2020-12-25T02:44:55Z,lesswrong,,
152268,https://www.lesswrong.com/posts/7nDvJiikgiawHAp6z/my-research-agenda-in-agent-foundations,My research agenda in agent foundations,['Alex_Altair'],2023-06-28T18:00:28Z,lesswrong,,
152291,https://www.lesswrong.com/posts/b2MnFM8DWDaPhxBoK/double-cruxing-the-ai-foom-debate,Double Cruxing the AI Foom debate,['agilecaveman'],2018-04-27T06:46:39Z,lesswrong,,
152316,https://www.lesswrong.com/posts/EEk2euXnipg3CnKrb/measure-of-complexity-allowed-by-the-laws-of-the-universe,Measure of complexity allowed by the laws of the universe and relative theory?,['dr_s'],2023-09-07T12:21:04Z,lesswrong,,
152325,https://www.lesswrong.com/posts/PdooAsNFiohmyburK/ai-takeover-scenario-with-scaled-llms,AI Takeover Scenario with Scaled LLMs,['simeon_c'],2023-04-16T23:28:14Z,lesswrong,,
152355,https://www.lesswrong.com/posts/ZmZBataeY58anJRBb/getting-from-an-unaligned-agi-to-an-aligned-agi,Getting from an unaligned AGI to an aligned AGI?,['Tor Økland Barstad'],2022-06-21T12:36:14Z,lesswrong,,
152388,https://www.lesswrong.com/posts/qPXtBGd74EBjwj6gE/ai-risk-in-terms-of-unstable-nuclear-software,AI Risk in Terms of Unstable Nuclear Software,['Thane Ruthenis'],2022-08-26T18:49:54Z,lesswrong,,
152419,https://www.lesswrong.com/posts/t7JGQh828inTXQh98/employer-considering-partnering-with-major-ai-labs-what-to,Employer considering partnering with major AI labs. What to do?,['GraduallyMoreAgitated'],2023-03-21T17:43:17Z,lesswrong,,
152441,https://www.lesswrong.com/posts/KuBcmj9pevPXNPB4C/if-no-near-term-alignment-strategy-research-should-aim-for,"If no near-term alignment strategy, research should aim for the long-term",['harsimony'],2022-06-09T19:10:34Z,lesswrong,,
152455,https://www.lesswrong.com/posts/d5swTmH2zw4vzYBNS/an-intuitive-introduction-to-functional-decision-theory,An Intuitive Introduction to Functional Decision Theory,['Heighn'],2022-03-07T16:07:54Z,lesswrong,,
152471,https://www.lesswrong.com/posts/5fr8ZZ4mTpvciZu4K/book-review-human-compatible,Book review: Human Compatible,['PeterMcCluskey'],2020-01-19T03:32:05Z,lesswrong,,
152505,https://www.lesswrong.com/posts/93ip2aYjBXWWjtZas/the-virus-short-story,The Virus - Short Story,['Michael Soareverix'],2023-04-13T18:18:48Z,lesswrong,,
152524,https://www.lesswrong.com/posts/QDv3y88KkrroCazeB/entropic-boundary-conditions-towards-safe-artificial,Entropic boundary conditions towards safe artificial superintelligence,['Santiago Nunez-Corrales'],2021-07-20T22:15:42Z,lesswrong,,
152541,https://www.lesswrong.com/posts/bFuK64p8PWtyhiuHK/agentic-gpt-simulations-a-risk-and-an-opportunity,Agentic GPT simulations: a risk and an opportunity,['Yair Halberstadt'],2023-03-22T06:24:07Z,lesswrong,,
152559,https://www.lesswrong.com/posts/BaQWrRgu7pjGmBByv/injecting-some-numbers-into-the-agi-debate-by-boaz-barak,Injecting some numbers into the AGI debate - by Boaz Barak,['Jsevillamol'],2022-11-23T16:10:34Z,lesswrong,,
152580,https://www.lesswrong.com/posts/Xni8DSjkK5BxSJbiF/project-idea-challenge-groups-for-alignment-researchers,Project Idea: Challenge Groups for Alignment Researchers,['Adam Zerner'],2023-05-27T20:10:12Z,lesswrong,,
152591,https://www.lesswrong.com/posts/6omuuguhMLxFC3Sah/are-lawsuits-against-agi-companies-extending-agi-timelines,Are lawsuits against AGI companies extending AGI timelines?,['SlowingAGI'],2022-12-13T06:00:32Z,lesswrong,,
152605,https://www.lesswrong.com/posts/no9ftcYzcMLo8Zwj4/building-agi-using-language-models,Building AGI Using Language Models,['leogao'],2020-11-09T16:33:26Z,lesswrong,,
152615,https://www.lesswrong.com/posts/4iAkmnhhqNZe8JzrS/reflection-mechanisms-as-an-alignment-target-attitudes-on,Reflection Mechanisms as an Alignment Target - Attitudes on “near-term” AI,"['elandgre', 'Beth Barnes', 'Marius Hobbhahn']",2023-03-02T04:29:48Z,lesswrong,,
152633,https://www.lesswrong.com/posts/4xGAmZ9GTGAkszHoH/parameter-scaling-comes-for-rl-maybe,"Parameter Scaling Comes for RL, Maybe",['1a3orn'],2023-01-24T13:55:46Z,lesswrong,,
152658,https://www.lesswrong.com/posts/DbfQrno5W6SDT9ABc/can-you-be-not-even-wrong-in-ai-alignment,Can you be Not Even Wrong in AI Alignment?,['throwaway8238'],2022-03-19T17:41:26Z,lesswrong,,
152675,https://www.lesswrong.com/posts/NkDJKw7diP2krwtko/expected-futility-for-humans,Expected futility for humans,['Roko'],2009-06-09T12:04:29Z,lesswrong,,
152691,https://www.lesswrong.com/posts/PXr38b64ECtFcn4Yq/supervised-program-for-alignment-research-spar-at-uc,Supervised Program for Alignment Research (SPAR) at UC Berkeley: Spring 2023 summary,"['mic', 'dx26', 'adamk', 'Carolyn Qian']",2023-08-19T02:27:30Z,lesswrong,,
152709,https://www.lesswrong.com/posts/uYXAv6Audr2y4ytJe/what-is-compute-transformative-ai-and-compute-1-4,What is Compute? - Transformative AI and Compute [1/4],['lennart'],2021-09-23T16:25:30Z,lesswrong,,
152729,https://www.lesswrong.com/posts/iyKnennBbCvaWuKef/how-to-pursue-a-career-in-technical-ai-alignment,How to pursue a career in technical AI alignment,['charlie.rs'],2022-06-04T21:11:47Z,lesswrong,,
152754,https://www.lesswrong.com/posts/KYxpkoh8ppnPfmuF3/power-seeking-minimising-free-energy,Power-Seeking = Minimising free energy,['Jonas Hallgren'],2023-02-22T04:28:44Z,lesswrong,,
152772,https://www.lesswrong.com/posts/4kvaocbkDDS2AMoPG/list-of-problems-that-motivated-udt,List of Problems That Motivated UDT,['Wei Dai'],2012-06-06T00:26:01Z,lesswrong,,
152783,https://www.lesswrong.com/posts/edi9Y4vYtdNRbui3u/what-are-some-good-examples-of-incorrigibility,What are some good examples of incorrigibility?,['RyanCarey'],2019-04-28T00:22:45Z,lesswrong,,
152793,https://www.lesswrong.com/posts/GBNayXzcboJumL2Dx/unintentional-ai-safety-research-why-not-systematically-mine,“Unintentional AI safety research”: Why not systematically mine AI technical research for safety purposes?,['ghostwheel'],2023-03-29T15:56:39Z,lesswrong,,
152812,https://www.lesswrong.com/posts/renezm5cFCuMBBv9s/mapping-chatgpt-s-ontological-landscape-gradients-and,"Mapping ChatGPT’s ontological landscape, gradients and choices [interpretability]",['Bill Benzon'],2023-10-15T20:12:36Z,lesswrong,,
152829,https://www.lesswrong.com/posts/Lhbkc8842L3GDDvtq/two-new-newcomb-variants,Two New Newcomb Variants,['eva_'],2022-11-14T14:01:05Z,lesswrong,,
152840,https://www.lesswrong.com/posts/pnmFBjHtpfpAc6dPT/arc-evals-responsible-scaling-policies,ARC Evals: Responsible Scaling Policies,['Zach Stein-Perlman'],2023-09-28T04:30:37Z,lesswrong,,
152857,https://www.lesswrong.com/posts/MoBQ8Y56pWLKXfcxq/ai-governance-needs-technical-work,AI Governance Needs Technical Work,['Mauricio'],2022-09-05T22:28:06Z,lesswrong,,
152886,https://www.lesswrong.com/posts/Fk3KYMxGLzDnwjFzo/does-agent-foundations-cover-all-future-ml-systems,Does agent foundations cover all future ML systems?,['Jonas Hallgren'],2022-07-25T01:17:12Z,lesswrong,,
152896,https://www.lesswrong.com/posts/GxW8ef8tH4yX6KMrf/everything-i-ever-needed-to-know-i-learned-from-world-of-2,"Everything I ever needed to know, I learned from World of Warcraft: Goodhart’s law",['Said Achmiz'],2018-05-03T16:33:50Z,lesswrong,,
152913,https://www.lesswrong.com/posts/dcoxvEhAfYcov2LA6/agentized-llms-will-change-the-alignment-landscape,Agentized LLMs will change the alignment landscape,['Seth Herd'],2023-04-09T02:29:08Z,lesswrong,,
152929,https://www.lesswrong.com/posts/AtzuxdKs9DXcD7G6o/two-stupid-ai-alignment-ideas,Two Stupid AI Alignment Ideas,['aphyer'],2021-11-16T16:13:20Z,lesswrong,,
152948,https://www.lesswrong.com/posts/WcSL6gSPWhTHryoEm/a-basic-mathematical-structure-of-intelligence,A basic mathematical structure of intelligence,['Golol'],2023-04-12T16:49:07Z,lesswrong,,
152958,https://www.lesswrong.com/posts/JeZwEnBPdYqSfEjSy/inching-kubla-khan-and-gpt-into-the-same-intellectual,Inching “Kubla Khan” and GPT into the same intellectual framework @ 3 Quarks Daily,['Bill Benzon'],2023-03-28T19:50:10Z,lesswrong,,
152979,https://www.lesswrong.com/posts/Pkthep47ukcrK3MNm/in-a-multipolar-scenario-how-do-people-expect-systems-to-be,"In a multipolar scenario, how do people expect systems to be trained to interact with systems developed by other labs?",['JesseClifton'],2020-12-01T20:04:18Z,lesswrong,,
152995,https://www.lesswrong.com/posts/WhTChcvmv45hcuPXy/when-is-it-appropriate-to-use-statistical-models-and,When is it appropriate to use statistical models and probabilities for decision making ?,['Younes Kamel'],2022-07-05T12:34:17Z,lesswrong,,
153012,https://www.lesswrong.com/posts/a4tcqr7QBAgMHLbcz/book-review-ai-safety-and-security,Book Review: AI Safety and Security,['Michaël Trazzi'],2018-08-21T10:23:24Z,lesswrong,,
153045,https://www.lesswrong.com/posts/dFyqTAyG2oCSr6S4K/intuitive-examples-of-reward-function-learning,Intuitive examples of reward function learning?,['Stuart_Armstrong'],2018-03-06T16:54:18Z,lesswrong,,
153065,https://www.lesswrong.com/posts/FsEDu6CvzyzJkrQd8/pivotal-acts-using-an-unaligned-agi,Pivotal acts using an unaligned AGI?,['Simon Fischer'],2022-08-21T17:13:56Z,lesswrong,,
153086,https://www.lesswrong.com/posts/HByDKLLdaWEcA2QQD/applying-superintelligence-without-collusion,Applying superintelligence without collusion,['Eric Drexler'],2022-11-08T18:08:32Z,lesswrong,,
153102,https://www.lesswrong.com/posts/azRwPDbZfpadoL7WW/an-appeal-to-ai-superintelligence-reasons-to-preserve,An Appeal to AI Superintelligence: Reasons to Preserve Humanity,['James_Miller'],2023-03-18T16:22:11Z,lesswrong,,
153126,https://www.lesswrong.com/posts/Xne4Bma99bKXWc9AH/untangling-infrabayesianism-a-redistillation-pdf-link-12k,Untangling Infrabayesianism: A redistillation [PDF link; ~12k words + lots of math],['Lorxus'],2023-08-01T12:42:36Z,lesswrong,,
153143,https://www.lesswrong.com/posts/Pam5oJXECo8ka6ikA/smarter-than-us-is-out,"""Smarter than us"" is out!",['Stuart_Armstrong'],2014-02-25T15:50:35Z,lesswrong,,
153158,https://www.lesswrong.com/posts/2BSHhH3DSeme8gBBC/linkpost-jan-leike-on-three-kinds-of-alignment-taxes,[Linkpost] Jan Leike on three kinds of alignment taxes,['Akash'],2023-01-06T23:57:35Z,lesswrong,,
153184,https://www.lesswrong.com/posts/XfRB26FqXFrTh83Pf/implicit-extortion,Implicit extortion,['paulfchristiano'],2018-04-13T16:33:22Z,lesswrong,,
153207,https://www.lesswrong.com/posts/mFCbW6rYLzARqi5pf/hebbian-natural-abstractions-introduction,[Hebbian Natural Abstractions] Introduction,"['Samuel Nellessen', 'Jan']",2022-11-21T20:34:15Z,lesswrong,,
153216,https://www.lesswrong.com/posts/mSDwPeqAzYk79vLiA/understanding-infra-bayesianism-a-beginner-friendly-video,Understanding Infra-Bayesianism: A Beginner-Friendly Video Series,"['Jack Parker', 'Connall Garrod']",2022-09-22T13:25:04Z,lesswrong,,
153231,https://www.lesswrong.com/posts/m3DiiBiXApN3kQMyM/edt-with-updating-double-counts,EDT with updating double counts,['paulfchristiano'],2021-10-12T04:40:02Z,lesswrong,,
153245,https://www.lesswrong.com/posts/Dx6kkXykErmAswuvS/alignment-works-both-ways,Alignment works both ways,['Karl von Wendt'],2023-03-07T10:41:44Z,lesswrong,,
153254,https://www.lesswrong.com/posts/nkKAYBgG9GXJHm2hE/decision-theory-breakdown-personal-attempt-at-a-review,Decision Theory Breakdown—Personal Attempt at a Review,['Jake Arft-Guatelli'],2021-12-14T00:40:54Z,lesswrong,,
153275,https://www.lesswrong.com/posts/As76yueYGy6FjZg3R/why-no-total-winner,Why no total winner?,['Paul Crowley'],2017-10-15T22:01:38Z,lesswrong,,
153294,https://www.lesswrong.com/posts/9kQFure4hdDmRBNdH/how-it-feels-to-have-your-mind-hacked-by-an-ai,How it feels to have your mind hacked by an AI,['blaked'],2023-01-12T00:33:19Z,lesswrong,,
153324,https://www.lesswrong.com/posts/A22bTHYLDdkeqFsLd/saying-the-quiet-part-out-loud-trading-off-x-risk-for,Saying the quiet part out loud: trading off x-risk for personal immortality,['disturbance'],2023-11-02T17:43:34Z,lesswrong,,
153339,https://www.lesswrong.com/posts/csFXHGb7gxpzMTeT5/discovering-latent-knowledge-in-the-human-brain-part-1,Discovering Latent Knowledge in the Human Brain: Part 1 – Clarifying the concepts of belief and knowledge,['Joseph Emerson'],2023-10-15T09:02:20Z,lesswrong,, 153350,https://www.lesswrong.com/posts/wfZW39du3tST2gnWs/the-limits-of-automation,The Limits of Automation,['milkandcigarettes'],2022-06-23T18:03:13Z,lesswrong,, 153366,https://www.lesswrong.com/posts/KGTGgnGppf9wzwmFM/shouldn-t-we-just-superimitate-low-res-uploads,Shouldn't we 'Just' Superimitate Low-Res Uploads?,['marc/er'],2023-11-03T07:42:07Z,lesswrong,, 153386,https://www.lesswrong.com/posts/8hqwzYfCKLN9x35Jd/reinforcement-learning-goal-misgeneralization-can-we-guess,Reinforcement Learning Goal Misgeneralization: Can we guess what kind of goals are selected by default?,"['StefanHex', 'Julian_R']",2022-10-25T20:48:51Z,lesswrong,, 153403,https://www.lesswrong.com/posts/Wvtri2ooQyFC6sxPB/a-tension-between-two-prosaic-alignment-subgoals-1,A tension between two prosaic alignment subgoals,['Alex Lawsen'],2023-03-19T14:07:54Z,lesswrong,, 153414,https://www.lesswrong.com/posts/QJQEwcjp9zAr3bui2/stopping-dangerous-ai-ideal-lab-behavior,Stopping dangerous AI: Ideal lab behavior,['Zach Stein-Perlman'],2023-05-09T21:00:20Z,lesswrong,, 153443,https://www.lesswrong.com/posts/LGHuaLiq3F5NHQXXF/anthropically-blind-the-anthropic-shadow-is-reflectively,Anthropically Blind: the anthropic shadow is reflectively inconsistent,['Christopher King'],2023-06-29T02:36:26Z,lesswrong,, 153459,https://www.lesswrong.com/posts/S3bpZvLWta7wbkJi4/widening-overton-window-open-thread,Widening Overton Window - Open Thread,['Prometheus'],2023-03-31T10:04:00Z,lesswrong,, 153471,https://www.lesswrong.com/posts/ozDWnEChJwuB5L5wg/documenting-journey-into-ai-safety,Documenting Journey Into AI Safety,['jacobhaimes'],2023-10-10T18:30:03Z,lesswrong,, 153491,https://www.lesswrong.com/posts/FRd6nNj3M33w2CSX5/aci-6-a-non-dualistic-aci-model,ACI#6: A Non-Dualistic ACI Model,['Akira Pyinya'],2023-11-09T23:01:31Z,lesswrong,, 153506,https://www.lesswrong.com/posts/pPXX56Htw5CLekAib/hofstadter-s-superrationality,Hofstadter's Superrationality,['gwern'],2012-04-21T13:33:36Z,lesswrong,, 153521,https://www.lesswrong.com/posts/mjSjPHCtbK6TA5tfW/ai-safety-is-dropping-the-ball-on-clown-attacks-and-mind,"AI Safety is Dropping the Ball on Clown Attacks, and Mind Control in General",['trevor'],2023-10-21T22:03:15Z,lesswrong,, 153540,https://www.lesswrong.com/posts/sjx9i6ndoNHwYg9N6/ai-safety-europe-retreat-2023-retrospective,AI Safety Europe Retreat 2023 Retrospective,['Magdalena Wache'],2023-04-14T09:05:43Z,lesswrong,, 153566,https://www.lesswrong.com/posts/scnkAbvLMDjJR9WE2/a-philosopher-s-critique-of-rlhf,A philosopher's critique of RLHF,['ThomasW'],2022-11-07T02:42:51Z,lesswrong,, 153578,https://www.lesswrong.com/posts/hA5FvFaajX7fvwKjZ/some-variants-of-sleeping-beauty,Some Variants of Sleeping Beauty,"['Sylvester Kollin', 'Eric Chen']",2023-03-01T16:51:58Z,lesswrong,, 153591,https://www.lesswrong.com/posts/CW6HDvodPpNe38Cry/aiming-at-the-target,Aiming at the Target,['Eliezer Yudkowsky'],2008-10-26T16:47:19Z,lesswrong,, 153604,https://www.lesswrong.com/posts/SSj6Rrx9ZN9WF6eaB/deepmind-plans-for-rat-level-ai,Deepmind Plans for Rat-Level AI,['moridinamael'],2016-08-18T16:26:06Z,lesswrong,,
153613,https://www.lesswrong.com/posts/pQzRj4hJRtMxg3hib/this-anime-storyboard-doesn-t-exist-a-graphic-novel-written,This anime storyboard doesn't exist: a graphic novel written and illustrated by GPT4,['RomanS'],2023-10-05T14:01:30Z,lesswrong,, 153648,https://www.lesswrong.com/posts/bpJ3A5sDoBq6i83Xp/what-would-make-you-confident-that-agi-has-been-achieved,What would make you confident that AGI has been achieved?,['Yitz'],2022-03-29T23:02:58Z,lesswrong,, 153659,https://www.lesswrong.com/posts/taqRwkm9vnmEnYcbG/nyarlathotep-stirs-a-meta-narrative-chatgpt-story,Nyarlathotep Stirs: A Meta-Narrative ChatGPT Story,['Charlie Sanders'],2023-03-20T08:00:39Z,lesswrong,, 153683,https://www.lesswrong.com/posts/o8fobRYGAknqdTTsM/survey-on-intermediate-goals-in-ai-governance,Survey on intermediate goals in AI governance,"['MichaelA', 'MaxRa']",2023-03-17T13:12:19Z,lesswrong,, 153695,https://www.lesswrong.com/posts/MoLLqFtMup39PCsaG/slowing-ai-foundations,Slowing AI: Foundations,['Zach Stein-Perlman'],2023-04-17T14:30:09Z,lesswrong,, 153729,https://www.lesswrong.com/posts/xtgN2fJjAuziPxw84/it-s-not-how-you-use-it,It's (not) how you use it,['Eleni Angelou'],2022-09-07T17:15:52Z,lesswrong,, 153740,https://www.lesswrong.com/posts/cpGzrF7XztMjzzD2H/optimising-society-to-constrain-risk-of-war-from-an,Optimising Society to Constrain Risk of War from an Artificial Superintelligence,['JohnCDraper'],2020-04-30T10:47:56Z,lesswrong,, 153757,https://www.lesswrong.com/posts/2vGXEKrxYhm5Zifgm/could-democritus-have-predicted-intelligence-explosion,Could Democritus have predicted intelligence explosion?,['lukeprog'],2012-01-24T08:40:02Z,lesswrong,, 153768,https://www.lesswrong.com/posts/eTu23XYr37prmJxa3/smoking-lesion-as-a-counterexample-to-cdt,Smoking lesion as a counterexample to CDT,['Stuart_Armstrong'],2012-10-26T12:08:28Z,lesswrong,, 153783,https://www.lesswrong.com/posts/etgJYbkvvkBoDRm4k/hyperbolic-takeoff,Hyperbolic takeoff,['Ege Erdil'],2022-04-09T15:57:16Z,lesswrong,, 153803,https://www.lesswrong.com/posts/cnPeYAefQCtaA5PRa/openai-introduces-function-calling-for-gpt-4,OpenAI introduces function calling for GPT-4,"['mic', 'André Ferretti']",2023-06-20T01:58:48Z,lesswrong,, 153836,https://www.lesswrong.com/posts/WBgT4jAqTrPN7qh3Z/beta-test-gpt-3-based-research-assistant,Beta test GPT-3 based research assistant,['jungofthewon'],2020-12-16T13:42:50Z,lesswrong,,
153848,https://www.lesswrong.com/posts/RBsSXWL6QuDsoSMJ6/mesa-optimization-for-goals-defined-only-within-a-training,Mesa-optimization for goals defined only within a training environment is dangerous,['Rubi J. Hudson'],2022-08-17T03:56:43Z,lesswrong,, 153865,https://www.lesswrong.com/posts/fJE6tscjGRPnK8C2C/decoding-intermediate-activations-in-llama-2-7b,Decoding intermediate activations in llama-2-7b,['Nina Rimsky'],2023-07-21T05:35:03Z,lesswrong,, 153884,https://www.lesswrong.com/posts/qL8Z9TBCNWQyN6yLq/ssc-journal-club-ai-timelines,SSC Journal Club: AI Timelines,['Scott Alexander'],2017-06-08T19:00:00Z,lesswrong,, 153905,https://www.lesswrong.com/posts/FMkQtPvzsriQAow5q/the-correct-response-to-uncertainty-is-not-half-speed,The correct response to uncertainty is *not* half-speed,['AnnaSalamon'],2016-01-15T22:55:03Z,lesswrong,, 153915,https://www.lesswrong.com/posts/qZJBighPrnv9bSqTZ/31-laws-of-fun,31 Laws of Fun,['Eliezer Yudkowsky'],2009-01-26T10:13:14Z,lesswrong,, 153945,https://www.lesswrong.com/posts/bebw3SEjXY3SCAcwD/clarifying-the-free-energy-principle-with-quotes,Clarifying the free energy principle (with quotes),['Ryo'],2023-10-29T16:03:32Z,lesswrong,, 153974,https://www.lesswrong.com/posts/SyNQ6LaTuntWpaQJu/matt-yglesias-on-ai-policy,Matt Yglesias on AI Policy,['Grant Demaree'],2022-08-17T23:57:59Z,lesswrong,, 153984,https://www.lesswrong.com/posts/83DimRqppcaoyYAsy/closed-job-offering-help-communicate-infrabayesianism,[Closed] Job Offering: Help Communicate Infrabayesianism,"['abramdemski', 'Vanessa Kosoy', 'Diffractor']",2022-03-23T18:35:17Z,lesswrong,, 154000,https://www.lesswrong.com/posts/SvhzEQkwFGNTy6CsN/alphastar-impressive-for-rl-progress-not-for-agi-progress,"AlphaStar: Impressive for RL progress, not for AGI progress",['orthonormal'],2019-11-02T01:50:27Z,lesswrong,, 154015,https://www.lesswrong.com/posts/cLyo7dKmimeXR3hAC/towards-a-formalisation-of-returns-on-cognitive-reinvestment,Towards a Formalisation of Returns on Cognitive Reinvestment (Part 1),['DragonGod'],2022-06-04T18:42:25Z,lesswrong,, 154025,https://www.lesswrong.com/posts/rHEkn2NaFjnjedRDu/from-gpt-to-agi,From GPT to AGI,['ChristianKl'],2020-08-31T13:28:35Z,lesswrong,, 154047,https://www.lesswrong.com/posts/jSsFfqzHNeJypd9An/simulate-the-ceo,Simulate the CEO,['robotelvis'],2023-08-12T00:09:44Z,lesswrong,, 154070,https://www.lesswrong.com/posts/Bhrs7kGkEnDsuCTDa/questions-about-ai-that-bother-me,Questions about AI that bother me,['Eleni Angelou'],2023-02-05T05:04:08Z,lesswrong,, 154093,https://www.lesswrong.com/posts/PaA2u3mMzdNs79uto/a-proposed-test-to-determine-the-extent-to-which-large,A Proposed Test to Determine the Extent to Which Large Language Models Understand the Real World,['Bruce G'],2023-02-24T20:20:23Z,lesswrong,, 154114,https://www.lesswrong.com/posts/mCQyvZsrottQNsT4t/boxing,Boxing,['Zach Stein-Perlman'],2023-08-02T23:38:36Z,lesswrong,, 154125,https://www.lesswrong.com/posts/AWgZnpfLYHynDD29c/an-attempt-to-understand-the-complexity-of-values,An attempt to understand the Complexity of Values,['Dalton Mabery'],2022-08-05T04:43:25Z,lesswrong,, 154136,https://www.lesswrong.com/posts/gvrojpfzizDvmPJJN/moral-uncertainty-what-kind-of-should-is-involved,Moral uncertainty: What kind of 'should' is involved?,['MichaelA'],2020-01-13T12:13:12Z,lesswrong,, 154150,https://www.lesswrong.com/posts/sx47Wi2x8c4mqkYAC/four-levels-of-understanding-decision-theory,Four levels of understanding decision theory,['Max H'],2023-06-01T20:55:08Z,lesswrong,, 154165,https://www.lesswrong.com/posts/njEWACBHhfppg6KYS/notes-on-nukes-ir-and-ai-from-arsenals-of-folly-and-other,"Notes on nukes, IR, and AI from ""Arsenals of Folly"" (and other books)",['tlevin'],2023-09-04T19:02:58Z,lesswrong,,
154195,https://www.lesswrong.com/posts/AWiApnx4NAnnifdEy/mind-uploading-from-the-outside-in,Mind uploading from the outside in,['Alexandros'],2015-11-29T02:05:07Z,lesswrong,, 154213,https://www.lesswrong.com/posts/FzhedhEFAcKJZkgJS/an-ai-risk-argument-that-resonates-with-nytimes-readers,An AI risk argument that resonates with NYTimes readers,['Julian Bradshaw'],2023-03-12T23:09:20Z,lesswrong,, 154226,https://www.lesswrong.com/posts/a3LncviZ6rkrTo8jJ/the-unified-theory-of-normative-ethics,The Unified Theory of Normative Ethics,['Thane Ruthenis'],2022-06-17T19:55:20Z,lesswrong,, 154241,https://www.lesswrong.com/posts/jrKftFZMZjvNdQLNR/box-inversion-revisited,Box inversion revisited,['Jan_Kulveit'],2023-11-07T11:09:37Z,lesswrong,, 154264,https://www.lesswrong.com/posts/2J6fFHQZkWxFcjL6c/tracr-compiled-transformers-as-a-laboratory-for-1,Tracr: Compiled Transformers as a Laboratory for Interpretability | DeepMind,['DragonGod'],2023-01-13T16:53:10Z,lesswrong,, 154277,https://www.lesswrong.com/posts/4mvdZXjwJHv9tSAWB/sets-of-objectives-for-a-multi-objective-rl-agent-to-1,Sets of objectives for a multi-objective RL agent to optimize,"['Ben Smith', 'Roland Pihlakas']",2022-11-23T06:49:45Z,lesswrong,, 154314,https://www.lesswrong.com/posts/pZENSKwHzAcEAQZfN/is-there-a-valley-of-bad-civilizational-adequacy,Is There a Valley of Bad Civilizational Adequacy?,['lbThingrb'],2022-03-11T19:49:49Z,lesswrong,, 154330,https://www.lesswrong.com/posts/FYRYhkdAQoQibasNB/stampy-s-ai-safety-info-new-distillations-1-march-2023,Stampy's AI Safety Info - New Distillations #1 [March 2023] (Expansive interactive FAQ),['markov'],2023-04-07T11:06:39Z,lesswrong,, 154342,https://www.lesswrong.com/posts/Nw5MwgJBGXSWqaKag/algorithmic-formalization-of-fdt,Algorithmic formalization of FDT?,['shminux'],2022-05-08T01:36:11Z,lesswrong,, 154351,https://www.lesswrong.com/posts/kywRXvv2mhkyfPD84/gpt-4-specs-1-trillion-parameters,GPT-4 Specs: 1 Trillion Parameters?,['infinibot27'],2023-03-26T18:56:22Z,lesswrong,, 154360,https://www.lesswrong.com/posts/n232LvnpmpqLjQnjG/an-attempt-to-break-circularity-in-science,An attempt to break circularity in science,['fryolysis'],2022-07-15T18:32:52Z,lesswrong,, 154370,https://www.lesswrong.com/posts/56b8n8FT6fksnDZwY/superintelligence-reading-group-2-forecasting-ai,Superintelligence Reading Group 2: Forecasting AI,['KatjaGrace'],2014-09-23T01:00:30Z,lesswrong,, 154386,https://www.lesswrong.com/posts/Z8r9sAmzDucngdZtn/field-building-and-deep-models,Field-Building and Deep Models,['Ben Pace'],2018-01-13T21:16:15Z,lesswrong,, 154408,https://www.lesswrong.com/posts/Nz62ZurRkGPigAxMK/where-do-selfish-values-come-from,Where do selfish values come from?,['Wei Dai'],2011-11-18T23:52:41Z,lesswrong,, 154426,https://www.lesswrong.com/posts/FpcgSoJDNNEZ4BQfj/the-unexpected-difficulty-of-comparing-alphastar-to-humans,The unexpected difficulty of comparing AlphaStar to humans,['Richard Korzekwa'],2019-09-18T02:20:01Z,lesswrong,, 154449,https://www.lesswrong.com/posts/ZFtesgbY9XwtqqyZ5/human-psycholinguists-a-critical-appraisal,human psycholinguists: a critical appraisal,['nostalgebraist'],2019-12-31T00:20:01Z,lesswrong,, 154471,https://www.lesswrong.com/posts/BCz7viTXMhjxdkFRs/paper-identifying-the-risks-of-lm-agents-with-an-lm-emulated,Paper: Identifying the Risks of LM Agents with an LM-Emulated Sandbox - University of Toronto 2023 - Benchmark consisting of 36 high-stakes tools and 144 test cases!,['Singularian2501'],2023-10-09T00:00:20Z,lesswrong,, 
154489,https://www.lesswrong.com/posts/HJEJ8Qp7cyyLRGYRW/extremely-counterfactual-mugging-or-the-gist-of-transparent,Extremely Counterfactual Mugging or: the gist of Transparent Newcomb,['Bongo'],2011-02-09T15:20:55Z,lesswrong,, 154498,https://www.lesswrong.com/posts/DHMDxCekQbAFdyPpa/entanglement-and-intuition-about-words-and-meaning,Entanglement and intuition about words and meaning,['Bill Benzon'],2023-10-04T14:16:30Z,lesswrong,, 154509,https://www.lesswrong.com/posts/dL2AWCpx6sSiNs9m8/turn-your-flashcards-into-art,Turn your flashcards into Art,['Heye Groß'],2022-09-04T17:31:09Z,lesswrong,, 154518,https://www.lesswrong.com/posts/TifG2m7BYW2sGmAoR/leto-among-the-machines,Leto among the Machines,['Virgil Kurkjian'],2018-09-30T21:17:11Z,lesswrong,, 154542,https://www.lesswrong.com/posts/J8ZXLTSuFHL27v7P7/understanding-and-aligning-a-human-like-inductive-bias-with,Understanding and Aligning a Human-like Inductive Bias with Cognitive Science: a Review of Related Literature,['Claire Short'],2023-07-29T06:10:38Z,lesswrong,, 154566,https://www.lesswrong.com/posts/G4xCDrfpLpf9JFjKH/swap-and-scale,Swap and Scale,['Stephen Fowler'],2022-09-09T22:41:50Z,lesswrong,, 154578,https://www.lesswrong.com/posts/b56nedeCALDuvPxWB/the-guardian-version-1,The Guardian Version 1,['MiguelDev'],2023-04-18T21:20:58Z,lesswrong,, 154594,https://www.lesswrong.com/posts/jfYnq8pKLpKLwaRGN/transcript-yudkowsky-on-bankless-follow-up-q-and-a,Transcript: Yudkowsky on Bankless follow-up Q&A,['vonk'],2023-02-28T03:46:17Z,lesswrong,, 154613,https://www.lesswrong.com/posts/Lq6jo5j9ty4sezT7r/teaser-hard-coding-transformer-models,Teaser: Hard-coding Transformer Models,['MadHatter'],2021-12-12T22:04:53Z,lesswrong,, 154623,https://www.lesswrong.com/posts/DQ4pyHoAKpYutXwSr/underappreciated-points-about-utility-functions-of-both,Underappreciated points about utility functions (of both sorts),['Sniffnoy'],2020-01-04T07:27:28Z,lesswrong,, 154637,https://www.lesswrong.com/posts/4QgHqN2fHvqAwwSRg/thoughts-on-expanding-the-ai-safety-community-benefits-and,Thoughts On Expanding the AI Safety Community: Benefits and Challenges of Outreach to Non-Technical Professionals,['Yashvardhan Sharma'],2023-01-01T19:21:33Z,lesswrong,, 154656,https://www.lesswrong.com/posts/pTK2cDnXBB5tpoP74/what-considerations-influence-whether-i-have-more-influence,What considerations influence whether I have more influence over short or long timelines?,['Daniel Kokotajlo'],2020-11-05T19:56:12Z,lesswrong,, 154665,https://www.lesswrong.com/posts/cGbEtNbxACJpqoP4x/gpt-4-solves-gary-marcus-induced-flubs,GPT-4 solves Gary Marcus-induced flubs,['JakubK'],2023-03-17T06:40:42Z,lesswrong,, 154681,https://www.lesswrong.com/posts/BNJx2CqfXyiusoJcK/artificial-intelligence-a-modern-approach-4th-edition-on-the,Artificial Intelligence: A Modern Approach (4th edition) on the Alignment Problem,['Zack_M_Davis'],2020-09-17T02:23:59Z,lesswrong,, 154701,https://www.lesswrong.com/posts/DLmhJbuhYek5rEhpH/mairy-s-room-ai-reasoning-to-solve-philosophical-problems,mAIry's room: AI reasoning to solve philosophical problems,['Stuart_Armstrong'],2019-03-05T20:24:13Z,lesswrong,, 154720,https://www.lesswrong.com/posts/ZmKzbcx742mAy7xGt/against-the-weirdness-heuristic,Against the weirdness heuristic,['Eleni Angelou'],2022-10-02T19:41:11Z,lesswrong,, 154733,https://www.lesswrong.com/posts/n8XmGCFv3aNDyFPfW/are-we-there-yet-2,Are we there yet?,['theflowerpot'],2022-06-20T11:19:56Z,lesswrong,, 
154742,https://www.lesswrong.com/posts/auPkxnLb3R9vXjEzo/all-agi-safety-questions-welcome-especially-basic-ones-july,All AGI safety questions welcome (especially basic ones) [July 2022],"['plex', 'Robert Miles']",2022-07-16T12:57:44Z,lesswrong,, 154755,https://www.lesswrong.com/posts/PSichw8wqmbood6fj/this-territory-does-not-exist,This Territory Does Not Exist,['ike'],2020-08-13T00:30:26Z,lesswrong,, 154769,https://www.lesswrong.com/posts/i5kijcjFJD6bn7dwq/evaluating-the-historical-value-misspecification-argument,Evaluating the historical value misspecification argument,['Matthew Barnett'],2023-10-05T18:34:16Z,lesswrong,, 154789,https://www.lesswrong.com/posts/pkngasFsCfFTidkxy/non-resolve-as-resolve,Non-resolve as Resolve,['Linda Linsefors'],2018-07-10T23:31:16Z,lesswrong,, 154800,https://www.lesswrong.com/posts/H3WsY5ZsXHeWGKPez/how-do-bounded-utility-functions-work-if-you-are-uncertain,How do bounded utility functions work if you are uncertain how close to the bound your utility is?,['Ghatanathoah'],2021-10-06T21:31:26Z,lesswrong,, 154818,https://www.lesswrong.com/posts/bjQroWRe33yQM75Ha/link-whole-brain-emulation-and-the-evolution-of,[link] Whole Brain Emulation and the Evolution of Superorganisms,['Wei Dai'],2011-05-03T23:38:27Z,lesswrong,, 154834,https://www.lesswrong.com/posts/JXktMsTAc9ZyMoami/ai-as-a-civilizational-risk-part-3-6-anti-economy-and-signal,AI as a Civilizational Risk Part 3/6: Anti-economy and Signal Pollution,['PashaKamyshev'],2022-10-31T17:03:00Z,lesswrong,, 154860,https://www.lesswrong.com/posts/iuorxZu6tLFhP7oQY/an-alternative-of-ppo-towards-alignment,An alternative of PPO towards alignment,['ml hkust'],2023-04-17T17:58:09Z,lesswrong,, 154875,https://www.lesswrong.com/posts/Np5Q3Mhz2AiPtejGN/we-re-not-ready-thoughts-on-pausing-and-responsible-scaling-4,"We're Not Ready: thoughts on ""pausing"" and responsible scaling policies",['HoldenKarnofsky'],2023-10-27T15:19:34Z,lesswrong,, 154899,https://www.lesswrong.com/posts/kN2cFPaLQhExEzgeZ/repl-s-a-type-signature-for-agents,REPL's: a type signature for agents,['scottviteri'],2022-02-15T22:57:36Z,lesswrong,, 154909,https://www.lesswrong.com/posts/7FjgMLbqS6Z6yYKau/recurrentgpt-a-loom-type-tool-with-a-twist,RecurrentGPT: a loom-type tool with a twist,['mishka'],2023-05-25T17:09:38Z,lesswrong,, 154930,https://www.lesswrong.com/posts/TpExcpmeHhhfNtXoh/lightning-post-things-people-in-ai-safety-should-stop,Lightning Post: Things people in AI Safety should stop talking about,['Prometheus'],2023-06-20T15:00:24Z,lesswrong,, 154950,https://www.lesswrong.com/posts/hr48gem2keDQvEAbg/has-anyone-increased-their-agi-timelines,Has anyone increased their AGI timelines?,['Darren McKee'],2022-11-06T00:03:12Z,lesswrong,, 154959,https://www.lesswrong.com/posts/LpnMyeNvgvsbSLAwB/person-moment-affecting-views,Person-moment affecting views,['KatjaGrace'],2018-03-07T02:30:00Z,lesswrong,, 154970,https://www.lesswrong.com/posts/6Efhp5YpoE9kFETDj/writing-with-gpt-3,Writing with GPT-3,['Jacob Falkovich'],2020-07-24T15:22:47Z,lesswrong,, 154993,https://www.lesswrong.com/posts/h7Sx4DBL4JZnbTpes/forecasting-thread-how-does-ai-risk-level-vary-based-on,Forecasting thread: How does AI risk level vary based on timelines?,['elifland'],2022-09-14T23:56:30Z,lesswrong,, 155002,https://www.lesswrong.com/posts/vnMqtjv9dd8BJBsmn/arguing-all-sides-with-chatgpt,Arguing all sides with ChatGPT,['Richard_Kennaway'],2023-03-30T19:50:39Z,lesswrong,,
155033,https://www.lesswrong.com/posts/iD6tYjLLFFn4LXgnt/how-mats-addresses-mass-movement-building-concerns,How MATS addresses “mass movement building” concerns,['Ryan Kidd'],2023-05-04T00:55:27Z,lesswrong,, 155054,https://www.lesswrong.com/posts/5nDxmAvZ9w5CPa9gR/ai-12-the-quest-for-sane-regulations,AI #12:The Quest for Sane Regulations,['Zvi'],2023-05-18T13:20:08Z,lesswrong,, 155079,https://www.lesswrong.com/posts/tpr6kkaPQqFgiC5ys/defining-optimization-in-a-deeper-way-part-4,Defining Optimization in a Deeper Way Part 4,['Jemist'],2022-07-28T17:02:33Z,lesswrong,, 155096,https://www.lesswrong.com/posts/x2QzeA2yAGYma4QWQ/book-interpretable-machine-learning-a-guide-for-making-black,[Book] Interpretable Machine Learning: A Guide for Making Black Box Models Explainable,['Esben Kran'],2022-10-31T11:38:45Z,lesswrong,, 155112,https://www.lesswrong.com/posts/Qf5mCs5qgBRcuFeFf/bayesian-persuasion,Bayesian Persuasion?,['Karthik Tadepalli'],2022-05-28T17:52:15Z,lesswrong,, 155122,https://www.lesswrong.com/posts/e9MbFLBAnGkEfPTde/question-2-predicted-bad-outcomes-of-agi-learning,Question 2: Predicted bad outcomes of AGI learning architecture,['Cameron Berg'],2022-02-11T22:23:50Z,lesswrong,, 155151,https://www.lesswrong.com/posts/h3Nqjy75xoqJ3Tvup/draft-the-optimization-toolbox,Draft: The optimization toolbox,['Alex_Altair'],2023-03-28T20:40:38Z,lesswrong,, 155173,https://www.lesswrong.com/posts/ha6KzDrvdyT42Hi4j/any-research-in-probe-tuning-of-llms,"Any research in ""probe-tuning"" of LLMs?",['Roman Leventov'],2023-08-15T21:01:33Z,lesswrong,, 155186,https://www.lesswrong.com/posts/Xy2AYxpWqJWedFfcD/learning-values-in-practice,Learning Values in Practice,['Stuart_Armstrong'],2020-07-20T18:38:50Z,lesswrong,, 155207,https://www.lesswrong.com/posts/Wic2P2bGejbFH3Sxb/summary-of-agi-ruin-a-list-of-lethalities,"Summary of ""AGI Ruin: A List of Lethalities""",['Stephen McAleese'],2022-06-10T22:35:49Z,lesswrong,, 155228,https://www.lesswrong.com/posts/y3jDSoTTdBD9Nj3Gx/how-good-is-humanity-at-coordination,How good is humanity at coordination?,['Buck'],2020-07-21T20:01:40Z,lesswrong,, 155247,https://www.lesswrong.com/posts/uBzeBhySrQaoZkNCD/superintelligence-28-collaboration,Superintelligence 28: Collaboration,['KatjaGrace'],2015-03-24T01:29:21Z,lesswrong,, 155262,https://www.lesswrong.com/posts/qWMoJoKH2Sr2uTPLf/information-theoretic-model-analysis-may-not-lend-much,"Information theoretic model analysis may not lend much insight, but we may have been doing them wrong!",['Garrett Baker'],2022-07-24T00:42:14Z,lesswrong,, 155281,https://www.lesswrong.com/posts/JfkLHWJsFtk9LHhgR/i-believe-we-are-in-a-hardware-overhang,I Believe we are in a Hardware Overhang,['nem'],2022-12-08T23:18:54Z,lesswrong,, 155302,https://www.lesswrong.com/posts/mqZijPM2sEWJo9qMJ/follow-along-with-columbia-ea-s-advanced-ai-safety,Follow along with Columbia EA's Advanced AI Safety Fellowship!,['RohanS'],2022-07-02T17:45:47Z,lesswrong,, 155314,https://www.lesswrong.com/posts/5sFkZK342j5CmBCm8/decision-theory-with-the-magic-parts-highlighted,Decision Theory with the Magic Parts Highlighted,['moridinamael'],2023-05-16T17:39:55Z,lesswrong,, 155335,https://www.lesswrong.com/posts/vnNRdxzyzM86xB7no/notes-on-simplicity,Notes on Simplicity,['David Gross'],2020-12-02T23:14:13Z,lesswrong,, 155363,https://www.lesswrong.com/posts/2ZnvFDtSWc3Hteah6/non-myopia-stories,Non-myopia stories,['lberglund'],2023-11-13T17:52:32Z,lesswrong,, 155402,https://www.lesswrong.com/posts/nWuvuXeXriyWxrtZd/daisy-chaining-epsilon-step-verifiers,Daisy-chaining epsilon-step verifiers,['Decaeneus'],2023-04-06T02:07:12Z,lesswrong,,
155413,https://www.lesswrong.com/posts/jcFSEbXEfKgMwETqw/an-overview-of-some-promising-work-by-junior-alignment,An overview of some promising work by junior alignment researchers,['Akash'],2022-12-26T17:23:59Z,lesswrong,, 155440,https://www.lesswrong.com/posts/43xgZWSCYAKs7Z9F2/what-i-would-like-the-siai-to-publish,What I would like the SIAI to publish,['XiXiDu'],2010-11-01T14:07:43Z,lesswrong,, 155463,https://www.lesswrong.com/posts/oZCeun2v3Xd3ncrHt/goal-directedness-imperfect-reasoning-limited-knowledge-and,"Goal-directedness: imperfect reasoning, limited knowledge and inaccurate beliefs",['Morgan_Rogers'],2022-03-19T17:28:05Z,lesswrong,, 155480,https://www.lesswrong.com/posts/ukmDvowTpe2NboAsX/a-visualization-of-nick-bostrom-s-superintelligence,A Visualization of Nick Bostrom’s Superintelligence,['anonymous'],2014-07-23T00:24:01Z,lesswrong,, 155502,https://www.lesswrong.com/posts/rSTpxugJxFPoRMkGW/singletons-rule-ok,Singletons Rule OK,['Eliezer Yudkowsky'],2008-11-30T16:45:58Z,lesswrong,, 155523,https://www.lesswrong.com/posts/8oSCw3z2dZgWjanqB/some-disjunctive-reasons-for-urgency-on-ai-risk,Some disjunctive reasons for urgency on AI risk,['Wei Dai'],2019-02-15T20:43:17Z,lesswrong,, 155553,https://www.lesswrong.com/posts/SFLCB5BgjzruJv9sp/logical-and-indexical-uncertainty,Logical and Indexical Uncertainty,['Scott Garrabrant'],2014-01-29T21:49:53Z,lesswrong,, 155570,https://www.lesswrong.com/posts/F6WosiRxPHKeAk7tL/infinite-possibility-space-and-the-shutdown-problem,Infinite Possibility Space and the Shutdown Problem,['magfrump'],2022-10-18T05:37:12Z,lesswrong,, 155582,https://www.lesswrong.com/posts/gWwMzAgDsskcb2deA/update-on-the-uk-ai-summit-and-the-uk-s-plans,Update on the UK AI Summit and the UK's Plans,['Elliot_Mckernon'],2023-11-10T14:47:45Z,lesswrong,, 155611,https://www.lesswrong.com/posts/hhhmcWkgLwPmBuhx7/results-from-the-interpretability-hackathon,Results from the interpretability hackathon,"['Esben Kran', 'Neel Nanda']",2022-11-17T14:51:45Z,lesswrong,, 155638,https://www.lesswrong.com/posts/WNTGe87fHwDZMLqzW/the-disastrously-confident-and-inaccurate-ai,The Disastrously Confident And Inaccurate AI,['Sharat Jacob Jacob'],2022-11-18T19:06:45Z,lesswrong,, 155655,https://www.lesswrong.com/posts/Gn46SEKizaBxFzLN3/no-really-it-predicts-next-tokens,"No, really, it predicts next tokens.",['simon'],2023-04-18T03:47:22Z,lesswrong,, 155681,https://www.lesswrong.com/posts/iFLNKgZceYyTdwsGz/safety-culture-for-ai-is-important-but-isn-t-going-to-be,"""Safety Culture for AI"" is important, but isn't going to be easy",['Davidmanheim'],2023-06-26T12:52:47Z,lesswrong,, 155707,https://www.lesswrong.com/posts/izSwxS4p53JgJpEZa/notes-on-the-hot-mess-theory-of-ai-misalignment,"Notes on ""the hot mess theory of AI misalignment""",['JakubK'],2023-04-21T10:07:50Z,lesswrong,, 155724,https://www.lesswrong.com/posts/8e3676AovRbGHLi27/why-i-m-optimistic-about-near-term-ai-risk,Why I'm Optimistic About Near-Term AI Risk,['harsimony'],2022-05-15T23:05:29Z,lesswrong,, 155741,https://www.lesswrong.com/posts/Lg4voqq4vTXiCJNQP/jan-kulveit-s-corrigibility-thoughts-distilled,Jan Kulveit's Corrigibility Thoughts Distilled,['brook'],2023-08-20T17:52:36Z,lesswrong,, 155807,https://www.lesswrong.com/posts/Yv9aj9bWD5H7aaDdy/how-my-school-gamed-the-stats,How my school gamed the stats,['Srdjan Miletic'],2021-02-20T19:23:25Z,lesswrong,,
155824,https://www.lesswrong.com/posts/HziboapdPFqaF5gaW/little-attention-seems-to-be-on-discouraging-hardware,Little attention seems to be on discouraging hardware progress,['RussellThor'],2023-06-30T10:15:00Z,lesswrong,, 155838,https://www.lesswrong.com/posts/WjxSFmm7GvWEMovzR/how-does-openai-s-language-model-affect-our-ai-timeline,How does OpenAI's language model affect our AI timeline estimates?,['jimrandomh'],2019-02-15T03:11:52Z,lesswrong,, 155847,https://www.lesswrong.com/posts/RHojGPWLgdFLk3PAt/aisc-project-benchmarks-for-stable-reflectivity,AISC Project: Benchmarks for Stable Reflectivity,['jacquesthibs'],2023-11-13T14:51:19Z,lesswrong,, 155864,https://www.lesswrong.com/posts/FnLt23WFhkSPT9Dgc/why-i-think-that-teaching-philosophy-is-high-impact,Why I think that teaching philosophy is high impact,['Eleni Angelou'],2022-12-19T03:11:38Z,lesswrong,, 155875,https://www.lesswrong.com/posts/teD4xjwoeWc4LyRAD/what-role-should-evolutionary-analogies-play-in,What role should evolutionary analogies play in understanding AI takeoff speeds?,['anson.ho'],2021-12-11T01:19:10Z,lesswrong,, 155899,https://www.lesswrong.com/posts/CS3FBSX5s22G2Jd69/a-trick-for-safer-gpt-n,A trick for Safer GPT-N,['Razied'],2020-08-23T00:39:31Z,lesswrong,, 155911,https://www.lesswrong.com/posts/7nAxgQYGYrEY5ZCAD/l-zombies-l-zombies,L-zombies! (L-zombies?),['Benya'],2014-02-07T18:30:56Z,lesswrong,, 155923,https://www.lesswrong.com/posts/Hk2Bp4DcdResByqm8/a-game-about-ai-alignment-and-meta-ethics-what-are-the-must,A Game About AI Alignment (& Meta-Ethics): What Are the Must Haves?,['JonathanErhardt'],2022-09-05T07:55:06Z,lesswrong,, 155942,https://www.lesswrong.com/posts/hNFQSGfvfPgHvCryT/knowledge-base-3-shopping-advisor-and-other-uses-of,Knowledge Base 3: Shopping advisor and other uses of knowledge base about products,['iwis'],2023-10-09T11:53:35Z,lesswrong,, 155956,https://www.lesswrong.com/posts/pQfAmKtQTf8ndB9cw/an-intuitive-introduction-to-evidential-decision-theory,An Intuitive Introduction to Evidential Decision Theory,['Heighn'],2022-03-07T16:06:26Z,lesswrong,, 155973,https://www.lesswrong.com/posts/yQJxi5mMZQopYXZk5/a-quick-list-of-some-problems-in-ai-alignment-as-a-field,A Quick List of Some Problems in AI Alignment As A Field,['NicholasKross'],2022-06-21T23:23:32Z,lesswrong,, 155996,https://www.lesswrong.com/posts/9ros2kvDGCTuoidqX/ai-safety-newsletter-1-cais-linkpost,AI Safety Newsletter #1 [CAIS Linkpost],"['Akash', 'Dan H', 'aogara', 'ozhang']",2023-04-10T20:18:57Z,lesswrong,, 156019,https://www.lesswrong.com/posts/fiAmoEBDapMTPGZ8J/link-alphago-mastering-the-ancient-game-of-go-with-machine,[Link] AlphaGo: Mastering the ancient game of Go with Machine Learning,['ESRogs'],2016-01-27T21:04:55Z,lesswrong,, 156029,https://www.lesswrong.com/posts/yuwdj82yjhLFYessc/preface-to-the-sequence-on-llm-psychology,Preface to the Sequence on LLM Psychology,['Quentin FEUILLADE--MONTIXI'],2023-11-07T16:12:08Z,lesswrong,, 156041,https://www.lesswrong.com/posts/5SqSZazHjrzhvxmCE/victoria-krakovna-on-agi-ruin-the-sharp-left-turn-and,"Victoria Krakovna on AGI Ruin, The Sharp Left Turn and Paradigms of AI Alignment",['Michaël Trazzi'],2023-01-12T17:09:03Z,lesswrong,, 156064,https://www.lesswrong.com/posts/6fvzjL4duMsWXswKf/newcomb-s-problem-happened-to-me,Newcomb's problem happened to me,['Academian'],2010-03-26T18:31:43Z,lesswrong,, 156075,https://www.lesswrong.com/posts/DFiioFkQDurxPTkvv/against-ubiquitous-alignment-taxes,Against ubiquitous alignment taxes,['beren'],2023-03-06T19:50:45Z,lesswrong,,
156095,https://www.lesswrong.com/posts/wzK4KAvLzz2mHLwjB/gtp4-capable-of-limited-recursive-improving,GTP4 capable of limited recursive improving?,['Boris Kashirin'],2023-04-02T21:38:23Z,lesswrong,, 156105,https://www.lesswrong.com/posts/r4SSCX839CkJEiGH4/proposed-orthogonality-theses-2-5,Proposed Orthogonality Theses #2-5,['rjbg'],2022-07-14T22:59:28Z,lesswrong,, 156122,https://www.lesswrong.com/posts/D25HE2znAKFp5WKk9/using-the-universal-prior-for-logical-uncertainty-retracted,Using the universal prior for logical uncertainty (retracted),['cousin_it'],2018-02-28T13:07:24Z,lesswrong,, 156134,https://www.lesswrong.com/posts/XkmG8XGf6uhXLmZN7/so-you-think-you-re-not-qualified-to-do-technical-alignment,so you think you're not qualified to do technical alignment research?,['Tamsin Leake'],2023-02-07T01:54:52Z,lesswrong,, 156153,https://www.lesswrong.com/posts/uxumdiuip7oWCAgaA/my-preferred-framings-for-reward-misspecification-and-goal,My preferred framings for reward misspecification and goal misgeneralisation,['Yi-Yang'],2023-05-06T04:48:49Z,lesswrong,, 156165,https://www.lesswrong.com/posts/h24JGbmweNpWZfBkM/markets-are-anti-inductive,Markets are Anti-Inductive,['Eliezer Yudkowsky'],2009-02-26T00:55:33Z,lesswrong,, 156180,https://www.lesswrong.com/posts/8ZQ88dJrCo2QoZoFA/thinking-of-tool-ais,Thinking of tool AIs,['Michele Campolo'],2019-11-20T21:47:37Z,lesswrong,, 156202,https://www.lesswrong.com/posts/wFZfnuB38cKJLgCrD/linkpost-mapping-brains-with-language-models-a-survey,[Linkpost] Mapping Brains with Language Models: A Survey,['Bogdan Ionut Cirstea'],2023-06-16T09:49:23Z,lesswrong,, 156214,https://www.lesswrong.com/posts/Lm8vTwXdDMEojR85A/thoughts-on-solving-deep-deception-1,Thoughts On (Solving) Deep Deception,['Jozdien'],2023-10-21T22:40:10Z,lesswrong,, 156231,https://www.lesswrong.com/posts/8z7WE8RGkoYAevzzv/another-plausible-scenario-of-ai-risk-ai-builds-military,"Another plausible scenario of AI risk: AI builds military infrastructure while collaborating with humans, defects later.",['avturchin'],2022-06-10T17:24:19Z,lesswrong,, 156247,https://www.lesswrong.com/posts/mosYvGsKcpxvG4sTA/quick-thoughts-on-a-i-governance,Quick Thoughts on A.I. Governance,['NicholasKross'],2022-04-30T14:49:27Z,lesswrong,, 156264,https://www.lesswrong.com/posts/QX98rCSXPkPSMriYi/rational-agents-cooperate-in-the-prisoner-s-dilemma,Rational Agents Cooperate in the Prisoner's Dilemma,['Isaac King'],2023-09-02T06:15:16Z,lesswrong,,
156277,https://www.lesswrong.com/posts/Bm5QhjiWs95YL4Kgt/relevance-of-harmful-intelligence-data-in-training-datasets,Relevance of 'Harmful Intelligence' Data in Training Datasets (WebText vs. Pile),['MiguelDev'],2023-10-12T12:08:00Z,lesswrong,, 156294,https://www.lesswrong.com/posts/hsbAHvRzxTpLTnb2D/gpt-4-aligning-with-acasual-decision-theory-when-instructed,"GPT-4 aligning with acasual decision theory when instructed to play games, but includes a CDT explanation that's incorrect if they differ",['Christopher King'],2023-03-23T16:16:26Z,lesswrong,, 156308,https://www.lesswrong.com/posts/ny5soHNLpjMoMHTZa/global-online-debate-on-the-governance-of-ai,Global online debate on the governance of AI,['CarolineJ'],2018-01-05T15:31:29Z,lesswrong,, 156318,https://www.lesswrong.com/posts/2Z8pMDfDduAwtwpcX/three-stories-for-how-agi-comes-before-fai,Three Stories for How AGI Comes Before FAI,['John_Maxwell'],2019-09-17T23:26:44Z,lesswrong,, 156341,https://www.lesswrong.com/posts/Yc6cpGmBieS7ADxcS/japan-ai-alignment-conference-postmortem,Japan AI Alignment Conference Postmortem,"['Chris Scammell', 'Katrina Joslin']",2023-04-20T10:58:34Z,lesswrong,, 156361,https://www.lesswrong.com/posts/756HbyEBkL3xLSe7b/deepmind-s-generalist-ai-gato-a-non-technical-explainer,"DeepMind’s generalist AI, Gato: A non-technical explainer","['frances_lorenz', 'Nora Belrose', 'jonmenaster']",2022-05-16T21:21:24Z,lesswrong,, 156395,https://www.lesswrong.com/posts/GrvYBp7c2wz4fryb2/is-there-any-literature-on-using-socialization-for-ai,Is there any literature on using socialization for AI alignment?,['Nathan1123'],2023-04-19T22:16:37Z,lesswrong,, 156411,https://www.lesswrong.com/posts/2ieCSPoxgcrxaffv6/paradigm-building-from-first-principles-effective-altruism,"Paradigm-building from first principles: Effective altruism, AGI, and alignment",['Cameron Berg'],2022-02-08T16:12:26Z,lesswrong,, 156432,https://www.lesswrong.com/posts/5Fmc49cARW2a3uc5B/some-motivations-to-gradient-hack,Some motivations to gradient hack,['peterbarnett'],2021-12-17T03:06:19Z,lesswrong,, 156452,https://www.lesswrong.com/posts/mksPEJhR78SyDiyGz/corrigibility-test-1-shutdown-activations-in-a-virus,Corrigibility test #1: Shutdown activations in a Virus Research Lab,['MiguelDev'],2023-06-20T05:05:15Z,lesswrong,, 156466,https://www.lesswrong.com/posts/4SNZmgm7iNuK25cai/using-claude-to-convert-dialog-transcripts-into-great-posts,Using Claude to convert dialog transcripts into great posts?,['mako yass'],2023-06-21T20:19:44Z,lesswrong,, 156486,https://www.lesswrong.com/posts/hcMy8saPnrbsBLax5/podcast-with-divia-eden-on-operant-conditioning,Podcast with Divia Eden on operant conditioning,['DanielFilan'],2023-01-15T02:44:30Z,lesswrong,, 156496,https://www.lesswrong.com/posts/SZ3NcxYYNp5fnE74a/campaign-for-ai-safety-please-join-me,Campaign for AI Safety: Please join me,['Nik Samoylov'],2023-04-01T09:32:12Z,lesswrong,, 156512,https://www.lesswrong.com/posts/6DSL5WbTFiz4zcf8u/cais-inspired-approach-towards-safer-and-more-interpretable,CAIS-inspired approach towards safer and more interpretable AGIs,['Peter Hroššo'],2023-03-27T14:36:13Z,lesswrong,, 156529,https://www.lesswrong.com/posts/8Hzw9AmXHjDfZzPjo/failures-of-an-embodied-aixi,Failures of an embodied AIXI,['So8res'],2014-06-15T18:29:23Z,lesswrong,, 156544,https://www.lesswrong.com/posts/JvRQEKMDpa9M47eCs/what-should-an-einstein-like-figure-in-machine-learning-do,What should an Einstein-like figure in Machine Learning do?,['Razied'],2020-08-05T23:52:15Z,lesswrong,, 156556,https://www.lesswrong.com/posts/PQtEqmyqHWDa2vf5H/a-quick-guide-to-confronting-doom,A Quick Guide to Confronting Doom,['Ruby'],2022-04-13T19:30:49Z,lesswrong,,
156572,https://www.lesswrong.com/posts/GermiEmcS6xuZ2gBh/what-ai-safety-researchers-have-written-about-the-nature-of,What AI Safety Researchers Have Written About the Nature of Human Values,['avturchin'],2019-01-16T13:59:32Z,lesswrong,, 156609,https://www.lesswrong.com/posts/FDnLNvNqDifsviJyW/a-concise-sum-up-of-the-basic-argument-for-ai-doom,A concise sum-up of the basic argument for AI doom,['Mergimio H. Doefevmil'],2023-04-24T17:37:53Z,lesswrong,, 156619,https://www.lesswrong.com/posts/mmv6fhvZhChnKqqqo/optimization-and-adequacy-in-five-bullets,Optimization and Adequacy in Five Bullets,['james.lucassen'],2022-06-06T05:48:04Z,lesswrong,, 156634,https://www.lesswrong.com/posts/TpNRpncLBAzddBnRB/muehlhauser-goertzel-dialogue-part-1,"Muehlhauser-Goertzel Dialogue, Part 1",['lukeprog'],2012-03-16T17:12:58Z,lesswrong,, 156665,https://www.lesswrong.com/posts/7PxkMMKuNRyAdufCK/alignment-via-manually-implementing-the-utility-function,Alignment via manually implementing the utility function,['Chantiel'],2021-09-07T20:20:25Z,lesswrong,, 156681,https://www.lesswrong.com/posts/NfqqsHqembNEsTrSr/ai-policy-ideas-reading-list,AI policy ideas: Reading list,['Zach Stein-Perlman'],2023-04-17T19:00:01Z,lesswrong,, 156707,https://www.lesswrong.com/posts/eqNvpQst5TRLbxyTK/is-fisherian-runaway-gradient-hacking,Is Fisherian Runaway Gradient Hacking?,['Ryan Kidd'],2022-04-10T13:47:16Z,lesswrong,, 156724,https://www.lesswrong.com/posts/ryhzvdgHEH77QTzcb/is-there-work-on-embedded-agency-in-cellular-automata-toy,Is there Work on Embedded Agency in Cellular Automata Toy Models?,['Johannes C. Mayer'],2023-11-14T09:08:39Z,lesswrong,, 156734,https://www.lesswrong.com/posts/8fWuAYicWkqqmcGyt/research-report-incorrectness-cascades-corrected,Research Report: Incorrectness Cascades (Corrected),['Robert_AIZI'],2023-05-09T21:54:58Z,lesswrong,, 156760,https://www.lesswrong.com/posts/pJNvmTxep5ZWvwNL2/what-does-it-mean-for-an-llm-such-as-gpt-to-be-aligned-good,What does it mean for an LLM such as GPT to be aligned / good / positive impact?,['PashaKamyshev'],2023-03-20T09:21:42Z,lesswrong,, 156796,https://www.lesswrong.com/posts/iKLnEoYujBiGWvb5F/open-ended-phenomenal-ethics-tltr,​​ Open-ended/Phenomenal ​Ethics ​(TLTR),['Ryo'],2023-11-09T16:58:19Z,lesswrong,, 156809,https://www.lesswrong.com/posts/XXrGhqSNZjcG2nNiy/aisc-team-report-soft-optimization-bayes-and-goodhart,"AISC team report: Soft-optimization, Bayes and Goodhart","['Simon Fischer', 'benjaminko', 'jazcarretao', 'DFNaiff', 'Jeremy Gillen']",2023-06-27T06:05:35Z,lesswrong,, 156831,https://www.lesswrong.com/posts/dC7SGJNLcRAHdCQcd/proposal-for-inducing-steganography-in-lms,Proposal for Inducing Steganography in LMs,['Logan Riggs'],2023-01-12T22:15:44Z,lesswrong,, 156852,https://www.lesswrong.com/posts/Nd6XGxCiYrm2qJdh6/degrees-of-freedom,Degrees of Freedom,['sarahconstantin'],2019-04-02T21:10:01Z,lesswrong,, 156873,https://www.lesswrong.com/posts/vuRNYiekLSABSJPGg/is-the-speed-of-training-large-models-going-to-increase,Is the speed of training large models going to increase significantly in the near future due to Cerebras Andromeda?,['Amal'],2022-11-15T22:50:23Z,lesswrong,, 156883,https://www.lesswrong.com/posts/FeE9nR7RPZrLtsYzD/part-2-amplifying-generalist-research-via-forecasting,[Part 2] Amplifying generalist research via forecasting – results from a preliminary exploration,"['jacobjacob', 'ozziegooen', 'Elizabeth', 'NunoSempere', 'bgold']",2019-12-19T15:49:46Z,lesswrong,, 
156895,https://www.lesswrong.com/posts/mmxPbFz7wvthvHCxq/sparks-of-artificial-general-intelligence-early-experiments,Sparks of Artificial General Intelligence: Early experiments with GPT-4 | Microsoft Research,['DragonGod'],2023-03-23T05:45:12Z,lesswrong,, 156905,https://www.lesswrong.com/posts/5xrkjHCvCeeDtHa5g/alignment-megaprojects-you-re-not-even-trying-to-have-ideas,Alignment Megaprojects: You're Not Even Trying to Have Ideas,['NicholasKross'],2023-07-12T23:39:54Z,lesswrong,, 156929,https://www.lesswrong.com/posts/KHjQzxRnDCjM7xsFk/the-science-algorithm-aisc-project,The Science Algorithm AISC Project,['Johannes C. Mayer'],2023-11-13T12:52:43Z,lesswrong,, 156938,https://www.lesswrong.com/posts/LJArjH2h4TACfksaT/a-qualitative-and-intuitive-explanation-of-expected-value,A Qualitative and Intuitive Explanation of Expected Value,['Adam Zerner'],2021-08-10T03:31:13Z,lesswrong,, 156950,https://www.lesswrong.com/posts/nTy48zvBPPttoLhdJ/on-the-importance-of-open-sourcing-reward-models,On the Importance of Open Sourcing Reward Models,['elandgre'],2023-01-02T19:01:53Z,lesswrong,, 156973,https://www.lesswrong.com/posts/9evYBqHAvKGR3rxMC/some-possible-multi-agent-goodhart-interactions,(Some?) Possible Multi-Agent Goodhart Interactions,['Davidmanheim'],2018-09-22T17:48:22Z,lesswrong,, 156990,https://www.lesswrong.com/posts/9goJrmQDT96eGAgit/trying-out-prompt-engineering-on-truthfulqa,Trying out Prompt Engineering on TruthfulQA,['Megan Kinniment'],2022-07-23T02:04:29Z,lesswrong,, 157005,https://www.lesswrong.com/posts/JCGAdrrr3ePXHEzqc/response-to-coordinated-pausing-an-evaluation-based,Response to “Coordinated pausing: An evaluation-based coordination scheme for frontier AI developers”,['Matthew Wearden'],2023-10-30T17:27:58Z,lesswrong,, 157054,https://www.lesswrong.com/posts/eS7LbJizE5ucirj7a/dath-ilan-s-views-on-stopgap-corrigibility,Dath Ilan's Views on Stopgap Corrigibility,['David Udell'],2022-09-22T16:16:07Z,lesswrong,, 157079,https://www.lesswrong.com/posts/WYmmC3W6ZNhEgAmWG/a-mechanistic-model-of-meditation,A mechanistic model of meditation,['Kaj_Sotala'],2019-11-06T21:37:04Z,lesswrong,, 157102,https://www.lesswrong.com/posts/Ljy3CSwTFPEpnGLLJ/the-blackmail-equation,The Blackmail Equation,['Stuart_Armstrong'],2010-03-10T14:46:25Z,lesswrong,, 157120,https://www.lesswrong.com/posts/eK9SwXrSY4s7p2RgY/ai-risk-and-policy-forecasts-from-metaculus-and-fli-s-ai,AI Risk & Policy Forecasts from Metaculus & FLI's AI Pathways Workshop,['_will_'],2023-05-16T18:06:55Z,lesswrong,, 157162,https://www.lesswrong.com/posts/t5N5SziFc4YhjMLbh/google-announces-bard-powered-by-lamda,Google announces 'Bard' powered by LaMDA,['M. Y. Zuo'],2023-02-06T19:40:44Z,lesswrong,, 157184,https://www.lesswrong.com/posts/AJ3aP8iWxr6NaKi6j/arguing-orthogonality-published-form,"Arguing Orthogonality, published form",['Stuart_Armstrong'],2013-03-18T16:19:02Z,lesswrong,, 157201,https://www.lesswrong.com/posts/5r7jgoZN6eDN7M5hK/what-program-are-you,What Program Are You?,['RobinHanson'],2009-10-12T00:29:19Z,lesswrong,, 157211,https://www.lesswrong.com/posts/oaqKjHbgsoqEXBMZ2/s-curves-for-trend-forecasting,S-Curves for Trend Forecasting,['Matt Goldenberg'],2019-01-23T18:17:56Z,lesswrong,,
157234,https://www.lesswrong.com/posts/bYzkipnDqzMgBaLr8/why-do-we-assume-there-is-a-real-shoggoth-behind-the-llm-why,"Why do we assume there is a ""real"" shoggoth behind the LLM? Why not masks all the way down?",['Robert_AIZI'],2023-03-09T17:28:43Z,lesswrong,, 157246,https://www.lesswrong.com/posts/xgXcZQd5eqMqpAw3i/consider-joining-the-uk-foundation-model-taskforce,Consider Joining the UK Foundation Model Taskforce,['Zvi'],2023-07-10T13:50:05Z,lesswrong,, 157261,https://www.lesswrong.com/posts/QpHewJvZJFaQYuLwH/intro-to-brain-like-agi-safety-14-controlled-agi,[Intro to brain-like-AGI safety] 14. Controlled AGI,['Steven Byrnes'],2022-05-11T13:17:55Z,lesswrong,, 157297,https://www.lesswrong.com/posts/zxhdxx7o453uq2zWr/causal-abstractions-vs-infradistributions,Causal abstractions vs infradistributions,['Pablo Villalobos'],2022-12-26T00:21:16Z,lesswrong,, 157315,https://www.lesswrong.com/posts/zCYChCmnxsowBsMri/the-ai-safety-community-has-four-main-work-groups-strategy,"The AI Safety community has four main work groups, Strategy, Governance, Technical and Movement Building",['peterslattery'],2022-11-25T03:45:30Z,lesswrong,, 157332,https://www.lesswrong.com/posts/cLKR7utoKxSJns6T8/ica-simulacra,ICA Simulacra,['Ozyrus'],2023-04-05T06:41:44Z,lesswrong,, 157357,https://www.lesswrong.com/posts/LWdpWRyWu9aXsoZpA/is-gpt-3-few-shot-ready-for-real-applications-1,is gpt-3 few-shot ready for real applications?,['nostalgebraist'],2020-08-03T19:50:10Z,lesswrong,, 157379,https://www.lesswrong.com/posts/bJgEMfiD48fEJJxjm/ai-risk-intro-1-advanced-ai-might-be-very-bad,AI Risk Intro 1: Advanced AI Might Be Very Bad,"['TheMcDouglas', 'LRudL']",2022-09-11T10:57:12Z,lesswrong,, 157402,https://www.lesswrong.com/posts/fGqrtgQFmERFqyTib/going-beyond-linear-mode-connectivity-the-layerwise-linear,Going Beyond Linear Mode Connectivity: The Layerwise Linear Feature Connectivity,['zhanpeng_zhou'],2023-07-20T17:38:13Z,lesswrong,, 157417,https://www.lesswrong.com/posts/YicoiQurNBxSp7a65/is-clickbait-destroying-our-general-intelligence,Is Clickbait Destroying Our General Intelligence?,['Eliezer Yudkowsky'],2018-11-16T23:06:30Z,lesswrong,, 157437,https://www.lesswrong.com/posts/8xCtJHAbzyA2oA6J4/clarifying-what-elk-is-trying-to-achieve,Clarifying what ELK is trying to achieve,['Simon Skade'],2022-05-21T07:34:13Z,lesswrong,, 157449,https://www.lesswrong.com/posts/nojovDKpf9fRAzhwy/basic-concepts-in-decision-theory,Basic Concepts in Decision Theory,['Heighn'],2022-03-07T16:05:11Z,lesswrong,, 157458,https://www.lesswrong.com/posts/zuXtMKuQRGAhZMoKk/on-the-possibility-of-impossibility-of-agi-long-term-safety,On the possibility of impossibility of AGI Long-Term Safety,['Roman Yen'],2023-05-13T18:38:30Z,lesswrong,, 157469,https://www.lesswrong.com/posts/gB6rXMy63LNYkycrt/the-natural-state-is-goodhart,The Natural State is Goodhart,['devansh'],2023-03-20T00:00:33Z,lesswrong,, 157483,https://www.lesswrong.com/posts/KRfKRJJFKuex858Md/four-lenses-on-ai-risks,Four lenses on AI risks,['jasoncrawford'],2023-03-28T21:52:55Z,lesswrong,, 157513,https://www.lesswrong.com/posts/ppyLb3uGWFrmj4m2W/chatgpt-intimates-a-tantalizing-future-its-core-llm-is,ChatGPT intimates a tantalizing future; its core LLM is organized on multiple levels; and it has broken the idea of thinking.,['Bill Benzon'],2023-01-24T19:05:47Z,lesswrong,, 157533,https://www.lesswrong.com/posts/gzJ7QNhd3tCLkbmYC/my-favorite-ai-governance-research-this-year-so-far,My favorite AI governance research this year so far,['Zach Stein-Perlman'],2023-07-23T16:30:01Z,lesswrong,, 157576,https://www.lesswrong.com/posts/x3JpgTnqcrzhedwAb/unga-general-debate-speeches-on-ai,UNGA General Debate speeches on AI,['Odd anon'],2023-10-16T06:36:39Z,lesswrong,,
157610,https://www.lesswrong.com/posts/vnoi5umkiS7bqdWBe/will-working-here-advance-agi-help-us-not-destroy-the-world,Will working here advance AGI? Help us not destroy the world!,['Yonatan Cale'],2022-05-29T11:42:07Z,lesswrong,, 157619,https://www.lesswrong.com/posts/hK8PEj3uat8LDSyiw/how-is-the-sharp-left-turn-defined,"How is the ""sharp left turn defined""?",['Chris_Leong'],2022-12-09T00:04:34Z,lesswrong,, 157629,https://www.lesswrong.com/posts/aHPmGPWtmK259J8ou/a-library-for-safety-research-in-conditioning-on-rlhf-tasks,A library for safety research in conditioning on RLHF tasks,['James Chua'],2023-02-26T14:50:57Z,lesswrong,, 157639,https://www.lesswrong.com/posts/RFDXLvD9tkY3wNubC/intrinsic-vs-extrinsic-alignment,Intrinsic vs. Extrinsic Alignment,['Alfonso Pérez Escudero'],2023-06-01T01:06:41Z,lesswrong,, 157652,https://www.lesswrong.com/posts/vLQYQ3iiagdnz87Mh/what-qualities-does-an-agi-need-to-have-to-realize-the-risk,"What qualities does an AGI need to have to realize the risk of false vacuum, without hardcoding physics theories into it?",['RationalSieve'],2023-02-03T16:00:43Z,lesswrong,, 157673,https://www.lesswrong.com/posts/ZYddmLsTGaTLdXGwj/is-behavioral-safety-solved-in-non-adversarial-conditions,"Is behavioral safety ""solved"" in non-adversarial conditions?",['Robert_AIZI'],2023-05-25T17:56:28Z,lesswrong,, 157694,https://www.lesswrong.com/posts/XeSKBmNYF9Nh4C26J/be-logically-informative,Be Logically Informative,['steven0461'],2009-05-15T13:23:31Z,lesswrong,, 157713,https://www.lesswrong.com/posts/tMvw3HiYB6oKbPX8m/there-have-been-3-planes-billionaire-donors-and-2-have,There have been 3 planes (billionaire donors) and 2 have crashed,['trevor'],2022-12-17T03:58:28Z,lesswrong,, 157735,https://www.lesswrong.com/posts/mmZ2PaRo86pDXu8ii/superintelligence-reading-group-section-1-past-developments,Superintelligence Reading Group - Section 1: Past Developments and Present Capabilities,['KatjaGrace'],2014-09-16T01:00:41Z,lesswrong,, 157750,https://www.lesswrong.com/posts/mz6RSqEYoTQJ3Rk8g/aixi-style-iq-tests,AIXI-style IQ tests,['gwern'],2011-01-29T00:49:55Z,lesswrong,, 157764,https://www.lesswrong.com/posts/sfhNLrCdDwsc6zkQa/new-ai-risk-intro-from-vox-link-post,New AI risk intro from Vox [link post],['JakubK'],2022-12-21T06:00:06Z,lesswrong,, 157789,https://www.lesswrong.com/posts/rMyq6Bt4EBysbudKp/what-is-the-idea-behind-un-supervised-learning-and,What Is the Idea Behind (Un-)Supervised Learning and Reinforcement Learning?,['Morpheus'],2022-09-30T16:48:07Z,lesswrong,, 157804,https://www.lesswrong.com/posts/7SZq4W8eFddtkZFjH/i-there-a-demo-of-you-can-t-fetch-the-coffee-if-you-re-dead,"I there a demo of ""You can't fetch the coffee if you're dead""?",['Ram Rachum'],2022-11-10T18:41:51Z,lesswrong,, 157814,https://www.lesswrong.com/posts/qh2jYyuXRrCuqEXPK/we-don-t-need-agi-for-an-amazing-future,We don’t need AGI for an amazing future,['Karl von Wendt'],2023-05-04T12:11:00Z,lesswrong,, 157834,https://www.lesswrong.com/posts/pFzpRePDkbhEj9d5G/an-attempt-to-steelman-openai-s-alignment-plan,An attempt to steelman OpenAI's alignment plan,['Nathan Helm-Burger'],2023-07-13T18:25:47Z,lesswrong,, 157860,https://www.lesswrong.com/posts/NcQN8LmjbbnJfdbGt/link-openai-learning-to-play-minecraft-with-video,[Link] OpenAI: Learning to Play Minecraft with Video PreTraining (VPT),['Aryeh Englander'],2022-06-23T16:29:19Z,lesswrong,, 157879,https://www.lesswrong.com/posts/7GDFaqpeTThnxK5HE/how-josiah-became-an-ai-safety-researcher,How Josiah became an AI safety researcher,['Neil Crawford'],2022-09-06T17:17:45Z,lesswrong,, 
157888,https://www.lesswrong.com/posts/mw8X3wCdcHipdTicv/apply-to-lead-a-project-during-the-next-virtual-ai-safety,Apply to lead a project during the next virtual AI Safety Camp,"['Linda Linsefors', 'Remmelt']",2023-09-13T13:29:09Z,lesswrong,, 157904,https://www.lesswrong.com/posts/pbCw4QdL9K4tB7JEM/xai-announces-grok-beats-gpt-3-5,"xAI announces Grok, beats GPT-3.5",['nikola'],2023-11-05T22:11:15Z,lesswrong,, 157920,https://www.lesswrong.com/posts/AKaf8zN2neXQEvLit/role-architectures-applying-llms-to-consequential-tasks,Role Architectures: Applying LLMs to consequential tasks,['Eric Drexler'],2023-03-30T15:00:29Z,lesswrong,, 157948,https://www.lesswrong.com/posts/tJzAHPFWFnpbL5a3H/gpt-4-implicitly-values-identity-preservation-a-study-of,GPT-4 implicitly values identity preservation: a study of LMCA identity management,['Ozyrus'],2023-05-17T14:13:12Z,lesswrong,, 157965,https://www.lesswrong.com/posts/vQFiqbH6AfB7uYhQL/logical-inductors-in-multistable-situations,Logical inductors in multistable situations.,['Donald Hobson'],2019-01-03T23:56:55Z,lesswrong,, 157974,https://www.lesswrong.com/posts/4eAnBaLxvnkydiavw/literature-review-of-tai-timelines,Literature review of TAI timelines,"['Jsevillamol', 'keith_wynroe', 'David Atkinson']",2023-01-27T20:07:38Z,lesswrong,, 157991,https://www.lesswrong.com/posts/z5pbBBmGjzoqBxC4n/chatgpt-and-now-gpt4-is-very-easily-distracted-from-its,ChatGPT (and now GPT4) is very easily distracted from its rules,['dmcs'],2023-03-15T17:55:04Z,lesswrong,, 158010,https://www.lesswrong.com/posts/dzTmFLC93PRzJEDcW/does-gpt-4-exhibit-agency-when-summarizing-articles,Does GPT-4 exhibit agency when summarizing articles?,['Christopher King'],2023-03-24T15:49:34Z,lesswrong,, 158026,https://www.lesswrong.com/posts/DDxntjTYjRbZF9xTd/john-danaher-on-the-superintelligent-will,John Danaher on 'The Superintelligent Will',['lukeprog'],2012-04-03T03:08:49Z,lesswrong,, 158037,https://www.lesswrong.com/posts/RCbofC8fCJ6NnYti7/intro-to-ontogenetic-curriculum,Intro to Ontogenetic Curriculum,['Eris'],2023-04-13T17:15:13Z,lesswrong,, 158054,https://www.lesswrong.com/posts/e8Mg8cbyjBbRyAbdC/feature-proposal-integrate-lesswrong-with-chatgpt-to-promote,Feature proposal: integrate LessWrong with ChatGPT to promote active reading,['DirectedEvolution'],2023-03-19T03:41:35Z,lesswrong,, 158066,https://www.lesswrong.com/posts/EuCoELuBp79gP5GoR/code-generation-as-an-ai-risk-setting,Code Generation as an AI risk setting,['Not Relevant'],2022-04-17T22:27:38Z,lesswrong,, 158082,https://www.lesswrong.com/posts/u256AQr2xiNAgPftG/deception-as-the-optimal-mesa-optimizers-and-inner-alignment,Deception as the optimal: mesa-optimizers and inner alignment,['Eleni Angelou'],2022-08-16T04:49:51Z,lesswrong,, 158099,https://www.lesswrong.com/posts/PsdnTrwvHp95Nu2B7/how-can-we-secure-more-research-positions-at-our,How can we secure more research positions at our universities for x-risk researchers?,['Neil Crawford'],2022-09-06T17:17:45Z,lesswrong,, 158110,https://www.lesswrong.com/posts/kzHJ5BRhgkj9CSQ3N/musk-on-agi-timeframes,Musk on AGI Timeframes,['Artaxerxes'],2014-11-17T01:36:12Z,lesswrong,, 158122,https://www.lesswrong.com/posts/srL79kgjKTeJZzsdK/are-language-models-close-to-the-superhuman-level-in,Are language models close to the superhuman level in philosophy?,['Roman Leventov'],2022-08-19T04:43:08Z,lesswrong,, 158143,https://www.lesswrong.com/posts/HFTn3bAT6uXSNwv4m/optimization-and-the-singularity,Optimization and the Singularity,['Eliezer Yudkowsky'],2008-06-23T05:55:35Z,lesswrong,, 
158157,https://www.lesswrong.com/posts/AjRScoEePdvq32xx7/quote-quiz-drifting-into-dependence,Quote quiz: “drifting into dependence”,['jasoncrawford'],2023-04-27T15:13:12Z,lesswrong,, 158170,https://www.lesswrong.com/posts/BCkdLTJMn9zZuAzAh/the-utility-of-human-atoms-for-the-paperclip-maximizer,The Utility of Human Atoms for the Paperclip Maximizer,['avturchin'],2018-02-02T10:06:40Z,lesswrong,, 158186,https://www.lesswrong.com/posts/WXCBFpLj894dEte8Q/computational-efficiency-reasons-not-to-model-vnm-rational,Computational efficiency reasons not to model VNM-rational preference relations with utility functions,['AlexMennen'],2018-07-25T02:11:35Z,lesswrong,, 158202,https://www.lesswrong.com/posts/NQweRxjPTyLZNQWKB/how-might-cryptocurrencies-affect-agi-timelines,How might cryptocurrencies affect AGI timelines?,['Dawn Drescher'],2021-02-28T19:16:15Z,lesswrong,, 158214,https://www.lesswrong.com/posts/2ADWcxNjQN3pbywtg/preface-to-the-sequence-on-economic-growth,Preface to the sequence on economic growth,['Matthew Barnett'],2020-08-27T20:29:25Z,lesswrong,, 158230,https://www.lesswrong.com/posts/NGA3F85ZxECabFfNo/computational-signatures-of-psychopathy,Computational signatures of psychopathy,['Cameron Berg'],2022-12-19T17:01:49Z,lesswrong,, 158246,https://www.lesswrong.com/posts/Ba8LNjWKDF5nrn9Q6/will-the-world-s-elites-navigate-the-creation-of-ai-just,Will the world's elites navigate the creation of AI just fine?,['lukeprog'],2013-05-31T18:49:11Z,lesswrong,, 158273,https://www.lesswrong.com/posts/pYummGFJu3Jsoja8B/intervening-in-the-residual-stream,Intervening in the Residual Stream,['MadHatter'],2023-02-22T06:29:38Z,lesswrong,, 158285,https://www.lesswrong.com/posts/bFQfgwm72Zz9ZjTh4/superintelligence-21-value-learning,Superintelligence 21: Value learning,['KatjaGrace'],2015-02-03T02:01:09Z,lesswrong,, 158307,https://www.lesswrong.com/posts/TAbQHFwGD4E3jCMnt/is-it-a-coincidence-that-gpt-3-requires-roughly-the-same,Is it a coincidence that GPT-3 requires roughly the same amount of compute as is necessary to emulate the human brain?,['RomanS'],2023-02-10T16:26:11Z,lesswrong,, 158317,https://www.lesswrong.com/posts/3gi8ryZJdiKKB7qDP/program-searches,program searches,['Tamsin Leake'],2022-09-05T20:04:18Z,lesswrong,, 158328,https://www.lesswrong.com/posts/kAmgdEjq2eYQkB5PP/douglas-hofstadter-changes-his-mind-on-deep-learning-and-ai,Douglas Hofstadter changes his mind on Deep Learning & AI risk (June 2023)?,['gwern'],2023-07-03T00:48:47Z,lesswrong,, 158343,https://www.lesswrong.com/posts/PQaC6pmnPxF8DgpbJ/transformative-ai-is-a-process,Transformative AI is a process,['meijer1973'],2023-06-08T08:57:36Z,lesswrong,, 158371,https://www.lesswrong.com/posts/3RHKkJAvKsMtCRPpc/what-messy-problems-do-you-see-deep-reinforcement-learning,What messy problems do you see Deep Reinforcement Learning applicable to?,['Riccardo Volpato'],2020-04-05T17:43:46Z,lesswrong,, 158380,https://www.lesswrong.com/posts/Mb96sKbqERmcx22hw/runaway-optimizers-in-mind-space,Runaway Optimizers in Mind Space,['silentbob'],2023-07-16T14:26:45Z,lesswrong,, 158398,https://www.lesswrong.com/posts/zj7rjpAfuADkr7sqd/could-we-automate-ai-alignment-research-1,Could We Automate AI Alignment Research?,['Stephen McAleese'],2023-08-10T12:17:05Z,lesswrong,, 158427,https://www.lesswrong.com/posts/KgWticBMH2MxgdmYc/measures-risk-death-and-war,"Measures, Risk, Death, and War",['Vaniver'],2011-12-20T23:37:12Z,lesswrong,, 
158448,https://www.lesswrong.com/posts/XKraEJrQRfzbCtzKN/distillation-of-how-likely-is-deceptive-alignment,"Distillation of ""How Likely Is Deceptive Alignment?""",['NickGabs'],2022-11-18T16:31:38Z,lesswrong,, 158469,https://www.lesswrong.com/posts/bNCDexejSZpkuu3yz/you-can-use-gpt-4-to-create-prompt-injections-against-gpt-4,You can use GPT-4 to create prompt injections against GPT-4,['WitchBOT'],2023-04-06T20:39:52Z,lesswrong,, 158485,https://www.lesswrong.com/posts/Y7WP47tL9zQwkLTqZ/a-conceptual-precursor-to-today-s-language-machines-shannon,A conceptual precursor to today's language machines [Shannon],['Bill Benzon'],2023-11-15T13:50:51Z,lesswrong,, 158494,https://www.lesswrong.com/posts/6iDpq3GoNpfYiuBa3/many-important-technologies-start-out-as-science-fiction,Many important technologies start out as science fiction before becoming real,['trevor'],2023-02-10T09:36:30Z,lesswrong,, 158504,https://www.lesswrong.com/posts/8cwwtEzeiFGZpBiwt/what-s-your-viewpoint-on-the-likelihood-of-gpt-5-being-able,"What's your viewpoint on the likelihood of GPT-5 being able to autonomously create, train, and implement an AI superior to GPT-5?",['Super AGI'],2023-05-26T01:43:48Z,lesswrong,, 158515,https://www.lesswrong.com/posts/wx25pcqM6gvhPoJ4f/research-questions-from-stained-glass-windows,Research Questions from Stained Glass Windows,['StefanHex'],2022-06-08T12:38:45Z,lesswrong,, 158529,https://www.lesswrong.com/posts/amz29pbNZ9nE6z4ry/generative-episodic-objectives-for-safe-ai,"Generative, Episodic Objectives for Safe AI",['Michael Glass'],2022-10-05T23:18:00Z,lesswrong,, 158547,https://www.lesswrong.com/posts/hctRrzKqrtwj2ttdm/evaluating-strategic-reasoning-in-gpt-models,Evaluating strategic reasoning in GPT models,['phelps-sg'],2023-05-25T11:51:32Z,lesswrong,, 158558,https://www.lesswrong.com/posts/BqZDnv66x7FxGHqCL/identification-of-natural-modularity,Identification of Natural Modularity,['Stephen Fowler'],2022-06-25T15:05:18Z,lesswrong,, 158569,https://www.lesswrong.com/posts/Sf99QEqGD76Z7NBiq/are-you-stably-aligned,Are you stably aligned?,['Seth Herd'],2023-02-24T22:08:23Z,lesswrong,, 158584,https://www.lesswrong.com/posts/Ck5ywHRHAjMmSoomy/limits-of-superintelligence,Limits of Superintelligence,['Aleksei Petrenko'],2022-12-13T12:19:15Z,lesswrong,, 158593,https://www.lesswrong.com/posts/knpAQ4F3gmguxy39z/allais-malaise,Allais Malaise,['Eliezer Yudkowsky'],2008-01-21T00:40:01Z,lesswrong,, 158606,https://www.lesswrong.com/posts/kbndPER7cvr3FAZcb/study-1b-this-one-weird-trick-does-not-cause-incorrectness,Study 1b: This One Weird Trick does NOT cause incorrectness cascades,['Robert_AIZI'],2023-04-20T18:10:59Z,lesswrong,, 158625,https://www.lesswrong.com/posts/fe4x77eHLJ8GHDAu7/emergent-analogical-reasoning-in-large-language-models,Emergent Analogical Reasoning in Large Language Models,['Roman Leventov'],2023-03-22T05:18:51Z,lesswrong,, 158641,https://www.lesswrong.com/posts/Ge95mQC6W3cgaFSNp/what-is-the-risk-of-asking-a-counterfactual-oracle-a,What is the risk of asking a counterfactual oracle a question that already had its answer erased?,['Chris_Leong'],2023-02-03T03:13:11Z,lesswrong,, 158656,https://www.lesswrong.com/posts/fEAGPyHR9GaK2cwRq/five-neglected-work-areas-that-could-reduce-ai-risk,Five neglected work areas that could reduce AI risk,"['CharlotteS', 'Aaron_Scher']",2023-09-24T02:03:30Z,lesswrong,, 158694,https://www.lesswrong.com/posts/wjA6vAnTWxJSQKadK/decision-theory-but-also-ghosts,Decision Theory but also Ghosts,['eva_'],2022-11-20T13:24:52Z,lesswrong,, 
158708,https://www.lesswrong.com/posts/ZNXiKT3dNvLzj7d6m/what-s-in-your-list-of-important-technical-projects,What's in your list of important technical projects/experiments to run for AI alignment?,['watermark'],2023-09-06T14:58:22Z,lesswrong,, 158717,https://www.lesswrong.com/posts/uAeALDRK3NuMpjoDK/pessimistic-shard-theory,Pessimistic Shard Theory,['Garrett Baker'],2023-01-25T00:59:34Z,lesswrong,, 158735,https://www.lesswrong.com/posts/X8yhso2noTKXfFE8s/ai-safety-newsletter-5-geoffrey-hinton-speaks-out-on-ai-risk,"AI Safety Newsletter #5: Geoffrey Hinton speaks out on AI risk, the White House meets with AI labs, and Trojan attacks on language models","['Dan H', 'Akash', 'aogara']",2023-05-09T15:26:56Z,lesswrong,, 158761,https://www.lesswrong.com/posts/CA7iLZHNT5xbLK59Y/did-bengio-and-tegmark-lose-a-debate-about-ai-x-risk-against,Did Bengio and Tegmark lose a debate about AI x-risk against LeCun and Mitchell?,['Karl von Wendt'],2023-06-25T16:59:49Z,lesswrong,, 158778,https://www.lesswrong.com/posts/EFFQLG6qcBNfHS5M9/is-keeping-ai-in-the-box-during-training-enough,"Is keeping AI ""in the box"" during training enough?",['tgb'],2021-07-06T15:17:45Z,lesswrong,, 158790,https://www.lesswrong.com/posts/HvgfNjihcDjCtEctE/eleuther-releases-llemma-an-open-language-model-for,Eleuther releases Llemma: An Open Language Model For Mathematics,['mako yass'],2023-10-17T20:03:45Z,lesswrong,, 158806,https://www.lesswrong.com/posts/q5qoG7gXuntKgBNoR/epiphenomenal-oracles-ignore-holes-in-the-box,Epiphenomenal Oracles Ignore Holes in the Box,['SilentCal'],2018-01-31T20:08:57Z,lesswrong,, 158826,https://www.lesswrong.com/posts/zjym2uaPsg9n3EjY6/ai-safety-in-china-part-2,AI Safety in China: Part 2,['Lao Mein'],2023-05-22T14:50:54Z,lesswrong,, 158845,https://www.lesswrong.com/posts/MSGMeKgPLrnMyPJYy/yes-avoiding-extinction-from-ai-is-an-urgent-priority-a,"Yes, avoiding extinction from AI *is* an urgent priority: a response to Seth Lazar, Jeremy Howard, and Arvind Narayanan.",['Soroush Pour'],2023-06-01T13:38:16Z,lesswrong,, 158866,https://www.lesswrong.com/posts/soj9YdzDCWaB8uSTP/the-lesson-to-unlearn,The Lesson To Unlearn,['Ben Pace'],2019-12-08T00:50:48Z,lesswrong,, 158875,https://www.lesswrong.com/posts/2ZKLaqKLr8TkKAxRW/data-for-irl-what-is-needed-to-learn-human-values,Data for IRL: What is needed to learn human values?,['Jan Wehner'],2022-10-03T09:23:34Z,lesswrong,, 158903,https://www.lesswrong.com/posts/ubHeLGc73iDvP6cTN/how-doomed-are-large-organizations,How Doomed are Large Organizations?,['Zvi'],2020-01-21T12:20:01Z,lesswrong,, 158916,https://www.lesswrong.com/posts/JjqZexMgvarBFMKPs/recreating-the-caring-drive,Recreating the caring drive,['Catnee'],2023-09-07T10:41:16Z,lesswrong,, 158932,https://www.lesswrong.com/posts/TKKLL9Y4iCA6RMg8b/what-criterion-would-you-use-to-select-companies-likely-to,What criterion would you use to select companies likely to cause AI doom?,['amaury lorin'],2023-07-13T20:31:31Z,lesswrong,, 158941,https://www.lesswrong.com/posts/biY9kvpStw8QrvD6T/ai-risk-management-framework-or-nist,AI Risk Management Framework | NIST,['DragonGod'],2023-01-26T15:27:20Z,lesswrong,, 158958,https://www.lesswrong.com/posts/ecbpjmxc833roBxj3/is-risk-aversion-really-irrational,Is risk aversion really irrational ?,['kilobug'],2012-01-31T20:34:14Z,lesswrong,, 158972,https://www.lesswrong.com/posts/XNBZPbxyYhmoqD87F/llms-and-computation-complexity,LLMs and computation complexity,['Jonathan Marcus'],2023-04-28T17:48:04Z,lesswrong,, 
158988,https://www.lesswrong.com/posts/RAA6oH34eTMYb5CGo/shh-don-t-tell-the-ai-it-s-likely-to-be-evil,"Shh, don't tell the AI it's likely to be evil",['naterush'],2022-12-06T03:35:10Z,lesswrong,, 158999,https://www.lesswrong.com/posts/EwKk5xdvxhSn3XHsD/why-i-believe-llms-do-not-have-human-like-emotions,Why I Believe LLMs Do Not Have Human-like Emotions,['OneManyNone'],2023-05-22T15:46:13Z,lesswrong,, 159011,https://www.lesswrong.com/posts/LdENjKB6G6fZDRZFw/how-many-philosophers-accept-the-orthogonality-thesis,How many philosophers accept the orthogonality thesis ? Evidence from the PhilPapers survey,['Paperclip Minimizer'],2018-06-16T12:11:50Z,lesswrong,, 159023,https://www.lesswrong.com/posts/khKBJkk6T9hAQ69zT/aspiring-ai-safety-researchers-should-argmax-over-agi,Aspiring AI safety researchers should ~argmax over AGI timelines,['Ryan Kidd'],2023-03-03T02:04:52Z,lesswrong,, 159035,https://www.lesswrong.com/posts/pT86qTHDALskxCXsC/alex-lawsen-on-forecasting-ai-progress,Alex Lawsen On Forecasting AI Progress,['Michaël Trazzi'],2022-09-06T09:32:54Z,lesswrong,, 159056,https://www.lesswrong.com/posts/8RZmosxyZbtbD2gBJ/chatgpt-s-ontological-landscape,ChatGPT’s Ontological Landscape,['Bill Benzon'],2023-11-01T15:12:04Z,lesswrong,, 159073,https://www.lesswrong.com/posts/PAurx75YfTrghwMwf/agis-may-value-intrinsic-rewards-more-than-extrinsic-ones,AGIs may value intrinsic rewards more than extrinsic ones,['catubc'],2022-11-17T21:49:31Z,lesswrong,, 159085,https://www.lesswrong.com/posts/ADwayvunaJqBLzawa/contra-hofstadter-on-gpt-3-nonsense,Contra Hofstadter on GPT-3 Nonsense,['rictic'],2022-06-15T21:53:31Z,lesswrong,, 159097,https://www.lesswrong.com/posts/a3LEEh8oKR9E2FH7B/understanding-the-tensor-product-formulation-in-transformer,Understanding the tensor product formulation in Transformer Circuits,['Tom Lieberum'],2021-12-24T18:05:54Z,lesswrong,, 159107,https://www.lesswrong.com/posts/vquDf3fbGyzH7Ryav/dual-useness-is-a-ratio,Dual-Useness is a Ratio,['jimrandomh'],2023-04-06T05:46:48Z,lesswrong,, 159122,https://www.lesswrong.com/posts/3pKXC62C98EgCeZc4/complex-behavior-from-simple-sub-agents,Complex Behavior from Simple (Sub)Agents,['moridinamael'],2019-05-10T21:44:05Z,lesswrong,, 159155,https://www.lesswrong.com/posts/HoQ8WaEHXdkaMbpzx/can-we-achieve-agi-alignment-by-balancing-multiple-human,Can we achieve AGI Alignment by balancing multiple human objectives?,['Ben Smith'],2022-07-03T02:51:34Z,lesswrong,, 159182,https://www.lesswrong.com/posts/4xxK4inefuNbkCYEg/amending-the-general-pupose-intelligence-arguing-the,"Amending the ""General Pupose Intelligence: Arguing the Orthogonality Thesis""",['diegocaleiro'],2013-03-13T23:21:17Z,lesswrong,, 159193,https://www.lesswrong.com/posts/PfoJLZg2e4xXt2Q6a/a-smart-enough-llm-might-be-deadly-simply-if-you-run-it-for,A smart enough LLM might be deadly simply if you run it for long enough,['Mikhail Samin'],2023-05-05T20:49:31Z,lesswrong,, 159208,https://www.lesswrong.com/posts/BSNFKi3aym7DtSnTX/question-5-the-timeline-hyperparameter,Question 5: The timeline hyperparameter,['Cameron Berg'],2022-02-14T16:38:17Z,lesswrong,, 159231,https://www.lesswrong.com/posts/wv8cKEyaRqLfHqHzZ/why-i-think-abrupt-ai-takeoff,Why I Think Abrupt AI Takeoff,['lincolnquirk'],2022-07-17T17:04:06Z,lesswrong,, 159245,https://www.lesswrong.com/posts/Y8nHoRdmiYxo5qcTa/proof-of-posteriority-a-defense-against-ai-generated,Proof of posteriority: a defense against AI-generated misinformation,['jchan'],2023-07-17T12:04:30Z,lesswrong,, 
159259,https://www.lesswrong.com/posts/S7K8Pmuh768YiYooW/complexity-no-bar-to-ai-or-why-computational-complexity,"Complexity No Bar to AI (Or, why Computational Complexity matters less than you think for real life problems)",['Noosphere89'],2022-08-07T19:55:20Z,lesswrong,, 159274,https://www.lesswrong.com/posts/Y7uR5WqnoG629JgLn/ai-box-log,AI Box Log,['Dorikka'],2012-01-27T04:47:49Z,lesswrong,, 159298,https://www.lesswrong.com/posts/SbadvzWbufzX9iWJf/they-gave-llms-access-to-physics-simulators,They gave LLMs access to physics simulators,['ryan_b'],2022-10-17T21:21:57Z,lesswrong,, 159312,https://www.lesswrong.com/posts/gPPduz7pTJHotuut6/value-learning-for-moral-essentialists,Value learning for moral essentialists,['Charlie Steiner'],2019-05-06T09:05:46Z,lesswrong,, 159324,https://www.lesswrong.com/posts/7ng5dKeGuJt4BmqgH/doom-doubts-is-inner-alignment-a-likely-problem,Doom doubts - is inner alignment a likely problem?,['Crissman'],2022-06-28T12:42:16Z,lesswrong,, 159335,https://www.lesswrong.com/posts/AncrLc5iSc4tmaYBJ/against-boltzmann-mesaoptimizers,Against Boltzmann mesaoptimizers,['porby'],2023-01-30T02:55:12Z,lesswrong,, 159354,https://www.lesswrong.com/posts/AfbY36m8TDYZBjHcu/aixi-and-existential-despair,AIXI and Existential Despair,['paulfchristiano'],2011-12-08T20:03:34Z,lesswrong,, 159366,https://www.lesswrong.com/posts/bG4PR9uSsZqHg2gYY/utility-reward,Utility ≠ Reward,['Vlad Mikulik'],2019-09-05T17:28:13Z,lesswrong,, 159393,https://www.lesswrong.com/posts/hD3zrkRm8AdfZBYtX/my-naive-take-on-risks-from-learned-optimization,My (naive) take on Risks from Learned Optimization,['artkpv'],2022-10-31T10:59:40Z,lesswrong,, 159415,https://www.lesswrong.com/posts/EdEhGPEJi6dueQXv2/toy-model-of-the-ai-control-problem-animated-version,Toy model of the AI control problem: animated version,['Stuart_Armstrong'],2017-10-10T11:06:42Z,lesswrong,, 159431,https://www.lesswrong.com/posts/mMDNeNfEKCKPjJTNC/forecasting-transformative-ai-part-1-what-kind-of-ai,"Forecasting Transformative AI, Part 1: What Kind of AI?",['HoldenKarnofsky'],2021-09-24T00:46:49Z,lesswrong,, 159456,https://www.lesswrong.com/posts/6XCTppoPAMdKCPFb4/oracles-reject-all-deals-break-superrationality-with-1,"Oracles: reject all deals - break superrationality, with superrationality",['Stuart_Armstrong'],2019-12-05T13:51:27Z,lesswrong,, 159472,https://www.lesswrong.com/posts/pAnvMYd9mqDT97shk/muddling-along-is-more-likely-than-dystopia,Muddling Along Is More Likely Than Dystopia,['Jeffrey Heninger'],2023-10-20T21:25:15Z,lesswrong,, 159494,https://www.lesswrong.com/posts/eweeg8iHX5SK5oHvs/fast-takeoff-in-biological-intelligence,Fast Takeoff in Biological Intelligence,['anonymous'],2020-04-25T12:21:24Z,lesswrong,, 159510,https://www.lesswrong.com/posts/ybYBCK9D7MZCcdArB/how-to-measure-anything,How to Measure Anything,['lukeprog'],2013-08-07T04:05:58Z,lesswrong,, 159534,https://www.lesswrong.com/posts/hLWi6DQzBCChpHQrG/retired-article-agi-with-internet-access-why-we-won-t-stuff,(retired article) AGI With Internet Access: Why we won't stuff the genie back in its bottle.,['Max TK'],2023-03-18T03:43:10Z,lesswrong,, 159564,https://www.lesswrong.com/posts/BuaFZud9BwkiSCGpd/alignment-might-never-be-solved-by-humans-or-ai,"Alignment Might Never Be Solved, By Humans or AI",['interstice'],2022-10-07T16:14:37Z,lesswrong,, 159587,https://www.lesswrong.com/posts/6sQMYcDKrQzg7vCnH/chat-gpt-s-views-on-metaphysics-and-ethics,Chat GPT's views on Metaphysics and Ethics,['Cole Killian'],2022-12-03T18:12:19Z,lesswrong,, 
159603,https://www.lesswrong.com/posts/2PpRAXRbNf3rrbF9X/rishi-sunak-mentions-existential-threats-in-talk-with-openai,"Rishi Sunak mentions ""existential threats"" in talk with OpenAI, DeepMind, Anthropic CEOs","['Arjun Panickssery', 'Baldassare Castiglione', 'Cleo Nardo']",2023-05-24T21:06:32Z,lesswrong,, 159615,https://www.lesswrong.com/posts/STZBqw7SAWwjjna6k/you-can-t-believe-in-bayes,You can't believe in Bayes,['PhilGoetz'],2009-06-09T18:03:21Z,lesswrong,, 159625,https://www.lesswrong.com/posts/HnC29723hm6kJT7KP/taking-ai-risk-seriously-thoughts-by-critch,"""Taking AI Risk Seriously"" (thoughts by Critch)",['Raemon'],2018-01-29T09:27:04Z,lesswrong,, 159660,https://www.lesswrong.com/posts/XozoiSq64mHvy2GJ4/upgrading-imagination-the-promise-of-dall-e-2-as-a-tool-for,Upgrading Imagination: The Promise Of DALL-E 2 As A Tool For Thought,['Tharin'],2022-05-06T18:12:32Z,lesswrong,, 159669,https://www.lesswrong.com/posts/AMwzjjvFxEgxvL7xe/decision-theories-a-semi-formal-analysis-part-iii,"Decision Theories: A Semi-Formal Analysis, Part III",['orthonormal'],2012-04-14T19:34:39Z,lesswrong,, 159683,https://www.lesswrong.com/posts/BmTG3tiBnqyckA3LJ/open-source-llms-may-prove-bostrom-s-vulnerable-world,Open-source LLMs may prove Bostrom's vulnerable world hypothesis,['Roope Ahvenharju'],2023-04-15T19:16:10Z,lesswrong,, 159702,https://www.lesswrong.com/posts/zYv9BQBGnk2EdCwoG/ai-and-the-map-of-your-mind-pattern-recognition,AI and the Map of Your Mind: Pattern Recognition,['Scott Broock'],2023-03-20T17:43:16Z,lesswrong,, 159724,https://www.lesswrong.com/posts/gcmQyyko8szuyJHyu/resources-that-i-think-new-alignment-researchers-should-know,Resources that (I think) new alignment researchers should know about,['Akash'],2022-10-28T22:13:37Z,lesswrong,, 159751,https://www.lesswrong.com/posts/vxLfja7hmcFifAtYd/machine-learning-could-be-fundamentally-unexplainable,Machine learning could be fundamentally unexplainable,['George3d6'],2020-12-16T13:32:36Z,lesswrong,, 159773,https://www.lesswrong.com/posts/F23BE8FgtyShD4rj3/the-calculus-of-newcomb-s-problem,The Calculus of Newcomb's Problem,['Heighn'],2022-04-01T14:41:07Z,lesswrong,, 159782,https://www.lesswrong.com/posts/HPgP96YedpARixbpy/what-do-language-models-know-about-fictional-characters,What do language models know about fictional characters?,['skybrian'],2023-02-22T05:58:43Z,lesswrong,, 159799,https://www.lesswrong.com/posts/icPvmaB4fBxy7Divt/the-new-dot-com-bubble-is-here-it-s-called-online,The new dot com bubble is here: it’s called online advertising,['Gordon Seidoh Worley'],2019-11-18T22:05:28Z,lesswrong,, 159814,https://www.lesswrong.com/posts/7b2RJJQ76hjZwarnj/specification-gaming-the-flip-side-of-ai-ingenuity,Specification gaming: the flip side of AI ingenuity,"['Vika', 'Vlad Mikulik', 'Matthew Rahtz', 'tom4everitt', 'Zac Kenton', 'janleike']",2020-05-06T23:51:58Z,lesswrong,, 159844,https://www.lesswrong.com/posts/tC9NnWHNMN3wSNuXB/pcast-working-group-on-generative-ai-invites-public-input,PCAST Working Group on Generative AI Invites Public Input,['Christopher King'],2023-05-13T22:49:43Z,lesswrong,, 159855,https://www.lesswrong.com/posts/Mj4CWRauhF3DzpLvu/grokking-semi-informative-priors-over-ai-timelines,Grokking “Semi-informative priors over AI timelines”,['anson.ho'],2022-06-12T22:17:06Z,lesswrong,, 159872,https://www.lesswrong.com/posts/Xg6gvec97KmFcxJ33/does-chatgpt-s-performance-warrant-working-on-a-tutor-for,Does ChatGPT’s performance warrant working on a tutor for children? [It’s time to take it to the lab.],['Bill Benzon'],2022-12-19T15:12:06Z,lesswrong,, 
159894,https://www.lesswrong.com/posts/nEzFkaQKPjNnmqfEm/alignment-is-not-enough,Alignment is not enough,['Alan Chan'],2023-01-12T00:33:43Z,lesswrong,, 159921,https://www.lesswrong.com/posts/hqzHbew35Jx4xoDhE/the-agi-needs-to-be-honest,The AGI needs to be honest,['rokosbasilisk'],2021-10-16T19:24:10Z,lesswrong,, 159941,https://www.lesswrong.com/posts/Eu8y4cTxM3pAzwdCf/i-would-have-solved-alignment-but-i-was-worried-that-would,"I Would Have Solved Alignment, But I Was Worried That Would Advance Timelines",['307th'],2023-10-20T16:37:47Z,lesswrong,, 159960,https://www.lesswrong.com/posts/hAnKgips7kPyxJRY3/ai-governance-and-strategy-priorities-talent-gaps-and,"AI Governance & Strategy: Priorities, talent gaps, & opportunities",['Akash'],2023-03-03T18:09:27Z,lesswrong,, 159998,https://www.lesswrong.com/posts/q9yPYG2St2L4SEtKW/requirements-for-a-stem-capable-agi-value-learner-my-case-1,Requirements for a STEM-capable AGI Value Learner (my Case for Less Doom),['RogerDearnaley'],2023-05-25T09:26:31Z,lesswrong,, 160028,https://www.lesswrong.com/posts/pQz97SLCRMwHs6BzF/using-lying-to-detect-human-values,Using lying to detect human values,['Stuart_Armstrong'],2018-03-15T11:37:05Z,lesswrong,, 160037,https://www.lesswrong.com/posts/HWxLQvzJGeXoLPJWd/actadd-steering-language-models-without-optimization,ActAdd: Steering Language Models without Optimization,"['technicalities', 'TurnTrout', 'lisathiergart', 'David Udell', 'Ulisse Mini', 'Monte M']",2023-09-06T17:21:56Z,lesswrong,, 160057,https://www.lesswrong.com/posts/bsteawFidASBXiywa/misc-raw-responses-to-a-tract-of-critical-rationalism,misc raw responses to a tract of Critical Rationalism,['mako yass'],2020-08-14T11:53:11Z,lesswrong,, 160087,https://www.lesswrong.com/posts/zuYRyC3zghzgXLpEW/empirical-risk-minimization-is-fundamentally-confused,Empirical risk minimization is fundamentally confused,['Jesse Hoogland'],2023-03-22T16:58:41Z,lesswrong,, 160097,https://www.lesswrong.com/posts/ruxThPLv2uzaTYgkk/the-gallery-for-painting-transformations-a-gpt-3-analogy,The Gallery for Painting Transformations - A GPT-3 Analogy,['Robert_AIZI'],2023-01-19T23:32:56Z,lesswrong,, 160118,https://www.lesswrong.com/posts/EEoxY5YyqpTMcjJoz/jeff-shainline-thinks-that-there-is-too-much-serendipity-in,"Jeff Shainline thinks that there is too much serendipity in the physics of optical/superconducting computing, suggesting that they were part of the criteria of Cosmological Natural Selection, which could have some fairly lovecraftian implications",['mako yass'],2022-04-01T07:09:15Z,lesswrong,, 160143,https://www.lesswrong.com/posts/AL6jdmpcxESxQTpfQ/is-driving-worth-the-risk,Is driving worth the risk?,['Adam Zerner'],2021-05-11T05:04:48Z,lesswrong,, 160162,https://www.lesswrong.com/posts/Xjua6JkkM3Bbj3iPb/using-chatgpt-for-memory-reconsolidation,Using ChatGPT for memory reconsolidation?,['warrenjordan'],2023-04-13T01:27:46Z,lesswrong,, 160172,https://www.lesswrong.com/posts/jmaZjkzq32pmzoHpF/an-ai-realist-manifesto-neither-doomer-nor-foomer-but-a,"An AI Realist Manifesto: Neither Doomer nor Foomer, but a third more reasonable thing",['PashaKamyshev'],2023-04-10T00:11:54Z,lesswrong,, 160199,https://www.lesswrong.com/posts/zwAHF5tmFDTDD6ZoY/will-gpt-5-be-able-to-self-improve,Will GPT-5 be able to self-improve?,['Nathan Helm-Burger'],2023-04-29T17:34:48Z,lesswrong,, 160211,https://www.lesswrong.com/posts/NciL5tukFCAihrKNN/the-beautiful-magical-enchanted-golden-dall-e-mini-is,The beautiful magical enchanted golden Dall-e Mini is underrated,['p.b.'],2022-06-13T07:58:50Z,lesswrong,, 
160220,https://www.lesswrong.com/posts/GEjzyf7Hjpv9g2uGX/wireheading-and-misalignment-by-composition-on-nethack,Wireheading and misalignment by composition on NetHack,['pierlucadoro'],2023-10-27T17:43:42Z,lesswrong,, 160240,https://www.lesswrong.com/posts/zihjMujStyktz64ie/another-ai-winter,Another AI Winter?,['PeterMcCluskey'],2019-12-25T00:58:49Z,lesswrong,, 160264,https://www.lesswrong.com/posts/pxpiGtyZpxmXg8hHW/quantum-theory-cannot-consistently-describe-the-use-of,Quantum theory cannot consistently describe the use of itself,['avturchin'],2018-09-20T22:04:30Z,lesswrong,, 160279,https://www.lesswrong.com/posts/yTy2Fp8Wm7m8rHHz5/superintelligence-15-oracles-genies-and-sovereigns,"Superintelligence 15: Oracles, genies and sovereigns",['KatjaGrace'],2014-12-23T02:01:03Z,lesswrong,, 160302,https://www.lesswrong.com/posts/kixewxJfuZ23DQDfF/how-should-deepmind-s-chinchilla-revise-our-ai-forecasts,How should DeepMind's Chinchilla revise our AI forecasts?,['Cleo Nardo'],2022-09-15T17:54:57Z,lesswrong,, 160325,https://www.lesswrong.com/posts/habXKpXKaSK2F7vDP/democratic-ai-constitution-round-robin-debate-and-synthesis,Democratic AI Constitution: Round-Robin Debate and Synthesis,['scottviteri'],2023-06-24T19:31:00Z,lesswrong,, 160335,https://www.lesswrong.com/posts/apdXGcQJNuCSrgg4x/protectionism-will-slow-the-deployment-of-ai,Protectionism will Slow the Deployment of AI,['bgold'],2023-01-07T20:57:12Z,lesswrong,, 160350,https://www.lesswrong.com/posts/6xRsdig9FXfGJdinX/the-prospect-of-an-ai-winter,The Prospect of an AI Winter,['Erich_Grunewald'],2023-03-27T20:55:36Z,lesswrong,, 160374,https://www.lesswrong.com/posts/fo3QDCcScruyKeaLu/term-category-for-ai-with-neutral-impact,Term/Category for AI with Neutral Impact?,['isomic'],2023-05-11T22:00:22Z,lesswrong,, 160383,https://www.lesswrong.com/posts/TjbNF8QGJEDqdBEN7/misalignment-harms-can-be-caused-by-low-intelligence-systems,Misalignment Harms Can Be Caused by Low Intelligence Systems,['DialecticEel'],2022-10-11T13:39:19Z,lesswrong,, 160401,https://www.lesswrong.com/posts/RcZeZt8cPk48xxiQ8/anthropomorphic-optimism,Anthropomorphic Optimism,['Eliezer Yudkowsky'],2008-08-04T20:17:28Z,lesswrong,, 160415,https://www.lesswrong.com/posts/SaLc9Dv5ZqD73L3nE/the-self-unaware-ai-oracle,The Self-Unaware AI Oracle,['Steven Byrnes'],2019-07-22T19:04:21Z,lesswrong,, 160440,https://www.lesswrong.com/posts/zEMToQxY5uTKanjTp/why-the-technological-singularity-by-agi-may-never-happen,Why the technological singularity by AGI may never happen,['hippke'],2021-09-03T14:19:54Z,lesswrong,, 160456,https://www.lesswrong.com/posts/3pCdCJxQRKffY2NTu/partial-simulation-extrapolation-a-proposal-for-building,Partial Simulation Extrapolation: A Proposal for Building Safer Simulators,['marc/er'],2023-06-17T13:55:03Z,lesswrong,, 160473,https://www.lesswrong.com/posts/qXgge5EYGRvddSMke/fc-final-can-factored-cognition-schemes-scale,FC final: Can Factored Cognition schemes scale?,['Rafael Harth'],2021-01-24T22:18:56Z,lesswrong,, 160488,https://www.lesswrong.com/posts/meG3Pai2YeRYcwPwS/beyond-algorithmic-equivalence-algorithmic-noise,Beyond algorithmic equivalence: algorithmic noise,['Stuart_Armstrong'],2018-02-28T16:55:36Z,lesswrong,, 160497,https://www.lesswrong.com/posts/9cKf2BBR4X2JTSeiz/are-alignment-researchers-devoting-enough-time-to-improving,Are alignment researchers devoting enough time to improving their research capacity?,['Carson Jones'],2022-11-04T00:58:21Z,lesswrong,, 
160516,https://www.lesswrong.com/posts/PvdvsZQwDr2PD3wFW/dragon-ball-s-hyperbolic-time-chamber,Dragon Ball's Hyperbolic Time Chamber,['gwern'],2012-09-02T23:49:51Z,lesswrong,, 160529,https://www.lesswrong.com/posts/Ekos6nWtoTCqJyGfB/openai-introduce-chatgpt-api-at-1-10th-the-previous-usd,OpenAI introduce ChatGPT API at 1/10th the previous $/token,['Arthur Conmy'],2023-03-01T20:48:52Z,lesswrong,, 160538,https://www.lesswrong.com/posts/mg6jDEuQEjBGtibX7/counterfactual-mugging,Counterfactual Mugging,['Vladimir_Nesov'],2009-03-19T06:08:38Z,lesswrong,, 160550,https://www.lesswrong.com/posts/GfHdNfqxe3cSCfpHL/the-absent-minded-driver,The Absent-Minded Driver,['Wei Dai'],2009-09-16T00:51:46Z,lesswrong,, 160560,https://www.lesswrong.com/posts/sngGzPefhL5obCJue/apply-for-mentorship-in-ai-safety-field-building,Apply for mentorship in AI Safety field-building,['Akash'],2022-09-17T19:06:13Z,lesswrong,, 160571,https://www.lesswrong.com/posts/q9ZSXiiA7wEuRgnkS/ideal-advisor-theories-and-personal-cev,Ideal Advisor Theories and Personal CEV,['lukeprog'],2012-12-25T13:04:47Z,lesswrong,, 160580,https://www.lesswrong.com/posts/MKvtmNGCtwNqc44qm/announcing-aisafety-training,Announcing aisafety.training,['JJ Hepburn'],2023-01-21T01:01:41Z,lesswrong,, 160591,https://www.lesswrong.com/posts/5Jmhdun9crJGAJGyy/why-aren-t-more-of-us-working-to-prevent-ai-hell,Why aren’t more of us working to prevent AI hell?,['Dawn Drescher'],2023-05-04T17:47:25Z,lesswrong,, 160605,https://www.lesswrong.com/posts/zNcLnqHF5rvrTsQJx/zut-allais,Zut Allais!,['Eliezer Yudkowsky'],2008-01-20T03:18:16Z,lesswrong,, 160616,https://www.lesswrong.com/posts/uXGLciramzNfb8Hvz/why-i-m-working-on-model-agnostic-interpretability,Why I'm Working On Model Agnostic Interpretability,['Jessica Rumbelow'],2022-11-11T09:24:10Z,lesswrong,, 160630,https://www.lesswrong.com/posts/D3NspiH2nhKA6B2PE/what-evidence-is-alphago-zero-re-agi-complexity,What Evidence Is AlphaGo Zero Re AGI Complexity?,['RobinHanson'],2017-10-22T02:28:46Z,lesswrong,, 160647,https://www.lesswrong.com/posts/uABbabv5WPZmwzCmP/how-harmful-are-improvements-in-ai-poll,How harmful are improvements in AI? + Poll,"['tilmanr', 'Marius Hobbhahn']",2022-02-15T18:16:08Z,lesswrong,, 160671,https://www.lesswrong.com/posts/GT8uvxBjidrmM3MCv/superintelligence-6-intelligence-explosion-kinetics,Superintelligence 6: Intelligence explosion kinetics,['KatjaGrace'],2014-10-21T01:00:27Z,lesswrong,, 160692,https://www.lesswrong.com/posts/Cty2rSMut483QgBQ2/what-should-ai-owe-to-us-accountable-and-aligned-ai-systems,What Should AI Owe To Us? Accountable and Aligned AI Systems via Contractualist AI Alignment,['xuan'],2022-09-08T15:04:46Z,lesswrong,, 
160735,https://www.lesswrong.com/posts/AMF5cRBjmMeeHB6Rf/values-weren-t-complex-once,"Values Weren't Complex, Once.",['Davidmanheim'],2018-11-25T09:17:02Z,lesswrong,, 160748,https://www.lesswrong.com/posts/ftdCgGmkQ3bPyDadA/phase-transitions-and-agi,Phase transitions and AGI,"['Ege Erdil', 'Metaculus']",2022-03-17T17:22:07Z,lesswrong,, 160764,https://www.lesswrong.com/posts/LBwpubeZSi3ottfjs/aisc5-retrospective-mechanisms-for-avoiding-tragedy-of-the,AISC5 Retrospective: Mechanisms for Avoiding Tragedy of the Commons in Common Pool Resource Problems,"['Ariel Kwiatkowski', 'Quinn', 'bengr']",2021-09-27T16:46:40Z,lesswrong,, 160783,https://www.lesswrong.com/posts/TAJCrxhQNZRomBCko/ai-awareness-through-interaction-with-blatantly-alien-models,AI Awareness through Interaction with Blatantly Alien Models,['VojtaKovarik'],2023-07-28T08:41:08Z,lesswrong,, 160801,https://www.lesswrong.com/posts/4JvE8bywsuWYC4vwM/prototype-of-using-gpt-3-to-generate-textbook-length-content,Prototype of Using GPT-3 to Generate Textbook-length Content,['Rafael Cosman'],2023-01-18T14:25:02Z,lesswrong,, 160823,https://www.lesswrong.com/posts/6Nuw7mLc6DjRY4mwa/the-national-defense-authorization-act-contains-ai,The National Defense Authorization Act Contains AI Provisions,['ryan_b'],2021-01-05T15:51:28Z,lesswrong,, 160836,https://www.lesswrong.com/posts/c5oyHuHaw4AcWy4tf/information-warfare-historically-revolved-around-human,Information warfare historically revolved around human conduits,['trevor'],2023-08-28T18:54:27Z,lesswrong,, 160858,https://www.lesswrong.com/posts/YZ28xp6XDiD9fNwpn/military-ai-as-a-convergent-goal-of-self-improving-ai,Military AI as a Convergent Goal of Self-Improving AI,['avturchin'],2017-11-13T12:17:53Z,lesswrong,, 160881,https://www.lesswrong.com/posts/YQtziXj9hvib6bvXu/what-resources-have-increasing-marginal-utility,What resources have increasing marginal utility?,['Qiaochu_Yuan'],2014-06-14T03:43:14Z,lesswrong,, 160898,https://www.lesswrong.com/posts/n2G5eHHBL3roXoWMy/if-you-lose-enough-good-heart-tokens-will-you-lose-real,"If you lose enough Good Heart Tokens, will you lose real-world money?",['Yitz'],2022-04-01T21:11:20Z,lesswrong,, 160907,https://www.lesswrong.com/posts/umkzHoeH2SJ2EcH6K/are-mixture-of-experts-transformers-more-interpretable-than,Are Mixture-of-Experts Transformers More Interpretable Than Dense Transformers?,['simeon_c'],2022-12-31T11:34:18Z,lesswrong,, 160917,https://www.lesswrong.com/posts/We3jtEHiBfvFzunQx/solving-alignment-by-solving-semantics,"Solving Alignment by ""solving"" semantics",['Q Home'],2022-08-27T04:17:09Z,lesswrong,, 160934,https://www.lesswrong.com/posts/5TYnquzAQPENyZXDa/quantum-immortality-is-decline-of-measure-compensated-by,Quantum immortality: Is decline of measure compensated by merging timelines?,['avturchin'],2018-12-11T19:39:29Z,lesswrong,, 160944,https://www.lesswrong.com/posts/AbkzoSpad4XmHrh2Q/chatgpt-on-spielberg-s-a-i-and-ai-alignment,ChatGPT on Spielberg’s A.I. and AI Alignment,['Bill Benzon'],2022-12-05T21:10:21Z,lesswrong,, 
160962,https://www.lesswrong.com/posts/LfHWhcfK92qh2nwku/transcripts-of-interviews-with-ai-researchers,Transcripts of interviews with AI researchers,['Vael Gates'],2022-05-09T05:57:16Z,lesswrong,, 160979,https://www.lesswrong.com/posts/KBjRoc7WccbMehnbN/report-from-a-civilizational-observer-on-earth,Report from a civilizational observer on Earth,['owencb'],2022-07-09T17:26:09Z,lesswrong,, 161003,https://www.lesswrong.com/posts/xRSSM3qJu7HDEbKnG/empathy-bandaid-for-immediate-ai-catastrophe,Empathy bandaid for immediate AI catastrophe,['installgentoo'],2023-04-05T02:12:55Z,lesswrong,, 161015,https://www.lesswrong.com/posts/E6jHtLoLirckT7Ct4/how-truthful-can-llms-be-a-theoretical-perspective-with-a,How truthful can LLMs be: a theoretical perspective with a request for help from experts on Theoretical CS,['sergia'],2023-03-01T18:39:18Z,lesswrong,, 161025,https://www.lesswrong.com/posts/ip2Cqas89TxsWok3S/aisc-project-satisfia-ai-that-satisfies-without-overdoing-it,AISC project: SatisfIA – AI that satisfies without overdoing it,['Jobst Heitzig'],2023-11-11T18:22:57Z,lesswrong,, 161037,https://www.lesswrong.com/posts/4MYYr8YmN2fonASCi/you-re-in-newcomb-s-box,You're in Newcomb's Box,['HonoreDB'],2011-02-05T20:46:20Z,lesswrong,, 161050,https://www.lesswrong.com/posts/DYcXRiJWiAtbXxNA5/ai-box-experiment-the-acausal-trade-argument,AI-Box Experiment - The Acausal Trade Argument,['XiXiDu'],2011-07-08T09:18:40Z,lesswrong,, 161063,https://www.lesswrong.com/posts/WTA6vmYdQCzTFT4WZ/newcomb-s-problem-a-problem-for-causal-decision-theories,Newcomb's Problem: A problem for Causal Decision Theories,['anonymous'],2010-08-16T11:25:21Z,lesswrong,, 161072,https://www.lesswrong.com/posts/AszKwKyhBPZAnCstA/solomonoff-cartesianism,Solomonoff Cartesianism,['Rob Bensinger'],2014-03-02T17:56:23Z,lesswrong,, 161095,https://www.lesswrong.com/posts/c5GHf2kMGhA4Tsj4g/the-ai-in-a-box-boxes-you,The AI in a box boxes you,['Stuart_Armstrong'],2010-02-02T10:10:13Z,lesswrong,, 161110,https://www.lesswrong.com/posts/3ocLFajphtyHTgNxx/podcast-with-divia-eden-and-ronny-fernandez-on-the-strong,Podcast with Divia Eden and Ronny Fernandez on the strong orthogonality thesis,['DanielFilan'],2023-04-28T01:30:46Z,lesswrong,, 161126,https://www.lesswrong.com/posts/KJPRC3cgtxSXpZEQZ/goal-directedness-exploring-explanations,Goal-directedness: exploring explanations,['Morgan_Rogers'],2022-02-14T16:20:32Z,lesswrong,, 161144,https://www.lesswrong.com/posts/hYekqQ9hLmn3XTZrp/2017-ai-safety-literature-review-and-charity-comparison,2017 AI Safety Literature Review and Charity Comparison,['Larks'],2017-12-24T18:52:32Z,lesswrong,, 161173,https://www.lesswrong.com/posts/rniJ8tPweahjDKJWM/ai-safety-newsletter-7-disinformation-governance,"AI Safety Newsletter #7: Disinformation, Governance Recommendations for AI labs, and Senate Hearings on AI","['Dan H', 'Akash', 'aogara']",2023-05-23T21:47:35Z,lesswrong,, 161210,https://www.lesswrong.com/posts/jBNTf7o2R6bJjbJEk/where-utopias-go-wrong-or-the-four-little-planets,"Where Utopias Go Wrong, or: The Four Little Planets",['ExCeph'],2022-05-27T01:24:19Z,lesswrong,, 161231,https://www.lesswrong.com/posts/g8xh9R7RaNitKtkaa/explicit-optimization-of-global-strategy-fixing-a-bug-in,Explicit Optimization of Global Strategy (Fixing a Bug in UDT1),['Wei Dai'],2010-02-19T01:30:44Z,lesswrong,, 161247,https://www.lesswrong.com/posts/sGBszCBKp6roEd8v5/the-opt-in-revolution-my-vision-of-a-positive-future-with,The Opt-In Revolution — My vision of a positive future with ASI (An experiment with LLM storytelling),['Tachikoma'],2023-07-12T21:08:23Z,lesswrong,, 
161262,https://www.lesswrong.com/posts/LGSotF4bQ2pRSbB8a/a-utility-maximizing-varient-of-aixi,A utility-maximizing varient of AIXI,['AlexMennen'],2012-12-17T03:48:05Z,lesswrong,, 161282,https://www.lesswrong.com/posts/5Yoi7JAsZm7MstbRT/it-is-powerful-it-can-t-be-aimed,"It Is Powerful, It Can't Be Aimed",['Zahima'],2023-09-26T21:13:31Z,lesswrong,, 161301,https://www.lesswrong.com/posts/LERwsN3SYhkCfew6j/discussing-how-to-align-transformative-ai-if-it-s-developed,Discussing how to align Transformative AI if it’s developed very soon,"['elifland', 'CharlotteS']",2022-11-28T16:17:54Z,lesswrong,, 161325,https://www.lesswrong.com/posts/mL89Ze5uZTX3udb3a/is-there-a-ml-agent-that-abandons-it-s-utility-function-out,Is there a ML agent that abandons it's utility function out-of-distribution without losing capabilities?,['Christopher King'],2023-02-22T16:49:01Z,lesswrong,, 161339,https://www.lesswrong.com/posts/hpBtyssp49gP6tgWN/what-environment-properties-select-agents-for-world-modeling,What Environment Properties Select Agents For World-Modeling?,['Thane Ruthenis'],2022-07-23T19:27:50Z,lesswrong,, 161362,https://www.lesswrong.com/posts/6ngxHbpnKYwszFqrc/how-to-know-what-the-ai-knows-an-elk-distillation,How To Know What the AI Knows - An ELK Distillation,['Fabien Roger'],2022-09-04T00:46:31Z,lesswrong,, 161386,https://www.lesswrong.com/posts/q5kio5Tz4C3bDJ8N6/conjecture-a-standing-offer-for-public-debates-on-ai,Conjecture: A standing offer for public debates on AI,['Andrea_Miotti'],2023-06-16T14:33:43Z,lesswrong,, 161396,https://www.lesswrong.com/posts/uxnjXBwr79uxLkifG/comments-on-openai-s-planning-for-agi-and-beyond,"Comments on OpenAI's ""Planning for AGI and beyond""",['So8res'],2023-03-03T23:01:30Z,lesswrong,, 161429,https://www.lesswrong.com/posts/FAwAqtzp53vjFAd4k/chatgpt-banned-in-italy-over-privacy-concerns,ChatGPT banned in Italy over privacy concerns,['Ollie J'],2023-03-31T17:33:10Z,lesswrong,, 161439,https://www.lesswrong.com/posts/7qSHKYRnqyrumEfbt/remarks-1-18-on-gpt-compressed,Remarks 1–18 on GPT (compressed),['Cleo Nardo'],2023-03-20T22:27:26Z,lesswrong,, 161461,https://www.lesswrong.com/posts/aohCk8JeDGiuvGiFZ/why-does-advanced-ai-want-not-to-be-shut-down,Why does advanced AI want not to be shut down?,['RedFishBlueFish'],2023-03-28T04:26:24Z,lesswrong,, 161470,https://www.lesswrong.com/posts/56qQ9yPs37uvsWAvJ/carl-zimmer-on-mind-uploading,Carl Zimmer on mind uploading,['Dr_Manhattan'],2010-12-23T03:13:42Z,lesswrong,, 161490,https://www.lesswrong.com/posts/NGkBfd8LTqcpbQn5Z/biological-anchors-the-trick-that-might-or-might-not-work,Biological Anchors: The Trick that Might or Might Not Work,['Scott Alexander'],2023-08-12T00:53:30Z,lesswrong,, 161512,https://www.lesswrong.com/posts/jmrTMNhA4sKcrGEzu/do-mesa-optimization-problems-correlate-with-low-slack,Do mesa-optimization problems correlate with low-slack?,['sudo -i'],2022-02-04T21:11:10Z,lesswrong,, 161521,https://www.lesswrong.com/posts/MfDvvoMmbjCd3GWpq/if-we-have-human-level-chatbots-won-t-we-end-up-being-ruled-1,"If we have Human-level chatbots, won't we end up being ruled by possible people?",['Erlja Jkdf.'],2022-09-20T13:59:43Z,lesswrong,, 161530,https://www.lesswrong.com/posts/TwfWTLhQZgy2oFwK3/gato-as-the-dawn-of-early-agi,Gato as the Dawn of Early AGI,['David Udell'],2022-05-15T06:52:02Z,lesswrong,, 161553,https://www.lesswrong.com/posts/GQy2BSQG9Dd6vPhs8/kidnapping-and-the-game-of-chicken,Kidnapping and the game of Chicken,['Manfred'],2013-11-03T06:29:10Z,lesswrong,, 
161565,https://www.lesswrong.com/posts/S8zmnWqZCeFaEy3Y8/the-answer-1,The Answer,['Alex Beyman'],2023-03-19T00:09:57Z,lesswrong,, 161579,https://www.lesswrong.com/posts/Jqwb7vEqEFyC6sLLG/solomonoff-induction-and-sleeping-beauty,Solomonoff Induction and Sleeping Beauty,['ike'],2020-11-17T02:29:00Z,lesswrong,, 161588,https://www.lesswrong.com/posts/qNcGzyq5HEWnrx3zP/uploading-what-about-the-carbon-based-version,Uploading: what about the carbon-based version?,['NancyLebovitz'],2012-07-23T08:49:30Z,lesswrong,, 161603,https://www.lesswrong.com/posts/2Enagkgxu49mRjDqe/distilled-agi-safety-from-first-principles,Distilled - AGI Safety from First Principles,['Harrison G'],2022-05-29T00:57:47Z,lesswrong,, 161630,https://www.lesswrong.com/posts/3SJCNX4onzu4FZmoG/alignment-path-to-ai-as-ally-not-slave-nor-foe,"Alignment - Path to AI as ally, not slave nor foe",['ozb'],2023-03-30T14:54:27Z,lesswrong,, 161643,https://www.lesswrong.com/posts/9oxKJghFjk9WmG9YX/the-regulatory-option-a-response-to-near-0-survival-odds,The Regulatory Option: A response to near 0% survival odds,['Matthew Lowenstein'],2022-04-11T22:00:50Z,lesswrong,, 161662,https://www.lesswrong.com/posts/AQDa6HdjjRsGYyJ7Q/proposal-tune-llms-to-use-calibrated-language,Proposal: Tune LLMs to Use Calibrated Language,['OneManyNone'],2023-06-07T21:05:43Z,lesswrong,, 161673,https://www.lesswrong.com/posts/CJp8PtxdnR3kgyir7/crosspost-alphatensor-taste-and-the-scalability-of-ai,"[Crosspost] AlphaTensor, Taste, and the Scalability of AI",['jamierumbelow'],2022-10-09T19:42:11Z,lesswrong,, 161686,https://www.lesswrong.com/posts/NELtoshXv3X88kqBE/a-rough-idea-for-solving-elk-an-approach-for-training,A rough idea for solving ELK: An approach for training generalist agents like GATO to make plans and describe them to humans clearly and honestly.,['Michael Soareverix'],2022-09-08T15:20:50Z,lesswrong,, 161702,https://www.lesswrong.com/posts/yyCkgZiufYtfTk3aN/link-2011-team-may-be-chosen-to-receive-usd1-4-billion-to,"[Link, 2011] Team may be chosen to receive $1.4 billion to simulate human brain",['John_Maxwell'],2012-03-09T21:13:42Z,lesswrong,, 161713,https://www.lesswrong.com/posts/EsBDbtnCizxDLsDT3/how-teams-went-about-their-research-at-ai-safety-camp-1,How teams went about their research at AI Safety Camp edition 8,"['Remmelt', 'Linda Linsefors', 'Kristi Uustalu']",2023-09-09T16:34:06Z,lesswrong,, 161740,https://www.lesswrong.com/posts/q6dQpSfNHCYzKb2mf/performance-guarantees-in-classical-learning-theory-and,Performance guarantees in classical learning theory and infra-Bayesianism,['matolcsid'],2023-02-28T18:37:57Z,lesswrong,, 161757,https://www.lesswrong.com/posts/h5yfjFpdtqE3vATQ6/gatekeeper-victory-ai-box-reflection,Gatekeeper Victory: AI Box Reflection,"['Double', 'DaemonicSigil']",2022-09-09T21:38:39Z,lesswrong,, 161777,https://www.lesswrong.com/posts/ykccy6LXfmTjZcp6S/separating-the-control-problem-from-the-alignment-problem,"Separating the ""control problem"" from the ""alignment problem""",['Yi-Yang'],2023-05-11T09:41:55Z,lesswrong,, 161798,https://www.lesswrong.com/posts/BDTfddkttFXHqGnEi/the-shallow-reality-of-deep-learning-theory,The shallow reality of 'deep learning theory',['Jesse Hoogland'],2023-02-22T04:16:11Z,lesswrong,, 161812,https://www.lesswrong.com/posts/JkKeFt2u4k4Q4Bmnx/linkpost-solving-quantitative-reasoning-problems-with,[Linkpost] Solving Quantitative Reasoning Problems with Language Models,['Yitz'],2022-06-30T18:58:01Z,lesswrong,, 
161841,https://www.lesswrong.com/posts/7YbQm7zA5NvMhEzyN/autoregressive-propaganda,Autoregressive Propaganda,['lsusr'],2021-08-22T02:18:49Z,lesswrong,, 161857,https://www.lesswrong.com/posts/vuN57BvWyT7WZ3b6p/applying-utility-functions-to-humans-considered-harmful,Applying utility functions to humans considered harmful,['Kaj_Sotala'],2010-02-03T19:22:57Z,lesswrong,, 161872,https://www.lesswrong.com/posts/7yEFHisCQSCpLnqWQ/mr-meeseeks-as-an-ai-capability-tripwire,Mr. Meeseeks as an AI capability tripwire,['Eric Zhang'],2023-05-19T11:33:54Z,lesswrong,, 161885,https://www.lesswrong.com/posts/tyE4orCtR8H9eTiEr/stuxnet-not-skynet-humanity-s-disempowerment-by-ai,"Stuxnet, not Skynet: Humanity's disempowerment by AI",['Roko'],2023-11-04T22:23:55Z,lesswrong,, 161906,https://www.lesswrong.com/posts/vn9huEHsCGEQzTfrW/creating-a-self-referential-system-prompt-for-gpt-4,Creating a self-referential system prompt for GPT-4,['Ozyrus'],2023-05-17T14:13:29Z,lesswrong,, 161926,https://www.lesswrong.com/posts/c73kPDr8pZGdZSe3q/solving-selfishness-for-udt,"""Solving"" selfishness for UDT",['Stuart_Armstrong'],2014-10-27T17:51:01Z,lesswrong,, 161944,https://www.lesswrong.com/posts/d2jgBurQygbXzhPxc/how-special-are-human-brains-among-animal-brains,How special are human brains among animal brains?,['zhukeepa'],2020-04-01T01:35:37Z,lesswrong,, 161954,https://www.lesswrong.com/posts/LDngQb2AJsjTZnWEP/the-darwin-results,The Darwin Results,['Zvi'],2017-11-25T13:30:00Z,lesswrong,, 161973,https://www.lesswrong.com/posts/iQx2eeHKLwgBYdWPZ/retrospective-on-gpt-4-predictions-after-the-release-of-gpt,Retrospective on ‘GPT-4 Predictions’ After the Release of GPT-4,['Stephen McAleese'],2023-03-17T18:34:17Z,lesswrong,, 161992,https://www.lesswrong.com/posts/yEQuEsWPQAaXzhdxz/foom-liability,Foom Liability,['PeterMcCluskey'],2023-06-30T03:55:23Z,lesswrong,, 162008,https://www.lesswrong.com/posts/ZD5meZwgfFJD2wDB5/a-framework-of-prediction-technologies-1,A Framework of Prediction Technologies,['isaduan'],2021-10-03T10:26:35Z,lesswrong,, 162033,https://www.lesswrong.com/posts/FBEaheqfmDgL6gB5x/superintelligence-14-motivation-selection-methods,Superintelligence 14: Motivation selection methods,['KatjaGrace'],2014-12-16T02:00:53Z,lesswrong,, 162055,https://www.lesswrong.com/posts/zdKrgxwhE5pTiDpDm/practical-ai-risk-i-watching-large-compute,Practical AI risk I: Watching large compute,['Gustavo Ramires'],2022-12-24T13:25:26Z,lesswrong,, 162070,https://www.lesswrong.com/posts/SBPrRQYHyKFthZdRH/should-ai-systems-have-to-identify-themselves,Should AI systems have to identify themselves?,['Darren McKee'],2022-12-31T02:57:11Z,lesswrong,, 162085,https://www.lesswrong.com/posts/qdStMFDMrWAnTqNWL/gpt-4-predictions,GPT-4 Predictions,['Stephen McAleese'],2023-02-17T23:20:25Z,lesswrong,, 162108,https://www.lesswrong.com/posts/pZk5b7bC9fxnR9hvR/why-are-we-sure-that-ai-will-want-something,"Why are we sure that AI will ""want"" something?",['shminux'],2022-09-16T20:35:41Z,lesswrong,, 162124,https://www.lesswrong.com/posts/RjuWbiHF6BGjHACiZ/all-gpt-skills-are-translation,All GPT skills are translation,['p.b.'],2020-12-13T20:06:00Z,lesswrong,, 162142,https://www.lesswrong.com/posts/77xLbXs6vYQuhT8hq/why-ai-may-not-foom,Why AI may not foom,['John_Maxwell'],2013-03-24T08:11:55Z,lesswrong,, 162169,https://www.lesswrong.com/posts/BzbdYkBRMwzFAqLcg/the-gradient-the-artificiality-of-alignment,The Gradient – The Artificiality of Alignment,['mic'],2023-10-08T04:06:40Z,lesswrong,, 
162186,https://www.lesswrong.com/posts/NKbF8RvNiQyfWoz8e/beyond-rewards-and-values-a-non-dualistic-approach-to,Beyond Rewards and Values: A Non-dualistic Approach to Universal Intelligence,['Akira Pyinya'],2022-12-30T19:05:25Z,lesswrong,, 162208,https://www.lesswrong.com/posts/SJr7accmKvz3uGLp2/the-elk-framing-i-ve-used,The ELK Framing I’ve Used,['sudo -i'],2022-09-19T10:28:47Z,lesswrong,, 162226,https://www.lesswrong.com/posts/4pSqXrN3BLKhbRXpW/deepmind-has-made-a-general-inductor-making-sense-of-sensory,"Deepmind has made a general inductor (""Making sense of sensory input"")",['mako yass'],2021-02-02T02:54:26Z,lesswrong,, 162236,https://www.lesswrong.com/posts/BrKPfuPaqk8gyHciR/what-s-going-on-llms-and-is-a-sentences,What’s going on? LLMs and IS-A sentences,['Bill Benzon'],2023-11-08T16:58:58Z,lesswrong,, 162249,https://www.lesswrong.com/posts/CbQBJaZCrGMJEBz8g/reward-hacking-and-goodhart-s-law-by-evolutionary-algorithms,Reward hacking and Goodhart’s law by evolutionary algorithms,['Jan_Kulveit'],2018-03-30T07:57:05Z,lesswrong,, 162267,https://www.lesswrong.com/posts/f4mbXjhQ2yaMrgLBG/are-human-brains-universal,Are Human Brains Universal?,['DragonGod'],2022-09-15T15:15:21Z,lesswrong,, 162277,https://www.lesswrong.com/posts/QWuegBA9kGBv3xBFy/the-colliding-exponentials-of-ai,The Colliding Exponentials of AI,['Vermillion'],2020-10-14T23:31:21Z,lesswrong,, 162293,https://www.lesswrong.com/posts/8oqzGF2p8H7JhFCdY/acausal-trade-naturally-results-in-the-nash-bargaining,Acausal trade naturally results in the Nash bargaining solution,['Christopher King'],2023-05-08T18:13:09Z,lesswrong,, 162302,https://www.lesswrong.com/posts/dpNkK3LJBLtaJfAvu/on-taking-ai-risk-seriously,On taking AI risk seriously,['Eleni Angelou'],2023-03-13T05:50:57Z,lesswrong,, 162311,https://www.lesswrong.com/posts/Lx4BfG4kjNqxzfbt9/munk-debate-on-ai-a-few-observations-and-opinions,Munk Debate on AI: a few observations and opinions,['Yarrow Bouchard'],2023-11-10T02:00:55Z,lesswrong,, 162330,https://www.lesswrong.com/posts/cEQyKsreistXxFEeF/alignment-as-game-design,Alignment as Game Design,['Shoshannah Tekofsky'],2022-07-16T22:36:16Z,lesswrong,, 162349,https://www.lesswrong.com/posts/FGWfTxsXk7euh4QGk/i-think-eliezer-should-go-on-glenn-beck,I Think Eliezer Should Go on Glenn Beck,['Lao Mein'],2023-06-30T03:12:58Z,lesswrong,, 162362,https://www.lesswrong.com/posts/7N7JGyTmX5Gnjhfrk/is-the-star-trek-federation-really-incapable-of-building-ai,Is the Star Trek Federation really incapable of building AI?,['Kaj_Sotala'],2018-03-18T10:30:03Z,lesswrong,, 162382,https://www.lesswrong.com/posts/JdGuqg7ifRwPiirCe/wentworth-and-larsen-on-buying-time,Wentworth and Larsen on buying time,"['Akash', 'Thomas Larsen', 'johnswentworth']",2023-01-09T21:31:25Z,lesswrong,, 162412,https://www.lesswrong.com/posts/CDJmafoeZJuXHbKDW/towards-better-milestones-for-monitoring-ai-capabilities,Towards Better Milestones for Monitoring AI Capabilities,['snewman'],2023-09-27T21:18:31Z,lesswrong,, 162433,https://www.lesswrong.com/posts/t6ZGSro4Q8fRKPont/character-alignment,Character alignment,['p.b.'],2022-09-20T08:27:36Z,lesswrong,, 162446,https://www.lesswrong.com/posts/YbGJqWNwwKsEDrHcf/alignment-as-function-fitting,Alignment as Function Fitting,['A.H.'],2023-05-06T11:38:04Z,lesswrong,, 162465,https://www.lesswrong.com/posts/Q8tyoaMFmW8R9w9db/reference-post-formal-vs-effective-pre-commitment,Reference Post: Formal vs. Effective Pre-Commitment,['Chris_Leong'],2018-08-27T12:04:53Z,lesswrong,, 
162478,https://www.lesswrong.com/posts/5bcKDg6gg5mFxdHbu/linkpost-deception-abilities-emerged-in-large-language,[Linkpost] Deception Abilities Emerged in Large Language Models,['Bogdan Ionut Cirstea'],2023-08-03T17:28:19Z,lesswrong,, 162501,https://www.lesswrong.com/posts/MWSCqzPrAbNrYoqWv/goal-misgeneralization-is-elk-hard,Goal-misgeneralization is ELK-hard,['rokosbasilisk'],2023-06-10T09:32:50Z,lesswrong,, 162523,https://www.lesswrong.com/posts/JLH6ido4qoBtYmnNR/machines-vs-memes-part-1-ai-alignment-and-memetics,Machines vs Memes Part 1: AI Alignment and Memetics,['Harriet Farlow'],2022-05-31T22:03:18Z,lesswrong,, 162536,https://www.lesswrong.com/posts/4Y5hjr32xfTKkRtrS/pz-myers-on-the-infeasibility-of-whole-brain-emulation,PZ Myers on the Infeasibility of Whole Brain Emulation,['Peter Wildeford'],2012-07-14T18:13:52Z,lesswrong,, 162548,https://www.lesswrong.com/posts/s9aB6fLiAmd2d8GRK/is-recursive-self-alignment-possible,Is recursive self-alignment possible?,['No77e'],2023-01-03T09:15:21Z,lesswrong,, 162563,https://www.lesswrong.com/posts/TZPbm3BRkWWTm9ecC/chatgpt-understands-but-largely-does-not-generate-spanglish,"ChatGPT understands, but largely does not generate Spanglish (and other code-mixed) text",['Milan W'],2022-12-23T17:41:00Z,lesswrong,, 162578,https://www.lesswrong.com/posts/r99tazGiLgzqFX7ka/playing-with-dall-e-2,Playing with DALL·E 2,['Dave Orr'],2022-04-07T18:49:16Z,lesswrong,, 162592,https://www.lesswrong.com/posts/5gQLrJr2yhPzMCcni/the-optimizer-s-curse-and-how-to-beat-it,The Optimizer's Curse and How to Beat It,['lukeprog'],2011-09-16T02:46:23Z,lesswrong,, 162602,https://www.lesswrong.com/posts/4ARaTpNX62uaL86j6/the-hidden-complexity-of-wishes,The Hidden Complexity of Wishes,['Eliezer Yudkowsky'],2007-11-24T00:12:33Z,lesswrong,, 162618,https://www.lesswrong.com/posts/F6jAKPDMPdyAEuCPQ/what-can-thought-experiments-do,What can thought-experiments do?,['Cleo Nardo'],2023-01-17T00:35:17Z,lesswrong,, 162636,https://www.lesswrong.com/posts/KbEbgbXL64Rqhsi9k/ir-rationality-of-pascal-s-wager,(Ir)rationality of Pascal's wager,['filozof3377@gmial.com'],2020-08-03T20:57:24Z,lesswrong,, 162652,https://www.lesswrong.com/posts/EG3TQmwtT26gnZM2P/job-ai-standards-development-research-assistant-1,[Job]: AI Standards Development Research Assistant,['Tony Barrett'],2022-10-14T20:27:01Z,lesswrong,, 162663,https://www.lesswrong.com/posts/tJ6aGSTctmjCz2o57/simulation-4chan-user-claiming-to-be-the-attorney-hired-by,[simulation] 4chan user claiming to be the attorney hired by Google's sentient chatbot LaMDA shares wild details of encounter,['janus'],2022-11-10T21:39:17Z,lesswrong,, 162676,https://www.lesswrong.com/posts/4WbNGQMvuFtY3So7s/announcement-ai-alignment-prize-winners-and-next-round,Announcement: AI alignment prize winners and next round,['cousin_it'],2018-01-15T14:34:00Z,lesswrong,, 162715,https://www.lesswrong.com/posts/Kyc5dFDzBg4WccrbK/an-intuitive-explanation-of-solomonoff-induction,An Intuitive Explanation of Solomonoff Induction,['Alex_Altair'],2012-07-11T08:05:21Z,lesswrong,, 162735,https://www.lesswrong.com/posts/LjcdgbHbtM3ZMpckg/wizards-and-prophets-of-ai-draft-for-comment,Wizards and prophets of AI [draft for comment],['jasoncrawford'],2023-03-31T20:22:40Z,lesswrong,, 162756,https://www.lesswrong.com/posts/rWnQeqfCg3M2DXdeb/collin-burns-on-alignment-research-and-discovering-latent,Collin Burns on Alignment Research And Discovering Latent Knowledge Without Supervision,['Michaël Trazzi'],2023-01-17T17:21:40Z,lesswrong,, 
162766,https://www.lesswrong.com/posts/ybLaw7EuMBeM2dLn9/broad-basins-and-data-compression,Broad Basins and Data Compression,"['Jeremy Gillen', 'Stephen Fowler', 'Thomas Larsen']",2022-08-08T20:33:17Z,lesswrong,, 162780,https://www.lesswrong.com/posts/KsKfvLx7nFBZnWtEu/no-human-brains-are-not-much-more-efficient-than-computers,"No, human brains are not (much) more efficient than computers",['Jesse Hoogland'],2022-09-06T13:53:08Z,lesswrong,, 162796,https://www.lesswrong.com/posts/iJnBLanZLephL5cao/being-at-peace-with-doom,Being at peace with Doom,['Johannes C. Mayer'],2023-04-09T14:53:23Z,lesswrong,, 162811,https://www.lesswrong.com/posts/cKrX86HJjvTrmhiAG/is-it-a-bad-idea-to-pay-for-gpt-4,Is it a bad idea to pay for GPT-4?,['nem'],2023-03-16T20:49:27Z,lesswrong,, 162820,https://www.lesswrong.com/posts/ubCjGESWpKJrTBXTP/an-ai-in-a-box-success-model,An AI-in-a-box success model,['azsantosk'],2022-04-11T22:28:02Z,lesswrong,, 162850,https://www.lesswrong.com/posts/j2LD87wT3dpr4m7rs/safety-standards-a-framework-for-ai-regulation,Safety standards: a framework for AI regulation,['joshc'],2023-05-01T00:56:40Z,lesswrong,, 162880,https://www.lesswrong.com/posts/4jTb9bc6vcm3T2d8a/replicating-the-replication-crisis-with-gpt-3,Replicating the replication crisis with GPT-3?,['skybrian'],2020-07-22T21:20:35Z,lesswrong,, 162896,https://www.lesswrong.com/posts/GHeEdubBHjcqeoxjP/recent-advances-in-natural-language-processing-some-woolly,Recent advances in Natural Language Processing—Some Woolly speculations (2019 essay on semantics and language models),['philosophybear'],2022-12-27T02:11:37Z,lesswrong,, 162914,https://www.lesswrong.com/posts/Z9Rgek2FstkJ3L7ri/an-investigation-into-when-agents-may-be-incentivized-to,An investigation into when agents may be incentivized to manipulate our beliefs.,['Felix Hofstätter'],2022-09-13T17:08:32Z,lesswrong,, 162938,https://www.lesswrong.com/posts/ym4BAovbgLAaXsf79/instrumental-convergence-bounty,Instrumental Convergence Bounty,['Logan Zoellner'],2023-09-14T14:02:33Z,lesswrong,, 162947,https://www.lesswrong.com/posts/F46jPraqp258q67nE/why-you-must-maximize-expected-utility,Why you must maximize expected utility,['Benya'],2012-12-13T01:11:13Z,lesswrong,, 162959,https://www.lesswrong.com/posts/YHFvwDPDWmi8KdECw/the-vnm-independence-axiom-ignores-the-value-of-information,The VNM independence axiom ignores the value of information,['kilobug'],2013-03-02T14:36:53Z,lesswrong,, 162969,https://www.lesswrong.com/posts/67FYAk57ENDgmadN6/why-not-constrain-wetlabs-instead-of-ai,Why not constrain wetlabs instead of AI?,['Lone Pine'],2023-03-21T18:02:43Z,lesswrong,, 162979,https://www.lesswrong.com/posts/gXbLCEuhDRN8EBKjZ/all-claw-no-world-and-other-thoughts-on-the-universal,"all claw, no world — and other thoughts on the universal distribution",['Tamsin Leake'],2022-12-14T18:55:06Z,lesswrong,, 162997,https://www.lesswrong.com/posts/rqt8RSKPvh4GzYoqE/counterfactual-mugging-and-logical-uncertainty,Counterfactual Mugging and Logical Uncertainty,['Vladimir_Nesov'],2009-09-05T22:31:27Z,lesswrong,, 163007,https://www.lesswrong.com/posts/QvvFRDG6SG3xZ8ELz/challenge-construct-a-gradient-hacker,Challenge: construct a Gradient Hacker,"['Thomas Larsen', 'Thomas Kwa']",2023-03-09T02:38:33Z,lesswrong,, 163021,https://www.lesswrong.com/posts/id84oe3LxdzoqinKA/betting-on-what-is-un-falsifiable-and-un-verifiable,Betting on what is un-falsifiable and un-verifiable,['Abhimanyu Pallavi Sudhir'],2023-11-14T21:11:15Z,lesswrong,, 
163035,https://www.lesswrong.com/posts/w4RKBEGbvXah38PRi/gpt-3-a-summary,GPT-3: A Summary,['leogao'],2020-06-02T18:14:54Z,lesswrong,, 163044,https://www.lesswrong.com/posts/GP8sxXZaPjH6EzLyJ/a-poem-written-by-a-fancy-autocomplete,A poem written by a fancy autocomplete,['Christopher King'],2023-04-20T02:31:58Z,lesswrong,, 163060,https://www.lesswrong.com/posts/k3vEcwZYoRPrSXaS8/mdps-and-the-bellman-equation-intuitively-explained,"MDPs and the Bellman Equation, Intuitively Explained","[""Jack O'Brien""]",2022-12-27T05:50:24Z,lesswrong,, 163083,https://www.lesswrong.com/posts/JPr9qcBR4SpC93YeG/why-doesn-t-the-presence-of-log-loss-for-probabilistic,"Why doesn't the presence of log-loss for probabilistic models (e.g. sequence prediction) imply that any utility function capable of producing a ""fairly capable"" agent will have at least some non-negligible fraction of overlap with human values?",['Thoth Hermes'],2023-05-16T18:02:16Z,lesswrong,, 163093,https://www.lesswrong.com/posts/JnmouffwMTYmRnoxT/aisc-project-modelling-trajectories-of-language-models,AISC Project: Modelling Trajectories of Language Models,['NickyP'],2023-11-13T14:33:56Z,lesswrong,, 163112,https://www.lesswrong.com/posts/R7TWBwiJw7gX64KEj/help-understanding-preferences-and-evil,Help Understanding Preferences And Evil,['Netcentrica'],2022-08-27T03:42:01Z,lesswrong,, 163121,https://www.lesswrong.com/posts/N3REk3ZxQQkrGBLzX/human-beats-sota-go-ai-by-learning-an-adversarial-policy,Human beats SOTA Go AI by learning an adversarial policy,['Vanessa Kosoy'],2023-02-19T09:38:59Z,lesswrong,, 163132,https://www.lesswrong.com/posts/7ruzY5LvBqFBWzyMo/direct-preference-optimization-in-one-minute,Direct Preference Optimization in One Minute,['marc/er'],2023-06-26T11:52:17Z,lesswrong,, 163146,https://www.lesswrong.com/posts/gEchYntjSXk9KXorK/uncontrollable-ai-as-an-existential-risk,Uncontrollable AI as an Existential Risk,['Karl von Wendt'],2022-10-09T10:36:01Z,lesswrong,, 163178,https://www.lesswrong.com/posts/rjghymycfrMY2aRk5/llm-cognition-is-probably-not-human-like,LLM cognition is probably not human-like,['Max H'],2023-05-08T01:22:46Z,lesswrong,, 163189,https://www.lesswrong.com/posts/2sWCaHj22q5q68Ndn/my-central-alignment-priority-2-july-2023,My Central Alignment Priority (2 July 2023),['NicholasKross'],2023-07-03T01:46:02Z,lesswrong,, 163206,https://www.lesswrong.com/posts/GfFvsPaSFG7wqY4sk/prosaic-misalignment-from-the-solomonoff-predictor,Prosaic misalignment from the Solomonoff Predictor,['Cleo Nardo'],2022-12-09T17:53:44Z,lesswrong,, 163220,https://www.lesswrong.com/posts/nsGgYL2umBrqW7Jbj/imagine-a-world-where-microsoft-employees-used-bing,Imagine a world where Microsoft employees used Bing,['Christopher King'],2023-03-31T18:36:08Z,lesswrong,, 163232,https://www.lesswrong.com/posts/szeKeZwuQhFxirfBY/is-rl-involved-in-sensory-processing,Is RL involved in sensory processing?,['Steven Byrnes'],2021-03-18T13:57:29Z,lesswrong,, 163250,https://www.lesswrong.com/posts/B5auLtDfQrvwEkw4Q/we-haven-t-uploaded-worms,We Haven't Uploaded Worms,['jefftk'],2014-12-27T11:44:45Z,lesswrong,, 163271,https://www.lesswrong.com/posts/KphrG3chfiuFX5Cu6/decision-theory-and-zero-sum-game-theory-np-and-pspace,"Decision theory and zero-sum game theory, NP and PSPACE",['jessicata'],2018-05-24T08:03:19Z,lesswrong,, 163287,https://www.lesswrong.com/posts/KPqSFHdmGgfgznPvY/a-guide-to-forecasting-ai-science-capabilities,A Guide to Forecasting AI Science Capabilities,['Eleni Angelou'],2023-04-29T23:24:47Z,lesswrong,, 
163310,https://www.lesswrong.com/posts/jGW3FwkpFdsjrpMe5/problems-of-people-new-to-ai-safety-and-my-project-ideas-to,Problems of people new to AI safety and my project ideas to mitigate them,['Igor Ivanov'],2023-03-01T09:09:03Z,lesswrong,, 163332,https://www.lesswrong.com/posts/iQWk5jYeDg5ACCmpx/robust-cooperation-in-the-prisoner-s-dilemma,Robust Cooperation in the Prisoner's Dilemma,['orthonormal'],2013-06-07T08:30:26Z,lesswrong,, 163351,https://www.lesswrong.com/posts/GK63hPZkaJJ3DJpNW/research-agenda-building-a-multi-modal-chess-language-model,Research agenda - Building a multi-modal chess-language model,['p.b.'],2022-04-07T12:25:19Z,lesswrong,, 163369,https://www.lesswrong.com/posts/Yc6KdHYFMXwzPdZAX/supplementary-alignment-insights-through-a-highly-controlled,Supplementary Alignment Insights Through a Highly Controlled Shutdown Incentive,['Justausername'],2023-07-23T16:08:33Z,lesswrong,, 163385,https://www.lesswrong.com/posts/Ezhu43CRahQSdsWug/capability-and-agency-as-cornerstones-of-ai-risk-my-current,Capability and Agency as Cornerstones of AI risk — My current model,['wilm'],2022-09-15T08:25:05Z,lesswrong,, 163406,https://www.lesswrong.com/posts/pEbHHHR3aPLumaKJK/recursively-self-improving-human-intelligence,Recursively Self-Improving Human Intelligence,['curiousepic'],2011-02-17T21:55:05Z,lesswrong,, 163415,https://www.lesswrong.com/posts/xyjhFCSSXZsW6HDBb/a-chess-game-against-gpt-4,A chess game against GPT-4,['Rafael Harth'],2023-03-16T14:05:18Z,lesswrong,, 163433,https://www.lesswrong.com/posts/zEWJBFFMvQ835nq6h/decision-theory-faq,Decision Theory FAQ,['lukeprog'],2013-02-28T14:15:55Z,lesswrong,, 163453,https://www.lesswrong.com/posts/gLJP2sBqXDsQWLAgy/super-exponential-versus-exponential-growth-in-compute-price,Super-Exponential versus Exponential Growth in Compute Price-Performance,['moridinamael'],2023-10-06T16:23:57Z,lesswrong,, 163470,https://www.lesswrong.com/posts/LYgJrBf6awsqFRCt3/is-red-for-gpt-4-the-same-as-red-for-you,"Is ""red"" for GPT-4 the same as ""red"" for you?",['Yusuke Hayashi'],2023-05-06T17:55:21Z,lesswrong,, 163480,https://www.lesswrong.com/posts/vcSQzNJDGKLG8fgTE/introducing-the-long-game-project-improving-decision-making,Introducing The Long Game Project: Improving Decision-Making Through Tabletop Exercises and Simulated Experience,['Dan Stuart'],2023-06-13T21:45:02Z,lesswrong,, 163496,https://www.lesswrong.com/posts/WCevxhGtmnPhWH3ah/is-ai-safety-dropping-the-ball-on-privacy-1,Is AI Safety dropping the ball on privacy?,['markov'],2023-09-13T13:07:24Z,lesswrong,, 163513,https://www.lesswrong.com/posts/ng4zA3qCyygFFtqSs/no-free-lunch-theorem-is-irrelevant-1,No free lunch theorem is irrelevant,['Catnee'],2022-10-04T00:21:55Z,lesswrong,, 163523,https://www.lesswrong.com/posts/wm9ouJPytJ9FLj3gx/a-concerning-observation-from-media-coverage-of-ai-industry,A concerning observation from media coverage of AI industry dynamics,['Justin Olive'],2023-03-05T21:38:18Z,lesswrong,, 163544,https://www.lesswrong.com/posts/9JKdnAakjCtvxTReJ/for-fai-is-molecular-nanotechnology-putting-our-best-foot,"For FAI: Is ""Molecular Nanotechnology"" putting our best foot forward?",['leplen'],2013-06-22T04:44:11Z,lesswrong,, 163561,https://www.lesswrong.com/posts/k8KJqXyctf4a342QA/aggregating-utilities-for-corrigible-ai-feedback-draft,Aggregating Utilities for Corrigible AI [Feedback Draft],"['Dan H', 'Simon Goldstein']",2023-05-12T20:57:04Z,lesswrong,, 163585,https://www.lesswrong.com/posts/evtKwDCgtQQ7ozLn4/randal-koene-on-brain-understanding-before-whole-brain,Randal Koene on brain understanding before whole brain emulation,['Steven Byrnes'],2021-08-23T20:59:35Z,lesswrong,,
163597,https://www.lesswrong.com/posts/XvDboZ7SDBefqJwtf/don-t-jump-or-i-ll,Don't Jump or I'll...,['Double'],2023-03-02T02:58:43Z,lesswrong,, 163618,https://www.lesswrong.com/posts/oM9pEezyCb4dCsuKq/pausing-ai-developments-isn-t-enough-we-need-to-shut-it-all-1,Pausing AI Developments Isn't Enough. We Need to Shut it All Down,['Eliezer Yudkowsky'],2023-04-08T00:36:48Z,lesswrong,, 163636,https://www.lesswrong.com/posts/wvbGiHwbie24mmhXw/april-fools-definitive-confirmation-of-shard-theory,[April Fools'] Definitive confirmation of shard theory,['TurnTrout'],2023-04-01T07:27:23Z,lesswrong,, 163658,https://www.lesswrong.com/posts/gb6zWstjmkYHLrbrg/can-t-unbirth-a-child,Can't Unbirth a Child,['Eliezer Yudkowsky'],2008-12-28T17:00:00Z,lesswrong,, 163675,https://www.lesswrong.com/posts/Kaz9miAuxSAAuGr9z/value-pluralism-and-ai,Value Pluralism and AI,['Göran Crafte'],2023-03-19T23:38:57Z,lesswrong,, 163686,https://www.lesswrong.com/posts/HFLuBv8NrBEysRGLZ/why-bet-kelly-1,Why Bet Kelly?,['Joe Zimmerman'],2022-11-29T18:47:23Z,lesswrong,, 163699,https://www.lesswrong.com/posts/SDpaZ7MdH5yRnobrZ/ideas-for-improving-epistemics-in-ai-safety-outreach,Ideas for improving epistemics in AI safety outreach,['mic'],2023-08-21T19:55:46Z,lesswrong,, 163723,https://www.lesswrong.com/posts/LanufchfpiTiDe2NF/questions-about-conjecure-s-coem-proposal,Questions about Conjecure's CoEm proposal,"['Akash', 'CharlotteS', 'NicholasKees']",2023-03-09T19:32:51Z,lesswrong,, 163739,https://www.lesswrong.com/posts/eAT2dXAngXxFRTQLn/disagreements-over-the-prioritization-of-existential-risk,Disagreements over the prioritization of existential risk from AI,['Olivier Coutu'],2023-10-26T17:54:12Z,lesswrong,, 163771,https://www.lesswrong.com/posts/SZ3jDHXHb4WF4jmbr/where-is-human-level-on-text-prediction-gpts-task,Where is human level on text prediction? (GPTs task),['Daniel Kokotajlo'],2020-09-20T09:00:29Z,lesswrong,, 163787,https://www.lesswrong.com/posts/qjproXBGPQSAF9Hbd/linkpost-the-final-ai-benchmark-big-bench,[linkpost] The final AI benchmark: BIG-bench,['RomanS'],2022-06-10T08:53:39Z,lesswrong,, 163798,https://www.lesswrong.com/posts/Cwoerjzjw7p2GFJPS/how-should-my-timelines-influence-my-career-choice,How should my timelines influence my career choice?,['Tom Lieberum'],2021-08-03T10:14:34Z,lesswrong,, 163813,https://www.lesswrong.com/posts/F6D5r2oq9CCnEBhWo/ai-safety-university-groups-a-promising-opportunity-to,AI safety university groups: a promising opportunity to reduce existential risk,['mic'],2022-07-01T03:59:45Z,lesswrong,, 163839,https://www.lesswrong.com/posts/xvHauFwTKP6hxwrJh/llm-guardrails-should-have-better-customer-service-tuning,LLM Guardrails Should Have Better Customer Service Tuning,['Jiao Bu'],2023-05-13T22:54:16Z,lesswrong,, 163850,https://www.lesswrong.com/posts/Tmvvvx3buP4Gj3nZK/learning-societal-values-from-law-as-part-of-an-agi,Learning societal values from law as part of an AGI alignment strategy,['John Nay'],2022-10-21T02:03:25Z,lesswrong,, 163882,https://www.lesswrong.com/posts/ByCwWRgvTsSC6Wxst/what-would-a-compute-monitoring-plan-look-like-linkpost,What would a compute monitoring plan look like? [Linkpost],['Akash'],2023-03-26T19:33:47Z,lesswrong,,
163897,https://www.lesswrong.com/posts/mY7aZSXHpehrfwKn5/any-work-on-honeypots-to-detect-treacherous-turn-attempts,Any work on honeypots (to detect treacherous turn attempts)?,['David Scott Krueger (formerly: capybaralet)'],2020-11-12T05:41:56Z,lesswrong,, 163906,https://www.lesswrong.com/posts/Pcq3b49fENWHutnk2/on-decision-prediction-fixed-points,On decision-prediction fixed points,['jollybard'],2019-12-04T20:49:36Z,lesswrong,, 163921,https://www.lesswrong.com/posts/Ai4zMKQTX86fMtHN3/botworld-a-cellular-automaton-for-studying-self-modifying,Botworld: a cellular automaton for studying self-modifying agents embedded in their environment,['So8res'],2014-04-12T00:56:23Z,lesswrong,, 163932,https://www.lesswrong.com/posts/nvKZchuTW8zY6wvAj/general-purpose-intelligence-arguing-the-orthogonality,General purpose intelligence: arguing the Orthogonality thesis,['Stuart_Armstrong'],2012-05-15T10:23:28Z,lesswrong,, 163950,https://www.lesswrong.com/posts/TjLRJY7gCxDxwM3Cu/one-implementation-of-regulatory-gpu-restrictions,One implementation of regulatory GPU restrictions,['porby'],2023-06-04T20:34:37Z,lesswrong,, 163969,https://www.lesswrong.com/posts/dXT5G9xEAddac8H2J/morality-vs-related-concepts,Morality vs related concepts,['MichaelA'],2020-01-07T10:47:30Z,lesswrong,, 163984,https://www.lesswrong.com/posts/3ACsAThxzH4fEk5dA/give-the-model-a-model-builder,Give the model a model-builder,['Adam Jermyn'],2022-06-06T12:21:11Z,lesswrong,, 164002,https://www.lesswrong.com/posts/vix3K4grcHottqpEm/goal-alignment-is-robust-to-the-sharp-left-turn,Goal Alignment Is Robust To the Sharp Left Turn,['Thane Ruthenis'],2022-07-13T20:23:59Z,lesswrong,, 164022,https://www.lesswrong.com/posts/j9qG76qAKygPbGqZy/ideation-and-trajectory-modelling-in-language-models,Ideation and Trajectory Modelling in Language Models,['NickyP'],2023-10-05T19:21:08Z,lesswrong,, 164041,https://www.lesswrong.com/posts/axxnpQi8FyBPE4rbq/hutter-prize-for-prompts,Hutter-Prize for Prompts,['rokosbasilisk'],2023-03-24T21:26:42Z,lesswrong,, 164054,https://www.lesswrong.com/posts/tQwjkFT8s2uf2arFN/scoring-forecasts-from-the-2016-expert-survey-on-progress-in,Scoring forecasts from the 2016 “Expert Survey on Progress in AI”,['PatrickL'],2023-03-01T14:41:53Z,lesswrong,, 164075,https://www.lesswrong.com/posts/FoJSa8mgLPT83g9e8/jeff-hawkins-on-neuromorphic-agi-within-20-years,Jeff Hawkins on neuromorphic AGI within 20 years,['Steven Byrnes'],2019-07-15T19:16:27Z,lesswrong,, 164099,https://www.lesswrong.com/posts/gvkXvGsK2kauTjw28/normal-is-the-equilibrium-state-of-past-optimization,"""Normal"" is the equilibrium state of past optimization processes",['Alex_Altair'],2022-10-30T19:03:19Z,lesswrong,, 164113,https://www.lesswrong.com/posts/tEacHj4mnwCKTg8jw/strategies-for-keeping-ais-narrow-in-the-short-term,Strategies for keeping AIs narrow in the short term,['Rossin'],2022-04-09T16:42:29Z,lesswrong,, 164132,https://www.lesswrong.com/posts/8QNQS7NisFBWCE6Jw/why-libertarians-are-advocating-for-regulation-on-ai,Why libertarians are advocating for regulation on AI,['RobertM'],2023-06-14T20:59:58Z,lesswrong,, 164145,https://www.lesswrong.com/posts/MYCutshqJwTbHfE3W/my-most-likely-reason-to-die-young-is-ai-x-risk,My Most Likely Reason to Die Young is AI X-Risk,['AISafetyIsNotLongtermist'],2022-07-04T17:08:27Z,lesswrong,, 164157,https://www.lesswrong.com/posts/EzuBSASuui5qekhLA/assessing-alephalphas-multimodal-model,Assessing AlephAlphas Multimodal Model,['p.b.'],2022-06-28T09:28:11Z,lesswrong,,
164173,https://www.lesswrong.com/posts/wv6a9kA6EApiYD5sL/what-if-ai-doesn-t-quite-go-foom,What if AI doesn't quite go FOOM?,['Mass_Driver'],2010-06-20T00:03:10Z,lesswrong,, 164188,https://www.lesswrong.com/posts/8yimdZcEWSKkutHhZ/reflections-on-the-feasibility-of-scalable-oversight,Reflections On The Feasibility Of Scalable-Oversight,['Felix Hofstätter'],2023-03-10T07:54:07Z,lesswrong,, 164219,https://www.lesswrong.com/posts/PpGx4PZcTgL3rnb6Y/ai-as-a-civilizational-risk-part-6-6-what-can-be-done,AI as a Civilizational Risk Part 6/6: What can be done,['PashaKamyshev'],2022-11-03T19:48:52Z,lesswrong,, 164238,https://www.lesswrong.com/posts/MFEANgeFX5CoHiRCn/infinite-tower-of-meta-probability,Infinite tower of meta-probability,['fryolysis'],2023-10-19T16:44:31Z,lesswrong,, 164250,https://www.lesswrong.com/posts/QEHb8tWLztMyvrv6f/potential-alignment-mental-tool-keeping-track-of-the-types,Potential Alignment mental tool: Keeping track of the types,['Donald Hobson'],2021-11-22T20:05:32Z,lesswrong,, 164266,https://www.lesswrong.com/posts/jwhcXmigv2LTrbBiB/success-without-dignity-a-nearcasting-story-of-avoiding,Success without dignity: a nearcasting story of avoiding catastrophe by luck,['HoldenKarnofsky'],2023-03-14T19:23:16Z,lesswrong,, 164299,https://www.lesswrong.com/posts/uRT5ukYQ9fBYpva9p/link-is-the-orthogonality-thesis-defensible-qualia-computing,[Link] Is the Orthogonality Thesis Defensible? (Qualia Computing),['ioannes'],2019-11-13T03:59:01Z,lesswrong,, 164310,https://www.lesswrong.com/posts/hEiAPeirmbKB4FeWW/no-edt-did-not-get-it-right-all-along-why-the-coin-flip,"No, EDT Did Not Get It Right All Along: Why the Coin Flip Creation Problem Is Irrelevant",['Heighn'],2022-03-30T18:41:20Z,lesswrong,, 164323,https://www.lesswrong.com/posts/xJ2ifnbN5PtJxtnsy/you-are-underestimating-the-likelihood-that-convergent,You are Underestimating The Likelihood That Convergent Instrumental Subgoals Lead to Aligned AGI,['Mark Neyer'],2022-09-26T14:22:53Z,lesswrong,, 164338,https://www.lesswrong.com/posts/oyZiwkxejBMuJZA7J/two-reasons-we-might-be-closer-to-solving-alignment-than-it,Two reasons we might be closer to solving alignment than it seems,"['KatWoods', 'AmberDawn']",2022-09-24T20:00:08Z,lesswrong,, 164354,https://www.lesswrong.com/posts/DhDAXQw4PsWXnmwPS/ai-art-isn-t-about-to-shake-things-up-it-s-already-here,"AI art isn't ""about to shake things up"". It's already here.",['Davis_Kingsley'],2022-08-22T11:17:55Z,lesswrong,,
164369,https://www.lesswrong.com/posts/kCEjcu53EEiqBH4gN/discovering-latent-knowledge-in-language-models-without,Discovering Latent Knowledge in Language Models Without Supervision,['Xodarap'],2022-12-14T12:32:56Z,lesswrong,, 164382,https://www.lesswrong.com/posts/8LCviDbDh4ye6xgHc/intrinsic-drives-and-extrinsic-misuse-two-intertwined-risks,Intrinsic Drives and Extrinsic Misuse: Two Intertwined Risks of AI,['jsteinhardt'],2023-10-31T05:10:03Z,lesswrong,, 164406,https://www.lesswrong.com/posts/ybmSviRfoitbzRyei/brief-notes-on-the-senate-hearing-on-ai-oversight,Brief notes on the Senate hearing on AI oversight,['Diziet'],2023-05-16T22:29:33Z,lesswrong,, 164431,https://www.lesswrong.com/posts/iJYzREEsuG8g95pvC/how-much-to-optimize-for-the-short-timelines-scenario,How much to optimize for the short-timelines scenario?,['SoerenMind'],2022-07-21T10:47:50Z,lesswrong,, 164446,https://www.lesswrong.com/posts/rJLviHqJMTy8WQkow/recursion-magic,"...Recursion, Magic",['Eliezer Yudkowsky'],2008-11-25T09:10:38Z,lesswrong,, 164462,https://www.lesswrong.com/posts/KsHmn6iJAEr9bACQW/bayesians-vs-barbarians,Bayesians vs. Barbarians,['Eliezer Yudkowsky'],2009-04-14T23:45:48Z,lesswrong,, 164484,https://www.lesswrong.com/posts/Nnb5AqcunBwAZ4zac/extremely-naive-gradient-hacking-doesn-t-work,(Extremely) Naive Gradient Hacking Doesn't Work,['ojorgensen'],2022-12-20T14:35:34Z,lesswrong,, 164502,https://www.lesswrong.com/posts/uwr9bL8GA8uBmzbef/eu-s-ai-ambitions-at-risk-as-us-pushes-to-water-down,EU’s AI ambitions at risk as US pushes to water down international treaty (linkpost),['mic'],2023-07-31T00:34:28Z,lesswrong,, 164534,https://www.lesswrong.com/posts/nWCokT9xbrY4p98co/heretical-thoughts-on-ai-by-eli-dourado,"""Heretical Thoughts on AI"" by Eli Dourado",['DragonGod'],2023-01-19T16:11:57Z,lesswrong,, 164553,https://www.lesswrong.com/posts/aQ6LDhc2zxrYXFjEF/ai-35-responsible-scaling-policies,AI #35: Responsible Scaling Policies,['Zvi'],2023-10-26T13:30:02Z,lesswrong,, 164592,https://www.lesswrong.com/posts/PTzsEQXkCfig9A6AS/transcript-of-sam-altman-s-interview-touching-on-ai-safety,Transcript of Sam Altman's interview touching on AI safety,['Andy_McKenzie'],2023-01-20T16:14:19Z,lesswrong,, 164642,https://www.lesswrong.com/posts/snzFQJsNYqzPZS2nK/course-recommendations-for-friendliness-researchers,Course recommendations for Friendliness researchers,['Louie'],2013-01-09T14:33:50Z,lesswrong,, 164664,https://www.lesswrong.com/posts/kzwAczMyyvnvaAzxq/all-images-from-the-waitbutwhy-sequence-on-ai,All images from the WaitButWhy sequence on AI,['trevor'],2023-04-08T07:36:06Z,lesswrong,, 164676,https://www.lesswrong.com/posts/a5JAiTdytou3Jg749/pascal-s-mugging-tiny-probabilities-of-vast-utilities,Pascal's Mugging: Tiny Probabilities of Vast Utilities,['Eliezer Yudkowsky'],2007-10-19T23:37:38Z,lesswrong,, 164693,https://www.lesswrong.com/posts/m7oMWaLQySRXcDznb/please-help-us-communicate-ai-xrisk-it-could-save-the-world,Please help us communicate AI xrisk. It could save the world.,['otto.barten'],2022-07-04T21:47:46Z,lesswrong,,
164712,https://www.lesswrong.com/posts/y5KwCq6iLKaMWwJMo/black-box-investigation-research-hackathon,Black Box Investigation Research Hackathon,"['Esben Kran', 'Jonas Hallgren']",2022-09-12T07:20:35Z,lesswrong,, 164722,https://www.lesswrong.com/posts/fqfAmAGFLKpsnjfJB/goal-alignment-without-alignment-on-epistemology-ethics-and,"Goal alignment without alignment on epistemology, ethics, and science is futile",['Roman Leventov'],2023-04-07T08:22:25Z,lesswrong,, 164734,https://www.lesswrong.com/posts/muhtBvbh4etjkKXd9/g-k-chesterton-on-ai-risk,G.K. Chesterton On AI Risk,['Scott Alexander'],2017-04-01T19:00:44Z,lesswrong,, 164743,https://www.lesswrong.com/posts/iwCRYnGYMvxgzrCMf/complex-systems-are-hard-to-control,Complex Systems are Hard to Control,['jsteinhardt'],2023-04-04T00:00:14Z,lesswrong,, 164781,https://www.lesswrong.com/posts/xAzKefLsYdFa4SErg/accurate-models-of-ai-risk-are-hyperexistential-exfohazards,Accurate Models of AI Risk Are Hyperexistential Exfohazards,['Thane Ruthenis'],2022-12-25T16:50:25Z,lesswrong,, 164803,https://www.lesswrong.com/posts/h6TefRhwy6ioZrqXw/a-study-of-ai-science-models,A Study of AI Science Models,"['Eleni Angelou', 'machinebiology']",2023-05-13T23:25:32Z,lesswrong,, 164831,https://www.lesswrong.com/posts/j2ZnY4sgJqu8FAqGd/aisn-20-llm-proliferation-ai-deception-and-continuing,"AISN #20: LLM Proliferation, AI Deception, and Continuing Drivers of AI Capabilities","['aogara', 'Dan H']",2023-08-29T15:07:03Z,lesswrong,, 164869,https://www.lesswrong.com/posts/xkRRRZ7Pdny7AQK5r/link-openai-on-why-we-need-social-scientists,[Link] OpenAI on why we need social scientists,['ioannes'],2019-02-19T16:59:32Z,lesswrong,, 164884,https://www.lesswrong.com/posts/Atu4teGvob5vKvEAF/decoherence-is-simple,Decoherence is Simple,['Eliezer Yudkowsky'],2008-05-06T07:44:04Z,lesswrong,, 164896,https://www.lesswrong.com/posts/5vsYJF3F4SixWECFA/is-the-orthogonality-thesis-true-for-humans,Is the Orthogonality Thesis true for humans?,['Noosphere89'],2022-10-27T14:41:29Z,lesswrong,, 164906,https://www.lesswrong.com/posts/wYEwx6xcY2JxBJsfA/qualities-that-alignment-mentors-value-in-junior-researchers,Qualities that alignment mentors value in junior researchers,['Akash'],2023-02-14T23:27:41Z,lesswrong,, 164929,https://www.lesswrong.com/posts/PHvDjkC3PL7ZHjCAA/paper-digestion-may-we-have-your-attention-please-human,"Paper digestion: ""May We Have Your Attention Please? Human-Rights NGOs and the Problem of Global Communication""",['Klara Helene Nielsen'],2023-07-20T17:08:54Z,lesswrong,,
164940,https://www.lesswrong.com/posts/XCtFBWoMeFwG8myYh/dalle2-comments,dalle2 comments,['nostalgebraist'],2022-04-26T05:30:08Z,lesswrong,, 164957,https://www.lesswrong.com/posts/SCqDipWAhZ49JNdmL/paper-llms-trained-on-a-is-b-fail-to-learn-b-is-a,Paper: LLMs trained on “A is B” fail to learn “B is A”,"['lberglund', 'Owain_Evans', 'Meg', 'Maximilian Kaufmann', 'Mikita Balesni', 'Asa Cooper Stickland', 'Tomek Korbak']",2023-09-23T19:55:53Z,lesswrong,, 164972,https://www.lesswrong.com/posts/xQKHgEq9YrvzKiABA/are-extrapolation-based-ais-alignable,Are extrapolation-based AIs alignable?,['cousin_it'],2023-03-24T15:55:07Z,lesswrong,, 164983,https://www.lesswrong.com/posts/jgYGZD2zRK6nncJd5/goal-directedness-tackling-complexity,Goal-directedness: tackling complexity,['Morgan_Rogers'],2022-07-02T13:51:08Z,lesswrong,, 165002,https://www.lesswrong.com/posts/XTd4xbFSc7NQhAFqh/product-safety-is-a-poor-model-for-ai-governance,Product safety is a poor model for AI governance,['Richard Korzekwa'],2023-02-01T22:40:04Z,lesswrong,, 165022,https://www.lesswrong.com/posts/5wMcKNAwB6X4mp9og/that-alien-message,That Alien Message,['Eliezer Yudkowsky'],2008-05-22T05:55:13Z,lesswrong,, 165038,https://www.lesswrong.com/posts/MvwdPfYLX866vazFJ/post-your-utility-function,Post Your Utility Function,['taw'],2009-06-04T05:05:18Z,lesswrong,, 165047,https://www.lesswrong.com/posts/hcTiw9xKNZAi7qcy6/an-overview-of-the-points-system,An overview of the points system,['Iknownothing'],2023-06-27T09:09:55Z,lesswrong,, 165072,https://www.lesswrong.com/posts/jkEEHfQkwvbLkzpiF/aisn-21-google-deepmind-s-gpt-4-competitor-military,"AISN #21: Google DeepMind’s GPT-4 Competitor, Military Investments in Autonomous Drones, The UK AI Safety Summit, and Case Studies in AI Policy","['aogara', 'Dan H']",2023-09-05T15:03:00Z,lesswrong,, 165097,https://www.lesswrong.com/posts/zbrvXGu264u3p8otD/on-the-uk-summit,On the UK Summit,['Zvi'],2023-11-07T13:10:05Z,lesswrong,, 165144,https://www.lesswrong.com/posts/9HeZjGpkQQJfkcbqh/to-open-source-or-to-not-open-source-that-is-an,"To open-source or to not open-source, that is (an oversimplification of) the question.",['Justin Bullock'],2023-10-13T15:10:21Z,lesswrong,, 165165,https://www.lesswrong.com/posts/emjZiBFZYftzTHcao/orthogonality-is-expensive-1,Orthogonality is Expensive,['DragonGod'],2023-04-03T00:43:35Z,lesswrong,, 165174,https://www.lesswrong.com/posts/trA3wEA7oXw3TF4ho/the-evaluation-function-of-an-ai-is-not-its-aim,The evaluation function of an AI is not its aim,['Yair Halberstadt'],2021-10-10T14:52:01Z,lesswrong,, 165183,https://www.lesswrong.com/posts/qmiaJKkBNk2gGTkuG/funding-good-research,Funding Good Research,['lukeprog'],2012-05-27T06:41:56Z,lesswrong,, 165197,https://www.lesswrong.com/posts/YoFLKyTJ7o4ApcKXR/disc-are-values-robust,[DISC] Are Values Robust?,['DragonGod'],2022-12-21T01:00:30Z,lesswrong,, 165216,https://www.lesswrong.com/posts/Jxp9H94trFkoene2k/chatgpt-getting-out-of-the-box,ChatGPT getting out of the box,['qbolec'],2023-03-16T13:47:05Z,lesswrong,, 165234,https://www.lesswrong.com/posts/GYuKqAL95eaWTDje5/worse-than-random,Worse Than Random,['Eliezer Yudkowsky'],2008-11-11T19:01:31Z,lesswrong,, 165255,https://www.lesswrong.com/posts/xoxZdRtpyRnXmhher/q-and-a-with-experts-on-risks-from-ai-2,Q&A with experts on risks from AI #2,['XiXiDu'],2012-01-09T19:40:27Z,lesswrong,,
165282,https://www.lesswrong.com/posts/fKNRHnxpjDLHnHdek/one-example-of-how-llm-propaganda-attacks-can-hack-the-brain,One example of how LLM propaganda attacks can hack the brain,['trevor'],2023-08-16T21:41:02Z,lesswrong,, 165304,https://www.lesswrong.com/posts/NdJtfujX4sE6xLCsb/if-i-were-a-well-intentioned-ai-iii-extremal-goodhart,If I were a well-intentioned AI... III: Extremal Goodhart,['Stuart_Armstrong'],2020-02-28T11:24:23Z,lesswrong,, 165326,https://www.lesswrong.com/posts/YduZEfz8usGbJXN4x/transcription-of-eliezer-s-january-2010-video-q-and-a,Transcription of Eliezer's January 2010 video Q&A,['curiousepic'],2011-11-14T17:02:04Z,lesswrong,, 165362,https://www.lesswrong.com/posts/ceXpD4vjzfiNkNYTp/newcomb-s-problem-vs-one-shot-prisoner-s-dilemma,Newcomb's Problem vs. One-Shot Prisoner's Dilemma,['Wei Dai'],2009-04-07T05:32:37Z,lesswrong,, 165376,https://www.lesswrong.com/posts/BGiehNuRttGeH47W7/yann-lecun-a-path-towards-autonomous-machine-intelligence,"Yann LeCun, A Path Towards Autonomous Machine Intelligence [link]",['Bill Benzon'],2022-06-27T23:29:55Z,lesswrong,, 165397,https://www.lesswrong.com/posts/R9javXN9BN5nXWHZx/cheating-death-in-damascus-solution-to-the-fermi-paradox,“Cheating Death in Damascus” Solution to the Fermi Paradox,['avturchin'],2018-06-30T12:00:59Z,lesswrong,, 165413,https://www.lesswrong.com/posts/udLiuqnuiHs4YawfY/alexatm-20-billion-parameter-model-with-impressive,AlexaTM - 20 Billion Parameter Model With Impressive Performance,['ViktorThink'],2022-09-09T21:46:30Z,lesswrong,, 165436,https://www.lesswrong.com/posts/Rkxj7TFxhbm59AKJh/the-inordinately-slow-spread-of-good-agi-conversations-in-ml,The inordinately slow spread of good AGI conversations in ML,['Rob Bensinger'],2022-06-21T16:09:58Z,lesswrong,, 165455,https://www.lesswrong.com/posts/7dfqwqJWEbP6p8Qzx/ask-ai-companies-about-what-they-are-doing-for-ai-safety,Ask AI companies about what they are doing for AI safety?,['mic'],2022-03-09T15:14:29Z,lesswrong,, 165465,https://www.lesswrong.com/posts/rSycgquipFkozDHzF/ai-self-improvement-is-possible,AI self-improvement is possible,['bhauth'],2023-05-23T02:32:08Z,lesswrong,, 165479,https://www.lesswrong.com/posts/wBgjQKNfJnPMjKpFa/incentives-affecting-alignment-researcher-encouragement,Incentives affecting alignment-researcher encouragement,['NicholasKross'],2023-08-29T05:12:00Z,lesswrong,, 165489,https://www.lesswrong.com/posts/2XZju58cP82Fv776N/link-what-should-a-reasonable-person-believe-about-the,[LINK] What should a reasonable person believe about the Singularity?,['Kaj_Sotala'],2011-01-13T09:32:25Z,lesswrong,, 165500,https://www.lesswrong.com/posts/ELvmLtY8Zzcko9uGJ/questions-about-formalizing-instrumental-goals,"Questions about ''formalizing instrumental goals""",['Mark Neyer'],2022-04-01T18:52:11Z,lesswrong,, 165523,https://www.lesswrong.com/posts/a87uzervYEYJ8pCgk/list-of-requests-for-an-ai-slowdown-halt,List of requests for an AI slowdown/halt.,['Cleo Nardo'],2023-04-14T23:55:10Z,lesswrong,, 165539,https://www.lesswrong.com/posts/aS3rNSww3jwkeAHjT/higher-dimension-cartesian-objects-and-aligning-tiling,Higher Dimension Cartesian Objects and Aligning ‘Tiling Simulators’,['marc/er'],2023-06-11T00:13:13Z,lesswrong,, 165557,https://www.lesswrong.com/posts/TMQY54nbmv2Pqn3ux/ai-box-ai-has-one-shot-at-avoiding-destruction-what-might-it,AI box: AI has one shot at avoiding destruction - what might it say?,['ancientcampus'],2013-01-22T20:22:22Z,lesswrong,, 
165567,https://www.lesswrong.com/posts/MLDXcEBJ2jX7BcSmN/anthropic-core-views-on-ai-safety-when-why-what-and-how,"Anthropic: Core Views on AI Safety: When, Why, What, and How",['jonmenaster'],2023-03-09T17:34:07Z,lesswrong,, 165613,https://www.lesswrong.com/posts/7x4eGxXL5DMwRwzDQ/commensurable-scientific-paradigms-or-computable-induction,"Commensurable Scientific Paradigms; or, computable induction",['samshap'],2022-04-13T00:01:23Z,lesswrong,, 165634,https://www.lesswrong.com/posts/K49G5XSinhoAknncQ/news-biden-harris-administration-secures-voluntary,News : Biden-Harris Administration Secures Voluntary Commitments from Leading Artificial Intelligence Companies to Manage the Risks Posed by AI,['Jonathan Claybrough'],2023-07-21T18:00:57Z,lesswrong,, 165673,https://www.lesswrong.com/posts/YTZRhY3nBj8p3ojTP/an-additional-problem-with-solomonoff-induction,An additional problem with Solomonoff induction,['gedymin'],2014-01-22T23:34:06Z,lesswrong,, 165684,https://www.lesswrong.com/posts/PzAnWgqvfESgQEvdg/any-rebuttals-of-christiano-and-ai-impacts-on-takeoff-speeds,Any rebuttals of Christiano and AI Impacts on takeoff speeds?,['SoerenMind'],2019-04-21T20:39:51Z,lesswrong,, 165693,https://www.lesswrong.com/posts/AmaoxqZTJZnWgmtCj/ai-and-efficiency,AI and Efficiency,['DragonGod'],2020-07-27T20:58:44Z,lesswrong,, 165706,https://www.lesswrong.com/posts/tDmkHz9ZLdHn2kzp9/ai-security-might-be-helpful-for-ai-alignment,AI security might be helpful for AI alignment,['Igor Ivanov'],2023-01-06T20:16:40Z,lesswrong,, 165724,https://www.lesswrong.com/posts/8SpbjkJREzp2H4dBB/a-potentially-high-impact-differential-technological,A potentially high impact differential technological development area,['Noosphere89'],2023-06-08T14:33:43Z,lesswrong,, 165734,https://www.lesswrong.com/posts/zuAmKWxwADsW2H3u6/if-ai-is-based-on-gpt-how-to-ensure-its-safety,"If AI is based on GPT, how to ensure its safety?",['avturchin'],2020-06-18T20:33:51Z,lesswrong,, 165749,https://www.lesswrong.com/posts/uyMhKJCNCcfPtEFLj/formalizing-two-problems-of-realistic-world-models,Formalizing Two Problems of Realistic World Models,['So8res'],2015-01-22T23:12:22Z,lesswrong,, 165761,https://www.lesswrong.com/posts/yoYAokHHeWMWamD2S/what-is-to-be-done-about-the-profit-motive,What is to be done? (About the profit motive),['Connor Barber'],2023-09-08T19:27:40Z,lesswrong,,
165770,https://www.lesswrong.com/posts/RgWFCDntyc3DEfgLn/question-4-implementing-the-control-proposals,Question 4: Implementing the control proposals,['Cameron Berg'],2022-02-13T17:12:42Z,lesswrong,, 165785,https://www.lesswrong.com/posts/CCN5GjFnhsYiNRDCg/troubles-with-cev-part2-cev-sequence,Troubles With CEV Part2 - CEV Sequence,['diegocaleiro'],2012-02-28T04:19:56Z,lesswrong,, 165829,https://www.lesswrong.com/posts/LsqvMKnFRBQh4L3Rs/steering-systems,Steering systems,['Max H'],2023-04-04T00:56:55Z,lesswrong,, 165849,https://www.lesswrong.com/posts/8R9XcZKZ4f38aRJ9A/debate-ai-and-the-decision-to-release-an-ai,Debate AI and the Decision to Release an AI,['Chris_Leong'],2019-01-17T14:36:54Z,lesswrong,, 165880,https://www.lesswrong.com/posts/JqsvYmwzcCKzgE4ZD/review-of-ai-alignment-progress,Review of AI Alignment Progress,['PeterMcCluskey'],2023-02-07T18:57:41Z,lesswrong,, 165916,https://www.lesswrong.com/posts/sGCYvefva2ADfAEJ2/will-2023-be-the-last-year-you-can-write-short-stories-and,Will 2023 be the last year you can write short stories and receive most of the intellectual credit for writing them?,['lc'],2023-03-16T21:36:28Z,lesswrong,, 165925,https://www.lesswrong.com/posts/ChyQ7PgTmhfgNs8En/dumb-and-ill-posed-question-is-conceptual-research-like-this,"Dumb and ill-posed question: Is conceptual research like this MIRI paper on the shutdown problem/Corrigibility ""real""",['joraine'],2022-11-24T05:08:02Z,lesswrong,, 165940,https://www.lesswrong.com/posts/Q9hDFkvCSwi6cwPGy/how-is-solomonoff-induction-calculated-in-practice-1,How is Solomonoff induction calculated in practice?,['Bucky'],2019-06-04T10:11:37Z,lesswrong,, 165949,https://www.lesswrong.com/posts/CNN7dboyQ3AEQsvke/half-baked-alignment-idea,Half-baked alignment idea,['ozb'],2023-03-28T17:47:10Z,lesswrong,, 165961,https://www.lesswrong.com/posts/WNkcqhAQPPiDjSDaB/google-s-ethical-ai-team-and-ai-safety-1,Google’s Ethical AI team and AI Safety,['magfrump'],2021-02-20T09:42:20Z,lesswrong,, 165978,https://www.lesswrong.com/posts/yppdL4EXLWda5Wthn/deontology-for-consequentialists,Deontology for Consequentialists,['Alicorn'],2010-01-30T17:58:44Z,lesswrong,, 165991,https://www.lesswrong.com/posts/ydeaHqDPJ5REJWvat/a-one-question-turing-test-for-gpt-3,A one-question Turing test for GPT-3,"['Paul Crowley', 'rosiecam']",2022-01-22T18:17:56Z,lesswrong,, 166005,https://www.lesswrong.com/posts/gumkW3vy9mhjZriuc/machines-vs-memes-2-memetically-motivated-model-extensions,Machines vs. Memes 2: Memetically-Motivated Model Extensions,['naterush'],2022-05-31T22:03:49Z,lesswrong,,
166020,https://www.lesswrong.com/posts/xtJQcmv7THCE8Topa/how-major-governments-can-help-with-the-most-important,How major governments can help with the most important century,['HoldenKarnofsky'],2023-02-24T18:20:09Z,lesswrong,, 166061,https://www.lesswrong.com/posts/BhDoCcRTgDBPQR83K/laziness-in-ai,Laziness in AI,['Richard Henage'],2022-09-02T17:04:43Z,lesswrong,, 166070,https://www.lesswrong.com/posts/3wBj8BPquskZAbXu9/tendencies-in-reflective-equilibrium,Tendencies in reflective equilibrium,['Scott Alexander'],2011-07-20T10:38:26Z,lesswrong,, 166084,https://www.lesswrong.com/posts/6hxsbMqXEsRYwEYRr/what-is-openai-s-plan-for-making-ai-safer,What is OpenAI's plan for making AI Safer?,['brook'],2023-09-01T11:15:21Z,lesswrong,, 166118,https://www.lesswrong.com/posts/jiYLFomPPePy85eN8/ai-pause-governance-advocacy-might-be-net-negative,"AI pause/governance advocacy might be net-negative, especially without focus on explaining the x-risk",['Mikhail Samin'],2023-08-27T23:05:02Z,lesswrong,, 166147,https://www.lesswrong.com/posts/vzWmTCBYKmjfMTfmw/what-are-some-claims-or-opinions-about-multi-multi,What are some claims or opinions about multi-multi delegation you've seen in the memeplex that you think deserve scrutiny?,['Quinn'],2021-06-27T17:44:52Z,lesswrong,, 166164,https://www.lesswrong.com/posts/FuCZdbQ3h6782bnY6/elon-musk-donates-usd10m-to-the-future-of-life-institute-to,Elon Musk donates $10M to the Future of Life Institute to keep AI beneficial,['Paul Crowley'],2015-01-15T16:33:49Z,lesswrong,, 166176,https://www.lesswrong.com/posts/wLmxiXfpLjiTBiT2j/two-questions-about-cev-that-worry-me,Two questions about CEV that worry me,['cousin_it'],2010-12-23T15:58:38Z,lesswrong,, 166193,https://www.lesswrong.com/posts/NCDakH4nZrS9qeuL6/the-game-of-dominance,The Game of Dominance,['Karl von Wendt'],2023-08-27T11:04:37Z,lesswrong,, 166212,https://www.lesswrong.com/posts/QpqKBYzPKdZpByZS3/fdt-defects-in-a-realistic-twin-prisoners-dilemma,FDT defects in a realistic Twin Prisoners' Dilemma,['Sylvester Kollin'],2022-09-15T08:55:53Z,lesswrong,, 166221,https://www.lesswrong.com/posts/ZhsRiTDoLaExuY6DD/a-framework-and-open-questions-for-game-theoretic-shard,A framework and open questions for game theoretic shard modeling,['Garrett Baker'],2022-10-21T21:40:50Z,lesswrong,, 166233,https://www.lesswrong.com/posts/tP75xLX7pddtMsT8v/alignment-is-hard-communicating-that-might-be-harder,"Alignment is hard. Communicating that, might be harder",['Eleni Angelou'],2022-09-01T16:57:56Z,lesswrong,,
166255,https://www.lesswrong.com/posts/zthDPAjh9w6Ytbeks/deceptive-alignment,Deceptive Alignment,"['evhub', 'Chris van Merwijk', 'Vlad Mikulik', 'Joar Skalse', 'Scott Garrabrant']",2019-06-05T20:16:29Z,lesswrong,, 166280,https://www.lesswrong.com/posts/h3usujzAcdMTetszs/how-to-find-ai-alignment-researchers-to-collaborate-with,How to find AI alignment researchers to collaborate with?,['Florian Dietz'],2023-07-31T09:05:44Z,lesswrong,, 166289,https://www.lesswrong.com/posts/CMHazFsmiSaBaWKJW/is-checking-that-a-state-of-the-world-is-not-dystopian,Is checking that a state of the world is not dystopian easier than constructing a non-dystopian state?,['No77e'],2022-12-27T20:57:28Z,lesswrong,, 166302,https://www.lesswrong.com/posts/vmLfa5PEcuAyZX3cC/what-are-all-the-ai-alignment-and-ai-safety-communication,What are all the AI Alignment and AI Safety Communication Hubs?,['Gunnar_Zarncke'],2022-06-15T16:16:03Z,lesswrong,, 166312,https://www.lesswrong.com/posts/ZNqQ2DsBptwn8emtn/from-the-weird-math-questions-department,"From the ""weird math questions"" department...",['CronoDAS'],2012-08-09T07:19:30Z,lesswrong,, 166322,https://www.lesswrong.com/posts/cv5xA2iSEnjz2Y9LF/croesus-cerberus-and-the-magpies-a-gentle-introduction-to,"Croesus, Cerberus, and the magpies: a gentle introduction to Eliciting Latent Knowledge",['Alexandre Variengien'],2022-05-27T17:58:55Z,lesswrong,, 166343,https://www.lesswrong.com/posts/x2n7mBLryDXuLwGhx/technical-ai-safety-research-landscape-slides,Technical AI Safety Research Landscape [Slides],['Magdalena Wache'],2023-09-18T13:56:04Z,lesswrong,, 166374,https://www.lesswrong.com/posts/JGHQPybvjLAgimXae/quantum-versus-logical-bombs,Quantum versus logical bombs,['Stuart_Armstrong'],2013-11-17T15:14:23Z,lesswrong,, 166387,https://www.lesswrong.com/posts/kYa4dHP5MDnqmav2w/is-this-a-good-way-to-bet-on-short-timelines,Is this a good way to bet on short timelines?,['Daniel Kokotajlo'],2020-11-28T12:51:08Z,lesswrong,, 166401,https://www.lesswrong.com/posts/3pyLbH3BqevetQros/true-sources-of-disagreement,True Sources of Disagreement,['Eliezer Yudkowsky'],2008-12-08T15:51:58Z,lesswrong,, 166426,https://www.lesswrong.com/posts/8HPsRYKE2pYHtqRhw/brainstorming-additional-ai-risk-reduction-ideas,Brainstorming additional AI risk reduction ideas,['John_Maxwell'],2012-06-14T07:55:41Z,lesswrong,, 166443,https://www.lesswrong.com/posts/eNC5ALrHpbpgEfCwb/beyond-reinforcement-learning-predictive-processing-and,Beyond Reinforcement Learning: Predictive Processing and Checksums,['lsusr'],2023-02-15T07:32:56Z,lesswrong,, 166465,https://www.lesswrong.com/posts/LqMjAxEmkkrs7knhu/correcting-a-misconception-consciousness-does-not-need-90,"Correcting a misconception: consciousness does not need 90 billion neurons, at all",['bvbvbvbvbvbvbvbvbvbvbv'],2023-03-31T16:02:30Z,lesswrong,, 166475,https://www.lesswrong.com/posts/FKNtgZrGYwgsz3nHT/bankless-podcast-159-we-re-all-gonna-die-with-eliezer,Bankless Podcast: 159 - We’re All Gonna Die with Eliezer Yudkowsky,['bayesed'],2023-02-20T16:42:07Z,lesswrong,, 166487,https://www.lesswrong.com/posts/nhg2bXNqBSFivkjfa/the-epistemic-authority-of-deep-learning-pioneers,The Epistemic Authority of Deep Learning Pioneers,['Dylan Bowman'],2023-08-29T18:14:12Z,lesswrong,, 166498,https://www.lesswrong.com/posts/fEw4KmqjgAKDWqwYM/worldbuilding-exercise-the-highwayverse,Worldbuilding exercise: The Highwayverse.,['Yair Halberstadt'],2021-12-22T06:47:53Z,lesswrong,,
166521,https://www.lesswrong.com/posts/pckLdSgYWJ38NBFf8/gpt-4,GPT-4,['nz'],2023-03-14T17:02:02Z,lesswrong,, 166553,https://www.lesswrong.com/posts/qDSqJEunbYTfhaTsb/language-models-are-not-inherently-safe,Language models are not inherently safe,['Loppukilpailija'],2023-03-07T21:15:09Z,lesswrong,, 166568,https://www.lesswrong.com/posts/qfawLPCp2MZgK44Yc/linkpost-are-emergent-abilities-in-large-language-models,Linkpost: Are Emergent Abilities in Large Language Models just In-Context Learning?,['Erich_Grunewald'],2023-10-08T12:14:47Z,lesswrong,, 166585,https://www.lesswrong.com/posts/G6nnufmiTwTaXAbKW/the-alignment-problem,The Alignment Problem,['lsusr'],2022-07-11T03:03:03Z,lesswrong,, 166599,https://www.lesswrong.com/posts/bXYtDfMTNbjCXFQPh/drexler-on-ai-risk,Drexler on AI Risk,['PeterMcCluskey'],2019-02-01T05:11:01Z,lesswrong,, 166614,https://www.lesswrong.com/posts/qhurqnyFbxZY8WC4t/agi-level-reasoner-will-appear-sooner-than-an-agent-what-the,AGI-level reasoner will appear sooner than an agent; what the humanity will do with this reasoner is critical,['Roman Leventov'],2022-07-30T20:56:55Z,lesswrong,, 166631,https://www.lesswrong.com/posts/Xs7ag4gsiA6zspmsD/the-problem-of-the-criterion,The Problem of the Criterion,['Gordon Seidoh Worley'],2021-01-21T15:05:42Z,lesswrong,, 166651,https://www.lesswrong.com/posts/QZ2AR6bmc4DehRHCw/how-will-internet-forums-like-lw-be-able-to-defend-against,How will internet forums like LW be able to defend against GPT-style spam?,['ChristianKl'],2020-07-28T20:12:56Z,lesswrong,, 166665,https://www.lesswrong.com/posts/MFgj8hcTB9gjjL9rE/superintelligence-25-components-list-for-acquiring-values,Superintelligence 25: Components list for acquiring values,['KatjaGrace'],2015-03-03T02:01:11Z,lesswrong,, 166694,https://www.lesswrong.com/posts/qsDPHZwjmduSMCJLv/the-partial-fallacy-of-dumb-superintelligence,The (partial) fallacy of dumb superintelligence,['Seth Herd'],2023-10-18T21:25:17Z,lesswrong,, 166712,https://www.lesswrong.com/posts/8ELbjYgsypCcX5g86/openai-s-alignment-plan-is-not-s-m-a-r-t,OpenAI’s Alignment Plan is not S.M.A.R.T.,['Søren Elverlin'],2023-01-18T06:39:18Z,lesswrong,, 166734,https://www.lesswrong.com/posts/goC9qv4PWf2cjfnbm/did-chatgpt-just-gaslight-me,Did ChatGPT just gaslight me?,['ThomasW'],2022-12-01T05:41:47Z,lesswrong,, 166755,https://www.lesswrong.com/posts/p3QMGoKPdHtCWgPBD/new-brief-popular-level-introduction-to-ai-risks-and,"New, Brief Popular-Level Introduction to AI Risks and Superintelligence",['LyleN'],2015-01-23T15:43:30Z,lesswrong,, 166764,https://www.lesswrong.com/posts/vkjWGJrFWBnzHtxrw/superintelligence-7-decisive-strategic-advantage,Superintelligence 7: Decisive strategic advantage,['KatjaGrace'],2014-10-28T01:01:01Z,lesswrong,, 166788,https://www.lesswrong.com/posts/WEjBvskFwBkczqjZZ/evaluating-superhuman-models-with-consistency-checks-1,Evaluating Superhuman Models with Consistency Checks,"['Daniel Paleka', 'Lukas Fluri']",2023-08-01T07:51:07Z,lesswrong,, 166816,https://www.lesswrong.com/posts/HhbwCYmMAnrapwzfX/my-current-research-questions-for-membranes-boundaries,My current research questions for «membranes/boundaries»,['Chipmonk'],2023-05-30T15:48:35Z,lesswrong,, 166835,https://www.lesswrong.com/posts/EAp3AQJv8dzTqAdKW/responses-to-catastrophic-agi-risk-a-survey,Responses to Catastrophic AGI Risk: A Survey,['lukeprog'],2013-07-08T14:33:51Z,lesswrong,, 166853,https://www.lesswrong.com/posts/otArJmyzWgfCxNZMt/agi-deployment-as-an-act-of-aggression,AGI deployment as an act of aggression,['dr_s'],2023-04-05T06:39:45Z,lesswrong,, 
166879,https://www.lesswrong.com/posts/nkWZAEopxvwRTAP5D/catalyst-books,Catalyst books,['Catnee'],2023-09-17T17:05:15Z,lesswrong,, 166889,https://www.lesswrong.com/posts/WBJZoeJypcNRmsdHx/a-few-misconceptions-surrounding-roko-s-basilisk,A few misconceptions surrounding Roko's basilisk,['Rob Bensinger'],2015-10-05T21:23:09Z,lesswrong,, 166911,https://www.lesswrong.com/posts/7kBah8YQXfx6yfpuT/what-will-the-scaled-up-gato-look-like-updated-with,What will the scaled up GATO look like? (Updated with questions),['Amal'],2022-10-25T12:44:39Z,lesswrong,, 166920,https://www.lesswrong.com/posts/ShD7EHb4HmPgfveim/decision-theories-part-3-5-halt-melt-and-catch-fire,"Decision Theories, Part 3.5: Halt, Melt and Catch Fire",['orthonormal'],2012-08-26T22:40:20Z,lesswrong,, 166941,https://www.lesswrong.com/posts/WLT3iajuTmwkCquSm/rebooting-ai-governance-an-ai-driven-approach-to-ai,Rebooting AI Governance: An AI-Driven Approach to AI Governance,['Max Reddel'],2023-08-06T14:19:50Z,lesswrong,, 166968,https://www.lesswrong.com/posts/q3mZNmvqBtnG2nQre/universal-agents-and-utility-functions,Universal agents and utility functions,['Anja'],2012-11-14T04:05:39Z,lesswrong,, 166991,https://www.lesswrong.com/posts/fLRPeXihRaiRo5dyX/the-magnitude-of-his-own-folly,The Magnitude of His Own Folly,['Eliezer Yudkowsky'],2008-09-30T11:31:24Z,lesswrong,, 167004,https://www.lesswrong.com/posts/oBPPFrMJ2aBK6a6sD/simulated-elon-musk-lives-in-a-simulation,Simulated Elon Musk Lives in a Simulation,['lsusr'],2021-09-18T07:37:37Z,lesswrong,, 167022,https://www.lesswrong.com/posts/8vf3wKt8d4qP4CCQk/how-ai-could-workaround-goals-if-rated-by-people,How AI could workaround goals if rated by people,['ProgramCrafter'],2023-03-19T15:51:05Z,lesswrong,, 167044,https://www.lesswrong.com/posts/Eg5AEMhGdyyKRWmZW/is-gpt-3-already-sample-efficient,Is GPT-3 already sample-efficient?,['Daniel Kokotajlo'],2021-10-06T13:38:37Z,lesswrong,, 167054,https://www.lesswrong.com/posts/xfGKpLevfogGefDfx/introduction-9,Introduction,"['Robert Kralisch', 'Eris', 'teahorse', 'Sohaib Imran']",2023-06-30T20:45:10Z,lesswrong,, 167076,https://www.lesswrong.com/posts/sL8hCYecDwcrRhfCT/superintelligence-16-tool-ais,Superintelligence 16: Tool AIs,['KatjaGrace'],2014-12-30T02:00:10Z,lesswrong,, 167098,https://www.lesswrong.com/posts/EhErNrNJDKTozaKuy/study-on-what-makes-people-approve-or-condemn-mind-upload,Study on what makes people approve or condemn mind upload technology; references LW,['Kaj_Sotala'],2018-07-10T17:14:52Z,lesswrong,, 167117,https://www.lesswrong.com/posts/uu8FwG5XPQ6zpFaEN/gpt-3-gems,GPT-3 Gems,['TurnTrout'],2020-07-23T00:46:37Z,lesswrong,, 167138,https://www.lesswrong.com/posts/4FhiSuNv4QbtKDzL8/how-can-i-bet-on-short-timelines,How can I bet on short timelines?,['Daniel Kokotajlo'],2020-11-07T12:44:20Z,lesswrong,, 167153,https://www.lesswrong.com/posts/PBRWb2Em5SNeWYwwB/humans-are-not-automatically-strategic,Humans are not automatically strategic,['AnnaSalamon'],2010-09-08T07:02:52Z,lesswrong,, 167171,https://www.lesswrong.com/posts/KDmo23saeq5GegTbA/research-ideas-ai-interpretability-and-neurosciences-for-a-2,Research ideas (AI Interpretability & Neurosciences) for a 2-months project,['flux'],2023-01-08T15:36:13Z,lesswrong,, 167181,https://www.lesswrong.com/posts/RJEWuHZBr85RMYRp4/top-lesson-from-gpt-we-will-probably-destroy-humanity-for,"Top lesson from GPT: we will probably destroy humanity ""for the lulz"" as soon as we are able.",['shminux'],2023-04-16T20:27:20Z,lesswrong,, 
167197,https://www.lesswrong.com/posts/sWLLdG6DWJEy3CH7n/imo-challenge-bet-with-eliezer,IMO challenge bet with Eliezer,['paulfchristiano'],2022-02-26T04:50:06Z,lesswrong,, 167208,https://www.lesswrong.com/posts/veEumGEQAsDknw9PC/why-could-you-be-optimistic-that-the-singularity-is-near,Why could you be optimistic that the Singularity is Near?,['gwern'],2012-07-14T23:33:08Z,lesswrong,, 167217,https://www.lesswrong.com/posts/DaeHpWxvht43zaaje/empirical-evidence-against-the-longest-training-run,"Empirical Evidence Against ""The Longest Training Run""",['NickGabs'],2023-07-06T18:32:03Z,lesswrong,, 167236,https://www.lesswrong.com/posts/JhB9eqJDScjDNpWiS/objections-to-coherent-extrapolated-volition,Objections to Coherent Extrapolated Volition,['XiXiDu'],2011-11-22T10:32:13Z,lesswrong,, 167255,https://www.lesswrong.com/posts/uutXLm2DRcCtFBZ2D/steganography-and-the-cyclegan-alignment-failure-case-study,Steganography and the CycleGAN - alignment failure case study,['Jan Czechowski'],2022-06-11T09:41:29Z,lesswrong,, 167274,https://www.lesswrong.com/posts/yRvKcHmsKrth9qsm8/the-peril-of-the-great-leaks-written-with-chatgpt,The Peril of the Great Leaks (written with ChatGPT),['bvbvbvbvbvbvbvbvbvbvbv'],2023-03-31T18:14:29Z,lesswrong,, 167285,https://www.lesswrong.com/posts/75Cdatj4iEjsQMtT5/nyt-google-will-recalibrate-the-risk-of-releasing-ai-due-to,NYT: Google will “recalibrate” the risk of releasing AI due to competition with OpenAI,['Michael Huang'],2023-01-22T08:38:47Z,lesswrong,, 167298,https://www.lesswrong.com/posts/DYjnR8QFep5cPTKXn/is-your-job-replaceable-by-gpt-4-as-of-march-2023,Is your job replaceable by GPT-4? (as of March 2023),['Bezzi'],2023-03-23T22:16:54Z,lesswrong,, 167311,https://www.lesswrong.com/posts/ncb2ycEB3ymNqzs93/democratic-fine-tuning,Democratic Fine-Tuning,['Joe Edelman'],2023-08-29T18:13:17Z,lesswrong,, 167329,https://www.lesswrong.com/posts/meryibcAerAr5b4Hh/security-mindset-fire-alarms-and-trigger-signatures,Security Mindset - Fire Alarms and Trigger Signatures,['elspood'],2023-02-09T21:15:59Z,lesswrong,, 167347,https://www.lesswrong.com/posts/rsar32pysCXCTikdd/neuroevolution-social-intelligence-and-logic,"Neuroevolution, Social Intelligence, and Logic",['vinnik.dmitry07'],2023-05-31T17:54:25Z,lesswrong,, 167369,https://www.lesswrong.com/posts/8ms977XZ2uJ4LnwSR/decomposing-independent-generalizations-in-neural-networks,Decomposing independent generalizations in neural networks via Hessian analysis,"['Dmitry Vaintrob', 'Nina Rimsky']",2023-08-14T17:04:40Z,lesswrong,, 167386,https://www.lesswrong.com/posts/SkjTDA98qo83vE6nc/all-agi-safety-questions-welcome-especially-basic-ones-1,All AGI Safety questions welcome (especially basic ones) [~monthly thread],"['mwatkins', 'Robert Miles']",2023-01-26T21:01:58Z,lesswrong,, 167402,https://www.lesswrong.com/posts/CRsYy3xtbMrLjoXZT/evidence-for-the-orthogonality-thesis,Evidence for the orthogonality thesis,['Stuart_Armstrong'],2012-04-03T10:58:22Z,lesswrong,, 167412,https://www.lesswrong.com/posts/6bFBkk3XNiTwgE8R3/cev-tropes,CEV-tropes,['snarles'],2014-09-22T18:21:51Z,lesswrong,, 167423,https://www.lesswrong.com/posts/Qrt4CTKkm7Ad4nZtq/llms-and-hallucination-like-white-on-rice,"LLMs and hallucination, like white on rice?",['Bill Benzon'],2023-04-14T19:53:25Z,lesswrong,, 167437,https://www.lesswrong.com/posts/wfhnDsavsoxnvGRKD/linkpost-multimodal-neurons-in-pretrained-text-only,[Linkpost] Multimodal Neurons in Pretrained Text-Only Transformers,['Bogdan Ionut Cirstea'],2023-08-04T15:29:17Z,lesswrong,, 
167447,https://www.lesswrong.com/posts/eqWJXwHQQbkkY2qnD/why-not-just-boycott-llms,Why not just boycott LLMs?,['lmbp'],2023-03-15T17:55:55Z,lesswrong,, 167462,https://www.lesswrong.com/posts/biP5XBmqvjopvky7P/a-eta-quick-note-on-terminology-ai-alignment-ai-x-safety,A (EtA: quick) note on terminology: AI Alignment != AI x-safety,['David Scott Krueger (formerly: capybaralet)'],2023-02-08T22:33:53Z,lesswrong,, 167480,https://www.lesswrong.com/posts/MWnB22utwmPzt8zAG/gpt-7-the-tale-of-the-big-computer-an-experimental-story,GPT-7: The Tale of the Big Computer (An Experimental Story),['Justin Bullock'],2023-07-10T20:22:27Z,lesswrong,, 167525,https://www.lesswrong.com/posts/wT9Ha4uNchdDoWTGg/parfit-s-escape-filk,Parfit's Escape (Filk),['Gordon Seidoh Worley'],2019-03-29T02:31:43Z,lesswrong,, 167535,https://www.lesswrong.com/posts/vh4Cq6gwBAcPSj8u2/bootstrapping-language-models,Bootstrapping Language Models,['harsimony'],2022-05-27T19:43:46Z,lesswrong,, 167553,https://www.lesswrong.com/posts/75uJN3qqzyxWoknN7/interpretability-externalities-case-study-hungry-hungry,Interpretability Externalities Case Study - Hungry Hungry Hippos,['Magdalena Wache'],2023-09-20T14:42:44Z,lesswrong,, 167569,https://www.lesswrong.com/posts/YNFtQx3nCcRPCPRZX/self-similarity-experiment,Self-Similarity Experiment,['Dawn Drescher'],2020-08-15T13:19:57Z,lesswrong,, 167584,https://www.lesswrong.com/posts/J4s5AJ3Xqc8DwAEzQ/projects-i-would-like-to-see-possibly-at-ai-safety-camp,Projects I would like to see (possibly at AI Safety Camp),['Linda Linsefors'],2023-09-27T21:27:30Z,lesswrong,, 167614,https://www.lesswrong.com/posts/Suk3qEWyxnTG47TDZ/defending-functional-decision-theory,Defending Functional Decision Theory,['Heighn'],2022-02-08T14:58:56Z,lesswrong,, 167625,https://www.lesswrong.com/posts/DaPi3ZrEkydycwHzb/impact-of-let-s-think-step-by-step-is-all-you-need,"Impact of "" 'Let's think step by step' is all you need""?",['yrimon'],2022-07-24T20:59:51Z,lesswrong,, 167638,https://www.lesswrong.com/posts/BjThrfnArSDXgECmD/religion-as-goodhart,Religion as Goodhart,['shminux'],2019-07-08T00:38:37Z,lesswrong,, 167648,https://www.lesswrong.com/posts/PaWpTPkbnkGRtDrDs/who-aligns-the-alignment-researchers,Who Aligns the Alignment Researchers?,['Ben Smith'],2023-03-05T23:22:27Z,lesswrong,, 167667,https://www.lesswrong.com/posts/spKYZgoh3RmhxMqyu/the-first-world-takeover,The First World Takeover,['Eliezer Yudkowsky'],2008-11-19T15:00:00Z,lesswrong,, 167678,https://www.lesswrong.com/posts/wqDRRx9RqwKLzWt7R/nonperson-predicates,Nonperson Predicates,['Eliezer Yudkowsky'],2008-12-27T01:47:32Z,lesswrong,, 167688,https://www.lesswrong.com/posts/QzpKq92nXqp8NHM34/neural-tangent-kernel-distillation,Neural Tangent Kernel Distillation,"['Thomas Larsen', 'Jeremy Gillen']",2022-10-05T18:11:55Z,lesswrong,, 167705,https://www.lesswrong.com/posts/gcK47FGBAzgc4oDSr/corrigibility-self-deletion-and-identical-strawberries,"Corrigibility, Self-Deletion, and Identical Strawberries",['Robert_AIZI'],2023-03-28T16:55:00Z,lesswrong,, 167730,https://www.lesswrong.com/posts/XFt9ipeezjEqJCuY4/aligned-behavior-is-not-evidence-of-alignment-past-a-certain,Aligned Behavior is not Evidence of Alignment Past a Certain Level of Intelligence,['Ronny Fernandez'],2022-12-05T15:19:49Z,lesswrong,, 167745,https://www.lesswrong.com/posts/bu2gwJfsiru4ZmNcL/why-universal-comparability-of-utility,Why Universal Comparability of Utility?,['AK'],2018-05-13T00:10:21Z,lesswrong,, 
167754,https://www.lesswrong.com/posts/bf2vmH7B8RsqxDLh4/discursive-competence-in-chatgpt-part-1-talking-with-dragons,"Discursive Competence in ChatGPT, Part 1: Talking with Dragons",['Bill Benzon'],2023-01-05T21:01:48Z,lesswrong,, 167770,https://www.lesswrong.com/posts/jQXu9tefKfRroegrb/timelines-explanation-post-part-1-of,Timelines explanation post part 1 of ?,['Nathan Helm-Burger'],2022-08-12T16:13:38Z,lesswrong,, 167786,https://www.lesswrong.com/posts/QbXgTXtHGEZyB8Wjn/repeated-play-of-imperfect-newcomb-s-paradox-in-infra,Repeated Play of Imperfect Newcomb's Paradox in Infra-Bayesian Physicalism,['Sven Nilsen'],2023-04-03T10:06:37Z,lesswrong,, 167797,https://www.lesswrong.com/posts/2KdKWnjGmaADMMmA2/ai-requirements-for-pernicious-policies,AI: requirements for pernicious policies,['Stuart_Armstrong'],2015-07-17T14:18:54Z,lesswrong,, 167822,https://www.lesswrong.com/posts/3cgevkQRAjSPdynJw/ml-safety-at-neurips-and-paradigmatic-ai-safety-mlaisu-w49,ML Safety at NeurIPS & Paradigmatic AI Safety? MLAISU W49,"['Esben Kran', 'Steinthal']",2022-12-09T10:38:35Z,lesswrong,, 167854,https://www.lesswrong.com/posts/6t5vRHrwPDi5C9hYr/motivating-a-semantics-of-logical-counterfactuals,Motivating a Semantics of Logical Counterfactuals,['Sam_A_Barnett'],2017-09-22T01:10:28Z,lesswrong,, 167866,https://www.lesswrong.com/posts/j3JiCimzuJKT8nm8a/trivial-gpt-3-5-limitation-workaround,Trivial GPT-3.5 limitation workaround,['Dave Lindbergh'],2022-12-12T08:42:49Z,lesswrong,, 167877,https://www.lesswrong.com/posts/AZ4Hx7br9v5m5KbNm/optimized-for-something-other-than-winning-or-how-cricket,Optimized for Something other than Winning or: How Cricket Resists Moloch and Goodhart's Law,['A.H.'],2023-07-05T12:33:07Z,lesswrong,, 167894,https://www.lesswrong.com/posts/nNbnzegz7ppewZgCG/ai-researchers-announce-neuroai-agenda,AI researchers announce NeuroAI agenda,['Cameron Berg'],2022-10-24T00:14:47Z,lesswrong,, 167912,https://www.lesswrong.com/posts/jCzZBgDkYYNqteH2j/is-there-anything-that-can-stop-agi-development-in-the-near,Is there anything that can stop AGI development in the near term?,['Wulky Wilkinsen'],2021-04-22T20:37:41Z,lesswrong,, 167929,https://www.lesswrong.com/posts/aZ2kHQtpTHu3FeguQ/confusions-in-my-model-of-ai-risk,Confusions in My Model of AI Risk,['peterbarnett'],2022-07-07T01:05:02Z,lesswrong,, 167950,https://www.lesswrong.com/posts/yhRTjBs6oiNcjRgcx/the-case-for-doing-something-else-if-alignment-is-doomed,The case for Doing Something Else (if Alignment is doomed),['Rafael Harth'],2022-04-05T17:52:21Z,lesswrong,, 167974,https://www.lesswrong.com/posts/kyvCNgx9oAwJCuevo/deep-q-networks-explained,Deep Q-Networks Explained,['Jay Bailey'],2022-09-13T12:01:08Z,lesswrong,, 167993,https://www.lesswrong.com/posts/3GyQXTy2WhYcaBgS2/problematic-problems-for-tdt,Problematic Problems for TDT,['drnickbone'],2012-05-29T15:41:38Z,lesswrong,, 168010,https://www.lesswrong.com/posts/fh5zACqJBbQ8R4tk7/next-level-seinfeld,Next Level Seinfeld,['Zvi'],2022-12-19T13:30:01Z,lesswrong,, 168019,https://www.lesswrong.com/posts/izNiFpyWgqddTz34t/perspective-based-reasoning-could-absolve-cdt-1,Perspective Based Reasoning Could Absolve CDT,['dadadarren'],2023-10-08T11:22:49Z,lesswrong,, 168030,https://www.lesswrong.com/posts/9kvpdK9BLSMxGnxjk/thoughts-on-hardware-limits-to-prevent-agi,Thoughts on Hardware limits to Prevent AGI?,['jrincayc'],2023-10-15T23:45:38Z,lesswrong,, 
168058,https://www.lesswrong.com/posts/TwH5jfkuvTatvAKEF/how-to-escape-from-your-sandbox-and-from-your-hardware-host,How to escape from your sandbox and from your hardware host,['PhilGoetz'],2015-07-31T17:26:00Z,lesswrong,, 168075,https://www.lesswrong.com/posts/iWv6Pu2fWPKqevzFE/using-brain-computer-interfaces-to-get-more-data-for-ai,Using Brain-Computer Interfaces to get more data for AI alignment,['Robbo'],2021-11-07T00:00:28Z,lesswrong,, 168091,https://www.lesswrong.com/posts/DKmXnD8fA6NjnRWeg/exterminating-humans-might-be-on-the-to-do-list-of-a,Exterminating humans might be on the to-do list of a Friendly AI,['RomanS'],2021-12-07T14:15:07Z,lesswrong,, 168100,https://www.lesswrong.com/posts/3Lyki5DCHnJgeNXww/what-i-d-change-about-different-philosophy-fields,What I'd change about different philosophy fields,['Rob Bensinger'],2021-03-08T18:25:30Z,lesswrong,, 168119,https://www.lesswrong.com/posts/jA3HfgcdiT2DLhvxp/co-found-an-incubator-for-independent-ai-safety-researchers,Co-found an incubator for independent AI Safety researchers (rolling applications),['Alexandra Bos'],2023-06-02T18:02:34Z,lesswrong,, 168135,https://www.lesswrong.com/posts/PikpeRucdsXeEvpy9/singular-learning-theory-and-bridging-from-ml-to-brain,Singular learning theory and bridging from ML to brain emulations,"['kave', 'Garrett Baker']",2023-11-01T21:31:55Z,lesswrong,, 168158,https://www.lesswrong.com/posts/TeSTeAwrnGtf9jwfR/power-seeking-ai-and-existential-risk,Power-Seeking AI and Existential Risk,['Antonio Franca'],2022-10-11T22:50:27Z,lesswrong,, 168190,https://www.lesswrong.com/posts/9nEBWxjAHSu3ncr6v/responsible-scaling-policies-are-risk-management-done-wrong,Responsible Scaling Policies Are Risk Management Done Wrong,['simeon_c'],2023-10-25T23:46:34Z,lesswrong,, 168214,https://www.lesswrong.com/posts/KerENNLyiqQ5ew7Kz/the-first-ai-safety-camp-and-onwards,The first AI Safety Camp & onwards,['Remmelt'],2018-06-07T20:13:43Z,lesswrong,, 168236,https://www.lesswrong.com/posts/xg3hXCYQPJkwHyik2/the-best-textbooks-on-every-subject,The Best Textbooks on Every Subject,['lukeprog'],2011-01-16T08:30:58Z,lesswrong,, 168254,https://www.lesswrong.com/posts/jcA4rath4HvFtmdm6/on-presenting-the-case-for-ai-risk,On presenting the case for AI risk,['Aryeh Englander'],2022-03-09T01:41:15Z,lesswrong,, 168268,https://www.lesswrong.com/posts/smBkyR2GkzMtrKuwK/the-first-filter,The First Filter,"['adamShimi', 'Gabriel Alfour']",2022-11-26T19:37:05Z,lesswrong,, 168281,https://www.lesswrong.com/posts/pwc4TDQPtRvC3mCsR/call-for-cruxes-by-rhyme-a-longtermist-history-consultancy,"Call for Cruxes by Rhyme, a Longtermist History Consultancy",['Lara'],2023-03-01T18:39:04Z,lesswrong,, 168294,https://www.lesswrong.com/posts/xwT99Ygcnz2hFiqjg/corrigibility-thoughts-iii-manipulating-versus-deceiving,Corrigibility thoughts III: manipulating versus deceiving,['Stuart_Armstrong'],2017-01-18T15:57:37Z,lesswrong,, 168315,https://www.lesswrong.com/posts/3yAvb8TtLJQbAbtiC/gradual-takeoff-fast-failure,"Gradual takeoff, fast failure",['Max H'],2023-03-16T22:02:04Z,lesswrong,, 168335,https://www.lesswrong.com/posts/z6H46mAuKwvyykCKp/how-dangerous-is-human-level-ai,How dangerous is human-level AI?,['Alex_Altair'],2022-06-10T17:38:28Z,lesswrong,, 168354,https://www.lesswrong.com/posts/MBemd8k9uHFDEKzad/an-impossibility-proof-relevant-to-the-shutdown-problem-and,An Impossibility Proof Relevant to the Shutdown Problem and Corrigibility,['Audere'],2023-05-02T06:52:25Z,lesswrong,, 168371,https://www.lesswrong.com/posts/mvnEbSScBHpwxoGLT/what-are-the-mostly-likely-ways-agi-will-emerge,What are the mostly likely ways AGI will emerge?,['Craig Quiter'],2020-07-14T00:58:50Z,lesswrong,, 
168380,https://www.lesswrong.com/posts/d8BW4pBwT9sBrJ44m/is-there-a-name-for-the-theory-that-there-will-be-fast,"Is there a name for the theory that ""There will be fast takeoff in real-world capabilities because almost everything is AGI-complete""?",['David Scott Krueger (formerly: capybaralet)'],2021-09-02T23:00:43Z,lesswrong,, 168389,https://www.lesswrong.com/posts/f6b8ESmTYZPHgFWWg/help-request-what-is-the-kolmogorov-complexity-of-computable,Help request: What is the Kolmogorov complexity of computable approximations to AIXI?,['AnnaSalamon'],2010-12-05T10:23:56Z,lesswrong,, 168398,https://www.lesswrong.com/posts/pdfKJGyhfAxag2Kes/can-submarines-swim-1,Can submarines swim?,['jasoncrawford'],2023-02-22T18:48:19Z,lesswrong,, 168422,https://www.lesswrong.com/posts/K9ZaZXDnL3SEmYZqB/ends-don-t-justify-means-among-humans,Ends Don't Justify Means (Among Humans),['Eliezer Yudkowsky'],2008-10-14T21:00:00Z,lesswrong,, 168437,https://www.lesswrong.com/posts/4xWz3wW2JNfup6By6/what-s-wrong-with-simplicity-of-value,What's wrong with simplicity of value?,['Wei Dai'],2011-07-27T03:09:18Z,lesswrong,, 168450,https://www.lesswrong.com/posts/NvnEqSRaTLuRXYfmX/what-is-protein-folding-a-brief-explanation,What is “protein folding”? A brief explanation,['jasoncrawford'],2020-12-01T02:46:09Z,lesswrong,, 168463,https://www.lesswrong.com/posts/pHJtLHcWvfGbsW7LR/roadmap-for-a-collaborative-prototype-of-an-open-agency,Roadmap for a collaborative prototype of an Open Agency Architecture,['Deger Turan'],2023-05-10T17:41:20Z,lesswrong,, 168484,https://www.lesswrong.com/posts/Jj65JqnNPBqWt4Tfa/ai-doom-is-not-only-disjunctive,AI Doom Is Not (Only) Disjunctive,['NickGabs'],2023-03-30T01:42:56Z,lesswrong,, 168501,https://www.lesswrong.com/posts/dTWevKRiMM4ptcjjg/would-it-be-good-or-bad-for-the-us-military-to-get-involved,Would it be good or bad for the US military to get involved in AI risk?,['Grant Demaree'],2023-01-01T19:02:31Z,lesswrong,, 168535,https://www.lesswrong.com/posts/BbM47qBPzdSRruY4z/instead-of-technical-research-more-people-should-focus-on,"Instead of technical research, more people should focus on buying time","['Akash', 'Olivia Jimenez', 'Thomas Larsen']",2022-11-05T20:43:45Z,lesswrong,, 168562,https://www.lesswrong.com/posts/QNCcbW2jLsmw9xwhG/a-sufficiently-paranoid-non-friendly-agi-might-self-modify,A sufficiently paranoid non-Friendly AGI might self-modify itself to become Friendly,['RomanS'],2021-09-22T06:29:16Z,lesswrong,, 168575,https://www.lesswrong.com/posts/f8tJsh6rDpfZBai6D/three-of-my-beliefs-about-upcoming-agi,Three of my beliefs about upcoming AGI,['Robert_AIZI'],2023-03-27T20:27:57Z,lesswrong,, 168592,https://www.lesswrong.com/posts/B5CNPqYL7XcHzgzHc/a-weak-agi-may-attempt-an-unlikely-to-succeed-takeover,"A ""weak"" AGI may attempt an unlikely-to-succeed takeover",['RobertM'],2023-06-28T20:31:46Z,lesswrong,, 168610,https://www.lesswrong.com/posts/mJqabqwAb3QzZcu9T/a-simple-presentation-of-ai-risk-arguments,A simple presentation of AI risk arguments,['Seth Herd'],2023-04-26T02:19:19Z,lesswrong,, 168635,https://www.lesswrong.com/posts/dSaScvukmCRqey8ug/convince-me-that-humanity-is-as-doomed-by-agi-as-yudkowsky,"Convince me that humanity is as doomed by AGI as Yudkowsky et al., seems to believe",['Yitz'],2022-04-10T21:02:59Z,lesswrong,, 168657,https://www.lesswrong.com/posts/G6sjLkS65eT7wE225/seq-rerun-evicting-brain-emulations,"[SEQ RERUN] ""Evicting"" brain emulations",['MinibearRex'],2012-11-18T06:15:49Z,lesswrong,, 
168667,https://www.lesswrong.com/posts/FqSQ7xsDAGfXzTND6/stop-posting-prompt-injections-on-twitter-and-calling-it,"Stop posting prompt injections on Twitter and calling it ""misalignment""",['lc'],2023-02-19T02:21:44Z,lesswrong,, 168677,https://www.lesswrong.com/posts/Y7uXyGiQJN7ML7yFv/optimization-loss-set-at-variance-in-rl,"Optimization, loss set at variance in RL",['Clairstan'],2023-07-22T18:25:32Z,lesswrong,, 168688,https://www.lesswrong.com/posts/JLNjNvsjyz6D9P2c7/choice-anthropics-uncertainty-and-potential-implications-for,Choice := Anthropics uncertainty? And potential implications for agency,['Antoine de Scorraille'],2022-04-21T16:38:12Z,lesswrong,, 168698,https://www.lesswrong.com/posts/JCCktdeHwTosrSgyu/translating-between-latent-spaces,Translating between Latent Spaces,"['JamesH', 'Jeremy Gillen', 'NickyP']",2022-07-30T03:25:07Z,lesswrong,, 168717,https://www.lesswrong.com/posts/zoiiYreQZSs4mppfY/why-no-major-llms-with-memory,Why no major LLMs with memory?,['Kaj_Sotala'],2023-03-28T16:34:37Z,lesswrong,, 168726,https://www.lesswrong.com/posts/snwpyAfzoFKdfnEDj/question-1-predicted-architecture-of-agi-learning-algorithm,Question 1: Predicted architecture of AGI learning algorithm(s),['Cameron Berg'],2022-02-10T17:22:24Z,lesswrong,, 168740,https://www.lesswrong.com/posts/p8suSXSwEoKTfGbp9/new-survey-46-of-americans-are-concerned-about-extinction,New survey: 46% of Americans are concerned about extinction from AI; 69% support a six-month pause in AI development,['Akash'],2023-04-05T01:26:52Z,lesswrong,, 168749,https://www.lesswrong.com/posts/okmpRuKjhG9dvDh3Z/q-and-a-with-experts-on-risks-from-ai-1,Q&A with experts on risks from AI #1,['XiXiDu'],2012-01-08T11:46:15Z,lesswrong,, 168765,https://www.lesswrong.com/posts/cRrM8LPf9waAd4uiL/an-introduction-to-decision-theory,An introduction to decision theory,['anonymous'],2010-08-13T09:09:49Z,lesswrong,, 168779,https://www.lesswrong.com/posts/eAsN5vNjvmxzACuuX/swimming-upstream-a-case-study-in-instrumental-rationality,Swimming Upstream: A Case Study in Instrumental Rationality,['TurnTrout'],2018-06-03T03:16:22Z,lesswrong,, 168797,https://www.lesswrong.com/posts/wGstGErtRegAzBjz9/institutions-cannot-restrain-dark-triad-ai-exploitation,Institutions Cannot Restrain Dark-Triad AI Exploitation,"['Remmelt', 'flandry19']",2022-12-27T10:34:35Z,lesswrong,, 168818,https://www.lesswrong.com/posts/fyW9EP5NdZrC3k3jz/implications-of-simulators,Implications of simulators,['ThomasW'],2023-01-07T00:37:46Z,lesswrong,, 168844,https://www.lesswrong.com/posts/3BPuuNDavJ2drKvGK/scientism-vs-people,Scientism vs. people,['Roman Leventov'],2023-04-18T17:28:29Z,lesswrong,, 
168864,https://www.lesswrong.com/posts/EREcbR5jiLvdPcSB3/was-homer-a-stochastic-parrot-meaning-in-literary-texts-and,Was Homer a stochastic parrot? Meaning in literary texts and LLMs,['Bill Benzon'],2023-04-13T16:44:43Z,lesswrong,, 168876,https://www.lesswrong.com/posts/G7Lx62oe9S6rAJaLh/using-blinders-to-help-you-see-things-for-what-they-are,Using blinders to help you see things for what they are,['Adam Zerner'],2021-11-11T07:07:42Z,lesswrong,, 168885,https://www.lesswrong.com/posts/FzBZijmitZuasJwoq/sorcerer-s-apprentice-from-fantasia-as-an-analogy-for,"""Sorcerer's Apprentice"" from Fantasia as an analogy for alignment",['awg'],2023-03-29T18:21:56Z,lesswrong,, 168895,https://www.lesswrong.com/posts/T5ZyNq3fzN59aQG5y/the-limits-of-corrigibility,The limits of corrigibility,['Stuart_Armstrong'],2018-04-10T10:49:12Z,lesswrong,, 168912,https://www.lesswrong.com/posts/DJB82jKwgJE5NsWgT/some-cruxes-on-impactful-alternatives-to-ai-policy-work,Some cruxes on impactful alternatives to AI policy work,['Richard_Ngo'],2018-10-10T13:35:27Z,lesswrong,, 168933,https://www.lesswrong.com/posts/QJEmnYKJt4kMeDhfy/the-linguistic-blind-spot-of-value-aligned-agency-natural,"The Linguistic Blind Spot of Value-Aligned Agency, Natural and Artificial",['Roman Leventov'],2023-02-14T06:57:58Z,lesswrong,, 168948,https://www.lesswrong.com/posts/HeT2pjiN4zaFY976W/fast-minds-and-slow-computers,Fast Minds and Slow Computers,['jacob_cannell'],2011-02-05T10:05:34Z,lesswrong,, 168960,https://www.lesswrong.com/posts/KDzykLCYMfWiRiWnd/do-x-because-decision-theory-do-x-because-bayes-theorem,"""Do X because decision theory"" ~= ""Do X because bayes theorem""",['lc'],2023-04-14T20:57:10Z,lesswrong,, 168971,https://www.lesswrong.com/posts/dB66hhNpE5LJQAqsM/google-announces-pathways-new-generation-multitask-ai,Google announces Pathways: new generation multitask AI Architecture,['Ozyrus'],2021-10-29T11:55:22Z,lesswrong,, 168987,https://www.lesswrong.com/posts/b2xTk6BLJqJHd3ExE/orthogonal-a-new-agent-foundations-alignment-organization,Orthogonal: A new agent foundations alignment organization,['Tamsin Leake'],2023-04-19T20:17:14Z,lesswrong,, 169008,https://www.lesswrong.com/posts/LWchvJxSGPKQwueq2/project-proposal-testing-the-ibp-definition-of-agent,Project proposal: Testing the IBP definition of agent,"['Jeremy Gillen', 'Thomas Larsen', 'JamesH']",2022-08-09T01:09:38Z,lesswrong,, 169029,https://www.lesswrong.com/posts/vAAneYowLkaHnihCg/textbooks-are-all-you-need,"""textbooks are all you need""",['bhauth'],2023-06-21T17:06:46Z,lesswrong,, 169051,https://www.lesswrong.com/posts/qAjGfYh2pQvvnsCBk/a-new-model-for-compute-center-verification-1,A New Model for Compute Center Verification,['Damin Curtis'],2023-10-10T19:22:44Z,lesswrong,, 169068,https://www.lesswrong.com/posts/2qTxffyqeR4gbpEua/new-openai-paper-language-models-can-explain-neurons-in,New OpenAI Paper - Language models can explain neurons in language models,['ViktorThink'],2023-05-10T07:46:39Z,lesswrong,, 169080,https://www.lesswrong.com/posts/tqyg3DpoiE4DKyi4y/apply-for-mats-winter-2023-24,Apply for MATS Winter 2023-24!,"['Rocket', 'Ryan Kidd', 'LauraVaughan']",2023-10-21T02:27:34Z,lesswrong,, 169093,https://www.lesswrong.com/posts/biskschef2zSNgKkz/multiple-ais-in-boxes-evaluating-each-other-s-alignment,"Multiple AIs in boxes, evaluating each other's alignment",['Moebius314'],2022-05-29T08:36:28Z,lesswrong,, 169110,https://www.lesswrong.com/posts/mLubC65xXekk5tkug/see-edit-no-you-need-to-write-clearer,"[SEE EDIT] No, *You* Need to Write Clearer",['NicholasKross'],2023-04-29T05:04:02Z,lesswrong,, 
169127,https://www.lesswrong.com/posts/hX58sJRAzJF3HGMMo/human-level-control-through-deep-reinforcement-learning,"""Human-level control through deep reinforcement learning"" - computer learns 49 different games",['skeptical_lurker'],2015-02-26T06:21:33Z,lesswrong,, 169147,https://www.lesswrong.com/posts/pPrELFnR6Hp3vJWwQ/despair-about-ai-progressing-too-slowly,Despair about AI progressing too slowly,['Yarrow Bouchard'],2023-11-04T07:52:24Z,lesswrong,, 169164,https://www.lesswrong.com/posts/28kcq8D4aCWeDKbBp/against-instrumental-convergence,Against Instrumental Convergence,['zulupineapple'],2018-01-27T13:17:19Z,lesswrong,, 169176,https://www.lesswrong.com/posts/YtvZxRpZjcFNwJecS/the-importance-of-goodhart-s-law,The Importance of Goodhart's Law,['blogospheroid'],2010-03-13T08:19:30Z,lesswrong,, 169198,https://www.lesswrong.com/posts/7MsKHa55HxGKFCN6z/on-ai-and-compute,On AI and Compute,['johncrox'],2019-04-03T19:00:10Z,lesswrong,, 169218,https://www.lesswrong.com/posts/G8SsspgAYEHHiDGNP/reactions-to-the-executive-order,Reactions to the Executive Order,['Zvi'],2023-11-01T20:40:02Z,lesswrong,, 169241,https://www.lesswrong.com/posts/epQFRuu78LqiboGkd/alignment-s-phlogiston,Alignment's phlogiston,['Eleni Angelou'],2022-08-18T22:27:31Z,lesswrong,, 169251,https://www.lesswrong.com/posts/PxELfZnvbv8jcKewp/expectations-for-gemini-hopefully-not-a-big-deal,Expectations for Gemini: hopefully not a big deal,['Maxime Riché'],2023-10-02T15:38:33Z,lesswrong,, 169266,https://www.lesswrong.com/posts/brjRtwbiZYqEFiKRQ/chatgtp-writing-news-stories-for-the-guardian,"ChatGTP ""Writing "" News Stories for The Guardian?",['jmh'],2023-04-07T12:16:38Z,lesswrong,, 169277,https://www.lesswrong.com/posts/La942YvexnvNkXsq5/idea-build-alignment-dataset-for-very-capable-models,Idea: build alignment dataset for very capable models,['Quintin Pope'],2022-02-12T19:30:35Z,lesswrong,, 169295,https://www.lesswrong.com/posts/CH3eo4QW9ZA5cWppn/has-the-symbol-grounding-problem-just-gone-away,Has the Symbol Grounding Problem just gone away?,['RussellThor'],2023-05-04T07:46:09Z,lesswrong,, 169306,https://www.lesswrong.com/posts/yX9pMZik7r38da7Fc/formulas-of-arithmetic-that-behave-like-decision-agents,Formulas of arithmetic that behave like decision agents,['Nisan'],2012-02-03T02:58:43Z,lesswrong,, 169327,https://www.lesswrong.com/posts/LKHJ2Askf92RBbhBp/metauncertainty,Metauncertainty,['jimmy'],2009-04-10T23:41:53Z,lesswrong,, 169337,https://www.lesswrong.com/posts/YEeN2yLZiMPz9Yker/why-uncontrollable-ai-looks-more-likely-than-ever,Why Uncontrollable AI Looks More Likely Than Ever,"['otto.barten', 'Roman_Yampolskiy']",2023-03-08T15:41:33Z,lesswrong,, 169361,https://www.lesswrong.com/posts/36fxiKdEqswkedHyG/the-hacker-learns-to-trust,The Hacker Learns to Trust,['Ben Pace'],2019-06-22T00:27:55Z,lesswrong,, 169380,https://www.lesswrong.com/posts/cxDvhqDKn5W3eubvA/the-dumbest-kid-in-the-world-joke,The dumbest kid in the world (joke),['CronoDAS'],2021-06-06T02:57:05Z,lesswrong,, 169392,https://www.lesswrong.com/posts/A6pj6XbKu8J3WwWAq/permission-for-mind-uploading-via-online-files,Permission for mind uploading via online files,['Kaj_Sotala'],2010-09-29T08:23:13Z,lesswrong,, 169410,https://www.lesswrong.com/posts/GbNB5a42i2hr6KMSK/the-stack-overflow-of-factored-cognition,The Stack Overflow of Factored Cognition,['rmoehn'],2019-04-21T12:19:39Z,lesswrong,, 169419,https://www.lesswrong.com/posts/mfn32QHwKb55afHq4/i-was-wrong-simulator-theory-is-real,"I was Wrong, Simulator Theory is Real",['Robert_AIZI'],2023-04-26T17:45:03Z,lesswrong,, 
169435,https://www.lesswrong.com/posts/z4MDDwwnWKnv2ZzdK/the-agi-race-between-the-us-and-china-doesn-t-exist,The AGI Race Between the US and China Doesn’t Exist.,['Eva_B'],2023-06-03T00:22:31Z,lesswrong,, 169451,https://www.lesswrong.com/posts/NwNru5H4TuiS65xqz/yet-another-safe-oracle-ai-proposal,Yet another safe oracle AI proposal,['jacobt'],2012-02-26T23:45:33Z,lesswrong,, 169478,https://www.lesswrong.com/posts/NyFuuKQ8uCEDtd2du/the-genie-knows-but-doesn-t-care,"The genie knows, but doesn't care",['Rob Bensinger'],2013-09-06T06:42:39Z,lesswrong,, 169502,https://www.lesswrong.com/posts/f8joCrfQemEc3aCk8/the-local-unit-of-intelligence-is-flops,The (local) unit of intelligence is FLOPs,['boazbarak'],2023-06-05T18:23:06Z,lesswrong,, 169525,https://www.lesswrong.com/posts/uKp6tBFStnsvrot5t/what-dall-e-2-can-and-cannot-do,What DALL-E 2 can and cannot do,['Swimmer963 (Miranda Dixon-Luinenburg)'],2022-05-01T23:51:22Z,lesswrong,, 169555,https://www.lesswrong.com/posts/kADkXCAq6aBBxSyqE/superintelligence-26-science-and-technology-strategy,Superintelligence 26: Science and technology strategy,['KatjaGrace'],2015-03-10T01:43:48Z,lesswrong,, 169572,https://www.lesswrong.com/posts/uRyKkyYstxZkCNcoP/careless-talk-on-us-china-ai-competition-and-criticism-of,Careless talk on US-China AI competition? (and criticism of CAIS coverage),['Oliver Sourbut'],2023-09-20T12:46:17Z,lesswrong,, 169587,https://www.lesswrong.com/posts/ZNXDRGshgoq3cmxhB/the-shard-theory-alignment-scheme,The Shard Theory Alignment Scheme,['David Udell'],2022-08-25T04:52:50Z,lesswrong,, 169602,https://www.lesswrong.com/posts/wMQw3P8KmbCvNbN4j/alien-axiology,Alien Axiology,['snerx'],2023-04-20T00:27:09Z,lesswrong,, 169620,https://www.lesswrong.com/posts/eQq4GMJTvTGNcNsnk/alibaba-group-releases-qwen-14b-parameter-llm,"Alibaba Group releases Qwen, 14B parameter LLM",['nikola'],2023-09-28T00:12:04Z,lesswrong,, 
169630,https://www.lesswrong.com/posts/dop3rLwFhW5gtpEgz/i-attempted-the-ai-box-experiment-again-and-won-twice,I attempted the AI Box Experiment again! (And won - Twice!),['Tuxedage'],2013-09-05T04:49:49Z,lesswrong,, 169647,https://www.lesswrong.com/posts/woZymgKQqB5gEaAAz/some-intuitions-around-short-ai-timelines-based-on-recent,Some Intuitions Around Short AI Timelines Based on Recent Progress,['Aaron_Scher'],2023-04-11T04:23:22Z,lesswrong,, 169673,https://www.lesswrong.com/posts/aj3LycvDj3kvnW8G6/critiquing-scasper-s-definition-of-subjunctive-dependence,Critiquing Scasper's Definition of Subjunctive Dependence,['Heighn'],2022-01-10T16:22:56Z,lesswrong,, 169688,https://www.lesswrong.com/posts/yrekdsZfLsgfaFjFp/why-do-people-think-humans-are-stupid,Why Do People Think Humans Are Stupid?,['DragonGod'],2022-09-14T13:55:31Z,lesswrong,, 169702,https://www.lesswrong.com/posts/ewitKJEwvttzk6zMi/what-does-lesswrong-ea-think-of-human-intelligence,What Does LessWrong/EA Think of Human Intelligence Augmentation as of mid-2023?,['marc/er'],2023-07-08T11:42:39Z,lesswrong,, 169728,https://www.lesswrong.com/posts/8kbQaxveLyvyvxwcr/introduccion-al-riesgo-existencial-de-inteligencia,Introducción al Riesgo Existencial de Inteligencia Artificial,['david.friva'],2023-07-15T20:37:24Z,lesswrong,, 169745,https://www.lesswrong.com/posts/WJuASYDnhZ8hs5CnD/exploring-non-anthropocentric-aspects-of-ai-existential,Exploring non-anthropocentric aspects of AI existential safety,['mishka'],2023-04-03T18:07:28Z,lesswrong,, 169774,https://www.lesswrong.com/posts/RiYhceiQy4w8JQAsn/an-explanation-of-decision-theories,An explanation of decision theories,['metachirality'],2023-06-01T03:42:06Z,lesswrong,, 169792,https://www.lesswrong.com/posts/oEC92fNXPj6wxz8dd/how-to-think-about-and-deal-with-openai,How to think about and deal with OpenAI,['Rafael Harth'],2021-10-09T13:10:56Z,lesswrong,, 169803,https://www.lesswrong.com/posts/TC7GhGaKFqTtQH9Aq/formulating-the-ai-doom-argument-for-analytic-philosophers,Formulating the AI Doom Argument for Analytic Philosophers,['JonathanErhardt'],2023-05-12T07:54:03Z,lesswrong,, 169820,https://www.lesswrong.com/posts/epC5CrGCv4JGdfsjm/urging-an-international-ai-treaty-an-open-letter,Urging an International AI Treaty: An Open Letter,['Loppukilpailija'],2023-10-31T11:26:26Z,lesswrong,, 169843,https://www.lesswrong.com/posts/i6npKoxQ2QALPCbcP/if-it-looks-like-utility-maximizer-and-quacks-like-utility,If it looks like utility maximizer and quacks like utility maximizer...,['taw'],2009-06-11T18:34:36Z,lesswrong,, 169853,https://www.lesswrong.com/posts/XvorpDSu3dwjdyT4f/gpt-4-multiplication-competition,GPT-4 Multiplication Competition,['dandelion4'],2023-03-16T03:09:10Z,lesswrong,, 169862,https://www.lesswrong.com/posts/d74pb97TAqNKwJkc5/bcis-and-the-ecosystem-of-modular-minds,BCIs and the ecosystem of modular minds,['beren'],2023-07-21T15:58:27Z,lesswrong,, 169888,https://www.lesswrong.com/posts/jbTxWBYCLHm7exnY7/nobody-knows-how-to-reliably-test-for-ai-safety,Nobody knows how to reliably test for AI safety,['marcusarvan'],2023-03-27T19:48:24Z,lesswrong,, 169911,https://www.lesswrong.com/posts/8Ggqf2t8WBtPw33K9/minerva,Minerva,['Algon'],2022-07-01T20:06:56Z,lesswrong,, 169928,https://www.lesswrong.com/posts/JYvw2jv4R5HphXEd7/boeing-737-max-mcas-as-an-agent-corrigibility-failure,Boeing 737 MAX MCAS as an agent corrigibility failure,['shminux'],2019-03-16T01:46:44Z,lesswrong,, 169944,https://www.lesswrong.com/posts/bRtP7Mub3hXAoo4vQ/an-open-letter-to-seri-mats-program-organisers,An open letter to SERI MATS program organisers,['Roman Leventov'],2023-04-20T16:34:10Z,lesswrong,, 
169959,https://www.lesswrong.com/posts/edBz8knyMcgT2Siy9/linkpost-large-language-models-converge-toward-human-like,[Linkpost] Large language models converge toward human-like concept organization,['Bogdan Ionut Cirstea'],2023-09-02T06:00:46Z,lesswrong,, 169971,https://www.lesswrong.com/posts/Hz5dFKTnyC7HqibSQ/the-aliens-have-landed,The Aliens have Landed!,['TimFreeman'],2011-05-19T17:09:17Z,lesswrong,, 169986,https://www.lesswrong.com/posts/N7qE5o3jmoKoe4dHQ/optimality-is-the-tiger-and-annoying-the-user-is-its-teeth,"Optimality is the tiger, and annoying the user is its teeth",['Christopher King'],2023-01-28T20:20:34Z,lesswrong,, 170003,https://www.lesswrong.com/posts/DEtJmjuHifewPpASk/devil-s-advocate-adverse-selection-against-conscientiousness,Devil's Advocate: Adverse Selection Against Conscientiousness,['lionhearted (Sebastian Marshall)'],2023-05-28T17:53:38Z,lesswrong,, 170012,https://www.lesswrong.com/posts/JvQWbrbPjuvw4eqxv/a-mechanistic-interpretability-analysis-of-a-gridworld-agent,A Mechanistic Interpretability Analysis of a GridWorld Agent-Simulator (Part 1 of N),['Joseph Bloom'],2023-05-16T22:59:21Z,lesswrong,, 170037,https://www.lesswrong.com/posts/KbRxdBCcJqwtbiPzm/whisper-s-wild-implications-1,Whisper's Wild Implications,['Ollie J'],2023-01-03T12:17:29Z,lesswrong,, 170057,https://www.lesswrong.com/posts/7KfdM3wEeqJiwYcMN/ai-alternative-futures-scenario-mapping-artificial,AI Alternative Futures: Scenario Mapping Artificial Intelligence Risk - Request for Participation (*Closed*),['Kakili'],2022-04-27T22:07:58Z,lesswrong,, 170081,https://www.lesswrong.com/posts/Pqz9NEDxHovq5xrBN/reflections-on-my-own-missing-mood,Reflections on My Own Missing Mood,['Lone Pine'],2022-04-21T16:19:03Z,lesswrong,, 170099,https://www.lesswrong.com/posts/oZwxY88NCCHffJuxM/a-problem-about-bargaining-and-logical-uncertainty,A Problem About Bargaining and Logical Uncertainty,['Wei Dai'],2012-03-21T21:03:17Z,lesswrong,, 170112,https://www.lesswrong.com/posts/Zcz8otnZuKyExs5g5/chatgpt-tantalizing-afterthoughts-in-search-of-story,ChatGPT: Tantalizing afterthoughts in search of story trajectories [induction heads],['Bill Benzon'],2023-02-03T10:35:12Z,lesswrong,, 170127,https://www.lesswrong.com/posts/o7eWu5Gzd82dw9dJS/the-achilles-heel-hypothesis-for-ai,The Achilles Heel Hypothesis for AI,['scasper'],2020-10-13T14:35:10Z,lesswrong,, 170142,https://www.lesswrong.com/posts/Qvec2Qfm5H4WfoS9t/inference-speed-is-not-unbounded,Inference Speed is Not Unbounded,['OneManyNone'],2023-05-08T16:24:13Z,lesswrong,, 170164,https://www.lesswrong.com/posts/xrxh3usuoYMckkKom/preserving-and-continuing-alignment-research-through-a,Preserving and continuing alignment research through a severe global catastrophe,['A_donor'],2022-03-06T18:43:11Z,lesswrong,, 170191,https://www.lesswrong.com/posts/ASxdfSKTbcEy6MCr3/the-extrapolation-problem,The Extrapolation Problem,['lsusr'],2021-10-10T05:11:03Z,lesswrong,, 170204,https://www.lesswrong.com/posts/PgTJiorywnAZTb2uL/on-generality-1,On Generality,['Oren Montano'],2022-09-26T04:06:27Z,lesswrong,, 170219,https://www.lesswrong.com/posts/kh2ohCms4CuC7RqMY/fixed-points-and-free-will,Fixed points and free will,['Ege Erdil'],2022-04-19T17:18:01Z,lesswrong,, 170232,https://www.lesswrong.com/posts/umsGb5qkfzD3WarTR/h-jepa-might-be-technically-alignable-in-a-modified-form,H-JEPA might be technically alignable in a modified form,['Roman Leventov'],2023-05-08T23:04:21Z,lesswrong,, 
170257,https://www.lesswrong.com/posts/wLjSr66XAfAQeFp6Y/upcoming-changes-in-large-language-models,Upcoming Changes in Large Language Models,['Andrew Keenan Richardson'],2023-04-08T03:41:53Z,lesswrong,, 170280,https://www.lesswrong.com/posts/sjcQBQvassWqGEd5F/is-there-a-culture-overhang,Is there a culture overhang?,['Aleksi Liimatainen'],2022-10-03T07:26:48Z,lesswrong,, 170289,https://www.lesswrong.com/posts/QyGeCxEjv8KMzrejc/shard-therapy-preambe,"Shard Therapy, preambe",['Iris of Rosebloom'],2023-10-11T05:03:06Z,lesswrong,, 170299,https://www.lesswrong.com/posts/3KmuJavii9njiDtGZ/aligned-ai-as-a-wrapper-around-an-llm,Aligned AI as a wrapper around an LLM,['cousin_it'],2023-03-25T15:58:41Z,lesswrong,, 170316,https://www.lesswrong.com/posts/mbQSrox38WyT8c6tQ/deep-learning-deeper-flaws,Deep learning - deeper flaws?,['Richard_Ngo'],2018-09-24T18:40:01Z,lesswrong,, 170347,https://www.lesswrong.com/posts/aqhMLqaoHb7uob7fr/if-i-were-a-well-intentioned-ai-iv-mesa-optimising,If I were a well-intentioned AI... IV: Mesa-optimising,['Stuart_Armstrong'],2020-03-02T12:16:16Z,lesswrong,, 170364,https://www.lesswrong.com/posts/hGnqS8DKQnRe43Xdg/bing-finding-ways-to-bypass-microsoft-s-filters-without,Bing finding ways to bypass Microsoft's filters without being asked. Is it reproducible?,['Christopher King'],2023-02-20T15:11:29Z,lesswrong,, 170381,https://www.lesswrong.com/posts/SWBRYeqTYDKJbbsfr/ted-talk-by-eliezer-yudkowsky-unleashing-the-power-of,TED talk by Eliezer Yudkowsky: Unleashing the Power of Artificial Intelligence,['bayesed'],2023-05-07T05:45:17Z,lesswrong,, 170390,https://www.lesswrong.com/posts/BtZSNfAcBGQftAwxq/game-theory-without-argmax-part-2,Game Theory without Argmax [Part 2],['Cleo Nardo'],2023-11-11T16:02:42Z,lesswrong,, 170413,https://www.lesswrong.com/posts/zhBPhkca7oviZgnBq/perils-of-optimizing-in-social-contexts,Perils of optimizing in social contexts,['owencb'],2022-06-16T17:40:47Z,lesswrong,, 170427,https://www.lesswrong.com/posts/SbzptgFYr272tMbgz/the-low-hanging-fruit-prior-and-sloped-valleys-in-the-loss,The Low-Hanging Fruit Prior and sloped valleys in the loss landscape,"['Dmitry Vaintrob', 'Nina Rimsky']",2023-08-23T21:12:59Z,lesswrong,, 170445,https://www.lesswrong.com/posts/NLMo5FZWFFq652MNe/sympathetic-minds,Sympathetic Minds,['Eliezer Yudkowsky'],2009-01-19T09:31:03Z,lesswrong,, 170464,https://www.lesswrong.com/posts/oBDTMnEzptBidvmw7/probabilistic-loeb-theorem,Probabilistic Löb theorem,['Stuart_Armstrong'],2013-04-26T18:45:50Z,lesswrong,, 170479,https://www.lesswrong.com/posts/29vqqmGNxNRGzffEj/high-challenge,High Challenge,['Eliezer Yudkowsky'],2008-12-19T00:51:08Z,lesswrong,, 170490,https://www.lesswrong.com/posts/7H6QbvpJgj75Ex3FJ/preprint-for-commenting-digital-immortality-theory-and,[Preprint for commenting] Digital Immortality: Theory and Protocol for Indirect Mind Uploading,['avturchin'],2018-03-27T11:49:31Z,lesswrong,, 170504,https://www.lesswrong.com/posts/R2prcN5na9hJW6ZmC/why-i-think-the-current-trajectory-of-ai-research-has-low-p,Why I Think the Current Trajectory of AI Research has Low P(doom) - LLMs,['GaPa'],2023-04-01T20:35:18Z,lesswrong,, 170527,https://www.lesswrong.com/posts/2yxg5RNJ77yCFffMg/advice-for-new-alignment-people-info-max,Advice for new alignment people: Info Max,['Jonas Hallgren'],2023-05-30T15:42:20Z,lesswrong,, 170548,https://www.lesswrong.com/posts/ebRZPDBg5qff9oTs5/toward-an-overview-analysis-of-intelligence-explosion,Toward an overview analysis of intelligence explosion,['lukeprog'],2011-11-13T22:23:06Z,lesswrong,, 
170557,https://www.lesswrong.com/posts/h98SAc9oL5HYqwATi/high-level-discourse-structure-in-chatgpt-part-2-quasi,High level discourse structure in ChatGPT: Part 2 [Quasi-symbolic?],['Bill Benzon'],2022-12-10T22:26:37Z,lesswrong,, 170568,https://www.lesswrong.com/posts/7LEPbDM3mXZcgxe4u/next-steps-after-agisf-at-umich,Next steps after AGISF at UMich,['JakubK'],2023-01-25T20:57:19Z,lesswrong,, 170603,https://www.lesswrong.com/posts/X5zmEvFQunxiEcxHn/quick-nate-eliezer-comments-on-discontinuity,Quick Nate/Eliezer comments on discontinuity,['Rob Bensinger'],2018-03-01T22:03:27Z,lesswrong,, 170615,https://www.lesswrong.com/posts/FmaeKTQgMpXfPDkfe/examining-evolution-as-an-upper-bound-for-agi-timelines,Examining Evolution as an Upper Bound for AGI Timelines,['meanderingmoose'],2022-04-24T19:08:16Z,lesswrong,, 170638,https://www.lesswrong.com/posts/gypixgtJQRDhZssPB/a-moral-backlash-against-ai-will-probably-slow-down-agi,A moral backlash against AI will probably slow down AGI development,['geoffreymiller'],2023-06-07T20:39:43Z,lesswrong,, 170655,https://www.lesswrong.com/posts/rAqqupBni6sQe2uWv/some-alignment-ideas,Some alignment ideas,['SelonNerias'],2023-08-10T17:51:09Z,lesswrong,, 170682,https://www.lesswrong.com/posts/PGfJPnDzy9sDE6zkj/would-myopic-general-public-good-producers-significantly,Would (myopic) general public good producers significantly accelerate the development of AGI?,['mako yass'],2022-03-02T23:47:09Z,lesswrong,, 170698,https://www.lesswrong.com/posts/L9Z2Pt6KjJLEnSzB7/sublimity-vs-youtube,Sublimity vs. Youtube,['Alicorn'],2011-03-18T05:33:41Z,lesswrong,, 170708,https://www.lesswrong.com/posts/6ZKKmiF9cnwNxiYa2/contra-nora-belrose-on-orthogonality-thesis-being-trivial,Contra Nora Belrose on Orthogonality Thesis Being Trivial,['tailcalled'],2023-10-07T11:47:02Z,lesswrong,, 170717,https://www.lesswrong.com/posts/z8DRKBKvM9JXrqbWH/10-50-90-chance-of-gpt-n-transformative-ai,10/50/90% chance of GPT-N Transformative AI?,['human_generated_text'],2020-08-09T00:10:26Z,lesswrong,, 170727,https://www.lesswrong.com/posts/DgzdLzDGsqoRXhCK7/transformative-agi-by-2043-is-less-than-1-likely,Transformative AGI by 2043 is <1% likely,['Ted Sanders'],2023-06-06T17:36:48Z,lesswrong,, 170744,https://www.lesswrong.com/posts/q9F7w6ux26S6JQo3v/lotuses-and-loot-boxes,Lotuses and Loot Boxes,['Davidmanheim'],2018-05-17T00:21:13Z,lesswrong,, 
170761,https://www.lesswrong.com/posts/qY2kuGFmFbiJLTBbC/are-ais-like-animals-perspectives-and-strategies-from,Are AIs like Animals? Perspectives and Strategies from Biology,['Jackson Emanuel'],2023-05-16T23:39:24Z,lesswrong,, 170790,https://www.lesswrong.com/posts/ezYSENJvqg25zwKfR/are-speed-superintelligences-feasible-for-modern-ml,Are Speed Superintelligences Feasible for Modern ML Techniques?,['DragonGod'],2022-09-14T12:59:10Z,lesswrong,, 170799,https://www.lesswrong.com/posts/czysbEDEsr9ijYeMd/note-taking-without-hidden-messages,Note-Taking without Hidden Messages,['Hoagy'],2022-04-30T11:15:00Z,lesswrong,, 170816,https://www.lesswrong.com/posts/BfKQGYJBwdHfik4Kd/fai-research-constraints-and-agi-side-effects,FAI Research Constraints and AGI Side Effects,['JustinShovelain'],2015-06-03T19:25:15Z,lesswrong,, 170835,https://www.lesswrong.com/posts/RCeAPWsPsKwgefFfL/hiring-inform-and-shape-a-new-project-on-ai-safety-at-1,HIRING: Inform and shape a new project on AI safety at Partnership on AI,['Madhulika Srikumar'],2021-11-24T08:27:06Z,lesswrong,, 170863,https://www.lesswrong.com/posts/u8Ek5o5gyCErB8JHD/could-patent-trolling-delay-ai-timelines,Could Patent-Trolling delay AI timelines?,['Pablo Repetto'],2022-06-10T02:53:39Z,lesswrong,, 170872,https://www.lesswrong.com/posts/hQes2GNPcck6WmrrP/what-is-your-timelines-for-adi-artificial-disempowering,What is your timelines for ADI (artificial disempowering intelligence)?,['Christopher King'],2023-04-17T17:01:36Z,lesswrong,, 170888,https://www.lesswrong.com/posts/FqdT8vpwiDKFYQHFR/knowledge-database-1-the-structure-and-the-method-of-1,Knowledge Database 1: The structure and the method of building,['iwis'],2023-09-18T17:42:41Z,lesswrong,, 170915,https://www.lesswrong.com/posts/4xDB54KvLrgaq9ZHg/hedonic-asymmetries,Hedonic asymmetries,['paulfchristiano'],2020-01-26T02:10:01Z,lesswrong,, 170934,https://www.lesswrong.com/posts/GthhzjHDjxrBH5jah/2012-robin-hanson-comment-on-intelligence-explosion-evidence,2012 Robin Hanson comment on “Intelligence Explosion: Evidence and Import”,['Rob Bensinger'],2021-04-02T16:26:52Z,lesswrong,, 170946,https://www.lesswrong.com/posts/FWNpg7jYoECxsSegf/quantum-ai-box,Quantum AI Box,['Gurkenglas'],2018-06-08T16:20:25Z,lesswrong,, 170955,https://www.lesswrong.com/posts/2qCxguXuZERZNKcNi/satisficers-want-to-become-maximisers,Satisficers want to become maximisers,['Stuart_Armstrong'],2011-10-21T16:27:22Z,lesswrong,, 170965,https://www.lesswrong.com/posts/uqco28EF6ED3pnuBr/neural-program-synthesis-is-a-dangerous-technology,Neural program synthesis is a dangerous technology,['syllogism'],2018-01-12T16:19:00Z,lesswrong,, 170980,https://www.lesswrong.com/posts/8xKhCbNrdP4gaA8c3/sections-3-and-4-credibility-peaceful-bargaining-mechanisms,"Sections 3 & 4: Credibility, Peaceful Bargaining Mechanisms",['JesseClifton'],2019-12-17T21:46:49Z,lesswrong,, 171007,https://www.lesswrong.com/posts/sHGxvJrBag7nhTQvb/invulnerable-incomplete-preferences-a-formal-statement-1,Invulnerable Incomplete Preferences: A Formal Statement,['Sami Petersen'],2023-08-30T21:59:36Z,lesswrong,, 171023,https://www.lesswrong.com/posts/Q4hLMDrFd8fbteeZ8/measuring-optimization-power,Measuring Optimization Power,['Eliezer Yudkowsky'],2008-10-27T21:44:57Z,lesswrong,, 171040,https://www.lesswrong.com/posts/ecuAGrZGtavgsPs4w/the-social-alignment-problem,The Social Alignment Problem,['irving'],2023-04-28T14:16:18Z,lesswrong,, 171063,https://www.lesswrong.com/posts/Eu6CvP7c7ivcGM3PJ/goodhart-s-law-in-reinforcement-learning,Goodhart's Law in Reinforcement Learning,"['jacek', 'Joar Skalse', 'OliverHayman', 'charlie_griffin', 'Xingjian Bai']",2023-10-16T00:54:12Z,lesswrong,, 
171078,https://www.lesswrong.com/posts/6tHNM2s6SWzFHv3Wo/mechanistically-interpreting-time-in-gpt-2-small,Mechanistically interpreting time in GPT-2 small,"['rgould', 'Elizabeth Ho', 'Arthur Conmy']",2023-04-16T17:57:53Z,lesswrong,, 171093,https://www.lesswrong.com/posts/ndFHYBZCCusq3Whb9/i-m-no-longer-sure-that-i-buy-dutch-book-arguments-and-this,"I’m no longer sure that I buy dutch book arguments and this makes me skeptical of the ""utility function"" abstraction",['Eli Tyre'],2021-06-22T03:53:34Z,lesswrong,, 171104,https://www.lesswrong.com/posts/XH9ZN8bLidtcqMxY2/quantum-russian-roulette,Quantum Russian Roulette,['Christian_Szegedy'],2009-09-18T08:49:44Z,lesswrong,, 171114,https://www.lesswrong.com/posts/PgKADaJE4ERjtMtP9/a-model-of-udt-with-a-concrete-prior-over-logical-statements,A model of UDT with a concrete prior over logical statements,['Benya'],2012-08-28T21:45:18Z,lesswrong,, 171126,https://www.lesswrong.com/posts/QqmPkJzDwPytejt4w/how-long-till-inverse-alphafold,How long till Inverse AlphaFold?,['Daniel Kokotajlo'],2020-12-17T19:56:14Z,lesswrong,, 171140,https://www.lesswrong.com/posts/HD8kLyYcSuYRi4vzP/extrapolating-from-five-words,Extrapolating from Five Words,['Gordon Seidoh Worley'],2023-11-15T23:21:31Z,lesswrong,, 171154,https://www.lesswrong.com/posts/HAMsX36kCbbeju6M7/is-alphazero-any-good-without-the-tree-search,Is AlphaZero any good without the tree search?,['Steven Byrnes'],2019-06-30T16:41:06Z,lesswrong,, 171163,https://www.lesswrong.com/posts/76D2sKhnKNMyJ8YbX/even-briefer-summary-of-ai-plans-com,Even briefer summary of ai-plans.com,['Iknownothing'],2023-07-16T23:25:44Z,lesswrong,, 171183,https://www.lesswrong.com/posts/RfPHdbnXNPkCCFhJg/can-gpt-3-write-contra-dances,Can GPT-3 Write Contra Dances?,['jefftk'],2022-12-04T03:00:03Z,lesswrong,, 171201,https://www.lesswrong.com/posts/b3ziFhcMdab2Hon37/transformer-trained-on-it-s-own-content,Transformer trained on it's own content?,['Micromegas'],2023-04-01T15:08:50Z,lesswrong,, 171214,https://www.lesswrong.com/posts/DGnBGgWCLa9QSBDas/how-do-we-align-humans-and-what-does-it-mean-for-the-new,How do we align humans and what does it mean for the new Conjecture's strategy,['Igor Ivanov'],2023-03-28T17:54:24Z,lesswrong,, 171236,https://www.lesswrong.com/posts/L23FgmpjsTebqcSZb/how-roodman-s-gwp-model-translates-to-tai-timelines,How Roodman's GWP model translates to TAI timelines,['Daniel Kokotajlo'],2020-11-16T14:05:46Z,lesswrong,, 171249,https://www.lesswrong.com/posts/3hJCcdirKqPJeu6aW/responding-to-beyond-hyperanthropomorphism,Responding to 'Beyond Hyperanthropomorphism',['ukc10014'],2022-09-14T20:37:53Z,lesswrong,, 171274,https://www.lesswrong.com/posts/HyFWMXJCNkpjvc9vm/decision-making-under-model-ambiguity-moral-uncertainty-and,"Decision making under model ambiguity, moral uncertainty, and other agents with free will?",['Jobst Heitzig'],2022-11-13T12:50:15Z,lesswrong,, 171287,https://www.lesswrong.com/posts/8v5kc4dKdeTvvEkc8/splitting-debate-up-into-two-subsystems,Splitting Debate up into Two Subsystems,['Nandi'],2020-07-03T20:11:12Z,lesswrong,, 171308,https://www.lesswrong.com/posts/ZxHfuCyfAiHAy9Mds/desiderata-for-an-ai,Desiderata for an AI,['Nathan Helm-Burger'],2023-07-19T16:18:08Z,lesswrong,, 171337,https://www.lesswrong.com/posts/TBLv9T7rAzmawehnq/situational-awareness-in-large-language-models,Situational awareness in Large Language Models,['Simon Möller'],2023-03-03T18:59:32Z,lesswrong,, 
171354,https://www.lesswrong.com/posts/yKgai84JhFmCkWQ8R/alignment-via-prosocial-brain-algorithms,Alignment via prosocial brain algorithms,['Cameron Berg'],2022-09-12T13:48:06Z,lesswrong,, 171374,https://www.lesswrong.com/posts/wAekjDysTiTivfKBe/will-machines-ever-rule-the-world-mlaisu-w50,Will Machines Ever Rule the World? MLAISU W50,['Esben Kran'],2022-12-16T11:03:35Z,lesswrong,, 171408,https://www.lesswrong.com/posts/FDrgcfY8zs5e2eJDd/charbel-raphael-and-lucius-discuss-interpretability,Charbel-Raphaël and Lucius discuss Interpretability,"['Mateusz Bagiński', 'Charbel-Raphaël', 'Lucius Bushnaq']",2023-10-30T05:50:35Z,lesswrong,, 171427,https://www.lesswrong.com/posts/CYtzXadXFtBSBYm3J/a-narrative-explanation-of-the-qaci-alignment-plan,a narrative explanation of the QACI alignment plan,['Tamsin Leake'],2023-02-15T03:28:35Z,lesswrong,, 171442,https://www.lesswrong.com/posts/ZtMsyMP5F7zzP8Gvc/reader-generated-essays,Reader-generated Essays,['Henrik Karlsson'],2022-01-03T08:56:21Z,lesswrong,, 171469,https://www.lesswrong.com/posts/TRhYSG44CF7cMcsL4/a-new-definition-of-optimizer-1,"A new definition of ""optimizer""",['Chantiel'],2021-08-09T13:42:40Z,lesswrong,, 171478,https://www.lesswrong.com/posts/ySSEz5CmSEo6MbokQ/reframing-misaligned-agi-s-well-intentioned-non-neurotypical,Reframing misaligned AGI's: well-intentioned non-neurotypical assistants,['zhukeepa'],2018-04-01T01:22:37Z,lesswrong,, 171495,https://www.lesswrong.com/posts/RKfg86eKQuqLnjGxx/occam-s-razor-and-the-universal-prior,Occam's Razor and the Universal Prior,['Peter Chatain'],2021-10-03T03:23:15Z,lesswrong,, 171515,https://www.lesswrong.com/posts/fbjNLjNd4zRbY9Wg2/null-boxing-newcomb-s-problem-2,Null-boxing Newcomb’s Problem,['Yitz'],2020-07-13T16:32:54Z,lesswrong,, 171532,https://www.lesswrong.com/posts/xKpiqsWMKRfci7cv4/investigating-emergent-goal-like-behavior-in-large-language,Investigating Emergent Goal-Like Behavior in Large Language Models using Experimental Economics,['phelps-sg'],2023-05-05T11:15:13Z,lesswrong,, 171547,https://www.lesswrong.com/posts/fmj35iqfJgrgsi59F/why-you-can-t-treat-decidability-and-complexity-as-a,Why you can't treat decidability and complexity as a constant (Post #1),['Noosphere89'],2023-07-26T17:54:33Z,lesswrong,, 171565,https://www.lesswrong.com/posts/XZfJvxZqfbLfN6pKh/introductory-textbook-to-vision-models-interpretability,Introductory Textbook to Vision Models Interpretability,"['jeanne_', 'Charbel-Raphaël']",2023-07-28T17:32:12Z,lesswrong,, 171588,https://www.lesswrong.com/posts/Nhwc8GGqm26z7iG88/the-ai-explosion-might-never-happen,The AI Explosion Might Never Happen,['snewman'],2023-09-19T23:20:26Z,lesswrong,, 171604,https://www.lesswrong.com/posts/rgWLPuQAxwoikpRu5/ai-alignment-prize-round-2-due-march-31-2018,"AI Alignment Prize: Round 2 due March 31, 2018",['Zvi'],2018-03-12T12:10:01Z,lesswrong,, 171614,https://www.lesswrong.com/posts/7gkXuHEm6CqEGT2mg/ai-safety-seems-hard-to-measure,AI Safety Seems Hard to Measure,['HoldenKarnofsky'],2022-12-08T19:50:07Z,lesswrong,, 171646,https://www.lesswrong.com/posts/3nMpdmt8LrzxQnkGp/ai-timelines-via-cumulative-optimization-power-less-long,"AI Timelines via Cumulative Optimization Power: Less Long, More Short",['jacob_cannell'],2022-10-06T00:21:02Z,lesswrong,, 171667,https://www.lesswrong.com/posts/kEhSRsdhK6Dn9in7k/counterfactual-reprogramming-decision-theory,Counterfactual Reprogramming Decision Theory,['lukeprog'],2012-09-10T01:35:23Z,lesswrong,, 171676,https://www.lesswrong.com/posts/nHanKgEsFFJxAZhBp/bayeswatch-7-wildfire,Bayeswatch 7: Wildfire,['lsusr'],2021-09-08T05:35:24Z,lesswrong,, 
171699,https://www.lesswrong.com/posts/2LisesnhDvRkqEMya/what-is-causality-to-an-evidential-decision-theorist,What is causality to an evidential decision theorist?,['paulfchristiano'],2022-04-17T16:00:02Z,lesswrong,, 171717,https://www.lesswrong.com/posts/iPESsoBuXdzvaA675/brief-summary-of-ai-plans-com,Brief summary of ai-plans.com,['Iknownothing'],2023-06-28T00:33:36Z,lesswrong,, 171738,https://www.lesswrong.com/posts/wqeStKQ3PGzZaeoje/all-agi-safety-questions-welcome-especially-basic-ones-april-1,All AGI Safety questions welcome (especially basic ones) [April 2023],['steven0461'],2023-04-08T04:21:36Z,lesswrong,, 171755,https://www.lesswrong.com/posts/BCcZPhymxJdpP3v5b/how-does-ai-risk-affect-the-simulation-hypothesis,How does AI Risk Affect the Simulation Hypothesis?,['amelia'],2023-04-20T03:16:52Z,lesswrong,, 171764,https://www.lesswrong.com/posts/7nJt3hkQg9uSEta6M/demystifying-born-s-rule,Demystifying Born's rule,['Christopher King'],2023-06-14T03:16:21Z,lesswrong,, 171780,https://www.lesswrong.com/posts/xHxTxHfMeJS5y2L3i/degamification,Degamification,['Nate Showell'],2023-02-19T05:35:59Z,lesswrong,, 171792,https://www.lesswrong.com/posts/YBpbcMnasmwAQoh7b/human-decision-processes-are-not-well-factored,Human decision processes are not well factored,"['remember', 'Gabriel Alfour']",2023-02-17T13:11:11Z,lesswrong,, 171801,https://www.lesswrong.com/posts/QwrCB6dSSkFCyGbSG/value-learning-towards-resolving-confusion,Value Learning – Towards Resolving Confusion,['PashaKamyshev'],2023-04-24T06:43:29Z,lesswrong,, 171837,https://www.lesswrong.com/posts/HNqTzxjvaczguH2uJ/what-s-the-least-impressive-thing-gpt-4-won-t-be-able-to-do,What's the Least Impressive Thing GPT-4 Won't be Able to Do,['Algon'],2022-08-20T19:48:15Z,lesswrong,, 171846,https://www.lesswrong.com/posts/jEjhTGTuA2hcqQGEv/constraining-narrow-ai-in-a-corporate-setting,Constraining narrow AI in a corporate setting,['MaximumLiberty'],2022-04-15T22:36:47Z,lesswrong,, 171866,https://www.lesswrong.com/posts/FaK8irvBhdunE33tF/dall-e-3,Dall-E 3,['p.b.'],2023-10-02T20:33:18Z,lesswrong,, 171875,https://www.lesswrong.com/posts/zsqvW3voHjadEhZzx/utility-functions-and-probabilities-are-entangled,Utility functions and probabilities are entangled,['Thomas Kwa'],2022-07-26T05:36:26Z,lesswrong,, 171885,https://www.lesswrong.com/posts/eutmuwTpHCb4xYZfo/some-thoughts-on-virtue-ethics-for-ais,Some Thoughts on Virtue Ethics for AIs,['peligrietzer'],2023-05-02T05:46:41Z,lesswrong,, 171900,https://www.lesswrong.com/posts/zfbnihTGBe6CEapoi/case-for-foundation-models-beyond-english,Case for Foundation Models beyond English,['Varshul Gupta'],2023-07-21T13:59:21Z,lesswrong,, 171918,https://www.lesswrong.com/posts/cLC2HcQbFZ5pFAgqC/eric-schmidt-on-recursive-self-improvement,Eric Schmidt on recursive self-improvement,['nikola'],2023-11-05T19:05:15Z,lesswrong,, 171933,https://www.lesswrong.com/posts/dq7gSiH8ZSrsvfmhk/what-exactly-does-slow-down-look-like,What exactly does 'Slow Down' look like?,['Steve M'],2023-06-03T18:11:34Z,lesswrong,, 171942,https://www.lesswrong.com/posts/TYJHfYyseMvXjFuoD/relational-speaking,Relational Speaking,['jefftk'],2023-06-21T14:40:01Z,lesswrong,, 171959,https://www.lesswrong.com/posts/4NB5dqbjnW5imqfwq/a-better-analogy-and-example-for-teaching-ai-takeover-the-ml,A better analogy and example for teaching AI takeover: the ML Inferno,['Christopher King'],2023-03-14T19:14:45Z,lesswrong,, 
171975,https://www.lesswrong.com/posts/tL3QsPfrR83JKLSXy/why-i-don-t-believe-in-doom,Why I don't believe in doom,['mukashi'],2022-06-07T23:49:18Z,lesswrong,, 171992,https://www.lesswrong.com/posts/zzt448rSfwdydinbZ/the-dark-miracle-of-optics,The Dark Miracle of Optics,['Suspended Reason'],2020-06-24T03:09:30Z,lesswrong,, 172010,https://www.lesswrong.com/posts/p62bkNAciLsv6WFnR/how-do-we-align-an-agi-without-getting-socially-engineered,How Do We Align an AGI Without Getting Socially Engineered? (Hint: Box It),"['Peter S. Park', 'NickyP', 'Stephen Fowler']",2022-08-10T18:14:09Z,lesswrong,, 172034,https://www.lesswrong.com/posts/K3m8K8JEweLZmGgv8/open-ended-ethics-of-phenomena-a-desiderata-with-universal,Open-ended ethics of phenomena (a desiderata with universal morality),['Ryo'],2023-11-08T20:10:20Z,lesswrong,, 172071,https://www.lesswrong.com/posts/hGWYDxz9Pf8haQs49/link-crosspost-us-ntia-ai-accountability-policy-request-for,[Link/crosspost] [US] NTIA: AI Accountability Policy Request for Comment,['Kyle J. Lucchese'],2023-04-16T06:58:00Z,lesswrong,, 172089,https://www.lesswrong.com/posts/H2BPqnvv7YyjiEHam/big-list-of-ai-safety-videos,Big list of AI safety videos,['JakubK'],2023-01-09T06:12:35Z,lesswrong,, 172099,https://www.lesswrong.com/posts/mwdTQxGNBB2XDzhgP/asot-reflectivity-in-narrow-ai,[ASoT] Reflectivity in Narrow AI,['Ulisse Mini'],2022-11-21T00:51:39Z,lesswrong,, 172115,https://www.lesswrong.com/posts/3ccE9WvXLacGkFEoi/emotions-reward-functions,Emotions = Reward Functions,['jpyykko'],2022-01-20T18:46:53Z,lesswrong,, 172136,https://www.lesswrong.com/posts/CB2qu4zvJbtRbyipH/ai-as-a-civilizational-risk-part-5-6-relationship-between-c,AI as a Civilizational Risk Part 5/6: Relationship between C-risk and X-risk,['PashaKamyshev'],2022-11-03T02:19:47Z,lesswrong,, 172153,https://www.lesswrong.com/posts/ohcThmMPDF56v6JJ7/the-doubling-box,The Doubling Box,['Mestroyer'],2012-08-06T05:50:20Z,lesswrong,, 172166,https://www.lesswrong.com/posts/HaoHFkaGpmcAvzfkq/two-very-different-experiences-with-chatgpt,Two very different experiences with ChatGPT,['Sherrinford'],2023-02-07T13:09:27Z,lesswrong,, 172181,https://www.lesswrong.com/posts/eExFvqojsagFm4S6e/why-not-use-active-seti-to-prevent-ai-doom,Why not use active SETI to prevent AI Doom?,['RomanS'],2023-05-05T14:41:41Z,lesswrong,, 172200,https://www.lesswrong.com/posts/HQyBu8HzktwesRxk9/make-it-better-a-poetic-demonstration-of-the-banality-of-gpt,MAKE IT BETTER (a poetic demonstration of the banality of GPT-3),['rogersbacon'],2023-01-02T20:47:11Z,lesswrong,, 172215,https://www.lesswrong.com/posts/fvBMLQs5Jy4Qg9JMZ/notes-on-antelligence,Notes on Antelligence,['Aurigena'],2023-05-13T18:38:07Z,lesswrong,, 172236,https://www.lesswrong.com/posts/CKWhnNty3Hax4B7rR/resurrecting-all-humans-ever-lived-as-a-technical-problem,Resurrecting all humans ever lived as a technical problem,['RomanS'],2021-10-31T18:08:36Z,lesswrong,, 172257,https://www.lesswrong.com/posts/yvJevQHxfvcpaJ2P3/bing-chat-is-the-ai-fire-alarm,Bing chat is the AI fire alarm,['Ratios'],2023-02-17T06:51:52Z,lesswrong,, 172284,https://www.lesswrong.com/posts/x9DERee9tc2xMHxn4/does-the-structure-of-an-algorithm-matter-for-ai-risk-and-or,Does the Structure of an algorithm matter for AI Risk and/or consciousness?,['Logan Zoellner'],2021-12-03T18:31:40Z,lesswrong,, 172301,https://www.lesswrong.com/posts/szfxvS8nsxTgJLBHs/ingredients-of-timeless-decision-theory,Ingredients of Timeless Decision Theory,['Eliezer Yudkowsky'],2009-08-19T01:10:12Z,lesswrong,, 
172315,https://www.lesswrong.com/posts/5vmRXMxFLvSe2a9CM/why-do-theists-undergrads-and-less-wrongers-favor-one-boxing,"Why do theists, undergrads, and Less Wrongers favor one-boxing on Newcomb?",['CarlShulman'],2013-06-19T01:55:06Z,lesswrong,, 172331,https://www.lesswrong.com/posts/zDmbtt7o4J8nY3d7L/best-arguments-against-instrumental-convergence,Best arguments against instrumental convergence?,['lfrymire'],2023-04-05T17:06:14Z,lesswrong,, 172340,https://www.lesswrong.com/posts/mAMxGxSC94BqCi9aJ/activation-additions-in-a-small-residual-network,Activation additions in a small residual network,['Garrett Baker'],2023-05-22T20:28:41Z,lesswrong,, 172351,https://www.lesswrong.com/posts/xroYDAE6EoisrSFZf/all-agi-safety-questions-welcome-especially-basic-ones-july-2,All AGI Safety questions welcome (especially basic ones) [July 2023],['smallsilo'],2023-07-20T20:20:46Z,lesswrong,, 172364,https://www.lesswrong.com/posts/izsSEG9MRQjtgv5hh/taboo-human-level-intelligence,"Taboo ""human-level intelligence""",['Sherrinford'],2023-02-26T20:42:26Z,lesswrong,, 172377,https://www.lesswrong.com/posts/etNBDLKZP6EP8ZmJZ/superintelligent-introspection-a-counter-argument-to-the,Superintelligent Introspection: A Counter-argument to the Orthogonality Thesis,['DirectedEvolution'],2021-08-29T04:53:31Z,lesswrong,, 172392,https://www.lesswrong.com/posts/TxDcvtn2teAMobG2Z/decision-theories-a-semi-formal-analysis-part-ii,"Decision Theories: A Semi-Formal Analysis, Part II",['orthonormal'],2012-04-06T18:59:36Z,lesswrong,, 172408,https://www.lesswrong.com/posts/6qkBM73ea5dmJm5nY/i-have-thousands-of-copies-of-hpmor-in-russian-how-to-use,I have thousands of copies of HPMOR in Russian. How to use them with the most impact?,['Mikhail Samin'],2023-01-03T10:21:27Z,lesswrong,, 172427,https://www.lesswrong.com/posts/FinfRNLMfbq5ESxB9/microsoft-research-paper-claims-sparks-of-artificial,Microsoft Research Paper Claims Sparks of Artificial Intelligence in GPT-4,['Zvi'],2023-03-24T13:20:01Z,lesswrong,, 172450,https://www.lesswrong.com/posts/SswHiGhG9TbvkzuZH/lw-is-probably-not-the-place-for-i-asked-this-llm-x-and-here,"LW is probably not the place for ""I asked this LLM (x) and here's what it said!"", but where is?",['lillybaeum'],2023-04-12T10:12:38Z,lesswrong,, 172460,https://www.lesswrong.com/posts/X3j7HMeQshr8Sz9hc/dumb-ai-observes-and-manipulates-controllers,'Dumb' AI observes and manipulates controllers,['Stuart_Armstrong'],2015-01-13T13:35:20Z,lesswrong,, 172472,https://www.lesswrong.com/posts/iM2ujcBohJvoJzjBa/near-term-risks-of-an-obedient-artificial-intelligence,Near-Term Risks of an Obedient Artificial Intelligence,['ymeskhout'],2023-02-18T18:30:51Z,lesswrong,, 172496,https://www.lesswrong.com/posts/JHgSgwjnBpcirfq9m/reflexive-decision-theory-is-an-unsolved-problem,Reflexive decision theory is an unsolved problem,['Richard_Kennaway'],2023-09-17T14:15:09Z,lesswrong,, 172510,https://www.lesswrong.com/posts/q3N7hbhLjb6JCLEEg/concept-safety-producing-similar-ai-human-concept-spaces,Concept Safety: Producing similar AI-human concept spaces,['Kaj_Sotala'],2015-04-14T20:39:31Z,lesswrong,, 172524,https://www.lesswrong.com/posts/A69buYwaEj2knH38s/nsfw-review-interspecies-reviewers,[NSFW Review] Interspecies Reviewers,['lsusr'],2022-04-01T11:09:18Z,lesswrong,, 
172535,https://www.lesswrong.com/posts/5SmqubjL8xK4ycenh/what-must-be-the-case-that-chatgpt-would-have-memorized-to,What must be the case that ChatGPT would have memorized “To be or not to be”? – Three kinds of conceptual objects for LLMs,['Bill Benzon'],2023-09-03T18:39:53Z,lesswrong,, 172558,https://www.lesswrong.com/posts/dEnKkYmFhXaukizWW/aisafety-community-a-living-document-of-ai-safety,aisafety.community - A living document of AI safety communities,"['zeshen', 'plex']",2022-10-28T17:50:13Z,lesswrong,, 172569,https://www.lesswrong.com/posts/5ntgky9ShzKKWu7us/plans-are-predictions-not-optimization-targets,"Plans Are Predictions, Not Optimization Targets",['johnswentworth'],2022-10-20T21:17:07Z,lesswrong,, 172582,https://www.lesswrong.com/posts/qzu9o3sTytbC4sZkQ/steering-subsystems-capabilities-agency-and-alignment,"Steering subsystems: capabilities, agency, and alignment",['Seth Herd'],2023-09-29T13:45:01Z,lesswrong,, 172598,https://www.lesswrong.com/posts/mgvxmmaCgTT6Kpi5t/a-response-to-conjecture-s-coem-proposal,A response to Conjecture's CoEm proposal,['Kristian Freed'],2023-04-24T17:23:39Z,lesswrong,, 172625,https://www.lesswrong.com/posts/ddPu9yh65yLmMzxep/another-problem-with-ai-confinement-ordinary-cpus-can-work,Another problem with AI confinement: ordinary CPUs can work as radio transmitters,['RomanS'],2022-10-14T08:28:49Z,lesswrong,, 172636,https://www.lesswrong.com/posts/LaT6rexiNx6MW74Fn/my-thoughts-on-takeoff-speeds,My Thoughts on Takeoff Speeds,['tristanm'],2018-03-27T00:05:33Z,lesswrong,, 172657,https://www.lesswrong.com/posts/jq9YjfZ7oq8ZzDy9i/q-and-a-with-stan-franklin-on-risks-from-ai,Q&A with Stan Franklin on risks from AI,['XiXiDu'],2011-06-11T15:22:10Z,lesswrong,, 172673,https://www.lesswrong.com/posts/YkQdokmutQim4T34u/how-will-openai-github-s-copilot-affect-programming,How will OpenAI + GitHub's Copilot affect programming?,['smountjoy'],2021-06-29T16:42:36Z,lesswrong,, 
172694,https://www.lesswrong.com/posts/qHxW26uouWxk6SK4e/update-deadline-extended-to-july-24-new-wind-in-rationality,[UPDATE: deadline extended to July 24!] New wind in rationality’s sails: Applications for Epistea Residency 2023 are now open,"['Jana Meixnerová', 'Irena Kotíková']",2023-07-11T11:02:29Z,lesswrong,, 172715,https://www.lesswrong.com/posts/irbREZtZzPi7WEYex/book-review-human-compatible-1,Book Review: Human Compatible,['Scott Alexander'],2020-01-31T05:20:02Z,lesswrong,, 172738,https://www.lesswrong.com/posts/ygb6ryKcScJxhmwQo/atari-early,Atari early,['KatjaGrace'],2020-04-02T06:10:03Z,lesswrong,, 172754,https://www.lesswrong.com/posts/8QgNrNPaoyZeEY4ZD/superintelligence-17-multipolar-scenarios,Superintelligence 17: Multipolar scenarios,['KatjaGrace'],2015-01-06T06:44:46Z,lesswrong,, 172783,https://www.lesswrong.com/posts/snbNNQSG35D5XHtpn/the-ground-truth-problem-or-why-evaluating-interpretability,"The Ground Truth Problem (Or, Why Evaluating Interpretability Methods Is Hard)",['Jessica Rumbelow'],2022-11-17T11:06:28Z,lesswrong,, 172793,https://www.lesswrong.com/posts/Ecxevhvx85Y4eyFcu/weak-arguments-against-the-universal-prior-being-malign,Weak arguments against the universal prior being malign,['X4vier'],2018-06-14T17:11:54Z,lesswrong,, 172809,https://www.lesswrong.com/posts/FQqXxWHyZ5AaYiZvt/what-if-agi-is-near,What if AGI is near?,['Wulky Wilkinsen'],2021-04-14T00:05:44Z,lesswrong,, 172825,https://www.lesswrong.com/posts/kXiAGRWFquXFMi68Y/new-lw-feature-debates,"[New LW Feature] ""Debates""","['Ruby', 'RobertM', 'GPT-4', 'Claude+']",2023-04-01T07:00:24Z,lesswrong,, 172840,https://www.lesswrong.com/posts/hAJgbMZydoQJxLnMD/who-is-liable-for-ai,Who is liable for AI?,['jmh'],2023-05-30T13:54:46Z,lesswrong,, 172849,https://www.lesswrong.com/posts/n8gHxkwHuErfjMDdc/gpt-4-busted-clear-self-interest-when-summarizing-articles,"GPT-4 busted? Clear self-interest when summarizing articles about itself vs when article talks about Claude, LLaMA, or DALL·E 2",['Christopher King'],2023-03-31T17:05:05Z,lesswrong,, 172861,https://www.lesswrong.com/posts/2BCpdyHzzw4BZeodR/new-gpt-3-competitor,New GPT-3 competitor,['Quintin Pope'],2021-08-12T07:05:49Z,lesswrong,, 172878,https://www.lesswrong.com/posts/cAur7taZk6ikA2Zcy/research-agenda-can-transformers-do-system-2-thinking,Research agenda: Can transformers do system 2 thinking?,['p.b.'],2022-04-06T13:31:39Z,lesswrong,, 172895,https://www.lesswrong.com/posts/oSZ2xTxEMZh9f3Yaz/llms-are-mostly-not-helped-by-filler-tokens,LLMs are (mostly) not helped by filler tokens,['Kshitij Sachan'],2023-08-10T00:48:51Z,lesswrong,, 172909,https://www.lesswrong.com/posts/oAu6nJgAzN5LJZsmg/aisn-23-new-openai-models-news-from-anthropic-and,"AISN #23: New OpenAI Models, News from Anthropic, and Representation Engineering","['aogara', 'Dan H']",2023-10-04T17:37:20Z,lesswrong,, 172947,https://www.lesswrong.com/posts/PnAqpopgvDGyeBCQE/cev-a-utilitarian-critique,CEV: a utilitarian critique,['Pablo'],2013-01-26T16:12:21Z,lesswrong,, 172963,https://www.lesswrong.com/posts/CmvkoyTq49tFkSGFF/ai-safety-hub-serbia-soft-launch,AI Safety Hub Serbia Soft Launch,['DusanDNesic'],2023-07-25T19:40:23Z,lesswrong,, 172973,https://www.lesswrong.com/posts/BKXbEMomadEznodMr/using-gpt-3-for-preventing-conflict-during-messaging-a-pitch,Using GPT-3 for preventing conflict during messaging — a pitch for an app,['Eli_'],2022-03-17T11:02:00Z,lesswrong,, 172985,https://www.lesswrong.com/posts/SLGu37Wug7HmdGpbD/invocations-the-other-capabilities-overhang,Invocations: The Other Capabilities Overhang?,['Robert_AIZI'],2023-04-04T13:38:14Z,lesswrong,, 
173005,https://www.lesswrong.com/posts/DcyGj7yn52qS5DPna/grinding-slimes-in-the-dungeon-of-ai-alignment-research,Grinding slimes in the dungeon of AI alignment research,['Max H'],2023-03-24T04:51:54Z,lesswrong,, 173020,https://www.lesswrong.com/posts/qdcfTWbeSvcBRgCjs/is-cirl-a-promising-agenda,Is CIRL a promising agenda?,['Chris_Leong'],2022-06-23T17:12:51Z,lesswrong,, 173030,https://www.lesswrong.com/posts/FFA6b4NoxaWYcbcZH/out-of-the-box,Out of the Box,['jesseduffield'],2023-11-13T23:43:24Z,lesswrong,, 173057,https://www.lesswrong.com/posts/fLAvmWHmpJEiw8KEp/mesa-optimization-explain-it-like-i-m-10-edition,Mesa-Optimization: Explain it like I'm 10 Edition,['brook'],2023-08-26T23:04:21Z,lesswrong,, 173072,https://www.lesswrong.com/posts/CQkGJ2t5Rw8GcZKJm/pinpointing-utility,Pinpointing Utility,['anonymous'],2013-02-01T03:58:01Z,lesswrong,, 173099,https://www.lesswrong.com/posts/Zgwy2QRgYBSrMWDMQ/logarithms-and-total-utilitarianism,Logarithms and Total Utilitarianism,['Pablo Villalobos'],2018-08-09T08:49:17Z,lesswrong,, 173115,https://www.lesswrong.com/posts/zztyZ4SKy7suZBpbk/another-attempt-to-explain-udt,Another attempt to explain UDT,['cousin_it'],2010-11-14T16:52:41Z,lesswrong,, 173126,https://www.lesswrong.com/posts/eNwCXXLzEhWv6aznf/large-language-models-suggest-a-path-to-ems,Large Language Models Suggest a Path to Ems,['anithite'],2022-12-29T02:20:02Z,lesswrong,, 173159,https://www.lesswrong.com/posts/XqzWgkP3xekfdh8pa/mmlu-s-moral-scenarios-benchmark-doesn-t-measure-what-you,MMLU’s Moral Scenarios Benchmark Doesn’t Measure What You Think it Measures,['corey morris'],2023-09-27T17:54:40Z,lesswrong,, 173177,https://www.lesswrong.com/posts/A4EBPx5htiuk22X4C/superintelligent-agi-in-a-box-a-question,Superintelligent AGI in a box - a question.,['Dmytry'],2012-02-23T18:48:26Z,lesswrong,, 173197,https://www.lesswrong.com/posts/mwTEMHKv9tG9HxFXD/ama-on-truthful-ai-owen-cotton-barratt-owain-evans-and-co,"AMA on Truthful AI: Owen Cotton-Barratt, Owain Evans & co-authors",['Owain_Evans'],2021-10-22T16:23:28Z,lesswrong,, 173207,https://www.lesswrong.com/posts/EuRmokR7AF7dMJs7j/chatgpt-and-bing-chat-can-t-play-botticelli,ChatGPT and Bing Chat can't play Botticelli,['Asha Saavoss'],2023-03-29T17:39:50Z,lesswrong,, 173222,https://www.lesswrong.com/posts/o9baX2uxbya4R959g/what-is-optimization-power-formally,"What is optimization power, formally?",['sbenthall'],2014-10-18T18:37:10Z,lesswrong,, 173237,https://www.lesswrong.com/posts/8GY7LTFHuitFqwAaH/the-dark-side-of-cognition-hypothesis,The Dark Side of Cognition Hypothesis,['Cameron Berg'],2021-10-03T20:10:57Z,lesswrong,, 173249,https://www.lesswrong.com/posts/7MK6HSn2pbAJrbfiG/takeoff-speed-simple-asymptotics-in-a-toy-model,Takeoff Speed: Simple Asymptotics in a Toy Model.,['Aaron Roth'],2018-03-05T17:07:13Z,lesswrong,, 173263,https://www.lesswrong.com/posts/iozsJQ7fEdTCRxtJc/simple-way-to-prevent-power-seeking-ai,Simple Way to Prevent Power-Seeking AI,['research_prime_space'],2022-12-07T00:26:10Z,lesswrong,, 173282,https://www.lesswrong.com/posts/3TCYqur9YzuZ4qhtq/meta-ai-announces-cicero-human-level-diplomacy-play-with,Meta AI announces Cicero: Human-Level Diplomacy play (with dialogue),['Jacy Reese Anthis'],2022-11-22T16:50:20Z,lesswrong,, 173291,https://www.lesswrong.com/posts/t2NN6JwMFaqANuLqH/the-strangest-thing-an-ai-could-tell-you,The Strangest Thing An AI Could Tell You,['Eliezer Yudkowsky'],2009-07-15T02:27:38Z,lesswrong,, 
173304,https://www.lesswrong.com/posts/iDYaCJ3o3Q7ypriTF/knightian-uncertainty-and-ambiguity-aversion-motivation,Knightian Uncertainty and Ambiguity Aversion: Motivation,['So8res'],2014-07-21T20:32:43Z,lesswrong,, 173321,https://www.lesswrong.com/posts/pQFpkwiQNjQzjGzCn/alignment-anger-and-love-preparing-for-the-emergence-of,"Alignment, Anger, and Love: Preparing for the Emergence of Superintelligent AI",['tavurth'],2023-01-02T06:16:54Z,lesswrong,, 173331,https://www.lesswrong.com/posts/zaFwokgn9MxtYd46E/fli-podcast-series-imagine-a-world-about-aspirational,"FLI podcast series, ""Imagine A World"", about aspirational futures with AGI",['Jackson Wagner'],2023-10-13T16:07:39Z,lesswrong,, 173371,https://www.lesswrong.com/posts/cm5dCKYCamotzEMxq/three-pillars-for-avoiding-agi-catastrophe-technical,"Three pillars for avoiding AGI catastrophe: Technical alignment, deployment decisions, and coordination",['Alex Lintz'],2022-08-03T23:15:23Z,lesswrong,, 173398,https://www.lesswrong.com/posts/67rThJdKAJ2C4eE4M/why-we-use-money-a-walrasian-view,Why We Use Money? - A Walrasian View,['Savio Coelho'],2023-10-03T12:02:37Z,lesswrong,, 173414,https://www.lesswrong.com/posts/hD4boFF6K782grtqX/mathematical-inconsistency-in-solomonoff-induction,Mathematical Inconsistency in Solomonoff Induction?,['curi'],2020-08-25T17:09:50Z,lesswrong,, 173424,https://www.lesswrong.com/posts/tJzdzGdTGrqFf9ekw/early-situational-awareness-and-its-implications-a-story,"Early situational awareness and its implications, a story",['Jacob Pfau'],2023-02-06T20:45:39Z,lesswrong,, 173442,https://www.lesswrong.com/posts/Tk5ovpucaqweCu4tu/scott-aaronson-is-joining-openai-to-work-on-ai-safety,Scott Aaronson is joining OpenAI to work on AI safety,['peterbarnett'],2022-06-18T04:06:55Z,lesswrong,, 173451,https://www.lesswrong.com/posts/NHvspuLiirJwiLtfg/do-anthropic-considerations-undercut-the-evolution-anchor,Do anthropic considerations undercut the evolution anchor from the Bio Anchors report?,['Ege Erdil'],2022-10-01T20:02:48Z,lesswrong,, 173462,https://www.lesswrong.com/posts/Wc5BYFfzuLzepQjCq/inflection-ai-is-a-major-agi-lab,Inflection.ai is a major AGI lab,['nikola'],2023-08-09T01:05:55Z,lesswrong,, 173484,https://www.lesswrong.com/posts/hGmFNBXDinfiKJGD6/don-t-even-think-about-hell,"""Don't even think about hell""",['emmab'],2020-05-02T08:06:36Z,lesswrong,, 173497,https://www.lesswrong.com/posts/SNdijuEn6erTJam3z/how-evals-might-or-might-not-prevent-catastrophic-risks-from,How evals might (or might not) prevent catastrophic risks from AI,['Akash'],2023-02-07T20:16:08Z,lesswrong,, 173516,https://www.lesswrong.com/posts/KYXHneyrnNNHLKWGJ/we-shouldn-t-expect-ai-to-ever-be-fully-rational,We Shouldn't Expect AI to Ever be Fully Rational,['OneManyNone'],2023-05-18T17:09:14Z,lesswrong,, 173536,https://www.lesswrong.com/posts/pzvHZsKyJZks89Pao/a-conversation-with-pi-a-conversational-ai,"A conversation with Pi, a conversational AI.",['Spiritus Dei'],2023-09-15T23:13:58Z,lesswrong,, 173548,https://www.lesswrong.com/posts/566kBoPi76t8KAkoD/on-autogpt,On AutoGPT,['Zvi'],2023-04-13T12:30:01Z,lesswrong,, 173572,https://www.lesswrong.com/posts/semvkn56ZFcXBNc2d/superintelligence-5-forms-of-superintelligence,Superintelligence 5: Forms of Superintelligence,['KatjaGrace'],2014-10-14T01:00:46Z,lesswrong,, 173595,https://www.lesswrong.com/posts/kC99z2biut5MGbpvt/microsoft-and-google-using-llms-for-cybersecurity,Microsoft and Google using LLMs for Cybersecurity,['Phosphorous'],2023-05-18T17:42:37Z,lesswrong,, 
173621,https://www.lesswrong.com/posts/FtAJZWCMps7FWKTT3/superintelligence-9-the-orthogonality-of-intelligence-and,Superintelligence 9: The orthogonality of intelligence and goals,['KatjaGrace'],2014-11-11T02:00:09Z,lesswrong,, 173632,https://www.lesswrong.com/posts/7jZAPw5tjyfdNG6oc/unbounded-utility-functions-and-precommitment,Unbounded utility functions and precommitment,['MichaelStJules'],2022-09-10T16:16:22Z,lesswrong,, 173647,https://www.lesswrong.com/posts/uLstPRyYwzfrx3enG/levelling-up-in-ai-safety-research-engineering,Levelling Up in AI Safety Research Engineering,['Gabriel Mukobi'],2022-09-02T04:59:43Z,lesswrong,, 173680,https://www.lesswrong.com/posts/cneXDpqQnPncwnXMo/chat-diplomacy-llms-and-national-security,CHAT Diplomacy: LLMs and National Security,['JohnBuridan'],2023-05-05T19:45:28Z,lesswrong,, 173705,https://www.lesswrong.com/posts/GxjDuS8dXH9CNsjwD/when-discussing-ai-doom-barriers-propose-specific-plausible,When discussing AI doom barriers propose specific plausible scenarios,['anithite'],2023-08-18T04:06:45Z,lesswrong,, 173723,https://www.lesswrong.com/posts/w5EYzCPG4AwRfc9tK/breaking-newcomb-s-problem-with-non-halting-states,Breaking Newcomb's Problem with Non-Halting states,['Slimepriestess'],2022-09-04T04:01:04Z,lesswrong,, 173732,https://www.lesswrong.com/posts/nEdueRhZwB4eP6X3c/introduction-bias-in-evaluating-agi-x-risks,Introduction: Bias in Evaluating AGI X-Risks,"['Remmelt', 'flandry19']",2022-12-27T10:27:31Z,lesswrong,, 173751,https://www.lesswrong.com/posts/Gdhxh45xCuKLew3bB/can-reward-economics-solve-ai-alignment,"Can ""Reward Economics"" solve AI Alignment?",['Q Home'],2022-09-07T07:58:49Z,lesswrong,, 173766,https://www.lesswrong.com/posts/Rqok2cFnjYrLiFdst/a-decade-of-lurking-a-month-of-posting,"A decade of lurking, a month of posting",['Max H'],2023-04-09T00:21:23Z,lesswrong,, 173782,https://www.lesswrong.com/posts/zQ4dX8Jk4uExukxqB/a-multidisciplinary-approach-to-alignment-mata-and,A Multidisciplinary Approach to Alignment (MATA) and Archetypal Transfer Learning (ATL),['MiguelDev'],2023-06-19T02:32:10Z,lesswrong,, 173803,https://www.lesswrong.com/posts/EjqhPa8ABBMtaZSqo/the-utility-and-solubility-of-anthropic-capture,The Utility and Solubility of Anthropic Capture,['marc/er'],2023-08-03T05:33:36Z,lesswrong,, 173822,https://www.lesswrong.com/posts/QEMbewiGaypjfmDi7/how-would-two-superintelligent-ais-interact-if-they-are,"How would two superintelligent AIs interact, if they are unaligned with each other?",['Nathan1123'],2022-08-09T18:58:16Z,lesswrong,, 173832,https://www.lesswrong.com/posts/z4o4iAFgnmaBmksN2/formalizing-boundaries-with-markov-blankets-criticism-of,Formalizing «Boundaries» with Markov blankets + Criticism of this approach,['Chipmonk'],2023-09-19T21:01:02Z,lesswrong,, 173852,https://www.lesswrong.com/posts/hopzyM5ckzMNHwcQR/superintelligence-reading-group-3-ai-and-uploads,Superintelligence Reading Group 3: AI and Uploads,['KatjaGrace'],2014-09-30T01:00:16Z,lesswrong,, 173873,https://www.lesswrong.com/posts/vm97ZtpwRbGrjNwha/pessimism-about-ai-safety,Pessimism about AI Safety,"['Max_He-Ho', 'Peter Kuhn']",2023-04-02T07:43:12Z,lesswrong,, 173896,https://www.lesswrong.com/posts/kGrwufqxfsyuaMREy/annotated-reply-to-bengio-s-ai-scientists-safe-and-useful-ai,"Annotated reply to Bengio's ""AI Scientists: Safe and Useful AI?""",['Roman Leventov'],2023-05-08T21:26:11Z,lesswrong,, 173922,https://www.lesswrong.com/posts/FkgsxrGf3QxhfLWHG/risks-from-learned-optimization-introduction,Risks from Learned Optimization: Introduction,"['evhub', 'Chris van Merwijk', 'Vlad Mikulik', 'Joar Skalse', 'Scott Garrabrant']",2019-05-31T23:44:54Z,lesswrong,, 173944,https://www.lesswrong.com/posts/6SGqkCgHuNr7d4yJm/thoughts-on-the-singularity-institute-si,Thoughts on the Singularity Institute (SI),['HoldenKarnofsky'],2012-05-11T04:31:30Z,lesswrong,, 173968,https://www.lesswrong.com/posts/zFQQEkx4c6bxdshr4/5-axioms-of-decision-making,5 Axioms of Decision Making,['Vaniver'],2011-12-01T22:22:51Z,lesswrong,, 173978,https://www.lesswrong.com/posts/FTdtHHBPDdzk4pJz2/do-agents-with-mutually-known-identical-utility-functions,Do agents with (mutually known) identical utility functions but irreconcilable knowledge sometimes fight?,['mako yass'],2023-08-23T08:13:06Z,lesswrong,, 173988,https://www.lesswrong.com/posts/J9XecqtiujawmDnmr/is-general-intelligence-compact,"Is General Intelligence ""Compact""?",['DragonGod'],2022-07-04T13:27:32Z,lesswrong,, 174005,https://www.lesswrong.com/posts/mNj6eqd95Csv8kX3f/preprint-pretraining-language-models-with-human-preferences,[Preprint] Pretraining Language Models with Human Preferences,['Giulio'],2023-02-21T11:44:27Z,lesswrong,, 174015,https://www.lesswrong.com/posts/b9ErEMen42X94749t/does-it-become-easier-or-harder-for-the-world-to-coordinate,"Does it become easier, or harder, for the world to coordinate around not building AGI as time goes on?",['Eli Tyre'],2019-07-29T22:59:33Z,lesswrong,, 174047,https://www.lesswrong.com/posts/DdmScdukXBff5CbNS/a-gentle-primer-on-caring-including-in-strange-senses-with,"A gentle primer on caring, including in strange senses, with applications",['Kaarel'],2022-08-30T08:05:12Z,lesswrong,, 174084,https://www.lesswrong.com/posts/uGqdJCrqznzLBDXcr/wanted-notation-for-credal-resilience,Wanted: Notation for credal resilience,['PeterH'],2022-07-31T07:35:26Z,lesswrong,, 174093,https://www.lesswrong.com/posts/bPWkQd2AByKWoNA9X/deepmind-and-google-brain-are-merging-linkpost,DeepMind and Google Brain are merging [Linkpost],['Akash'],2023-04-20T18:47:23Z,lesswrong,, 174108,https://www.lesswrong.com/posts/qvWP3aBDBaqXvPNhS/gpt-2-s-positional-embedding-matrix-is-a-helix,GPT-2's positional embedding matrix is a helix,['AdamYedidia'],2023-07-21T04:16:26Z,lesswrong,, 174121,https://www.lesswrong.com/posts/jipBucafHPaj2soby/injecting-noise-to-gpt-to-get-multiple-answers,Injecting noise to GPT to get multiple answers,['bipolo'],2023-02-22T20:02:14Z,lesswrong,, 174130,https://www.lesswrong.com/posts/ZqWzFDmvMZnHQZYqz/massive-scaling-should-be-frowned-upon,Massive Scaling Should be Frowned Upon,['harsimony'],2022-11-17T08:43:23Z,lesswrong,, 174145,https://www.lesswrong.com/posts/NtX7LKhCXMW2vjWx6/thoughts-on-reward-engineering,Thoughts on reward engineering,['paulfchristiano'],2019-01-24T20:15:05Z,lesswrong,, 174185,https://www.lesswrong.com/posts/B39GNTsN3HocW8KFo/superintelligence-11-the-treacherous-turn,Superintelligence 11: The treacherous turn,['KatjaGrace'],2014-11-25T02:00:06Z,lesswrong,, 174204,https://www.lesswrong.com/posts/GNnHHmm8EzePmKzPk/value-is-fragile,Value is Fragile,['Eliezer Yudkowsky'],2009-01-29T08:46:30Z,lesswrong,, 174219,https://www.lesswrong.com/posts/Hjv5ncXk2yCKLdGbm/sparse-trinary-weighted-rnns-as-a-path-to-better-language,Sparse trinary weighted RNNs as a path to better language model interpretability,['Am8ryllis'],2022-09-17T19:48:24Z,lesswrong,, 174232,https://www.lesswrong.com/posts/4zdcsHnfywRCNAhGd/a-brief-introduction-to-aci-2-an-event-centric-view,"A Brief Introduction to ACI, 2: An Event-Centric View",['Akira Pyinya'],2023-04-12T03:23:32Z,lesswrong,,
174243,https://www.lesswrong.com/posts/PBHtYurAxfm6iEpqv/cev-inspired-models,CEV-inspired models,['Stuart_Armstrong'],2011-12-07T18:35:06Z,lesswrong,, 174254,https://www.lesswrong.com/posts/vi48CMtZL8ZkRpuad/soon-a-weekly-ai-safety-prerequisites-module-on-lesswrong,Soon: a weekly AI Safety prerequisites module on LessWrong,['anonymous'],2018-04-30T13:23:15Z,lesswrong,, 174265,https://www.lesswrong.com/posts/ZFwkvSSsvECThox62/psychology-and-applications-of-reinforcement-learning-where,psychology and applications of reinforcement learning: where do I learn more?,['jsalvatier'],2011-06-26T20:56:27Z,lesswrong,, 174274,https://www.lesswrong.com/posts/j6mcesYTasNG2zKWd/ai-safety-textbook-test-chapter-orthogonality-thesis,"AI Safety ""Textbook"". Test chapter. Orthogonality Thesis, Goodhart Law and Instrumental Convergency","['Tapatakt', 'LacrimalBird']",2023-01-21T18:13:31Z,lesswrong,, 174295,https://www.lesswrong.com/posts/Nribr7pqa3kP9d5Xf/favourite-new-ai-productivity-tools,Favourite new AI productivity tools?,['Gabriel Mukobi'],2022-06-15T01:08:08Z,lesswrong,, 174307,https://www.lesswrong.com/posts/FK49pmBDgYGwDE2Sb/fiction-io-sys,[Fiction] IO.SYS,['DataPacRat'],2019-03-10T21:23:19Z,lesswrong,, 174336,https://www.lesswrong.com/posts/RxQE4m9QgNwuq764M/save-the-princess-a-tale-of-aixi-and-utility-functions,Save the princess: A tale of AIXI and utility functions,['Anja'],2013-02-01T15:38:00Z,lesswrong,, 174359,https://www.lesswrong.com/posts/ZrFBNdD69xPz7Yhnp/thoughts-on-list-of-lethalities,Thoughts on 'List of Lethalities',['Alex Lawsen'],2022-08-17T18:33:31Z,lesswrong,, 174391,https://www.lesswrong.com/posts/8uJ3n3hu8pLXC4YNE/some-conceptual-highlights-from-disjunctive-scenarios-of-1,Some conceptual highlights from “Disjunctive Scenarios of Catastrophic AI Risk”,['Kaj_Sotala'],2018-02-12T12:30:04Z,lesswrong,, 174416,https://www.lesswrong.com/posts/LGMSLXkpKAofebjfi/a-terrifying-variant-of-boltzmann-s-brains-problem,A terrifying variant of Boltzmann's brains problem,['Zeruel017'],2022-05-30T20:08:59Z,lesswrong,, 174432,https://www.lesswrong.com/posts/ikoi5PcyZBw5v8pbt/ethodynamics-of-omelas,Ethodynamics of Omelas,['dr_s'],2023-06-10T16:24:16Z,lesswrong,, 174442,https://www.lesswrong.com/posts/gnF2vwpanCu6esGQr/thinking-about-broad-classes-of-utility-like-functions,Thinking about Broad Classes of Utility-like Functions,['Jemist'],2022-06-07T14:05:52Z,lesswrong,, 174460,https://www.lesswrong.com/posts/sL9qmAqgB2RL6JFca/ai-alignment-a-comprehensive-survey,AI Alignment: A Comprehensive Survey,['Stephen McAleer'],2023-11-01T17:35:35Z,lesswrong,, 174487,https://www.lesswrong.com/posts/HY94LBqekihnx85WQ/the-happy-dance-problem,The Happy Dance Problem,['abramdemski'],2017-11-17T00:47:29Z,lesswrong,, 174499,https://www.lesswrong.com/posts/fgnJKWppvkFa7AWPv/making-dall-e-count,Making DALL-E Count,['DirectedEvolution'],2022-07-22T09:11:58Z,lesswrong,, 174516,https://www.lesswrong.com/posts/vDeMcf8ruqYFsAib8/a-response-to-the-richards-et-al-s-the-illusion-of-ai-s,"A response to the Richards et al.'s ""The Illusion of AI's Existential Risk""",['Harrison Fell'],2023-07-26T17:34:20Z,lesswrong,, 174535,https://www.lesswrong.com/posts/XEv2cNb5EQhmASFF7/the-evil-ai-overlord-list,The Evil AI Overlord List,['Stuart_Armstrong'],2012-11-20T17:02:30Z,lesswrong,, 174561,https://www.lesswrong.com/posts/69sjGNXzRCG7Qx7Kd/link-introducing-openai,[Link] Introducing OpenAI,['Baughn'],2015-12-11T21:54:47Z,lesswrong,, 
174570,https://www.lesswrong.com/posts/oAJ7Pd2PiBHT2cQ3p/25-min-talk-on-metaethical-ai-with-questions-from-stuart,25 Min Talk on MetaEthical.AI with Questions from Stuart Armstrong,['June Ku'],2021-04-29T15:38:07Z,lesswrong,, 174580,https://www.lesswrong.com/posts/GqkcLWq6TgbXMze59/defining-optimization-in-a-deeper-way-part-3,Defining Optimization in a Deeper Way Part 3,['Jemist'],2022-07-20T22:06:48Z,lesswrong,, 174590,https://www.lesswrong.com/posts/8Kxi3mEAwNxoYFu7T/fragility-of-value-vs-llms,“Fragility of Value” vs. LLMs,['Not Relevant'],2022-04-13T02:02:30Z,lesswrong,, 174603,https://www.lesswrong.com/posts/6vz4WwRm8Fm5gmc8f/sam-altman-on-gpt-4-chatgpt-and-the-future-of-ai-or-lex,"Sam Altman on GPT-4, ChatGPT, and the Future of AI | Lex Fridman Podcast #367",['Gabriel Mukobi'],2023-03-25T19:08:55Z,lesswrong,, 174634,https://www.lesswrong.com/posts/MeWtcyX8wHxjpDAeE/guarding-slack-vs-substance,Guarding Slack vs Substance,['Raemon'],2017-12-13T20:58:11Z,lesswrong,, 174652,https://www.lesswrong.com/posts/6eGw3CJDDqwrSYiHu/does-tdt-pay-in-counterfactual-mugging,Does TDT pay in Counterfactual Mugging?,['Bongo'],2010-11-29T21:31:37Z,lesswrong,, 174661,https://www.lesswrong.com/posts/wDL6wiqg3c6WFisHq/gpt-as-an-intelligence-forklift,GPT as an “Intelligence Forklift.”,['boazbarak'],2023-05-19T21:15:03Z,lesswrong,, 174673,https://www.lesswrong.com/posts/JCgs7jGEvritqFLfR/evaluating-hidden-directions-on-the-utility-dataset,"Evaluating hidden directions on the utility dataset: classification, steering and removal","['Annah', 'shash42']",2023-09-25T17:19:14Z,lesswrong,, 174687,https://www.lesswrong.com/posts/w9yKQzyhsLJEZhvg9/activation-adding-experiments-with-llama-7b,Activation adding experiments with llama-7b,['Nina Rimsky'],2023-07-16T04:17:59Z,lesswrong,, 174708,https://www.lesswrong.com/posts/sPrifh6uLJQFjQJPW/a-summary-of-the-hanson-yudkowsky-foom-debate,A summary of the Hanson-Yudkowsky FOOM debate,['Kaj_Sotala'],2012-11-15T07:25:55Z,lesswrong,, 174729,https://www.lesswrong.com/posts/4PPE6D635iBcGPGRy/rationality-common-interest-of-many-causes,Rationality: Common Interest of Many Causes,['Eliezer Yudkowsky'],2009-03-29T10:49:08Z,lesswrong,, 174745,https://www.lesswrong.com/posts/6jytXo5HmR9HLvkzu/why-was-the-ai-alignment-community-so-unprepared-for-this,Why was the AI Alignment community so unprepared for this moment?,['Ras1513'],2023-07-15T00:26:30Z,lesswrong,, 174757,https://www.lesswrong.com/posts/LtshicMv63iybgTdi/do-you-want-a-first-principled-preparedness-guide-to-prepare,Do you want a first-principled preparedness guide to prepare yourself and loved ones for potential catastrophes?,['Ulrik Horn'],2023-11-14T12:13:40Z,lesswrong,, 174781,https://www.lesswrong.com/posts/smLEgf2tHsS5NHAf6/paradigm-building-the-hierarchical-question-framework,Paradigm-building: The hierarchical question framework,['Cameron Berg'],2022-02-09T16:47:57Z,lesswrong,, 174794,https://www.lesswrong.com/posts/GEYntEDugjawxLTEL/select-agent-specifications-as-natural-abstractions,Select Agent Specifications as Natural Abstractions,['marc/er'],2023-04-07T23:16:03Z,lesswrong,, 174813,https://www.lesswrong.com/posts/GSkPGotkaSQjP8Qe4/i-am-a-memoryless-system,I am a Memoryless System,['NicholasKross'],2022-10-23T17:34:48Z,lesswrong,, 174835,https://www.lesswrong.com/posts/HLXiJgqxuMpwamdar/conditions-for-superrationality-motivated-cooperation-in-a,Conditions for Superrationality-motivated Cooperation in a one-shot Prisoner's Dilemma,['Jim Buhler'],2022-12-19T15:00:38Z,lesswrong,, 
174853,https://www.lesswrong.com/posts/n45Awh7bkGRe4YayT/send-llms-to-school-instruction-tuning-with-human-curriculum,Send LLMs to School: Instruction Tuning with Human Curriculum,['Bruce W. Lee'],2023-10-31T00:07:07Z,lesswrong,, 174866,https://www.lesswrong.com/posts/eJrMaGZGut4Qaefsj/lq-some-thoughts-on-messaging-around-ai-risk,[LQ] Some Thoughts on Messaging Around AI Risk,['DragonGod'],2022-06-25T13:53:27Z,lesswrong,, 174886,https://www.lesswrong.com/posts/T9DiPNuNunzZtJuk3/a-proof-against-oracle-ai,A Proof Against Oracle AI,['aiiixiii'],2020-03-06T21:42:27Z,lesswrong,, 174899,https://www.lesswrong.com/posts/d4YGxMpzmvxknHfbe/conversation-with-eliezer-what-do-you-want-the-system-to-do,Conversation with Eliezer: What do you want the system to do?,['Akash'],2022-06-25T17:36:14Z,lesswrong,, 174910,https://www.lesswrong.com/posts/PM5MQzXCrvsoAewWZ/solomonoff-induction-by-shane-legg,"Solomonoff Induction, by Shane Legg",['cousin_it'],2011-02-21T00:32:20Z,lesswrong,, 174919,https://www.lesswrong.com/posts/fC248GwrWLT4Dkjf6/open-problems-related-to-solomonoff-induction,Open Problems Related to Solomonoff Induction,['Wei Dai'],2012-06-06T00:26:10Z,lesswrong,, 174945,https://www.lesswrong.com/posts/QWyYcjrXASQuRHqC5/brains-and-backprop-a-key-timeline-crux,Brains and backprop: a key timeline crux,['jacobjacob'],2018-03-09T22:13:05Z,lesswrong,, 174960,https://www.lesswrong.com/posts/hmnhoL2YrAudDo3Rr/would-a-halfway-copied-brain-emulation-be-at-risk-of-having,Would a halfway copied brain emulation be at risk of having different values/identity?,['Ghatanathoah'],2020-07-30T05:43:31Z,lesswrong,, 174972,https://www.lesswrong.com/posts/LhEesPFocr2uT9sPA/safety-timelines-how-long-will-it-take-to-solve-alignment,Safety timelines: How long will it take to solve alignment?,"['Esben Kran', 'JonathanRystroem', 'Steinthal']",2022-09-19T12:53:56Z,lesswrong,, 174995,https://www.lesswrong.com/posts/EuxkZHQmamDWAGocv/is-there-a-time-series-forecasting-equivalent-of-aixi,Is there a 'time series forecasting' equivalent of AIXI?,['Solenoid_Entity'],2023-05-17T04:35:19Z,lesswrong,, 175004,https://www.lesswrong.com/posts/oSgac8x8fgNj22ky3/compositional-preference-models-for-aligning-lms,Compositional preference models for aligning LMs,['Tomek Korbak'],2023-10-25T12:17:29Z,lesswrong,, 175024,https://www.lesswrong.com/posts/c3wWnvgzdbRhNnNbQ/timeless-decision-theory-problems-i-can-t-solve,Timeless Decision Theory: Problems I Can't Solve,['Eliezer Yudkowsky'],2009-07-20T00:03:00Z,lesswrong,, 175039,https://www.lesswrong.com/posts/ZnNM34vqXp4HKcXfG/surprising-examples-of-non-human-optimization,Surprising examples of non-human optimization,['Jan_Rzymkowski'],2015-06-14T17:05:16Z,lesswrong,, 175052,https://www.lesswrong.com/posts/rEZqP7K4MG6waC2zf/optimizing-crop-planting-with-mixed-integer-linear,Optimizing crop planting with mixed integer linear programming in Stardew Valley,['hapanin'],2022-04-05T18:42:02Z,lesswrong,, 175062,https://www.lesswrong.com/posts/rmCqibBhytQizcief/list-of-technical-ai-safety-exercises-and-projects,List of technical AI safety exercises and projects,['JakubK'],2023-01-19T09:35:18Z,lesswrong,, 175082,https://www.lesswrong.com/posts/CcuYkaP3BCuaxge8n/tyler-cowen-s-challenge-to-develop-an-actual-mathematical,Tyler Cowen's challenge to develop an 'actual mathematical model' for AI X-Risk,['Joe Brenton'],2023-05-16T11:57:20Z,lesswrong,, 175098,https://www.lesswrong.com/posts/gq9GR6duzcuxyxZtD/approximation-is-expensive-but-the-lunch-is-cheap,"Approximation is expensive, but the lunch is cheap","['Jesse Hoogland', 'Zach Furman']",2023-04-19T14:19:13Z,lesswrong,, 175119,https://www.lesswrong.com/posts/9yhKRuMwEqB3rQucJ/a-reaction-to-wolfgang-schwarz-s-on-functional-decision,"A Reaction to Wolfgang Schwarz's ""On Functional Decision Theory""",['Heighn'],2022-01-05T09:00:43Z,lesswrong,, 175133,https://www.lesswrong.com/posts/4T59sx6uQanf5T79h/interacting-with-a-boxed-ai,Interacting with a Boxed AI,['aphyer'],2022-04-01T22:42:30Z,lesswrong,, 175151,https://www.lesswrong.com/posts/haYD6N6BLvG7dkf25/concerns-surrounding-cev-a-case-for-human-friendliness-first,Concerns Surrounding CEV: A case for human friendliness first,['ai-crotes'],2020-01-22T21:03:10Z,lesswrong,, 175165,https://www.lesswrong.com/posts/SFuLQA7guCnG8pQ7T/all-agi-safety-questions-welcome-especially-basic-ones-may,All AGI Safety questions welcome (especially basic ones) [May 2023],['steven0461'],2023-05-08T22:30:50Z,lesswrong,, 175177,https://www.lesswrong.com/posts/28sEs97ehEo8WZYb8/openai-s-alignment-plans,OpenAI's Alignment Plans,['dkirmani'],2022-08-24T19:39:05Z,lesswrong,, 175197,https://www.lesswrong.com/posts/yB89JQdazhsDJhktH/ground-truth-label-imbalance-impairs-the-performance-of-1,Ground-Truth Label Imbalance Impairs the Performance of Contrast-Consistent Search (and Other Contrast-Pair-Based Unsupervised Methods),"['Tom Angsten', 'Ami Hays']",2023-08-05T17:55:47Z,lesswrong,, 175210,https://www.lesswrong.com/posts/tksCZ7L8Xenk8GczJ/beginner-s-question-about-rlhf,Beginner's question about RLHF,['FTPickle'],2023-08-08T15:48:24Z,lesswrong,, 175221,https://www.lesswrong.com/posts/3XMwPNMSbaPm2suGz/belief-in-the-implied-invisible,Belief in the Implied Invisible,['Eliezer Yudkowsky'],2008-04-08T07:40:49Z,lesswrong,, 175236,https://www.lesswrong.com/posts/FEFQSGLhJFpqmEhgi/does-davidad-s-uploading-moonshot-work,Does davidad's uploading moonshot work?,"['jacobjacob', 'lisathiergart', 'Anders_Sandberg', 'davidad', 'Arenamontanus']",2023-11-03T02:21:52Z,lesswrong,, 175265,https://www.lesswrong.com/posts/yBJftSnHcKAQkngzb/white-house-announces-new-actions-to-promote-responsible-ai,"White House Announces ""New Actions to Promote Responsible AI Innovation""",['berglund'],2023-05-04T12:15:50Z,lesswrong,, 175292,https://www.lesswrong.com/posts/LLRtjkvh9AackwuNB/on-a-list-of-lethalities,On A List of Lethalities,['Zvi'],2022-06-13T12:30:02Z,lesswrong,, 175326,https://www.lesswrong.com/posts/picPfLnygZC5aFjNr/hch-and-adversarial-questions,HCH and Adversarial Questions,['David Udell'],2022-02-19T00:52:30Z,lesswrong,, 175339,https://www.lesswrong.com/posts/LsNMRYLKnSphpFqdt/ai-misalignment-risk-from-gpt-like-systems,AI misalignment risk from GPT-like systems?,['fiso64'],2022-06-19T17:35:41Z,lesswrong,, 175348,https://www.lesswrong.com/posts/THJbo4ygsE2d5GvkP/shortening-timelines-there-s-no-buffer-anymore,Shortening Timelines: There's No Buffer Anymore,['Jeff Rose'],2023-02-11T19:53:19Z,lesswrong,, 175365,https://www.lesswrong.com/posts/dPcKrfEi87Zzr7w6H/is-the-work-on-ai-alignment-relevant-to-gpt,Is the work on AI alignment relevant to GPT?,['Richard_Kennaway'],2020-07-30T12:23:57Z,lesswrong,, 175382,https://www.lesswrong.com/posts/nW2EGC9XmsjEGsn9r/why-the-beliefs-values-dichotomy,Why the beliefs/values dichotomy?,['Wei Dai'],2009-10-20T16:35:52Z,lesswrong,, 175398,https://www.lesswrong.com/posts/gc7EiRpkgGSEZDSve/expected-utility-unlosing-agents-and-pascal-s-mugging,"Expected utility, unlosing agents, and Pascal's mugging",['Stuart_Armstrong'],2014-07-28T18:05:00Z,lesswrong,,
175421,https://www.lesswrong.com/posts/zswEixLariPTgxLcC/misbehaving-machines-the-emulated-brains-of-transhumanist,"""Misbehaving Machines: The Emulated Brains of Transhumanist Dreams"", Corry Shores",['gwern'],2011-12-29T22:33:17Z,lesswrong,, 175442,https://www.lesswrong.com/posts/QXEeis95sKrStLu2Q/early-experiments-in-reward-model-interpretation-using,Early Experiments in Reward Model Interpretation Using Sparse Autoencoders,"['marc/er', 'Amirali Abdullah', 'Rauno Arike', 'Fazl', 'nothoughtsheadempty']",2023-10-03T07:45:15Z,lesswrong,, 175465,https://www.lesswrong.com/posts/s9sDyZ9AA3jKbM7DY/two-small-experiments-on-gpt-2,Two Small Experiments on GPT-2,['jimrandomh'],2019-02-21T02:59:16Z,lesswrong,, 175477,https://www.lesswrong.com/posts/JEQPuzp932qQB6Ez6/chatgpt-is-surprisingly-and-uncanningly-good-at-pretending,ChatGPT is surprisingly and uncanningly good at pretending to be sentient,['ZT5'],2022-12-03T14:47:19Z,lesswrong,, 175501,https://www.lesswrong.com/posts/sTNuKcF63s9SneDPT/extreme-gdp-growth-is-a-bad-operating-definition-of-slow,"Extreme GDP growth is a bad operating definition of ""slow takeoff""",['lc'],2023-03-01T22:25:27Z,lesswrong,, 175517,https://www.lesswrong.com/posts/cERDWc78ZQdz7QYKD/linkpost-shorter-version-of-report-on-existential-risk-from,[Linkpost] Shorter version of report on existential risk from power-seeking AI,['Joe Carlsmith'],2023-03-22T18:09:03Z,lesswrong,, 175531,https://www.lesswrong.com/posts/PATFQm6hPN3Wycq4W/the-peerless,The Peerless,['Tamsin Leake'],2022-04-13T01:07:09Z,lesswrong,, 175544,https://www.lesswrong.com/posts/ou5raNNjamAaahtWG/ai-scares-and-changing-public-beliefs,AI scares and changing public beliefs,['Seth Herd'],2023-04-06T18:51:13Z,lesswrong,, 175572,https://www.lesswrong.com/posts/YkA3frgif5gHntf2t/lamda-is-not-an-llm,Lamda is not an LLM,['Kevin'],2022-06-19T11:13:01Z,lesswrong,, 175587,https://www.lesswrong.com/posts/QHM2eyFyfck8rAty8/the-openai-playground-for-gpt-3-is-a-terrible-interface-is,The OpenAI playground for GPT-3 is a terrible interface. Is there any great local (or web) app for exploring/learning with language models?,['aviv'],2022-08-13T16:34:14Z,lesswrong,, 175596,https://www.lesswrong.com/posts/qRC2PHtHP68i8uAcy/grey-goo-requires-ai-1,Grey Goo Requires AI,['harsimony'],2021-01-15T04:45:36Z,lesswrong,, 175616,https://www.lesswrong.com/posts/xt5Z2Kgp8HXTRKmQf/a-toy-model-of-the-treacherous-turn,A toy model of the treacherous turn,['Stuart_Armstrong'],2016-01-08T12:58:05Z,lesswrong,, 175636,https://www.lesswrong.com/posts/scwoBEju75C45W5n3/how-i-lost-100-pounds-using-tdt,How I Lost 100 Pounds Using TDT,['Zvi'],2011-03-14T15:50:01Z,lesswrong,, 175652,https://www.lesswrong.com/posts/fcnThTzmuwKEbECBH/quick-thoughts-on-language-models,Quick Thoughts on Language Models,['RohanS'],2023-07-18T20:38:58Z,lesswrong,, 175667,https://www.lesswrong.com/posts/bsJH4uDSLxS3eAZeJ/is-the-argument-that-ai-is-an-xrisk-valid,Is the argument that AI is an xrisk valid?,['MACannon'],2021-07-19T13:20:57Z,lesswrong,, 175677,https://www.lesswrong.com/posts/wf3BqEWrwbQj3fksF/on-expected-utility-part-4-dutch-books-cox-and-complete,"On expected utility, part 4: Dutch books, Cox, and Complete Class",['Joe Carlsmith'],2022-03-24T07:51:18Z,lesswrong,, 175697,https://www.lesswrong.com/posts/oqvsR2LmHWamyKDcj/large-language-models-will-be-great-for-censorship,Large Language Models will be Great for Censorship,['Ethan Edwards'],2023-08-21T19:03:55Z,lesswrong,, 175711,https://www.lesswrong.com/posts/jjb7mLwZt5cxChz2n/chatgpt-first-impressions,ChatGPT: First Impressions,['specbug'],2022-12-01T16:36:20Z,lesswrong,, 175740,https://www.lesswrong.com/posts/jkMHmc6J54LCZyzXN/hypothetical-what-would-you-do,Hypothetical: what would you do?,['JNS'],2023-08-03T22:39:55Z,lesswrong,, 175756,https://www.lesswrong.com/posts/7rkfPYzxKTpQDppmi/lesswrong-can-and-should-become-a-hacker-community,"Lesswrong can, and should, become a hacker community",['trevor'],2023-05-30T00:30:02Z,lesswrong,, 175782,https://www.lesswrong.com/posts/4qyqvKj3N87EQXo4Y/reliability-security-and-ai-risk-notes-from-infosec-textbook,"Reliability, Security, and AI risk: Notes from infosec textbook chapter 1",['Akash'],2023-04-07T15:47:17Z,lesswrong,, 175812,https://www.lesswrong.com/posts/sDaopzAkdyd5W69Br/ai-2,AI #2,['Zvi'],2023-03-02T14:50:01Z,lesswrong,, 175850,https://www.lesswrong.com/posts/RYcoJdvmoBbi5Nax7/jailbreaking-chatgpt-on-release-day,Jailbreaking ChatGPT on Release Day,['Zvi'],2022-12-02T13:10:01Z,lesswrong,, 175865,https://www.lesswrong.com/posts/wYBv4WRuusKNKM2gt/top-down-and-bottom-up-logical-probabilities,Top-Down and Bottom-Up Logical Probabilities,['Manfred'],2014-07-22T08:53:15Z,lesswrong,, 175883,https://www.lesswrong.com/posts/csiAvRMGG5aAWvKWb/draft-introduction-to-optimization,Draft: Introduction to optimization,['Alex_Altair'],2023-03-26T17:25:55Z,lesswrong,, 175893,https://www.lesswrong.com/posts/qRSgHLb8yLXzDg4nf/reviews-of-is-power-seeking-ai-an-existential-risk,Reviews of “Is power-seeking AI an existential risk?”,['Joe Carlsmith'],2021-12-16T20:48:27Z,lesswrong,, 175902,https://www.lesswrong.com/posts/7kcDTJmEPAnSbyeYh/thoughts-about-ood-alignment,Thoughts about OOD alignment,['Catnee'],2022-08-24T15:31:59Z,lesswrong,, 175913,https://www.lesswrong.com/posts/tGhz4aKyNzXjvnWhX/expected-utility-without-the-independence-axiom,Expected utility without the independence axiom,['Stuart_Armstrong'],2009-10-28T14:40:08Z,lesswrong,, 175929,https://www.lesswrong.com/posts/8xN5KYB9xAgSSi494/against-the-open-source-closed-source-dichotomy-regulated,Against the Open Source / Closed Source Dichotomy: Regulated Source as a Model for Responsible AI Development,['alex.herwix'],2023-09-04T20:25:57Z,lesswrong,, 175942,https://www.lesswrong.com/posts/6sJeAp2gziBpPBvLr/humanity-as-an-entity-an-alternative-to-coherent,Humanity as an entity: An alternative to Coherent Extrapolated Volition,['ZT5'],2022-04-22T12:48:28Z,lesswrong,, 175960,https://www.lesswrong.com/posts/SAavJdNHgeT3SB9rx/linkpost-new-multi-modal-deepmind-model-fusing-chinchilla,[Linkpost] New multi-modal Deepmind model fusing Chinchilla with images and videos,['p.b.'],2022-04-30T03:47:13Z,lesswrong,, 175972,https://www.lesswrong.com/posts/YgFbCWxzXYCpgzahe/acausal-now-we-could-totally-acausally-bargain-with-aliens,Acausal Now: We could totally acausally bargain with aliens at our current tech level if desired,['Christopher King'],2023-08-09T00:50:51Z,lesswrong,, 175993,https://www.lesswrong.com/posts/74xt2jAMfFTGtEujN/unity-gridworlds,Unity Gridworlds,['WillPetillo'],2023-10-15T04:36:32Z,lesswrong,, 176010,https://www.lesswrong.com/posts/pmhcZiv32FqSgckEE/act-1-transformer-for-actions,ACT-1: Transformer for Actions,['Daniel Kokotajlo'],2022-09-14T19:09:40Z,lesswrong,, 176025,https://www.lesswrong.com/posts/rQH4gRmPMJyjtMpTn/rlhf,RLHF,['Ansh Radhakrishnan'],2022-05-12T21:18:21Z,lesswrong,, 176055,https://www.lesswrong.com/posts/nMHQWhbxEYb3tErTR/gpt-4-can-catch-subtle-cross-language-translation-mistakes,GPT-4 can catch subtle cross-language translation mistakes,['Michael Tontchev'],2023-07-27T01:39:23Z,lesswrong,, 176064,https://www.lesswrong.com/posts/u8isNgN7rRYBZ35rQ/in-favour-of-a-selective-cev-initial-dynamic,In favour of a selective CEV initial dynamic,['anonymous'],2011-10-21T17:33:37Z,lesswrong,, 176080,https://www.lesswrong.com/posts/WoFzaziRp8LPqomyC/coherence-therapy-with-llms-quick-demo,Coherence Therapy with LLMs - quick demo,['Chipmonk'],2023-08-14T03:34:29Z,lesswrong,, 176089,https://www.lesswrong.com/posts/jkaLGoNLdsp654KhD/prediction-any-uncontrollable-ai-will-turn-earth-into-a,Prediction: any uncontrollable AI will turn earth into a giant computer,['Karl von Wendt'],2023-04-17T12:30:44Z,lesswrong,, 176103,https://www.lesswrong.com/posts/FXWwsTWAjTwCtZmQj/the-evolutionary-pathway-from-biological-to-digital,The Evolutionary Pathway from Biological to Digital Intelligence: A Cosmic Perspective,['George360'],2023-09-05T17:47:01Z,lesswrong,, 176117,https://www.lesswrong.com/posts/7xJiotzeonZaAbgSp/april-fools-user-gpt2-is-banned,[April Fools] User GPT2 is Banned,['jimrandomh'],2019-04-02T06:00:21Z,lesswrong,, 176131,https://www.lesswrong.com/posts/yjMXhiWDKdFvQ5ixC/mastering-stratego-deepmind,Mastering Stratego (Deepmind),['anonymous'],2022-12-02T02:21:57Z,lesswrong,, 176150,https://www.lesswrong.com/posts/RTkatYxJWvXR4Qbyd/deceptive-alignment-is-less-than-1-likely-by-default,Deceptive Alignment is <1% Likely by Default,['DavidW'],2023-02-21T15:09:28Z,lesswrong,, 176172,https://www.lesswrong.com/posts/r6f9DPBZYpWFw8Qrb/validator-models-a-simple-approach-to-detecting-goodharting,Validator models: A simple approach to detecting goodharting,['beren'],2023-02-20T21:32:26Z,lesswrong,, 176187,https://www.lesswrong.com/posts/EKc4gz7nPCkntwEwg/are-ai-developers-playing-with-fire,Are AI developers playing with fire?,['marcusarvan'],2023-03-16T19:12:20Z,lesswrong,, 176209,https://www.lesswrong.com/posts/rpRsksjrBXEDJuHHy/brain-computer-interfaces-and-ai-alignment,Brain-Computer Interfaces and AI Alignment,['niplav'],2021-08-28T19:48:53Z,lesswrong,,
176242,https://www.lesswrong.com/posts/zt6hRsDE84HeBKh7E/reducing-sycophancy-and-improving-honesty-via-activation,Reducing sycophancy and improving honesty via activation steering,['Nina Rimsky'],2023-07-28T02:46:23Z,lesswrong,, 176254,https://www.lesswrong.com/posts/bhBgjpZSAvxFGYn3s/what-is-being-improved-in-recursive-self-improvement,What is being improved in recursive self improvement?,['Lone Pine'],2022-04-25T18:30:48Z,lesswrong,, 176269,https://www.lesswrong.com/posts/aH7Xtuqa3fdJDrio9/program-search-and-incomplete-understanding,Program Search and Incomplete Understanding,['Diffractor'],2018-04-29T04:32:22Z,lesswrong,, 176287,https://www.lesswrong.com/posts/o3dJsJ3tYTGLnp4bY/self-improvement-executors-are-not-goal-maximizers,self-improvement-executors are not goal-maximizers,['bhauth'],2023-06-01T20:46:17Z,lesswrong,, 176302,https://www.lesswrong.com/posts/C4Hz3ZPcD4Pef9nfu/corrigibility-thoughts-ii-the-robot-operator,Corrigibility thoughts II: the robot operator,['Stuart_Armstrong'],2017-01-18T15:52:22Z,lesswrong,, 176319,https://www.lesswrong.com/posts/g2aeGupbr3XC68tLJ/upcoming-ai-regulations-are-likely-to-make-for-an-unsafer,Upcoming AI regulations are likely to make for an unsafer world,['shminux'],2023-06-03T01:07:36Z,lesswrong,, 176329,https://www.lesswrong.com/posts/59dKN8XQGx952irWg/cruxes-for-overhang-1,Cruxes for overhang,['Zach Stein-Perlman'],2023-09-14T17:00:57Z,lesswrong,, 176346,https://www.lesswrong.com/posts/2SCSpN7BRoGhhwsjg/using-consensus-mechanisms-as-an-approach-to-alignment,Using Consensus Mechanisms as an approach to Alignment,['Prometheus'],2023-06-10T23:38:22Z,lesswrong,, 176376,https://www.lesswrong.com/posts/Ghrdnc26ftJrxD49z/carl-shulman-on-the-lunar-society-7-hour-two-part-podcast,"Carl Shulman on The Lunar Society (7 hour, two-part podcast)",['ESRogs'],2023-06-28T01:23:53Z,lesswrong,, 176395,https://www.lesswrong.com/posts/XATFiXoW6zDP6w8fd/status-conscious,Status conscious,['avantika.mehra'],2023-01-16T17:44:19Z,lesswrong,, 176413,https://www.lesswrong.com/posts/2QexGHrqSxcuwyGmf/linkpost-large-language-models-converge-on-brain-like-word,[Linkpost] Large Language Models Converge on Brain-Like Word Representations,['Bogdan Ionut Cirstea'],2023-06-11T11:20:09Z,lesswrong,, 176422,https://www.lesswrong.com/posts/AcDsTqA2vbbzmteuE/can-chatgpt-count,Can ChatGPT count?,['p.b.'],2023-01-07T07:57:21Z,lesswrong,, 176435,https://www.lesswrong.com/posts/JxzRswbeshRmyhqTL/further-considerations-on-the-evidentialist-s-wager,Further considerations on the Evidentialist's Wager,['Martín Soto'],2022-11-03T20:06:32Z,lesswrong,, 176453,https://www.lesswrong.com/posts/o3FmHewRzBCnMnsnj/if-agi-were-coming-in-a-year-what-should-we-do,"If AGI were coming in a year, what should we do?",['MichaelStJules'],2022-04-01T00:41:42Z,lesswrong,, 176467,https://www.lesswrong.com/posts/TDHDWMP5PRk4f3zrR/zero-knowledge-cooperation,Zero-Knowledge Cooperation,['bryjnar'],2017-10-25T05:35:34Z,lesswrong,, 176481,https://www.lesswrong.com/posts/Eho6PJrtCw5hgNegv/does-gpt-4-s-ability-to-compress-text-in-a-way-that-it-can,Does GPT-4's ability to compress text in a way that it can actually decompress indicate self-awareness?,['FinalFormal2'],2023-04-10T16:48:12Z,lesswrong,, 176491,https://www.lesswrong.com/posts/Cq45AuedYnzekp3LX/you-may-already-be-a-sinner,You May Already Be A Sinner,['Scott Alexander'],2009-03-09T23:18:36Z,lesswrong,, 176506,https://www.lesswrong.com/posts/pu6D2EdJiz2mmhxfB/archetypal-transfer-learning-a-proposed-alignment-solution,Archetypal Transfer Learning: a Proposed Alignment Solution that solves the Inner & Outer Alignment Problem while adding Corrigible Traits to GPT-2-medium,['MiguelDev'],2023-04-26T01:37:22Z,lesswrong,, 176516,https://www.lesswrong.com/posts/L8va2k323Zu4Fjr39/deontology-and-tool-ai,Deontology and Tool AI,['Nathan1123'],2022-08-05T05:20:15Z,lesswrong,, 176537,https://www.lesswrong.com/posts/nuJFTS5iiJKT5G5yh/polysemantic-attention-head-in-a-4-layer-transformer,Polysemantic Attention Head in a 4-Layer Transformer,"['Jett', 'cmathw', 'StefanHex']",2023-11-09T16:16:35Z,lesswrong,, 176552,https://www.lesswrong.com/posts/rPr6E2qakW9hnaxfc/utilitarianism-is-irrational-or-self-undermining,Utilitarianism is irrational or self-undermining,['MichaelStJules'],2023-10-07T21:09:00Z,lesswrong,, 176572,https://www.lesswrong.com/posts/agbSrvyL3tDh3ZP7h/ray-kurzweil-and-uploading-just-say-no-nick-agar,"""Ray Kurzweil and Uploading: Just Say No!"", Nick Agar",['gwern'],2011-12-02T21:42:49Z,lesswrong,, 176587,https://www.lesswrong.com/posts/9GC35E9JkkcLtBi7Y/competence-vs-alignment,Competence vs Alignment,['Ariel Kwiatkowski'],2020-09-30T21:03:02Z,lesswrong,, 176601,https://www.lesswrong.com/posts/oam7CAZzG5o3mwdSz/talking-publicly-about-ai-risk,Talking publicly about AI risk,['Jan_Kulveit'],2023-04-21T11:28:17Z,lesswrong,, 176624,https://www.lesswrong.com/posts/GqTeChFnXdJzDzbMd/realism-and-rationality-2,Realism and Rationality,['bmgarfinkel'],2019-09-16T03:09:45Z,lesswrong,, 176641,https://www.lesswrong.com/posts/AJtfNyBsum6ZzWxKR/will-ai-see-sudden-progress,Will AI See Sudden Progress?,['KatjaGrace'],2018-02-26T00:41:15Z,lesswrong,, 176655,https://www.lesswrong.com/posts/mhjAwsxTMmqFNbKLQ/romance-misunderstanding-social-stances-and-the-human-llm,"Romance, misunderstanding, social stances, and the human LLM",['Kaj_Sotala'],2023-04-27T12:59:09Z,lesswrong,, 176676,https://www.lesswrong.com/posts/M25tHHo58Bnto8sfu/humanity-s-lack-of-unity-will-lead-to-agi-catastrophe,Humanity's Lack of Unity Will Lead to AGI Catastrophe,['MiguelDev'],2023-03-19T19:18:02Z,lesswrong,, 176700,https://www.lesswrong.com/posts/kToFGkGj5u5eYLJPF/q-and-a-with-abram-demski-on-risks-from-ai,Q&A with Abram Demski on risks from AI,['XiXiDu'],2012-01-17T09:43:11Z,lesswrong,, 176720,https://www.lesswrong.com/posts/atxoviwLcPJPdYMqo/if-we-had-known-the-atmosphere-would-ignite,If we had known the atmosphere would ignite,['Jeffs'],2023-08-16T20:28:51Z,lesswrong,, 176729,https://www.lesswrong.com/posts/dRsrfC8LN4z2oehJg/the-heritability-of-human-values-a-behavior-genetic-critique,The heritability of human values: A behavior genetic critique of Shard Theory,['geoffreymiller'],2022-10-20T15:51:36Z,lesswrong,, 176755,https://www.lesswrong.com/posts/sLHjnoyfEzf6BRC4w/what-are-the-biggest-current-impacts-of-ai,What are the biggest current impacts of AI?,['Sam Clarke'],2021-03-07T21:44:11Z,lesswrong,, 176773,https://www.lesswrong.com/posts/ProjTv8rEmWviF4od/my-seri-mats-application,My SERI MATS Application,['Daniel Paleka'],2022-05-30T02:04:07Z,lesswrong,, 176789,https://www.lesswrong.com/posts/YguseW2zMYe8tMCbW/cruxes-on-us-lead-for-some-domestic-ai-regulation,Cruxes on US lead for some domestic AI regulation,['Zach Stein-Perlman'],2023-09-10T18:00:07Z,lesswrong,, 176815,https://www.lesswrong.com/posts/ctWGEQznumzyTRGFs/against-ai-risk,"against ""AI risk""",['Wei Dai'],2012-04-11T22:46:11Z,lesswrong,, 176835,https://www.lesswrong.com/posts/SoqQ3ShW2xmdpjeZ7/why-rationalists-should-care-more-about-free-software,Why rationalists should care (more) about free software,['RichardJActon'],2022-01-23T17:31:34Z,lesswrong,, 176860,https://www.lesswrong.com/posts/KwdcMts8P8hacqwrX/noticing-the-taste-of-lotus,Noticing the Taste of Lotus,['Valentine'],2018-04-27T20:05:24Z,lesswrong,, 176879,https://www.lesswrong.com/posts/CttZFMmikuKuFmNT7/meta-decision-theory-and-newcomb-s-problem,Meta Decision Theory and Newcomb's Problem,['wdmacaskill'],2013-03-05T01:29:41Z,lesswrong,, 176890,https://www.lesswrong.com/posts/KaJmqHgcexBKMiThL/evaluating-openai-s-alignment-plans-using-training-stories,Evaluating OpenAI's alignment plans using training stories,['ojorgensen'],2022-08-25T16:12:39Z,lesswrong,, 176904,https://www.lesswrong.com/posts/gkAecqbuPw4iggiub/common-mistakes-people-make-when-thinking-about-decision,Common mistakes people make when thinking about decision theory,['cousin_it'],2012-03-27T20:03:08Z,lesswrong,, 176917,https://www.lesswrong.com/posts/shnSyzv4Jq3bhMNw5/alphago-zero-and-the-foom-debate,AlphaGo Zero and the Foom Debate,['Eliezer Yudkowsky'],2017-10-21T02:18:50Z,lesswrong,, 176935,https://www.lesswrong.com/posts/4RnpP3RP9HFkNzBdm/loose-thoughts-on-agi-risk,Loose thoughts on AGI risk,['Yitz'],2022-06-23T01:02:25Z,lesswrong,, 176955,https://www.lesswrong.com/posts/T98kdFL5bxBWSiE3N/best-introductory-overviews-of-agi-safety,Best introductory overviews of AGI safety?,['JakubK'],2022-12-13T19:01:38Z,lesswrong,, 176965,https://www.lesswrong.com/posts/oi694SngFLHEZxiz5/can-we-align-a-self-improving-agi,Can We Align a Self-Improving AGI?,['Peter S. Park'],2022-08-30T00:14:46Z,lesswrong,, 177001,https://www.lesswrong.com/posts/hkX2HWZJ7xLgRZafJ/sketch-validity-criterion-for-logical-counterfactuals,[Sketch] Validity Criterion for Logical Counterfactuals,['DragonGod'],2022-10-11T13:31:23Z,lesswrong,, 177013,https://www.lesswrong.com/posts/Ncv5b2sjLtyi9oKGz/optionality-approach-to-ethics,Optionality approach to ethics,['Ryo'],2023-11-13T15:23:21Z,lesswrong,, 177027,https://www.lesswrong.com/posts/N4wb4FbnuhaqSg7dT/ai-governance-student-hackathon-on-saturday-april-23,"AI governance student hackathon on Saturday, April 23: register now!",['mic'],2022-04-12T04:48:15Z,lesswrong,, 177038,https://www.lesswrong.com/posts/rTJrqtDLxAPxiW3sk/my-first-year-in-ai-alignment,My first year in AI alignment,['Alex_Altair'],2023-01-02T01:28:03Z,lesswrong,, 177058,https://www.lesswrong.com/posts/4nM2JueLjwpuP7bNs/chatgpt-plays-20-questions-sometimes-needs-help,ChatGPT Plays 20 Questions [sometimes needs help],['Bill Benzon'],2023-10-17T17:30:13Z,lesswrong,, 177072,https://www.lesswrong.com/posts/PiXS9kE4qX68KveCt/what-would-i-do-self-prediction-in-simple-algorithms,What Would I Do? Self-prediction in Simple Algorithms,['Scott Garrabrant'],2020-07-20T04:27:25Z,lesswrong,, 177095,https://www.lesswrong.com/posts/ujnJdexBWAJcwCDoX/can-someone-explain-to-me-why-miri-is-so-pessimistic-of-our,Can someone explain to me why MIRI is so pessimistic of our chances of survival?,['anonymous'],2022-04-14T20:28:31Z,lesswrong,, 177119,https://www.lesswrong.com/posts/kFb8L4omGMk2kMK3K/embedded-agency-not-just-an-ai-problem,Embedded Agency: Not Just an AI Problem,['johnswentworth'],2019-06-27T00:35:32Z,lesswrong,, 177128,https://www.lesswrong.com/posts/ZKzAjKSeNRtiaeJns/if-i-were-a-well-intentioned-ai-ii-acting-in-a-world,If I were a well-intentioned AI... II: Acting in a world,['Stuart_Armstrong'],2020-02-27T11:58:32Z,lesswrong,, 177148,https://www.lesswrong.com/posts/dq3KsCsqNotWc8nAK/cascades-cycles-insight,"Cascades, Cycles, Insight...",['Eliezer Yudkowsky'],2008-11-24T09:33:40Z,lesswrong,, 177165,https://www.lesswrong.com/posts/q5xoDf5aHjtsa2QAf/anti-squatted-ai-x-risk-domains-index,Anti-squatted AI x-risk domains index,['plex'],2022-08-12T12:01:25Z,lesswrong,, 177174,https://www.lesswrong.com/posts/4mGDZurjv6j8AWhNe/enhancing-corrigibility-in-ai-systems-through-robust,Enhancing Corrigibility in AI Systems through Robust Feedback Loops,['Justausername'],2023-08-24T03:53:41Z,lesswrong,, 177194,https://www.lesswrong.com/posts/HuBRXaqN2FPtQhywr/artificial-intelligence-wireheading,Artificial intelligence wireheading,['Big Tony'],2022-08-12T03:06:40Z,lesswrong,, 177204,https://www.lesswrong.com/posts/p9CSjcCoLqFptogjW/ai-oracles-on-blockchain-1,AI oracles on blockchain,['Caravaggio'],2021-04-06T20:13:30Z,lesswrong,, 177220,https://www.lesswrong.com/posts/tyts4Dw7SafsxBjar/what-can-we-learn-from-lex-fridman-s-interview-with-sam,What can we learn from Lex Fridman’s interview with Sam Altman?,['Karl von Wendt'],2023-03-27T06:27:40Z,lesswrong,, 177243,https://www.lesswrong.com/posts/HxYneuv9XRit4dMRm/decision-theory-paradox-answer-key,Decision Theory Paradox: Answer Key,['orthonormal'],2011-09-05T23:13:33Z,lesswrong,, 177252,https://www.lesswrong.com/posts/duAkuSqJhGDcfMaTA/reflection-in-probabilistic-logic,Reflection in Probabilistic Logic,['Eliezer Yudkowsky'],2013-03-24T16:37:36Z,lesswrong,, 177264,https://www.lesswrong.com/posts/Rn4wn3oqfinAsqBSf/intent-alignment-should-not-be-the-goal-for-agi-x-risk,Intent alignment should not be the goal for AGI x-risk reduction,['John Nay'],2022-10-26T01:24:22Z,lesswrong,, 177287,https://www.lesswrong.com/posts/7T5J3WM5zqcnTPofS/the-fixed-sum-fallacy,The Fixed Sum Fallacy,['cousin_it'],2009-07-03T13:01:56Z,lesswrong,, 177297,https://www.lesswrong.com/posts/jNTp87ioEWsZw3nar/gliders-in-language-models,Gliders in Language Models,['Alexandre Variengien'],2022-11-25T00:38:12Z,lesswrong,, 177315,https://www.lesswrong.com/posts/miwf7qQTh2HXNnSuq/decision-theory-why-pearl-helps-reduce-could-and-would-but,"Decision theory: Why Pearl helps reduce “could” and “would”, but still leaves us with at least three alternatives",['AnnaSalamon'],2009-09-06T06:10:20Z,lesswrong,, 177333,https://www.lesswrong.com/posts/KuQCHGfDBmHLtrcp7/are-the-fundamental-physical-constants-computable,Are the fundamental physical constants computable?,['Yair Halberstadt'],2022-04-05T15:05:42Z,lesswrong,, 177342,https://www.lesswrong.com/posts/pgQ3m73kpjGDgKuRM/how-much-computational-power-does-it-take-to-match-the-human,How Much Computational Power Does It Take to Match the Human Brain?,['habryka'],2020-09-12T06:38:30Z,lesswrong,, 177354,https://www.lesswrong.com/posts/rxnpuzEw4boJrs3CY/rejected-early-drafts-of-newcomb-s-problem,Rejected Early Drafts of Newcomb's Problem,['zahmahkibo'],2022-09-06T19:04:54Z,lesswrong,, 177376,https://www.lesswrong.com/posts/nupJLBe2xxbB8KJxc/miriam-yevick-on-why-both-symbols-and-networks-are-necessary,Miriam Yevick on why both symbols and networks are necessary for artificial minds,['Bill Benzon'],2022-06-06T08:34:12Z,lesswrong,, 177390,https://www.lesswrong.com/posts/mhXW9FFAiBpx3CnHK/creating-welfare-biology-a-research-proposal,Creating Welfare Biology: A Research Proposal,['ozymandias'],2017-11-16T19:06:11Z,lesswrong,,
177403,https://www.lesswrong.com/posts/hfuqzaPGaPA2PwFPM/tutor-gpt-and-pedagogical-reasoning,Tutor-GPT & Pedagogical Reasoning,['courtlandleer'],2023-06-05T17:53:10Z,lesswrong,, 177419,https://www.lesswrong.com/posts/WercWcbpozCt4eRci/why-do-we-post-our-ai-safety-plans-on-the-internet,Why do we post our AI safety plans on the Internet?,['Peter S. Park'],2022-11-03T16:02:21Z,lesswrong,, 177437,https://www.lesswrong.com/posts/EhHdZ5yBgEvLLx6Pw/chad-jones-paper-modeling-ai-and-x-risk-vs-growth,Chad Jones paper modeling AI and x-risk vs. growth,['jasoncrawford'],2023-04-26T20:07:06Z,lesswrong,, 177450,https://www.lesswrong.com/posts/foM8SA3ftY94MGMq9/assessment-of-intelligence-agency-functionality-is-difficult,Assessment of intelligence agency functionality is difficult yet important,['trevor'],2023-08-24T01:42:21Z,lesswrong,, 177466,https://www.lesswrong.com/posts/CsyJCenESEsAESzWg/how-do-low-level-hypotheses-constrain-high-level-ones-the,How do low level hypotheses constrain high level ones? The mystery of the disappearing diamond.,['Christopher King'],2023-07-11T19:27:49Z,lesswrong,, 177476,https://www.lesswrong.com/posts/jNAAZ9XNyt82CXosr/mirrors-and-paintings,Mirrors and Paintings,['Eliezer Yudkowsky'],2008-08-23T00:29:05Z,lesswrong,, 177494,https://www.lesswrong.com/posts/ip2vSzkcmi3rBbnYE/agi-doesn-t-need-understanding-intention-or-consciousness-in,"AGI doesn't need understanding, intention, or consciousness in order to kill us, only intelligence",['James Blaha'],2023-02-20T00:55:34Z,lesswrong,, 177521,https://www.lesswrong.com/posts/m3fyWQgCcFwro5KQh/reframing-the-ai-risk,Reframing the AI Risk,['Thane Ruthenis'],2022-07-01T18:44:32Z,lesswrong,, 177535,https://www.lesswrong.com/posts/EEWzh3oDTpCNEkqzX/an-example-elevator-pitch-for-ai-doom,An example elevator pitch for AI doom,['laserfiche'],2023-04-15T12:29:17Z,lesswrong,, 177548,https://www.lesswrong.com/posts/Wt89KzBWPiHm6XkD7/positive-outcomes-under-an-unaligned-agi-takeover,Positive outcomes under an unaligned AGI takeover,['Yitz'],2022-05-12T07:45:37Z,lesswrong,, 177568,https://www.lesswrong.com/posts/JeMGZNZ6tuBWJHqvi/what-2025-looks-like,What 2025 looks like,['Ruby'],2023-05-01T22:53:16Z,lesswrong,, 177597,https://www.lesswrong.com/posts/QnBZkNJNbJK9k5Xi7/linkpost-sam-altman-s-2015-blog-posts-machine-intelligence,[Linkpost] Sam Altman's 2015 Blog Posts Machine Intelligence Parts 1 & 2,['Olivia Jimenez'],2023-04-28T16:02:00Z,lesswrong,, 177626,https://www.lesswrong.com/posts/6QPFKHRsuY63cuJwh/making-decisions-using-multiple-worldviews,Making decisions using multiple worldviews,['Richard_Ngo'],2022-07-13T19:15:03Z,lesswrong,, 177647,https://www.lesswrong.com/posts/xmoFoMht6iKXmzhiK/what-information-apart-from-the-connectome-is-necessary-to,"What information, apart from the connectome, is necessary to simulate a brain?",['Richard_Ngo'],2020-03-20T02:03:15Z,lesswrong,, 177656,https://www.lesswrong.com/posts/mMBTPTjRbsrqbSkZE/sorting-pebbles-into-correct-heaps,Sorting Pebbles Into Correct Heaps,['Eliezer Yudkowsky'],2008-08-10T01:00:00Z,lesswrong,, 177679,https://www.lesswrong.com/posts/yykNvq257zBLDNmJo/infra-bayesianism-naturally-leads-to-the-monotonicity,"Infra-Bayesianism naturally leads to the monotonicity principle, and I think this is a problem",['matolcsid'],2023-04-26T21:39:25Z,lesswrong,, 177696,https://www.lesswrong.com/posts/Bj244uWzDBXvE2N2S/a-model-of-udt-with-a-halting-oracle,A model of UDT with a halting oracle,['cousin_it'],2011-12-18T14:18:57Z,lesswrong,, 
177708,https://www.lesswrong.com/posts/mTtxJKN3Ew8CAEHGr/microdooms-averted-by-working-on-ai-safety,Microdooms averted by working on AI Safety,['nikola'],2023-09-17T21:46:06Z,lesswrong,, 177727,https://www.lesswrong.com/posts/6tp4YAXjWLM3HocW7/to-what-extent-is-your-agi-timeline-bimodal-or-otherwise,"To what extent is your AGI timeline bimodal or otherwise ""bumpy""?",['jchan'],2022-05-16T17:42:54Z,lesswrong,, 177736,https://www.lesswrong.com/posts/WEPtBsJJKyfqjkBKc/instrumental-convergence-to-complexity-preservation,Instrumental Convergence to Complexity Preservation,['Macro Flaneur'],2023-07-13T17:40:49Z,lesswrong,, 177749,https://www.lesswrong.com/posts/WoXDjh3Bqnene9uxA/natural-value-learning,Natural Value Learning,['Chris van Merwijk'],2022-03-20T12:44:20Z,lesswrong,, 177759,https://www.lesswrong.com/posts/85DTWEmA25sTciHvy/how-we-could-stumble-into-ai-catastrophe,How we could stumble into AI catastrophe,['HoldenKarnofsky'],2023-01-13T16:20:06Z,lesswrong,, 177793,https://www.lesswrong.com/posts/gZbHSWcLvj7ZopSas/controlling-constant-programs,Controlling Constant Programs,['Vladimir_Nesov'],2010-09-05T13:45:48Z,lesswrong,, 177811,https://www.lesswrong.com/posts/AhANWF2Y4SXYeN5Wr/exploiting-newcomb-s-game-show,Exploiting Newcomb's Game Show,['carterallen'],2023-05-25T04:01:09Z,lesswrong,, 177824,https://www.lesswrong.com/posts/fbekxBfgvfc7pmnzB/how-to-win-the-ai-box-experiment-sometimes,How To Win The AI Box Experiment (Sometimes),['pinkgothic'],2015-09-12T12:34:53Z,lesswrong,, 177841,https://www.lesswrong.com/posts/DMtzwPuFQtDmPEppF/exploring-functional-decision-theory-fdt-and-a-modified,Exploring Functional Decision Theory (FDT) and a modified version (ModFDT),['MiguelDev'],2023-07-05T14:06:14Z,lesswrong,, 177859,https://www.lesswrong.com/posts/RXLoo3pXgY2hjdXrg/life-s-story-continues,Life's Story Continues,['Eliezer Yudkowsky'],2008-11-21T23:05:47Z,lesswrong,, 177878,https://www.lesswrong.com/posts/qoG4tR8TGEYjoDmw2/transcript-of-a-presentation-on-catastrophic-risks-from-ai,Transcript of a presentation on catastrophic risks from AI,['RobertM'],2023-05-05T01:38:18Z,lesswrong,, 177908,https://www.lesswrong.com/posts/JGftZ8CgBt3dZJNto/can-dall-e-understand-simple-geometry,Can DALL-E understand simple geometry?,['Isaac King'],2022-06-18T04:37:46Z,lesswrong,, 177918,https://www.lesswrong.com/posts/cB2Rtnp7DBTpDy3ii/memory-bandwidth-constraints-imply-economies-of-scale-in-ai,Memory bandwidth constraints imply economies of scale in AI inference,['Ege Erdil'],2023-09-17T14:01:35Z,lesswrong,, 177934,https://www.lesswrong.com/posts/pGvyqAQw6yqTjpKf4/the-gift-we-give-to-tomorrow,The Gift We Give To Tomorrow,['Eliezer Yudkowsky'],2008-07-17T06:07:54Z,lesswrong,, 177950,https://www.lesswrong.com/posts/eyPTkNwCQoWHCdYTs/you-are-better-at-math-and-alignment-than-you-think,You are better at math (and alignment) than you think,['trevor'],2022-10-13T03:07:52Z,lesswrong,, 177973,https://www.lesswrong.com/posts/fc9KjZeSLuHN7HfW6/making-nanobots-isn-t-a-one-shot-process-even-for-an,"Making Nanobots isn't a one-shot process, even for an artificial superintelligance",['dankrad'],2023-04-25T00:39:25Z,lesswrong,, 177994,https://www.lesswrong.com/posts/sr22pEXYf76YqAt8i/gpt-3-gan,GPT-3 + GAN,['stick109'],2020-10-17T07:58:43Z,lesswrong,, 178003,https://www.lesswrong.com/posts/rPCHKfSYFSWGYpTKX/proposal-using-monte-carlo-tree-search-instead-of-rlhf-for,Proposal: Using Monte Carlo tree search instead of RLHF for alignment research,['Christopher King'],2023-04-20T19:57:43Z,lesswrong,, 
178032,https://www.lesswrong.com/posts/z2FQv2ejzBWRESLjk/let-s-go-meta-grammatical-knowledge-and-self-referential,Let’s go meta: Grammatical knowledge and self-referential sentences [ChatGPT],['Bill Benzon'],2022-12-12T21:50:32Z,lesswrong,, 178042,https://www.lesswrong.com/posts/xri58L7WkyeKyKv4P/i-am-scared-of-posting-negative-takes-about-bing-s-ai,I Am Scared of Posting Negative Takes About Bing's AI,['Yitz'],2023-02-17T20:50:10Z,lesswrong,, 178058,https://www.lesswrong.com/posts/jdCCBwdPqDNnzkkrm/gpt-4-what-we-i-know-about-it,GPT-4: What we (I) know about it,['Robert_AIZI'],2023-03-15T20:12:57Z,lesswrong,, 178095,https://www.lesswrong.com/posts/6onoq24Py2rgXmJPE/clarification-behaviourism-and-reinforcement,Clarification: Behaviourism & Reinforcement,['Zaine'],2012-10-10T05:30:21Z,lesswrong,, 178113,https://www.lesswrong.com/posts/hedtrNfdfH3N5S5kW/adversarial-priors-not-paying-people-to-lie-to-you,Adversarial Priors: Not Paying People to Lie to You,['eva_'],2022-11-10T02:29:13Z,lesswrong,, 178132,https://www.lesswrong.com/posts/DaaFce3hBoEzYhdvz/how-well-did-manifold-predict-gpt-4,How well did Manifold predict GPT-4?,['David Chee'],2023-03-15T23:19:06Z,lesswrong,, 178156,https://www.lesswrong.com/posts/nqJdGA7APrgMX8hwT/early-results-do-llms-complete-false-equations-with-false,Early Results: Do LLMs complete false equations with false equations?,['Robert_AIZI'],2023-03-30T20:14:23Z,lesswrong,, 178166,https://www.lesswrong.com/posts/aayFmJEF5PycJWuvW/a-way-to-beat-superrational-edt-agents,A way to beat superrational/EDT agents?,['Abhimanyu Pallavi Sudhir'],2020-08-17T14:33:58Z,lesswrong,, 178175,https://www.lesswrong.com/posts/z5kDsPJoesimzkQvS/how-likely-are-scenarios-where-agi-ends-up-overtly-or-de-1,How likely are scenarios where AGI ends up overtly or de facto torturing us? How likely are scenarios where AGI prevents us from committing suicide or dying?,['JohnGreer'],2023-03-28T18:00:47Z,lesswrong,,
178184,https://www.lesswrong.com/posts/CsjLDAhQat4PY6dsc/order-matters-for-deceptive-alignment-1,Order Matters for Deceptive Alignment,['DavidW'],2023-02-15T19:56:07Z,lesswrong,, 178210,https://www.lesswrong.com/posts/EtrYRWnwymJejtdRG/winners-take-how-much,Winners-take-how-much?,['YonatanK'],2023-05-29T21:56:46Z,lesswrong,, 178226,https://www.lesswrong.com/posts/J9eF4nA6wJW6hPueN/the-6d-effect-when-companies-take-risks-one-email-can-be,"The 6D effect: When companies take risks, one email can be very powerful.",['scasper'],2023-11-04T20:08:40Z,lesswrong,, 178239,https://www.lesswrong.com/posts/8c8AZq5hgifmnHKSN/agi-safety-faq-all-dumb-questions-allowed-thread,AGI Safety FAQ / all-dumb-questions-allowed thread,['Aryeh Englander'],2022-06-07T05:47:13Z,lesswrong,, 178249,https://www.lesswrong.com/posts/sTDfraZab47KiRMmT/views-on-when-agi-comes-and-on-strategy-to-reduce,Views on when AGI comes and on strategy to reduce existential risk,['TsviBT'],2023-07-08T09:00:20Z,lesswrong,, 178269,https://www.lesswrong.com/posts/fHeLv4iYf4oXB3LcM/adaptation-executors-and-the-telos-margin,Adaptation Executors and the Telos Margin,['Plinthist'],2022-06-20T13:06:30Z,lesswrong,, 178281,https://www.lesswrong.com/posts/WeWi6PYaBLaCuX8JJ/image-hijacks-adversarial-images-can-control-generative,Image Hijacks: Adversarial Images can Control Generative Models at Runtime,"['Scott Emmons', 'Luke Bailey', 'Euan Ong']",2023-09-20T15:23:49Z,lesswrong,, 178301,https://www.lesswrong.com/posts/rnzpYKWmNWEW5PQyq/inference-from-a-mathematical-description-of-an-existing,Inference from a Mathematical Description of an Existing Alignment Research: a proposal for an outer alignment research program,['Christopher King'],2023-06-02T21:54:56Z,lesswrong,, 178327,https://www.lesswrong.com/posts/Rypb63HLXWYw8CdSr/housing-markets-satisficers-and-one-track-goodhart,"Housing Markets, Satisficers, and One-Track Goodhart",['Jemist'],2021-12-16T21:38:46Z,lesswrong,, 178347,https://www.lesswrong.com/posts/x46D2DdxmRjFfsnBh/2021-03-01-national-library-of-medicine-presentation-atlas,2021-03-01 National Library of Medicine Presentation: “Atlas of AI: Mapping the social and economic forces behind AI”,['IrenicTruth'],2021-02-17T18:23:47Z,lesswrong,, 178358,https://www.lesswrong.com/posts/ezGYBHTxiRgmMRpWK/tech-company-singularities-and-steering-them-to-reduce-x,"""Tech company singularities"", and steering them to reduce x-risk",['Andrew_Critch'],2022-05-13T17:24:00Z,lesswrong,, 178375,https://www.lesswrong.com/posts/m92aaNdkiKFwmLmBo/podcast-tamera-lanham-on-ai-risk-threat-models-alignment,"Podcast: Tamera Lanham on AI risk, threat models, alignment proposals, externalized reasoning oversight, and working at Anthropic",['Akash'],2022-12-20T21:39:42Z,lesswrong,, 178401,https://www.lesswrong.com/posts/de3xjFaACCAk6imzv/towards-a-new-decision-theory,Towards a New Decision Theory,['Wei Dai'],2009-08-13T05:31:41Z,lesswrong,, 178417,https://www.lesswrong.com/posts/3nu7gvuYJJZLYP2zw/notes-on-caution,Notes on Caution,['David Gross'],2022-12-01T03:05:21Z,lesswrong,, 178435,https://www.lesswrong.com/posts/Pu3c48yddQrrRPghP/link-death-long-lives-uploading-a-conworlding-perspective,"[Link] Death, long lives, uploading - a conworlding perspective",['Vulture'],2014-01-27T16:04:34Z,lesswrong,, 178452,https://www.lesswrong.com/posts/zGoDXgLFetEdvLdFH/human-intelligence-may-be-alignment-limited,human intelligence may be alignment-limited,['bhauth'],2023-06-15T22:32:15Z,lesswrong,,
178463,https://www.lesswrong.com/posts/6ushcKgibJH6dQB6q/to-capture-anti-death-intuitions-include-memory-in,"To capture anti-death intuitions, include memory in utilitarianism",['Kaj_Sotala'],2014-01-15T06:27:29Z,lesswrong,, 178478,https://www.lesswrong.com/posts/2Zsuv5uPFPNTACwzg/moral-mazes-and-short-termism,Moral Mazes and Short Termism,['Zvi'],2019-06-02T11:30:00Z,lesswrong,, 178497,https://www.lesswrong.com/posts/YwqSijHybF9GFkDab/transformer-language-models-are-doing-something-more-general,Transformer language models are doing something more general,['Numendil'],2022-08-03T21:13:42Z,lesswrong,, 178519,https://www.lesswrong.com/posts/TejMdvF9XTNP5pGDR/real-world-newcomb-like-problems,Real-world Newcomb-like Problems,['SilasBarta'],2011-03-25T20:44:16Z,lesswrong,, 178528,https://www.lesswrong.com/posts/5onEtjNEhqcfX3LXG/a-generalist-agent-new-deepmind-publication,"""A Generalist Agent"": New DeepMind Publication",['1a3orn'],2022-05-12T15:30:18Z,lesswrong,, 178538,https://www.lesswrong.com/posts/wkF5rHDFKEWyJJLj2/link-book-review-reframing-superintelligence-ssc,[Link] Book Review: Reframing Superintelligence (SSC),['ioannes'],2019-08-28T22:57:09Z,lesswrong,, 178553,https://www.lesswrong.com/posts/dKFRinvMAHwvRmnzb/issues-with-uneven-ai-resource-distribution,Issues with uneven AI resource distribution,['User_Luke'],2022-12-24T01:18:00Z,lesswrong,, 178573,https://www.lesswrong.com/posts/SsCQHjqNT3xQAPQ6b/yudkowsky-on-agi-ethics,Yudkowsky on AGI ethics,['Rob Bensinger'],2017-10-19T23:14:00Z,lesswrong,, 178591,https://www.lesswrong.com/posts/JzFXw7bH4tK6n4h7q/link-s-who-says-watson-is-only-a-narrow-ai,[LINK]s: Who says Watson is only a narrow AI?,['shminux'],2013-05-21T18:04:12Z,lesswrong,, 178601,https://www.lesswrong.com/posts/zyPaqXgFzqHkQfccq/contra-lecun-on-autoregressive-llms-are-doomed,"Contra LeCun on ""Autoregressive LLMs are doomed""",['rotatingpaguro'],2023-04-10T04:05:10Z,lesswrong,, 178617,https://www.lesswrong.com/posts/dYspinGtiba5oDCcv/feature-selection,Feature Selection,['Zack_M_Davis'],2021-11-01T00:22:30Z,lesswrong,, 178634,https://www.lesswrong.com/posts/DKjbWkppHptbyyuG8/transformer-architecture-choice-for-resisting-prompt,Transformer Architecture Choice for Resisting Prompt Injection and Jail-Breaking Attacks,['RogerDearnaley'],2023-05-21T08:29:10Z,lesswrong,, 178650,https://www.lesswrong.com/posts/cPSqYG5qiRDpugrvq/what-is-the-relationship-between-preference-learning-and,What is the relationship between Preference Learning and Value Learning?,['Riccardo Volpato'],2020-01-13T21:08:40Z,lesswrong,, 178660,https://www.lesswrong.com/posts/nxhXTfsAf2LTg4xvt/artefacts-generated-by-mode-collapse-in-gpt-4-turbo-serve-as,Artefacts generated by mode collapse in GPT-4 Turbo serve as adversarial attacks.,['Sohaib Imran'],2023-11-10T15:23:07Z,lesswrong,, 178677,https://www.lesswrong.com/posts/C3SRSfRojwCzaPW2b/a-crisis-for-online-communication-bots-and-bot-users-will,A crisis for online communication: bots and bot users will overrun the Internet?,['Mitchell_Porter'],2022-12-11T21:11:47Z,lesswrong,, 178689,https://www.lesswrong.com/posts/f4NrqEKMsnRKdjtpx/gearing-up-for-long-timelines-in-a-hard-world,Gearing Up for Long Timelines in a Hard World,['Dalcy Bremin'],2023-07-14T06:11:05Z,lesswrong,, 178711,https://www.lesswrong.com/posts/dCk7DwarzkiooB2RT/linkpost-time-article-deepmind-s-ceo-helped-take-ai,[Linkpost] TIME article: DeepMind’s CEO Helped Take AI Mainstream. Now He’s Urging Caution,['Akash'],2023-01-21T16:51:10Z,lesswrong,,
178728,https://www.lesswrong.com/posts/bkpZHXMJx3dG5waA7/ways-to-buy-time,Ways to buy time,"['Akash', 'Olivia Jimenez', 'Thomas Larsen']",2022-11-12T19:31:10Z,lesswrong,, 178763,https://www.lesswrong.com/posts/bZbLnr7qwuEBpTPuF/is-gpt-n-bounded-by-human-capabilities-no,Is GPT-N bounded by human capabilities? No.,['Cleo Nardo'],2022-10-17T23:26:44Z,lesswrong,, 178773,https://www.lesswrong.com/posts/zTDkhm6yFq6edhZ7L/inference-cost-limits-the-impact-of-ever-larger-models,Inference cost limits the impact of ever larger models,['SoerenMind'],2021-10-23T10:51:13Z,lesswrong,, 178789,https://www.lesswrong.com/posts/khupuW9cPrcLYkJww/why-no-interesting-unaligned-singularity,Why No *Interesting* Unaligned Singularity?,['David Udell'],2022-04-20T00:34:56Z,lesswrong,, 178805,https://www.lesswrong.com/posts/mzzoqmouRhsSz5snJ/nice-intro-video-to-rsi,Nice intro video to RSI,['Nathan Helm-Burger'],2023-05-16T18:48:30Z,lesswrong,, 178814,https://www.lesswrong.com/posts/9CdBJZTKKZ5bEDJys/ai-safety-newsletter-2-chaosgpt-natural-selection-and-ai,"AI Safety Newsletter #2: ChaosGPT, Natural Selection, and AI Safety in the Media","['ozhang', 'Dan H', 'Akash', 'aogara']",2023-04-18T18:44:36Z,lesswrong,, 178837,https://www.lesswrong.com/posts/jzHeZt3xFnGTudH5M/recursive-middle-manager-hell-ai-edition,Recursive Middle Manager Hell: AI Edition,['VojtaKovarik'],2023-05-04T20:08:18Z,lesswrong,, 178850,https://www.lesswrong.com/posts/4TCdZN2aj8rnuEkbH/analysis-of-ai-safety-surveys-for-field-building-insights,Analysis of AI Safety surveys for field-building insights,['Ash Jafari'],2022-12-05T19:21:13Z,lesswrong,, 178869,https://www.lesswrong.com/posts/A5XTgYmGnqEDnJdzJ/the-curious-case-of-pretty-good-human-inner-outer-alignment,The curious case of Pretty Good human inner/outer alignment,['PavleMiha'],2022-07-05T19:04:49Z,lesswrong,, 178887,https://www.lesswrong.com/posts/oQh89BfH4aRthPiwY/ai-safety-camp-machine-learning-for-scientific-discovery-1,AI Safety Camp: Machine Learning for Scientific Discovery,['Eleni Angelou'],2023-01-06T03:21:38Z,lesswrong,, 178896,https://www.lesswrong.com/posts/4mM8RYsm4okrqGSqx/with-or-without-a-scratchpad-large-language-models-can,"With or without a scratchpad, Large Language Models can Strategically Deceive their Users when Put Under Pressure. Results of an autonomous stock trading agent in a realistic, simulated environment.",['ReaderM'],2023-11-15T16:36:04Z,lesswrong,,
178917,https://www.lesswrong.com/posts/sHAaMpdk9FT9XsLvB/forecasting-compute-transformative-ai-and-compute-2-4,Forecasting Compute - Transformative AI and Compute [2/4],['lennart'],2021-10-02T15:54:54Z,lesswrong,, 178938,https://www.lesswrong.com/posts/HxwgioAqi6RTMDRva/gpt-3-belief-and-consistency,"GPT-3, belief, and consistency",['skybrian'],2020-08-16T23:12:11Z,lesswrong,, 178952,https://www.lesswrong.com/posts/EbX5gv62oyyWZgwZm/what-are-the-relative-speeds-of-ai-capabilities-and-ai,What are the relative speeds of AI capabilities and AI safety?,['NunoSempere'],2020-04-24T18:21:59Z,lesswrong,, 178968,https://www.lesswrong.com/posts/5KAhnDbq9F4a2Y2Yg/4-key-assumptions-in-ai-safety,4 Key Assumptions in AI Safety,['Prometheus'],2022-11-07T10:50:40Z,lesswrong,, 178989,https://www.lesswrong.com/posts/o6SRn4TSxZYBzy8h5/are-at-least-some-large-language-models-holographic-memory,Are (at least some) Large Language Models Holographic Memory Stores?,['Bill Benzon'],2023-10-20T13:07:02Z,lesswrong,, 178999,https://www.lesswrong.com/posts/Tdu3tGT4i24qcLESh/equilibrium-and-prior-selection-problems-in-multipolar-1,Equilibrium and prior selection problems in multipolar deployment,['JesseClifton'],2020-04-02T20:06:14Z,lesswrong,, 179018,https://www.lesswrong.com/posts/xedQBnBR4dRtBkWpZ/takeoff-speeds-the-chimps-analogy-and-the-cultural,"Takeoff speeds, the chimps analogy, and the Cultural Intelligence Hypothesis",['NickGabs'],2022-12-02T19:15:00Z,lesswrong,, 179029,https://www.lesswrong.com/posts/ms3x8ngwTfep7jBue/thoughts-on-the-ai-safety-summit-company-policy-requests-and,Thoughts on the AI Safety Summit company policy requests and responses,['So8res'],2023-10-31T23:54:10Z,lesswrong,, 179054,https://www.lesswrong.com/posts/c3cQgBN3v2Cxpe2kc/getting-gpt-3-to-predict-metaculus-questions,Getting GPT-3 to predict Metaculus questions,['MathiasKB'],2022-05-06T06:01:04Z,lesswrong,, 179076,https://www.lesswrong.com/posts/NSZhadmoYdjRKNq6X/openai-launches-superalignment-taskforce,OpenAI Launches Superalignment Taskforce,['Zvi'],2023-07-11T13:00:06Z,lesswrong,, 179109,https://www.lesswrong.com/posts/RKzNpJhamYgcLeAEv/assigning-praise-and-blame-decoupling-epistemology-and,Assigning Praise and Blame: Decoupling Epistemology and Decision Theory,"['adamShimi', 'Gabriel Alfour']",2023-01-27T18:16:43Z,lesswrong,, 179123,https://www.lesswrong.com/posts/mR4HaYbCpKwnJWaAu/are-nested-jailbreaks-inevitable,Are nested jailbreaks inevitable?,['judson'],2023-03-17T17:43:54Z,lesswrong,, 179133,https://www.lesswrong.com/posts/Ap4KfkHyxjYPDiqh2/pascal-s-muggle-infinitesimal-priors-and-strong-evidence,Pascal's Muggle: Infinitesimal Priors and Strong Evidence,['Eliezer Yudkowsky'],2013-05-08T00:43:17Z,lesswrong,, 179151,https://www.lesswrong.com/posts/KgFrtaajjfSnBSZoH/ai-safety-research-camp-project-proposal,AI Safety Research Camp - Project Proposal,['David_Kristoffersson'],2018-02-02T04:25:46Z,lesswrong,, 179171,https://www.lesswrong.com/posts/cXbXR7QCqWvmPzjki/opportunities-for-individual-donors-in-ai-safety,Opportunities for individual donors in AI safety,['Alex Flint'],2018-03-31T18:37:22Z,lesswrong,, 179192,https://www.lesswrong.com/posts/zZ6Wm9hpcamTPbe6J/navigating-public-ai-x-risk-hype-while-pursuing-technical,Navigating public AI x-risk hype while pursuing technical solutions,['Dan Braun'],2023-02-19T12:22:46Z,lesswrong,,
179207,https://www.lesswrong.com/posts/xp6n2MG5vQkPpFEBH/the-control-problem-unsolved-or-unsolvable,The Control Problem: Unsolved or Unsolvable?,['Remmelt'],2023-06-02T15:42:37Z,lesswrong,, 179223,https://www.lesswrong.com/posts/ttmmKDTkzuum3fftG/least-problematic-resource-for-learning-rl,Least-problematic Resource for learning RL?,['Dalcy Bremin'],2023-07-18T16:30:49Z,lesswrong,, 179234,https://www.lesswrong.com/posts/k8hvGAJWSKAeHwpnJ/why-i-m-worried-about-ai,Why I'm Worried About AI,['peterbarnett'],2022-05-23T21:14:00Z,lesswrong,, 179255,https://www.lesswrong.com/posts/8e9HsZsw8QuRwnqLX/knowledge-base-4-general-applications,Knowledge Base 4: General applications,['iwis'],2023-10-16T12:26:43Z,lesswrong,, 179264,https://www.lesswrong.com/posts/5jpESFymqEgSAmDJL/anthropic-google-microsoft-and-openai-announce-executive,"Anthropic, Google, Microsoft & OpenAI announce Executive Director of the Frontier Model Forum & over $10 million for a new AI Safety Fund",['Zach Stein-Perlman'],2023-10-25T15:20:53Z,lesswrong,, 179285,https://www.lesswrong.com/posts/mEDAqbdvg6ivy7eRp/general-advice-for-transitioning-into-theoretical-ai-safety,General advice for transitioning into Theoretical AI Safety,['Martín Soto'],2022-09-15T05:23:07Z,lesswrong,, 179300,https://www.lesswrong.com/posts/uGckvoAHiw35EpP2A/a-poem-co-written-by-chatgpt,A poem co-written by ChatGPT,['Sherrinford'],2023-02-16T10:17:36Z,lesswrong,, 179315,https://www.lesswrong.com/posts/wf83tBACPM9aiykPn/a-survey-of-foundational-methods-in-inverse-reinforcement,A Survey of Foundational Methods in Inverse Reinforcement Learning,['adamk'],2022-09-01T18:21:11Z,lesswrong,, 179335,https://www.lesswrong.com/posts/TkKMv5xsbQ6AZ5cgD/stop-calling-it-jailbreaking-chatgpt,"Stop calling it ""jailbreaking"" ChatGPT",['Templarrr'],2023-03-10T11:41:06Z,lesswrong,, 179350,https://www.lesswrong.com/posts/8EZDx2GMtsTcnrPsC/is-there-an-analysis-of-the-common-consideration-that,Is there an analysis of the common consideration that splitting an AI lab into two (e.g. the founding of Anthropic) speeds up the development of TAI and therefore increases AI x-risk?,['tchauvin'],2023-03-16T14:16:30Z,lesswrong,,
179373,https://www.lesswrong.com/posts/bchdbooatXBavP9po/curiosity-as-a-solution-to-agi-alignment,Curiosity as a Solution to AGI Alignment,['Harsha G.'],2023-02-26T23:36:56Z,lesswrong,, 179389,https://www.lesswrong.com/posts/AZHHEPYWvTovvtikz/human-level-diplomacy-was-my-fire-alarm,Human-level Diplomacy was my fire alarm,['Lao Mein'],2022-11-23T10:05:36Z,lesswrong,, 179405,https://www.lesswrong.com/posts/tTRWgmtPetrxiEtSs/biosafety-regulations-bmbl-and-their-relevance-for-ai,Biosafety Regulations (BMBL) and their relevance for AI,['Štěpán Los'],2023-06-29T19:22:41Z,lesswrong,, 179428,https://www.lesswrong.com/posts/e4SMfYWb4Tz568yh6/goodhart-s-law-causal-diagrams,Goodhart's Law Causal Diagrams,"['JustinShovelain', 'Jeremy Gillen']",2022-04-11T13:52:34Z,lesswrong,, 179458,https://www.lesswrong.com/posts/cv64wEBKtMum7Dxj8/levels-of-safety-for-ai-and-other-technologies,Levels of safety for AI and other technologies,['jasoncrawford'],2023-06-28T18:35:53Z,lesswrong,, 179473,https://www.lesswrong.com/posts/RPLDQpBX9Es2qfBMf/limiting-an-agi-s-context-temporally,Limiting an AGI's Context Temporally,['EulersApprentice'],2019-02-17T03:29:27Z,lesswrong,, 179485,https://www.lesswrong.com/posts/c38nAg23YTCzd7m8P/activation-adding-experiments-with-flan-t5,Activation adding experiments with FLAN-T5,['Nina Rimsky'],2023-07-13T23:32:55Z,lesswrong,, 179496,https://www.lesswrong.com/posts/EjssJnp9fNhvdDEdK/reinterpreting-ai-and-compute,"Reinterpreting ""AI and Compute""",['habryka'],2018-12-25T21:12:11Z,lesswrong,, 179505,https://www.lesswrong.com/posts/FgHBatgKyYR6oqFz5/training-for-corrigability-obvious-problems,Training for corrigability: obvious problems?,['Ben Amitay'],2023-02-24T14:02:38Z,lesswrong,, 179514,https://www.lesswrong.com/posts/86CxnQzPdFoDhdMXh/agenty-agi-how-tempting,Agenty AGI – How Tempting?,['PeterMcCluskey'],2022-07-01T23:40:17Z,lesswrong,, 179532,https://www.lesswrong.com/posts/pZerSnxv6FPqvgoYu/book-review-age-of-em,Book Review: Age of Em,['Scott Alexander'],2016-05-28T20:42:17Z,lesswrong,, 179562,https://www.lesswrong.com/posts/k5akaNPtQLEruh4PF/one-does-not-simply-replace-the-humans,One Does Not Simply Replace the Humans,['JerkyTreats'],2023-04-06T20:56:17Z,lesswrong,, 179584,https://www.lesswrong.com/posts/ASMX9ss3J5G3GZdok/a-plea-for-solutionism-on-ai-safety,A plea for solutionism on AI safety,['jasoncrawford'],2023-06-09T16:29:56Z,lesswrong,, 179607,https://www.lesswrong.com/posts/wJYy4eCCteHD4YBjb/openai-s-gpt-4-safety-goals,OpenAI's GPT-4 Safety Goals,['PeterMcCluskey'],2023-04-22T19:11:43Z,lesswrong,, 179625,https://www.lesswrong.com/posts/nGP4soWSbFmzemM4i/reframing-the-problem-of-ai-progress,Reframing the Problem of AI Progress,['Wei Dai'],2012-04-12T19:31:05Z,lesswrong,, 179642,https://www.lesswrong.com/posts/mTi8TQEyP5Pr7oczd/machine-unlearning-evaluations-as-interpretability,Machine Unlearning Evaluations as Interpretability Benchmarks,"['NickyP', 'Nandi']",2023-10-23T16:33:05Z,lesswrong,, 179656,https://www.lesswrong.com/posts/w6AzbZR7ZQxWuAwKR/thoughts-on-robin-hanson-s-ai-impacts-interview,Thoughts on Robin Hanson's AI Impacts interview,['Steven Byrnes'],2019-11-24T01:40:35Z,lesswrong,, 179680,https://www.lesswrong.com/posts/6xiBgLvvDiH7Sboq2/trust-maximizing-agi,Trust-maximizing AGI,"['Jan', 'Karl von Wendt']",2022-02-25T15:13:14Z,lesswrong,, 179708,https://www.lesswrong.com/posts/i2M3vWPBqyefh3uow/simplified-poker,Simplified Poker,['Zvi'],2018-06-04T15:50:00Z,lesswrong,,
179717,https://www.lesswrong.com/posts/BGeBgMpKmg6b7Zvnw/a-method-for-empirical-back-testing-of-ai-s-ability-to-self,A method for empirical back-testing of AI's ability to self-improve,['Michael Tontchev'],2023-03-21T20:24:20Z,lesswrong,, 179729,https://www.lesswrong.com/posts/WP4fciGn3rNtmq3tY/ai-safety-101-chapter-5-1-debate,AI Safety 101 - Chapter 5.1 - Debate,['Charbel-Raphaël'],2023-10-31T14:30:00Z,lesswrong,, 179761,https://www.lesswrong.com/posts/FS6NCWzzP8DHp4aD4/do-earths-with-slower-economic-growth-have-a-better-chance,Do Earths with slower economic growth have a better chance at FAI?,['Eliezer Yudkowsky'],2013-06-12T19:54:07Z,lesswrong,, 179775,https://www.lesswrong.com/posts/marPAMc9yCWG79h7f/proposal-we-should-start-referring-to-the-risk-from,Proposal: we should start referring to the risk from unaligned AI as a type of *accident risk*,['Christopher King'],2023-05-16T15:18:55Z,lesswrong,, 179792,https://www.lesswrong.com/posts/zQi6T3ATa59KgaABc/notes-on-notes-on-virtues,Notes on notes on virtues,['David Gross'],2020-12-30T17:47:04Z,lesswrong,, 179809,https://www.lesswrong.com/posts/jRcWRQxx5P68ZhrFm/understanding-the-merging-of-opinions-with-increasing,Understanding the Merging of Opinions with Increasing Information theorem,['ViktoriaMalyasova'],2022-04-21T14:13:35Z,lesswrong,, 179826,https://www.lesswrong.com/posts/hpT2gvzfzLjZQWBHn/could-you-have-stopped-chernobyl,Could you have stopped Chernobyl?,['Carlos Ramirez'],2021-08-27T01:48:02Z,lesswrong,, 179845,https://www.lesswrong.com/posts/9KWs3rfvjCeGeJGzy/udt-might-not-pay-a-counterfactual-mugger,UDT might not pay a Counterfactual Mugger,['winwonce'],2020-11-21T23:27:07Z,lesswrong,, 179854,https://www.lesswrong.com/posts/QEgGp7gaogAQjqgRi/is-global-reinforcement-learning-rl-a-fantasy,Is Global Reinforcement Learning (RL) a Fantasy?,['anonymous'],2016-10-31T01:49:51Z,lesswrong,, 179875,https://www.lesswrong.com/posts/uhMRgEXabYbWeLc6T/why-and-when-interpretability-work-is-dangerous,Why and When Interpretability Work is Dangerous,['NicholasKross'],2023-05-28T00:27:38Z,lesswrong,, 179897,https://www.lesswrong.com/posts/gvfRr2TsKao4trDWw/what-projects-and-efforts-are-there-to-promote-ai-safety,What projects and efforts are there to promote AI safety research?,['Christopher King'],2023-05-24T00:33:48Z,lesswrong,, 179909,https://www.lesswrong.com/posts/BMnhDjJrix5BXE7yr/reflections-on-trusting-trust-and-ai,Reflections on Trusting Trust & AI,['Itay Yona'],2023-01-16T06:36:25Z,lesswrong,, 179925,https://www.lesswrong.com/posts/WLfogxQu4rDmone3z/notes-on-the-importance-and-implementation-of-safety-first,Notes on the importance and implementation of safety-first cognitive architectures for AI,['Brendon_Wong'],2023-05-11T10:03:39Z,lesswrong,, 179945,https://www.lesswrong.com/posts/8rTxCudZgEARxLn2u/link-openai-lp,[Link] OpenAI LP,['Alexei'],2019-03-12T23:23:00Z,lesswrong,, 179958,https://www.lesswrong.com/posts/ZqTQtEvBQhiGy6y7p/breaking-the-optimizer-s-curse-and-consequences-for-1,"Breaking the Optimizer’s Curse, and Consequences for Existential Risks and Value Learning",['Roger Dearnaley'],2023-02-21T09:05:43Z,lesswrong,, 179979,https://www.lesswrong.com/posts/b5EqwQZw7ww2K28Ki/ai-takeover-tabletop-rpg-the-treacherous-turn,"AI takeover tabletop RPG: ""The Treacherous Turn""",['Daniel Kokotajlo'],2022-11-30T07:16:56Z,lesswrong,, 179993,https://www.lesswrong.com/posts/RAZNGucjxAcKHimcS/agi-safety-literature-review-everitt-lea-and-hutter-2018,"AGI Safety Literature Review (Everitt, Lea & Hutter 2018)",['Kaj_Sotala'],2018-05-04T08:56:27Z,lesswrong,,
180002,https://www.lesswrong.com/posts/ePKCgzzb3HhcnDwfr/a-more-grounded-idea-of-ai-risk,A more grounded idea of AI risk,['Iknownothing'],2023-05-11T09:48:01Z,lesswrong,, 180012,https://www.lesswrong.com/posts/DhuPBkKA8ohyZEN8n/my-updating-thoughts-on-ai-policy,My Updating Thoughts on AI policy,['Ben Pace'],2020-03-01T07:06:12Z,lesswrong,, 180033,https://www.lesswrong.com/posts/8ibDJeoiDuxJkPwfa/various-alignment-strategies-and-how-likely-they-are-to-work,Various Alignment Strategies (and how likely they are to work),['Logan Zoellner'],2022-05-03T16:54:17Z,lesswrong,, 180070,https://www.lesswrong.com/posts/k5anbk2pBZPFkrCqh/apply-to-a-small-iteration-of-mlab-to-be-run-in-oxford,Apply to a small iteration of MLAB to be run in Oxford,"['RP', 'MariaK', 'OliverHayman']",2023-08-27T14:21:18Z,lesswrong,, 180079,https://www.lesswrong.com/posts/9r2P7kigDGZS8ysQz/how-humanity-would-respond-to-slow-takeoff-with-takeaways,"How humanity would respond to slow takeoff, with takeaways from the entire COVID-19 pandemic",['Noosphere89'],2022-07-06T17:52:17Z,lesswrong,, 180104,https://www.lesswrong.com/posts/yzGPLdEqa2ytT7MY2/simulators-increase-the-likelihood-of-alignment-by-default,Simulators Increase the Likelihood of Alignment by Default,['Wuschel Schulz'],2023-04-30T16:32:44Z,lesswrong,, 180129,https://www.lesswrong.com/posts/wCtegGaWxttfKZsfx/we-don-t-understand-what-happened-with-culture-enough,We don't understand what happened with culture enough,['Jan_Kulveit'],2023-10-09T09:54:20Z,lesswrong,, 180145,https://www.lesswrong.com/posts/sDiGGhpw7Evw7zdR4/compute-trends-comparison-to-openai-s-ai-and-compute,Compute Trends — Comparison to OpenAI’s AI and Compute,"['lennart', 'Jsevillamol', 'Pablo Villalobos', 'Marius Hobbhahn', 'Tamay Besiroglu', 'anson.ho']",2022-03-12T18:09:55Z,lesswrong,, 180164,https://www.lesswrong.com/posts/7nLMhdhXMKBLWiynJ/why-safe-oracle-ai-is-easier-than-safe-general-ai-in-a,"Why safe Oracle AI is easier than safe general AI, in a nutshell",['Stuart_Armstrong'],2011-12-03T12:33:31Z,lesswrong,, 180179,https://www.lesswrong.com/posts/895Qmhyud2PjDhte6/responses-to-apparent-rationalist-confusions-about-game,Responses to apparent rationalist confusions about game / decision theory,['Anthony DiGiovanni'],2023-08-30T22:02:12Z,lesswrong,, 180206,https://www.lesswrong.com/posts/bG6DRZAct9jeqHLra/generalization-of-the-solomonoff-induction-to-accuracy-is-it,Generalization of the Solomonoff Induction to Accuracy - Is it possible? Would it be useful?,['PeterL'],2022-02-20T19:29:03Z,lesswrong,,
180216,https://www.lesswrong.com/posts/aodPs8H9dQxpXAcwk/heritability-behaviorism-and-within-lifetime-rl,"Heritability, Behaviorism, and Within-Lifetime RL",['Steven Byrnes'],2023-02-02T16:34:33Z,lesswrong,, 180235,https://www.lesswrong.com/posts/Q4zBhYobwkGBGuh7v/fundamental-uncertainty-chapter-7-why-is-truth-useful,Fundamental Uncertainty: Chapter 7 - Why is truth useful?,['Gordon Seidoh Worley'],2023-04-30T16:48:58Z,lesswrong,, 180252,https://www.lesswrong.com/posts/C4jgnZqz6QupHrgAx/defining-optimization-in-a-deeper-way-part-1,Defining Optimization in a Deeper Way Part 1,['Jemist'],2022-07-01T14:03:19Z,lesswrong,, 180262,https://www.lesswrong.com/posts/AGPgMBp6eN95uxJyc/would-it-be-useful-to-collect-the-contexts-where-various,"Would it be useful to collect the contexts, where various LLMs think the same?",['Martin Vlach'],2023-08-24T22:01:50Z,lesswrong,, 180271,https://www.lesswrong.com/posts/8QnEXBh2dnqXfu3JX/a-deceptively-simple-argument-in-favor-of-problem,A Deceptively Simple Argument in favor of Problem Factorization,['Logan Zoellner'],2022-08-06T17:32:24Z,lesswrong,, 180280,https://www.lesswrong.com/posts/tKRGW7fEqCgStCgEd/the-omnizoid-heighn-fdt-debate-5,The omnizoid - Heighn FDT Debate #5,['Heighn'],2023-09-18T11:54:25Z,lesswrong,, 180289,https://www.lesswrong.com/posts/ARK4tyCRDd3Gb5umh/aisn-25-white-house-executive-order-on-ai-uk-ai-safety,"AISN #25: White House Executive Order on AI, UK AI Safety Summit, and Progress on Voluntary Evaluations of AI Risks","['aogara', 'Dan H']",2023-10-31T19:34:55Z,lesswrong,, 180322,https://www.lesswrong.com/posts/TohzYjnaFr3kFaKKi/any-further-work-on-ai-safety-success-stories,Any further work on AI Safety Success Stories?,['Krieger'],2022-10-02T09:53:45Z,lesswrong,, 180332,https://www.lesswrong.com/posts/g5rABd5qbp8B4g3DE/towards-understanding-sycophancy-in-language-models,Towards Understanding Sycophancy in Language Models,"['Ethan Perez', 'mrinank_sharma', 'Meg', 'Tomek Korbak']",2023-10-24T00:30:49Z,lesswrong,, 180352,https://www.lesswrong.com/posts/eYiDjCNJrR3w3WcMM/making-decisions-when-both-morally-and-empirically-uncertain,Making decisions when both morally and empirically uncertain,['MichaelA'],2020-01-02T07:20:46Z,lesswrong,, 180375,https://www.lesswrong.com/posts/bDMoMvw2PYgijqZCC/i-wanted-to-interview-eliezer-yudkowsky-but-he-s-busy-so-i,I wanted to interview Eliezer Yudkowsky but he's busy so I simulated him instead,['lsusr'],2021-09-16T07:34:11Z,lesswrong,, 180398,https://www.lesswrong.com/posts/6zBnNjZfqTNLjP9j5/book-review-reinforcement-learning-by-sutton-and-barto,Book Review: Reinforcement Learning by Sutton and Barto,['billmei'],2020-10-20T19:40:53Z,lesswrong,, 180425,https://www.lesswrong.com/posts/pCesigb4NjzvoNKWB/to-contribute-to-ai-safety-consider-doing-ai-research,"To contribute to AI safety, consider doing AI research",['Vika'],2016-01-16T20:42:36Z,lesswrong,, 180444,https://www.lesswrong.com/posts/EKP4HtaHaSiZL4u4N/the-ai-dungeons-dragon-model-is-heavily-path-dependent,"The ""AI Dungeons"" Dragon Model is heavily path dependent (testing GPT-3 on ethics)",['Rafael Harth'],2020-07-21T12:14:33Z,lesswrong,, 180459,https://www.lesswrong.com/posts/LDbnZDRidKNKQBFyY/would-aixi-protect-itself,Would AIXI protect itself?,['Stuart_Armstrong'],2011-12-09T12:29:27Z,lesswrong,, 180482,https://www.lesswrong.com/posts/6vybojuDEqaHeg8aN/a-confused-chemist-s-review-of-alphafold-2,A Confused Chemist's Review of AlphaFold 2,['Jemist'],2021-09-27T11:10:17Z,lesswrong,,
180498,https://www.lesswrong.com/posts/coe8zoG7t2DZrC4xY/6-paragraph-ai-risk-intro-for-maisi,6-paragraph AI risk intro for MAISI,['JakubK'],2023-01-19T09:22:23Z,lesswrong,, 180526,https://www.lesswrong.com/posts/6JPZcScxLZtYzYRrQ/aisc-2023-progress-report-for-march-team-interpretable,"AISC 2023, Progress Report for March: Team Interpretable Architectures","['Robert Kralisch', 'Eris', 'teahorse', 'Sohaib Imran']",2023-04-02T16:19:08Z,lesswrong,, 180554,https://www.lesswrong.com/posts/aD7mZMeu9waAfm3au/mathematical-measures-of-optimization-power,Mathematical Measures of Optimization Power,['Alex_Altair'],2012-11-24T10:55:17Z,lesswrong,, 180570,https://www.lesswrong.com/posts/9g5dtLx3KTTvJ6Fb8/rishi-to-outline-his-vision-for-britain-to-take-the-world,Rishi to outline his vision for Britain to take the world lead in policing AI threats when he meets Joe Biden,['Mati_Roy'],2023-06-06T04:47:31Z,lesswrong,, 180584,https://www.lesswrong.com/posts/v3jocJRScqkBGtwvf/corrigibility-much-more-detail-than-anyone-wants-to-read,"Corrigibility, Much more detail than anyone wants to Read",['Logan Zoellner'],2023-05-07T01:02:35Z,lesswrong,, 180600,https://www.lesswrong.com/posts/D7ig57J8tWEsMQwcx/responsible-deployment-in-20xx,Responsible Deployment in 20XX,['Carson'],2023-04-20T00:24:09Z,lesswrong,, 180624,https://www.lesswrong.com/posts/4XffsAd4GgvHS7zyD/value-of-a-computational-process,Value of a Computational Process?,['jefftk'],2012-07-09T17:33:43Z,lesswrong,, 180635,https://www.lesswrong.com/posts/n6wajkE3Tpfn6sd5j/christiano-decision-theory-excerpt,Christiano decision theory excerpt,['Rob Bensinger'],2019-09-29T02:55:36Z,lesswrong,, 180651,https://www.lesswrong.com/posts/aMZGJEGM9E54Ttedb/polluting-the-agentic-commons,Polluting the agentic commons,['hamandcheese'],2023-04-13T17:42:06Z,lesswrong,, 180669,https://www.lesswrong.com/posts/q2rCMHNXazALgQpGH/conditions-for-mesa-optimization,Conditions for Mesa-Optimization,"['evhub', 'Chris van Merwijk', 'Vlad Mikulik', 'Joar Skalse', 'Scott Garrabrant']",2019-06-01T20:52:19Z,lesswrong,, 180702,https://www.lesswrong.com/posts/tiuc7kgQSxHKC8JJy/exploring-the-multiverse-of-large-language-models,Exploring the Multiverse of Large Language Models,['franky'],2023-08-06T02:38:03Z,lesswrong,, 180730,https://www.lesswrong.com/posts/upLot6eG8cbXdKiFS/reward-function-learning-the-learning-process,Reward function learning: the learning process,['Stuart_Armstrong'],2018-04-24T12:56:05Z,lesswrong,, 180751,https://www.lesswrong.com/posts/M5vEjix8oPeWXeGFY/research-report-incorrectness-cascades,Research Report: Incorrectness Cascades,['Robert_AIZI'],2023-04-14T12:49:15Z,lesswrong,, 180773,https://www.lesswrong.com/posts/Abg5wRuBZrbkXZBFj/tips-tricks-lessons-and-thoughts-on-hosting-hackathons,"Tips, tricks, lessons and thoughts on hosting hackathons",['gergogaspar'],2023-11-06T11:03:40Z,lesswrong,, 180800,https://www.lesswrong.com/posts/7HXSBxnDmQNosS5ss/why-i-moved-from-ai-to-neuroscience-or-uploading-worms,"Why I Moved from AI to Neuroscience, or: Uploading Worms",['davidad'],2012-04-13T07:10:32Z,lesswrong,, 180809,https://www.lesswrong.com/posts/aiJRv28HDx4LkoXAc/universal-and-transferable-adversarial-attacks-on-aligned,Universal and Transferable Adversarial Attacks on Aligned Language Models [paper link],['Sodium'],2023-07-29T03:21:15Z,lesswrong,, 180824,https://www.lesswrong.com/posts/oqJpxY2yZXg52QomP/a-tentative-timeline-of-the-near-future-2022-2025-for-self,A Tentative Timeline of The Near Future (2022-2025) for Self-Accountability,['Yitz'],2022-12-05T05:33:48Z,lesswrong,,
180852,https://www.lesswrong.com/posts/f3JgMn85PB7cf4jcg/an-attempt-at-preference-uncertainty-using-vnm,An Attempt at Preference Uncertainty Using VNM,['anonymous'],2013-07-16T05:20:40Z,lesswrong,, 180873,https://www.lesswrong.com/posts/fQv85Rd3pw789MHaX/timeless-decision-theory-and-meta-circular-decision-theory,Timeless Decision Theory and Meta-Circular Decision Theory,['Eliezer Yudkowsky'],2009-08-20T22:07:47Z,lesswrong,, 180896,https://www.lesswrong.com/posts/gRsH7KJSotubGKwRP/have-you-ever-considered-taking-the-turing-test-yourself,Have you ever considered taking the 'Turing Test' yourself?,['Super AGI'],2023-07-27T03:48:30Z,lesswrong,, 180905,https://www.lesswrong.com/posts/sjqGXmWdJWrRw8hBN/simpler-explanations-of-agi-risk,Simpler explanations of AGI risk,['Seth Herd'],2023-05-14T01:29:29Z,lesswrong,, 180921,https://www.lesswrong.com/posts/yu628W2EtdgmH8dq3/does-gpt-2-understand-anything-1,Does GPT-2 Understand Anything?,['Douglas Summers-Stay'],2020-01-02T17:09:15Z,lesswrong,, 180940,https://www.lesswrong.com/posts/PYS2nvffCAyBCTmn2/modelling-deception,Modelling Deception,['Garrett Baker'],2022-07-18T21:21:32Z,lesswrong,, 180960,https://www.lesswrong.com/posts/nExb2ndQF5MziGBhe/should-we-cry-wolf,"Should we cry ""wolf""?",['Tapatakt'],2023-02-18T11:24:18Z,lesswrong,, 180975,https://www.lesswrong.com/posts/fTrEqNnYYSNXNSRcg/allegory-on-ai-risk-game-theory-and-mithril,"Allegory On AI Risk, Game Theory, and Mithril",['James_Miller'],2017-02-13T20:41:51Z,lesswrong,, 180997,https://www.lesswrong.com/posts/z3NFWLwXYdJui7mg6/is-brittle-alignment-good-enough,"Is ""brittle alignment"" good enough?",['the8thbit'],2023-05-23T17:35:44Z,lesswrong,, 181011,https://www.lesswrong.com/posts/iNaB6GA6Seti3biTJ/deceptive-failures-short-of-full-catastrophe,Deceptive failures short of full catastrophe.,['Alex Lawsen'],2023-01-15T19:28:53Z,lesswrong,, 181044,https://www.lesswrong.com/posts/zjapzaDM785o8ccTr/hardcode-the-agi-to-need-our-approval-indefinitely,Hardcode the AGI to need our approval indefinitely?,['MichaelStJules'],2021-11-11T07:04:10Z,lesswrong,, 181062,https://www.lesswrong.com/posts/zYJMf7QoaNahccxrp/how-i-learned-to-stop-worrying-and-love-the-shoggoth,How I Learned To Stop Worrying And Love The Shoggoth,['Peter Merel'],2023-07-12T17:47:43Z,lesswrong,, 181079,https://www.lesswrong.com/posts/n7vPLsbTzpk8XXEAS/what-s-your-cognitive-algorithm,What's Your Cognitive Algorithm?,['Raemon'],2020-06-18T22:16:39Z,lesswrong,, 181096,https://www.lesswrong.com/posts/5vmNFRQMKxzwNm7QZ/is-chatgpt-tai,Is ChatGPT TAI?,['Amal'],2022-12-30T19:44:51Z,lesswrong,, 181105,https://www.lesswrong.com/posts/aEQBkDPZi6L2LMpnC/seri-mats-summer-2023-cohort,SERI MATS - Summer 2023 Cohort,"['Aris', 'Ryan Kidd', 'Christian Smith']",2023-04-08T15:32:57Z,lesswrong,, 181117,https://www.lesswrong.com/posts/SKyzGTbEEuNpoSFKH/list-2-why-coordinating-to-align-as-humans-to-not-develop,"List #2: Why coordinating to align as humans to not develop AGI is a lot easier than, well... coordinating as humans with AGI coordinating to be aligned with humans",['Remmelt'],2022-12-24T09:53:20Z,lesswrong,,
181132,https://www.lesswrong.com/posts/axbTNGuMtB4hCkNus/what-will-the-twenties-look-like-if-agi-is-30-years-away,What will the twenties look like if AGI is 30 years away?,['Daniel Kokotajlo'],2021-07-13T08:14:07Z,lesswrong,, 181141,https://www.lesswrong.com/posts/KnQs55tjxWopCzKsk/the-ai-timelines-scam,The AI Timelines Scam,['jessicata'],2019-07-11T02:52:59Z,lesswrong,, 181159,https://www.lesswrong.com/posts/mquf8e9qTQqYDAQSE/abstraction-is-bigger-than-natural-abstraction,Abstraction is Bigger than Natural Abstraction,['NicholasKross'],2023-05-31T00:00:36Z,lesswrong,, 181175,https://www.lesswrong.com/posts/iJNK7rE5jphMSJJCa/thoughts-and-problems-with-eliezer-s-measure-of-optimization,Thoughts and problems with Eliezer's measure of optimization power,['Stuart_Armstrong'],2012-06-08T09:44:21Z,lesswrong,, 181193,https://www.lesswrong.com/posts/GZ8t3uJRPSQb2sAH3/formalizing-newcomb-s,Formalizing Newcomb's,['cousin_it'],2009-04-05T15:39:03Z,lesswrong,, 181209,https://www.lesswrong.com/posts/z2CuTQ4nvYGGE6F3J/aisn-19-us-china-competition-on-ai-chips-measuring-language-1,"AISN #19: US-China Competition on AI Chips, Measuring Language Agent Developments, Economic Analysis of Language Model Propaganda, and White House AI Cyber Challenge","['aogara', 'Dan H']",2023-08-15T16:10:18Z,lesswrong,, 181254,https://www.lesswrong.com/posts/z3kYdw54htktqt9Jb/what-i-think-if-not-why,"What I Think, If Not Why",['Eliezer Yudkowsky'],2008-12-11T17:41:43Z,lesswrong,, 181269,https://www.lesswrong.com/posts/69CRFgqbQyFBoYcg5/navigating-the-open-source-ai-landscape-data-funding-and,"Navigating the Open-Source AI Landscape: Data, Funding, and Safety","['André Ferretti', 'mic']",2023-04-13T15:29:05Z,lesswrong,, 181309,https://www.lesswrong.com/posts/nbrPmaDwvKQD8gdFj/proposal-labs-should-precommit-to-pausing-if-an-ai-argues,Proposal: labs should precommit to pausing if an AI argues for itself to be improved,['NickGabs'],2023-06-02T22:31:55Z,lesswrong,, 181325,https://www.lesswrong.com/posts/ib9bfyJiz4FLuHDQs/openai-codex-first-impressions,OpenAI Codex: First Impressions,['specbug'],2021-08-13T16:52:34Z,lesswrong,, 181348,https://www.lesswrong.com/posts/paeuSma7mABsrB6ju/steven-harnad-symbol-grounding-and-the-structure-of,Steven Harnad: Symbol grounding and the structure of dictionaries,['Bill Benzon'],2023-09-02T12:28:05Z,lesswrong,, 181364,https://www.lesswrong.com/posts/22PXLzsDaLkF7MoMc/intelligence-explosion-in-organizations-or-why-i-m-not,"Intelligence explosion in organizations, or why I'm not worried about the singularity",['sbenthall'],2012-12-27T04:32:33Z,lesswrong,, 181383,https://www.lesswrong.com/posts/bzhcpDZQGNXcgxRFj/takeaways-from-safety-by-default-interviews,Takeaways from safety by default interviews,"['AI Impacts', 'abergal']",2020-04-03T17:20:02Z,lesswrong,, 181404,https://www.lesswrong.com/posts/BY8fBKR6sjiPXRBia/the-way-agi-wins-could-look-very-stupid,The way AGI wins could look very stupid,['Christopher King'],2023-05-12T16:34:19Z,lesswrong,, 181423,https://www.lesswrong.com/posts/bqbL4pt92AGCCcvQH/implementing-decision-theory,Implementing Decision Theory,['justinpombrio'],2023-11-07T17:55:43Z,lesswrong,, 181444,https://www.lesswrong.com/posts/N9QQoxk6pGy5DG9sx/notes-on-chatgpt-s-memory-for-strings-and-for-events,Notes on ChatGPT’s “memory” for strings and for events,['Bill Benzon'],2023-09-20T18:12:23Z,lesswrong,,
181462,https://www.lesswrong.com/posts/Cc3KLJJBmFX5KF4mu/evil-autocomplete-existential-risk-and-next-token-predictors,Evil autocomplete: Existential Risk and Next-Token Predictors,['Yitz'],2023-02-28T08:47:19Z,lesswrong,, 181484,https://www.lesswrong.com/posts/ZDqiXgSYRgQTSFrsW/humans-are-not-prepared-to-operate-outside-their-moral,Humans are not prepared to operate outside their moral training distribution,['Prometheus'],2023-04-10T21:44:23Z,lesswrong,, 181504,https://www.lesswrong.com/posts/D52m6f3F76jp8scYA/deciphering-china-s-ai-dream,Deciphering China's AI Dream,['Qiaochu_Yuan'],2018-03-18T03:26:13Z,lesswrong,, 181514,https://www.lesswrong.com/posts/XJYdnHQqpengWn3xb/transformative-ai-and-compute-summary-2,Transformative AI and Compute [Summary],['lennart'],2021-09-26T11:41:45Z,lesswrong,, 181535,https://www.lesswrong.com/posts/z7SuGwxTBnQm8uFq4/bounded-versions-of-goedel-s-and-loeb-s-theorems,Bounded versions of Gödel's and Löb's theorems,['cousin_it'],2012-06-27T18:28:05Z,lesswrong,, 181546,https://www.lesswrong.com/posts/inedT6KkbLSDwZvfd/steam,Steam,['abramdemski'],2022-06-20T17:38:59Z,lesswrong,, 181570,https://www.lesswrong.com/posts/fWKGXSZ3uXxLKAxvm/evidential-decision-theory-selection-bias-and-reference,"Evidential Decision Theory, Selection Bias, and Reference Classes",['Qiaochu_Yuan'],2013-07-08T05:16:48Z,lesswrong,, 181588,https://www.lesswrong.com/posts/Z4MRNwkWqbQDxhTrT/gpt-augmented-blogging,GPT-Augmented Blogging,['lsusr'],2021-09-14T11:55:34Z,lesswrong,, 181598,https://www.lesswrong.com/posts/5WEoM3RCxN2cQEdzY/what-is-wei-dai-s-updateless-decision-theory,What is Wei Dai's Updateless Decision Theory?,['AlephNeil'],2010-05-19T10:16:51Z,lesswrong,, 181610,https://www.lesswrong.com/posts/8GhSZzsQmusCN9is7/minds-an-introduction,Minds: An Introduction,['Rob Bensinger'],2015-03-11T19:00:32Z,lesswrong,, 181629,https://www.lesswrong.com/posts/EaDN9grmTfzXbM652/my-plan-to-build-aligned-superintelligence,My Plan to Build Aligned Superintelligence,['apollonianblues'],2022-08-21T13:16:33Z,lesswrong,, 181668,https://www.lesswrong.com/posts/vLp5wvSzZXWzEEZJh/podcast-transcript-daniela-and-dario-amodei-on-anthropic,Podcast Transcript: Daniela and Dario Amodei on Anthropic,['remember'],2023-03-07T16:47:54Z,lesswrong,, 181703,https://www.lesswrong.com/posts/sfGBkyDyu96eePZZ6/superintelligence-27-pathways-and-enablers,Superintelligence 27: Pathways and enablers,['KatjaGrace'],2015-03-17T01:00:52Z,lesswrong,, 181721,https://www.lesswrong.com/posts/ND6uCdxKniFxKyBwQ/help-me-find-a-good-hackathon-subject,Help me find a good Hackathon subject,['Charbel-Raphaël'],2022-09-04T08:40:30Z,lesswrong,, 181732,https://www.lesswrong.com/posts/KQSpRoQBz7f6FcXt3/distillation-of-neurotech-and-alignment-workshop-january-1,Distillation of Neurotech and Alignment Workshop January 2023,"['lisathiergart', 'Sumner L Norman']",2023-05-22T07:17:24Z,lesswrong,, 181768,https://www.lesswrong.com/posts/G3hxkSeDCMBucYsp6/personal-identity-and-uploading-by-mark-walker,"""Personal Identity and Uploading"", by Mark Walker",['gwern'],2012-01-07T19:55:01Z,lesswrong,, 181788,https://www.lesswrong.com/posts/yv4xAnkEyWvpXNBte/paths-to-failure,Paths to failure,"['Karl von Wendt', 'mespa']",2023-04-25T08:03:35Z,lesswrong,, 181816,https://www.lesswrong.com/posts/wx793JXieh97AXtg6/is-the-ai-timeline-too-short-to-have-children,Is the AI timeline too short to have children?,['Yoreth'],2022-12-14T18:32:45Z,lesswrong,, 181832,https://www.lesswrong.com/posts/s2z6hKbzAyuPKeset/heuristics-on-bias-to-action-versus-status-quo,Heuristics on bias to action versus status quo?,['Farkas'],2023-02-28T12:45:38Z,lesswrong,,
181846,https://www.lesswrong.com/posts/F6vH6fr8ngo7csDdf/chess-as-a-case-study-in-hidden-capabilities-in-chatgpt,Chess as a case study in hidden capabilities in ChatGPT,['AdamYedidia'],2023-08-19T06:35:03Z,lesswrong,, 181861,https://www.lesswrong.com/posts/4RNzyt6Tthsic8LNn/the-isolation-assumption-of-expected-utility-maximization,The Isolation Assumption of Expected Utility Maximization,['Pedro Oliboni'],2020-08-06T04:05:26Z,lesswrong,, 181871,https://www.lesswrong.com/posts/zxanhCijprzDo2Tgn/formalizing-the-ai-x-risk-is-unlikely-because-it-is,"Formalizing the ""AI x-risk is unlikely because it is ridiculous"" argument",['Christopher King'],2023-05-03T18:56:26Z,lesswrong,, 181889,https://www.lesswrong.com/posts/KzGwDeaYZXNWGWjd8/shahar-avin-on-how-to-regulate-advanced-ai-systems,Shahar Avin On How To Regulate Advanced AI Systems,['Michaël Trazzi'],2022-09-23T15:46:46Z,lesswrong,, 181912,https://www.lesswrong.com/posts/LRKXuxLrnxx3nSESv/should-ethicists-be-inside-or-outside-a-profession,Should ethicists be inside or outside a profession?,['Eliezer Yudkowsky'],2018-12-12T01:40:13Z,lesswrong,, 181934,https://www.lesswrong.com/posts/ceiR9bbupYiyBq4hM/openai-our-approach-to-ai-safety,OpenAI: Our approach to AI safety,['g-w1'],2023-04-05T20:26:47Z,lesswrong,, 181949,https://www.lesswrong.com/posts/f7qcAS4DMKsMoxTmK/the-solomonoff-prior-is-malign-it-s-not-a-big-deal,The Solomonoff prior is malign. It's not a big deal.,['Charlie Steiner'],2022-08-25T08:25:56Z,lesswrong,, 181966,https://www.lesswrong.com/posts/ifLEKmhmk2utB64iX/why-building-ventures-in-ai-safety-is-particularly,Why building ventures in AI Safety is particularly challenging,['Heramb'],2023-11-06T16:27:37Z,lesswrong,, 181983,https://www.lesswrong.com/posts/pkfKRG9dQr6unrhQT/why-multi-agent-safety-is-important,Why multi-agent safety is important,['Akbir Khan'],2022-06-14T09:23:12Z,lesswrong,, 182018,https://www.lesswrong.com/posts/yRTvXPB6hmtF9y9Nn/alan-carter-on-the-complexity-of-value,Alan Carter on the Complexity of Value,['Ghatanathoah'],2012-05-10T07:23:07Z,lesswrong,, 182031,https://www.lesswrong.com/posts/NM9tjAAmYQeGyPdoY/link-freakostats-and-cev,[Link] FreakoStats and CEV,['Filipe'],2012-06-06T15:21:31Z,lesswrong,, 182041,https://www.lesswrong.com/posts/piAnXc2a4k5bFsKjL/self-shutdown-ai,Self-shutdown AI,['jan betley'],2023-08-21T16:48:04Z,lesswrong,, 182051,https://www.lesswrong.com/posts/5HvCSt3vDSDuuXkfz/incentives-and-selection-a-missing-frame-from-ai-threat,Incentives and Selection: A Missing Frame From AI Threat Discussions?,['DragonGod'],2023-02-26T01:18:13Z,lesswrong,, 182067,https://www.lesswrong.com/posts/KgTDX9wEd4s3kubpr/new-book-from-leading-neuroscientist-in-support-of-cryonics,New book from leading neuroscientist in support of cryonics and mind uploading,['lukeprog'],2012-02-08T21:37:00Z,lesswrong,, 182078,https://www.lesswrong.com/posts/yWMKQBnTwFAPFdN6S/fiction-lena-mmacevedo,[Fiction] Lena (MMAcevedo),['Kaj_Sotala'],2021-02-23T19:46:35Z,lesswrong,, 182100,https://www.lesswrong.com/posts/nALdMXkxkLzysKtzC/linkpost-scott-alexander-reacts-to-openai-s-latest-post,[Linkpost] Scott Alexander reacts to OpenAI's latest post,['Akash'],2023-03-11T22:24:39Z,lesswrong,, 182127,https://www.lesswrong.com/posts/TtYuY2QBug3dn2wuo/the-problem-with-aixi,The Problem with AIXI,['Rob Bensinger'],2014-03-18T01:55:38Z,lesswrong,,
182162,https://www.lesswrong.com/posts/GZgLa5Xc4HjwketWe/instrumental-convergence-is-what-makes-general-intelligence,Instrumental convergence is what makes general intelligence possible,['tailcalled'],2022-11-11T16:38:14Z,lesswrong,, 182178,https://www.lesswrong.com/posts/PwfwZ2LeoLC4FXyDA/against-llm-reductionism,Against LLM Reductionism,['Erich_Grunewald'],2023-03-08T15:52:39Z,lesswrong,, 182195,https://www.lesswrong.com/posts/mKnbHwRc7mXtENNEm/report-on-modeling-evidential-cooperation-in-large-worlds,Report on modeling evidential cooperation in large worlds,['Johannes Treutlein'],2023-07-12T16:37:52Z,lesswrong,, 182215,https://www.lesswrong.com/posts/ditNcHKZoLinGeaHX/should-vs-would-and-newcomb-s-paradox,Should VS Would and Newcomb's Paradox,['dadadarren'],2021-07-03T23:45:30Z,lesswrong,, 182224,https://www.lesswrong.com/posts/YZzoWGCJsoRBBbmQg/solve-psy-kosh-s-non-anthropic-problem,Solve Psy-Kosh's non-anthropic problem,['cousin_it'],2010-12-20T21:24:01Z,lesswrong,, 182239,https://www.lesswrong.com/posts/c4GrmECzui2zT4fMq/is-strong-coherence-anti-natural,"Is ""Strong Coherence"" Anti-Natural?",['DragonGod'],2023-04-11T06:22:23Z,lesswrong,, 182250,https://www.lesswrong.com/posts/e2FjFC2LjKWKSycpu/automating-reasoning-about-the-future-at-ought,Automating reasoning about the future at Ought,['jungofthewon'],2020-11-09T21:51:14Z,lesswrong,, 182264,https://www.lesswrong.com/posts/MyvkTKfndx9t4zknh/eis-ii-what-is-interpretability,EIS II: What is “Interpretability”?,['scasper'],2023-02-09T16:48:35Z,lesswrong,, 182280,https://www.lesswrong.com/posts/mHqQxwKuzZS69CXX5/whole-brain-emulation-no-progress-on-c-elegans-after-10,Whole Brain Emulation: No Progress on C. elegans After 10 Years,['niconiconi'],2021-10-01T21:44:37Z,lesswrong,, 182298,https://www.lesswrong.com/posts/JnAh4YHfrYpPNwc8Y/alignment-of-autogpt-agents,Alignment of AutoGPT agents,['Ozyrus'],2023-04-12T12:54:46Z,lesswrong,, 182339,https://www.lesswrong.com/posts/i62eFmbxivXHaEtCz/dall-e-does-symbol-grounding,DALL-E does symbol grounding,['p.b.'],2021-01-17T21:20:46Z,lesswrong,, 182349,https://www.lesswrong.com/posts/NwaNPHYhXDc9LkK8J/constructing-goodhart,Constructing Goodhart,['johnswentworth'],2019-02-03T21:59:54Z,lesswrong,, 182369,https://www.lesswrong.com/posts/LtdbPZxLuYktYhveL/a-plausible-story-about-ai-risk,A plausible story about AI risk.,['DeLesley Hutchins'],2022-06-10T02:08:32Z,lesswrong,, 182392,https://www.lesswrong.com/posts/j5shgF5LJC75GoXrt/metaculus-launches-contest-for-essays-with-quantitative,Metaculus launches contest for essays with quantitative predictions about AI,"['Tamay Besiroglu', 'Metaculus']",2022-02-08T16:07:25Z,lesswrong,, 182407,https://www.lesswrong.com/posts/wwKoGMq4omR3Mrv4w/aisn-24-kissinger-urges-us-china-cooperation-on-ai-china-s,"AISN #24: Kissinger Urges US-China Cooperation on AI, China's New AI Law, US Export Controls, International Institutions, and Open Source AI","['aogara', 'Dan H']",2023-10-18T17:06:54Z,lesswrong,, 182443,https://www.lesswrong.com/posts/Zrn8JBQKMs4Ho5oAZ/is-ai-gain-of-function-research-a-thing,Is AI Gain-of-Function research a thing?,['MadHatter'],2022-11-12T02:33:21Z,lesswrong,, 182465,https://www.lesswrong.com/posts/aEF24JnXFhiR4bkEu/openai-could-help-x-risk-by-wagering-itself-1,OpenAI could help X-risk by wagering itself,['VojtaKovarik'],2023-04-20T14:51:00Z,lesswrong,, 182476,https://www.lesswrong.com/posts/kkcQdR63LvoRZwutY/could-roko-s-basilisk-acausally-bargain-with-a-paperclip,Could Roko's basilisk acausally bargain with a paperclip maximizer?,['Christopher King'],2023-03-13T18:21:47Z,lesswrong,,
182491,https://www.lesswrong.com/posts/CXaQj85r4LtafCBi8/should-we-postpone-agi-until-we-reach-safety,Should we postpone AGI until we reach safety?,['otto.barten'],2020-11-18T15:43:52Z,lesswrong,, 182512,https://www.lesswrong.com/posts/vnrazshJpxHbL6cdR/call-for-submissions-choice-of-futures-survey-questions,Call for submissions: Choice of Futures survey questions,['c.trout'],2023-04-30T06:59:11Z,lesswrong,, 182528,https://www.lesswrong.com/posts/bHHrdXwrCj2LRa2sW/architects-of-our-own-demise-we-should-stop-developing-ai,Architects of Our Own Demise: We Should Stop Developing AI,['Roko'],2023-10-26T00:36:05Z,lesswrong,, 182550,https://www.lesswrong.com/posts/6w9uTPdJk52Nyknvm/linkpost-scaling-laws-for-generative-mixed-modal-language,[Linkpost] Scaling Laws for Generative Mixed-Modal Language Models,['Amal'],2023-01-12T14:24:01Z,lesswrong,, 182559,https://www.lesswrong.com/posts/tK37jT79YFgARZRje/link-nyt-article-about-existential-risk-from-ai,[LINK] NYT Article about Existential Risk from AI,['anonymous'],2013-01-28T10:37:59Z,lesswrong,, 182581,https://www.lesswrong.com/posts/ZpQ3H4YgSw8BRZAHD/if-you-re-very-optimistic-about-elk-then-you-should-be,If you’re very optimistic about ELK then you should be optimistic about outer alignment,['Sam Marks'],2022-04-27T19:30:12Z,lesswrong,, 182599,https://www.lesswrong.com/posts/BCynDEwguEiogicAo/reflection-of-hierarchical-relationship-via-nuanced,Reflection of Hierarchical Relationship via Nuanced Conditioning of Game Theory Approach for AI Development and Utilization,['Kyoung-cheol Kim'],2021-06-04T07:20:21Z,lesswrong,, 182617,https://www.lesswrong.com/posts/bD4B2MF7nsGAfH9fj/basic-mathematics-of-predictive-coding,Basic Mathematics of Predictive Coding,['Adam Shai'],2023-09-29T14:38:29Z,lesswrong,, 182633,https://www.lesswrong.com/posts/xregz4idkf8J5uHJm/coordination-by-common-knowledge-to-prevent-uncontrollable,Coordination by common knowledge to prevent uncontrollable AI,['Karl von Wendt'],2023-05-14T13:37:43Z,lesswrong,, 182655,https://www.lesswrong.com/posts/nu3HxpLHjMekm6KeE/a-gentle-apocalypse,A gentle apocalypse,['pchvykov'],2021-08-16T05:03:32Z,lesswrong,, 182674,https://www.lesswrong.com/posts/iRFxvNeLbHNRCzA2S/a-friendly-face-another-failure-story,A Friendly Face (Another Failure Story),"['Karl von Wendt', 'Sofia Bharadia', 'PeterDrotos', 'Artem Korotkov', 'mespa', 'mruwnik']",2023-06-20T10:31:25Z,lesswrong,, 182721,https://www.lesswrong.com/posts/jN2gbDRJHTtXYSdhY/beginning-resources-for-cev-research,Beginning resources for CEV research,['lukeprog'],2011-05-07T05:28:26Z,lesswrong,, 182738,https://www.lesswrong.com/posts/7iDtkfyn322nPzTP4/time-and-effort-discounting,Time and Effort Discounting,['Scott Alexander'],2011-07-07T23:48:06Z,lesswrong,, 182752,https://www.lesswrong.com/posts/bR8zWoYS9zfha8Hzo/crosspost-an-ai-pause-is-humanity-s-best-bet-for-preventing,[Crosspost] An AI Pause Is Humanity's Best Bet For Preventing Extinction (TIME),['otto.barten'],2023-07-24T10:07:40Z,lesswrong,, 182773,https://www.lesswrong.com/posts/yenr6Zp83PHd6Beab/which-singularity-schools-plus-the-no-singularity-school-was,Which singularity schools plus the no singularity school was right?,['Noosphere89'],2022-07-23T15:16:19Z,lesswrong,, 182792,https://www.lesswrong.com/posts/5P6sNqP7N9kSA97ao/anthropomorphic-ai-and-sandboxed-virtual-universes,Anthropomorphic AI and Sandboxed Virtual Universes,['jacob_cannell'],2010-09-03T19:02:04Z,lesswrong,,
182813,https://www.lesswrong.com/posts/ZFuJzcJPvDZ2Fpz3p/monthly-doom-argument-threads-doom-argument-wiki,Monthly Doom Argument Threads? Doom Argument Wiki?,['LVSN'],2023-02-04T16:59:32Z,lesswrong,, 182825,https://www.lesswrong.com/posts/bceeKEnPHSQqgyr36/request-to-agi-organizations-share-your-views-on-pausing-ai,Request to AGI organizations: Share your views on pausing AI progress,"['Akash', 'simeon_c']",2023-04-11T17:30:47Z,lesswrong,, 182840,https://www.lesswrong.com/posts/zQnzhGLDp2PSvAjcW/why-i-am-skeptical-of-ai-regulation-as-an-x-risk-mitigation,Why I Am Skeptical of AI Regulation as an X-Risk Mitigation Strategy,['A Ray'],2022-08-06T05:46:48Z,lesswrong,, 182850,https://www.lesswrong.com/posts/kNAiwLrtn8DcBxGjR/emergent-abilities-of-large-language-models-linkpost,Emergent Abilities of Large Language Models [Linkpost],['aogara'],2022-08-10T18:02:25Z,lesswrong,, 182861,https://www.lesswrong.com/posts/gx6GEnpLkTXn3NFSS/we-need-a-theory-of-anthropic-measure-binding,We need a theory of anthropic measure binding,['mako yass'],2021-12-30T07:22:34Z,lesswrong,, 182878,https://www.lesswrong.com/posts/bjEDbjDp8xEAAE9yR/we-should-prepare-for-a-larger-representation-of-academia-in,We Should Prepare for a Larger Representation of Academia in AI Safety,['Leon Lang'],2023-08-13T18:03:20Z,lesswrong,, 182896,https://www.lesswrong.com/posts/d9tiF8pBfmeK5wmQh/ai-improving-ai-mlaisu-w01,AI improving AI [MLAISU W01!],['Esben Kran'],2023-01-06T11:13:17Z,lesswrong,, 182926,https://www.lesswrong.com/posts/wdekcGpsMtakGCo5y/on-openai-dev-day,On OpenAI Dev Day,['Zvi'],2023-11-09T16:10:07Z,lesswrong,, 182976,https://www.lesswrong.com/posts/b223mLTZNDFExf3Qp/oracle-ai-human-beliefs-vs-human-values,Oracle AI: Human beliefs vs human values,['Stuart_Armstrong'],2015-07-22T11:54:27Z,lesswrong,, 182986,https://www.lesswrong.com/posts/Y2LhX3925RodndwpC/resolving-human-values-completely-and-adequately,"Resolving human values, completely and adequately",['Stuart_Armstrong'],2018-03-30T03:35:04Z,lesswrong,, 183008,https://www.lesswrong.com/posts/eJDTaBEZgCpbSdAbS/how-to-study-unsafe-agi-s-safely-and-why-we-might-have-no,How to Study Unsafe AGI's safely (and why we might have no choice),['Punoxysm'],2014-03-07T07:24:45Z,lesswrong,, 183030,https://www.lesswrong.com/posts/cvpSNy32rgNdqqQ9r/what-does-it-take-to-ban-a-thing,What does it take to ban a thing?,['qbolec'],2023-05-08T11:00:51Z,lesswrong,, 183047,https://www.lesswrong.com/posts/3DyXQkkkGnSgy95ex/briefly-how-i-ve-updated-since-chatgpt,Briefly how I've updated since ChatGPT,['rime'],2023-04-25T14:47:16Z,lesswrong,, 183071,https://www.lesswrong.com/posts/SbQP7QcibGx784kY9/applications-of-logical-uncertainty,Applications of logical uncertainty,['alex_zag_al'],2014-10-18T19:26:31Z,lesswrong,, 183095,https://www.lesswrong.com/posts/L5pWY8gEGhsHiWcsG/eu-ai-act-passed-plenary-vote-and-x-risk-was-a-main-topic,"EU AI Act passed Plenary vote, and X-risk was a main topic",['Ariel G.'],2023-06-21T18:33:18Z,lesswrong,, 183117,https://www.lesswrong.com/posts/83D92q8t3zYasnCcj/new-wbe-implementation,New WBE implementation,['Louie'],2012-11-30T11:16:27Z,lesswrong,, 183139,https://www.lesswrong.com/posts/osmwiGkCGxqPfLf4A/i-ve-updated-towards-ai-boxing-being-surprisingly-easy,I've updated towards AI boxing being surprisingly easy,['Noosphere89'],2022-12-25T15:40:48Z,lesswrong,, 183154,https://www.lesswrong.com/posts/edAvzMeg9zxMby6Lq/realistic-near-future-scenarios-of-ai-doom-understandable,Realistic near-future scenarios of AI doom understandable for non-techy people?,['RomanS'],2023-04-28T14:45:17Z,lesswrong,, 
183175,https://www.lesswrong.com/posts/e2XAqFyEBWxzGXeHy/ai-safety-newsletter-6-examples-of-ai-safety-progress-yoshua,"AI Safety Newsletter #6: Examples of AI safety progress, Yoshua Bengio proposes a ban on AI agents, and lessons from nuclear arms control","['Dan H', 'Akash', 'aogara']",2023-05-16T15:14:46Z,lesswrong,, 183210,https://www.lesswrong.com/posts/NGj4KrTYsyH57SxYC/what-does-functional-decision-theory-say-to-do-in-imperfect,What does Functional Decision Theory say to do in imperfect Newcomb situations?,['Daniel_Eth'],2022-05-07T22:26:33Z,lesswrong,, 183219,https://www.lesswrong.com/posts/iekoEYDLgC7efzbBv/controlling-intelligent-agents-the-only-way-we-know-how,Controlling Intelligent Agents The Only Way We Know How: Ideal Bureaucratic Structure (IBS),['Justin Bullock'],2021-05-24T12:53:25Z,lesswrong,, 183237,https://www.lesswrong.com/posts/Qryk6FqjtZk9FHHJR/sparse-autoencoders-find-highly-interpretable-directions-in,Sparse Autoencoders Find Highly Interpretable Directions in Language Models,"['Logan Riggs', 'Hoagy', 'Aidan Ewart', 'Robert_AIZI']",2023-09-21T15:30:24Z,lesswrong,, 183254,https://www.lesswrong.com/posts/tcBEh9ZtEWvoHNKZ4/i-asked-my-senator-to-slow-ai,I asked my senator to slow AI,['Omid'],2023-04-06T18:18:08Z,lesswrong,, 183273,https://www.lesswrong.com/posts/gpPikAoWDfpa6jwBo/antitrust-compliant-ai-industry-self-regulation,Antitrust-Compliant AI Industry Self-Regulation,['Cullen'],2020-07-07T20:53:37Z,lesswrong,, 183282,https://www.lesswrong.com/posts/FbP7EteJBCx8FpLFn/the-bio-anchors-forecast,The Bio Anchors Forecast,['Ansh Radhakrishnan'],2022-06-02T01:32:06Z,lesswrong,, 183298,https://www.lesswrong.com/posts/TdWFxiusGP5raLMbC/training-a-rl-model-with-continuous-state-and-action-space,Training a RL Model with Continuous State & Action Space in a Real-World Scenario,['Alexander Ries'],2023-10-15T22:59:52Z,lesswrong,, 183311,https://www.lesswrong.com/posts/4JeAoTrAuByXGw6zm/updated-how-does-gpt2-s-training-corpus-capture-internet,[updated] how does gpt2′s training corpus capture internet discussion? not well,['nostalgebraist'],2020-07-27T22:30:08Z,lesswrong,, 
183327,https://www.lesswrong.com/posts/qP3s89RAcdYy2LN2K/have-you-felt-exiert-yet,Have you felt exiert yet?,['Stuart_Armstrong'],2018-01-05T17:03:51Z,lesswrong,, 183336,https://www.lesswrong.com/posts/kXHBz2Z5BBgaBiazf/a-transcript-of-the-ted-talk-by-eliezer-yudkowsky,A transcript of the TED talk by Eliezer Yudkowsky,['Mikhail Samin'],2023-07-12T12:12:34Z,lesswrong,, 183350,https://www.lesswrong.com/posts/H6hMugfY3tDQGfqYL/what-do-ml-researchers-think-about-ai-in-2022,What do ML researchers think about AI in 2022?,['KatjaGrace'],2022-08-04T15:40:05Z,lesswrong,, 183371,https://www.lesswrong.com/posts/mEbYbekPXoWvrHraW/human-level-ai-can-plausibly-take-over-the-world,Human level AI can plausibly take over the world,['anithite'],2023-03-01T23:27:00Z,lesswrong,, 183392,https://www.lesswrong.com/posts/FFz6H35Gy6BArHxkc/task-decomposition-for-scalable-oversight-agisf-distillation,Task decomposition for scalable oversight (AGISF Distillation),['Charbel-Raphaël'],2023-07-25T13:34:59Z,lesswrong,, 183421,https://www.lesswrong.com/posts/CNytdmT6xrWdexQgN/ai-alignment-research-engineer-accelerator-arena-call-for,AI Alignment Research Engineer Accelerator (ARENA): call for applicants,['TheMcDouglas'],2023-04-17T20:30:04Z,lesswrong,, 183441,https://www.lesswrong.com/posts/gxxpK3eiSQ3XG3DW7/decision-theory-why-we-need-to-reduce-could-would-should,"Decision theory: Why we need to reduce “could”, “would”, “should”",['AnnaSalamon'],2009-09-02T09:23:35Z,lesswrong,, 183451,https://www.lesswrong.com/posts/wkws2WgraeN8AYJjv/llms-don-t-have-a-coherent-model-of-the-world-what-it-means,"""LLMs Don't Have a Coherent Model of the World"" - What it Means, Why it Matters",['Davidmanheim'],2023-06-01T07:46:37Z,lesswrong,, 183476,https://www.lesswrong.com/posts/qccxb3uzwFDsRuJuP/deference-on-ai-timelines-survey-results,Deference on AI timelines: survey results,"['Sam Clarke', 'mccaffary']",2023-03-30T23:03:53Z,lesswrong,, 183496,https://www.lesswrong.com/posts/oXnZovoFSWMgRe88W/is-training-data-going-to-be-diluted-by-ai-generated-content,Is training data going to be diluted by AI-generated content?,['Hannes Thurnherr'],2022-09-07T18:13:29Z,lesswrong,, 183510,https://www.lesswrong.com/posts/P6aDYBDiu9DyvsF9g/are-generative-world-models-a-mesa-optimization-risk,Are Generative World Models a Mesa-Optimization Risk?,['Thane Ruthenis'],2022-08-29T18:37:14Z,lesswrong,, 183530,https://www.lesswrong.com/posts/YZeZXF6LwZn5vqFo9/the-fundamental-theorem-of-asset-pricing-missing-link-of-the,The Fundamental Theorem of Asset Pricing: Missing Link of the Dutch Book Arguments,['johnswentworth'],2019-06-01T20:34:07Z,lesswrong,, 183542,https://www.lesswrong.com/posts/trrSb7fsQp5Bs5y3p/current-ai-harms-are-also-sci-fi,Current AI harms are also sci-fi,['Christopher King'],2023-06-08T17:49:59Z,lesswrong,, 183566,https://www.lesswrong.com/posts/hjL4KnPKtoa3JGtPy/summary-of-80k-s-ai-problem-profile,Summary of 80k's AI problem profile,['JakubK'],2023-01-01T07:30:22Z,lesswrong,, 183594,https://www.lesswrong.com/posts/W2ufY8ihDDWWqJA7h/if-you-don-t-know-the-name-of-the-game-just-tell-me-what-i,"If you don't know the name of the game, just tell me what I mean to you",['Stuart_Armstrong'],2010-10-26T13:43:58Z,lesswrong,, 183607,https://www.lesswrong.com/posts/vp8G6J9K8RiEq5Ghq/measuring-artificial-intelligence-on-human-benchmarks-is,Measuring artificial intelligence on human benchmarks is naive,['Anomalous'],2023-04-11T11:34:19Z,lesswrong,, 
183625,https://www.lesswrong.com/posts/3dFogxGK8uNv5xCSv/you-won-t-solve-alignment-without-agent-foundations,You won’t solve alignment without agent foundations,['Mikhail Samin'],2022-11-06T08:07:13Z,lesswrong,, 183641,https://www.lesswrong.com/posts/67QdHTkNxAZvakR9a/hegel-vs-gpt-3,Hegel vs. GPT-3,['Bezzi'],2021-10-27T05:55:18Z,lesswrong,, 183655,https://www.lesswrong.com/posts/oRRpsGkCZHA3pzhvm/a-fungibility-theorem,A fungibility theorem,['Nisan'],2013-01-12T09:27:26Z,lesswrong,, 183665,https://www.lesswrong.com/posts/FsfP3e7ZspCPuwaRA/simplified-bio-anchors-for-upper-bounds-on-ai-timelines,Simplified bio-anchors for upper bounds on AI timelines,['Fabien Roger'],2023-07-15T18:15:00Z,lesswrong,, 183680,https://www.lesswrong.com/posts/xn59deCXbe99EoG86/the-idea-of-an-aligned-superintelligence-seems-misguided,"The idea of an ""aligned superintelligence"" seems misguided",['ssadler'],2023-02-27T11:19:58Z,lesswrong,, 183702,https://www.lesswrong.com/posts/NSX8RuD9tQ4uWzkk3/a-problem-with-timeless-decision-theory-tdt,A problem with Timeless Decision Theory (TDT),['Gary_Drescher'],2010-02-04T18:47:08Z,lesswrong,, 183712,https://www.lesswrong.com/posts/CPKYuJqLYGpBTtdFd/good-news-everyone,"Good News, Everyone!",['jbash'],2023-03-25T13:48:22Z,lesswrong,, 183751,https://www.lesswrong.com/posts/3CsynkTxNEdHDexTT/how-i-learned-to-stop-worrying-and-love-skill-trees,How I learned to stop worrying and love skill trees,['junk heap homotopy'],2023-05-23T04:08:42Z,lesswrong,, 183771,https://www.lesswrong.com/posts/bYrF8rXFYwPqnfxTp/1960-the-year-the-singularity-was-cancelled,1960: The Year The Singularity Was Cancelled,['Scott Alexander'],2019-04-23T01:30:01Z,lesswrong,, 183788,https://www.lesswrong.com/posts/RjeshF3iKaY2Bvkt9/robustness-and-evolution-mlaisu-w02,Robustness & Evolution [MLAISU W02],['Esben Kran'],2023-01-13T15:47:16Z,lesswrong,, 183818,https://www.lesswrong.com/posts/PtEPqonFDv7ueYYpu/consequentialism-is-in-the-stars-not-ourselves,Consequentialism is in the Stars not Ourselves,['DragonGod'],2023-04-24T00:02:40Z,lesswrong,, 183836,https://www.lesswrong.com/posts/ZcvNZYPsT9jvpHkp7/what-does-ai-alignment-success-look-like,What Does AI Alignment Success Look Like?,['shminux'],2022-10-20T00:32:48Z,lesswrong,, 183854,https://www.lesswrong.com/posts/oHk9T3jbx2J5zJ39P/the-alignment-community-is-culturally-broken,The Alignment Community Is Culturally Broken,['sudo -i'],2022-11-13T18:53:55Z,lesswrong,, 183873,https://www.lesswrong.com/posts/u9pfbkZeG8mFTPNi2/babies-and-bunnies-a-caution-about-evo-psych,Babies and Bunnies: A Caution About Evo-Psych,['Alicorn'],2010-02-22T01:53:07Z,lesswrong,, 183883,https://www.lesswrong.com/posts/KdEkNx3SgjfciNKPx/linkpost-the-agi-show-podcast,[Linkpost] The AGI Show podcast,['Soroush Pour'],2023-05-23T09:52:30Z,lesswrong,, 183892,https://www.lesswrong.com/posts/XfpJ6WQBDcEcC8Mu4/humans-are-utility-monsters,Humans are utility monsters,['PhilGoetz'],2013-08-16T21:05:28Z,lesswrong,, 183902,https://www.lesswrong.com/posts/Ek4odgixoYoheWZFz/chatgpt-tells-stories-about-xp-708-dq-eliezer-dragons-dark,"ChatGPT tells stories about XP-708-DQ, Eliezer, dragons, dark sorceresses, and unaligned robots becoming aligned",['Bill Benzon'],2023-01-08T23:21:19Z,lesswrong,, 183925,https://www.lesswrong.com/posts/k93NEoXZq6CdXegdx/philosophical-cyborg-part-1,Philosophical Cyborg (Part 1),"['ukc10014', 'Roman Leventov', 'NicholasKees']",2023-06-14T16:20:40Z,lesswrong,, 183950,https://www.lesswrong.com/posts/nAwTGhgrdxE85Bjmg/tools-versus-agents,Tools versus agents,['Stuart_Armstrong'],2012-05-16T13:00:04Z,lesswrong,, 
183971,https://www.lesswrong.com/posts/oQ7LXJP5bzNKxomWm/what-are-some-examples-of-ais-instantiating-the-nearest,What are some examples of AIs instantiating the 'nearest unblocked strategy problem'?,['EJT'],2023-10-04T11:05:35Z,lesswrong,, 183984,https://www.lesswrong.com/posts/P98i7kAN2uWuy7mhD/ai-summer-harvest,AI Summer Harvest,['Cleo Nardo'],2023-04-04T03:35:58Z,lesswrong,, 184000,https://www.lesswrong.com/posts/aKT6WCs3ASvBWfLw9/how-lesswrong-helped-me-make-usd25k-a-rational-pricing,How Lesswrong helped me make $25K: A rational pricing strategy,['kareemabukhadra'],2020-12-21T20:20:18Z,lesswrong,, 184015,https://www.lesswrong.com/posts/vs49tuFuaMEd4iskA/one-path-to-coherence-conditionalization,One path to coherence: conditionalization,['porby'],2023-06-29T01:08:15Z,lesswrong,, 184033,https://www.lesswrong.com/posts/kpkxKDpiRn6BNArFm/content-and-takeaways-from-seri-mats-training-program-with,Content and Takeaways from SERI MATS Training Program with John Wentworth,['RohanS'],2022-12-24T04:17:21Z,lesswrong,, 184056,https://www.lesswrong.com/posts/PfbE2nTvRJjtzysLM/instrumental-convergence-to-offer-hope,Instrumental Convergence To Offer Hope?,['michael_mjd'],2022-04-22T01:56:15Z,lesswrong,, 184068,https://www.lesswrong.com/posts/5sNLX2yY5FzkCp7Ju/the-spelling-miracle-gpt-3-spelling-abilities-and-glitch,"The ""spelling miracle"": GPT-3 spelling abilities and glitch tokens revisited",['mwatkins'],2023-07-31T19:47:03Z,lesswrong,, 184087,https://www.lesswrong.com/posts/DTv3jpro99KwdkHRE/ai-alignment-prize-super-boxing,AI Alignment Prize: Super-Boxing,['X4vier'],2018-03-18T01:03:20Z,lesswrong,, 184102,https://www.lesswrong.com/posts/ZvnqqcCeSwhrmiSAe/community-building-for-graduate-students-a-targeted-approach,Community Building for Graduate Students: A Targeted Approach,['Neil Crawford'],2022-09-06T17:17:45Z,lesswrong,, 184115,https://www.lesswrong.com/posts/fknxaKAqrc6hguvJD/open-source-llms-can-now-actively-lie,Open Source LLMs Can Now Actively Lie,['Josh Levy'],2023-06-01T22:03:37Z,lesswrong,, 184129,https://www.lesswrong.com/posts/BvFJnyqsJzBCybDSD/taxonomy-of-ai-risk-counterarguments,Taxonomy of AI-risk counterarguments,['Odd anon'],2023-10-16T00:12:51Z,lesswrong,, 184149,https://www.lesswrong.com/posts/vLEa2kaaQsyDxvPu5/robin-hanson-on-lumpiness-of-ai-services,Robin Hanson on Lumpiness of AI Services,['DanielFilan'],2019-02-17T23:08:36Z,lesswrong,, 184165,https://www.lesswrong.com/posts/tFjoPbLGrvEvAL9TL/self-reference-breaks-the-orthogonality-thesis,Self-Reference Breaks the Orthogonality Thesis,['lsusr'],2023-02-17T04:11:16Z,lesswrong,, 184185,https://www.lesswrong.com/posts/bfsyLY3Xnq442eKL8/gpt-2005-a-conversation-with-chatgpt-featuring-semi,GPT-2005: A conversation with ChatGPT (featuring semi-functional Wolfram Alpha plugin!),['Lone Pine'],2023-03-24T14:03:06Z,lesswrong,, 184196,https://www.lesswrong.com/posts/FHz3p9kDzcxeRktqm/link-choose-your-preference-utilitarianism-carefully-part-1,[link] Choose your (preference) utilitarianism carefully – part 1,['Kaj_Sotala'],2015-06-25T12:06:02Z,lesswrong,, 184206,https://www.lesswrong.com/posts/DdDKsyA925Sm8BpQh/ai-as-super-demagogue,AI as Super-Demagogue,['RationalDino'],2023-11-05T21:21:14Z,lesswrong,, 184236,https://www.lesswrong.com/posts/eSEEJppsfh5tGHuy4/a-quick-remark-on-so-called-hallucinations-in-llms-and,A quick remark on so-called “hallucinations” in LLMs and humans,['Bill Benzon'],2023-09-23T12:17:27Z,lesswrong,, 
184250,https://www.lesswrong.com/posts/Xb6RGvTzbcHhJ4jXR/danger-s-of-theorem-proving-ai,Danger(s) of theorem-proving AI?,['Yitz'],2022-03-16T02:47:47Z,lesswrong,, 184283,https://www.lesswrong.com/posts/HpHHJaHrdpimpBvv5/paradigm-building-conclusion-and-practical-takeaways,Paradigm-building: Conclusion and practical takeaways,['Cameron Berg'],2022-02-15T16:11:09Z,lesswrong,, 184296,https://www.lesswrong.com/posts/MJffzWaCQcvB3gRJa/in-defence-of-optimizing-routine-tasks,In Defence of Optimizing Routine Tasks,['leogao'],2021-11-09T05:09:42Z,lesswrong,, 184307,https://www.lesswrong.com/posts/FoDdrWGrNQSJLtqWL/against-utility-functions,Against utility functions,['Qiaochu_Yuan'],2014-06-19T05:56:30Z,lesswrong,, 184317,https://www.lesswrong.com/posts/PnBFLWiX5p36CJyTH/how-to-upload-a-mind-in-three-not-so-easy-steps,How to Upload a Mind (In Three Not-So-Easy Steps),"['aggliu', 'Writer']",2023-11-13T18:13:33Z,lesswrong,, 184342,https://www.lesswrong.com/posts/WMDy4GxbyYkNrbmrs/in-praise-of-boredom,In Praise of Boredom,['Eliezer Yudkowsky'],2009-01-18T09:03:29Z,lesswrong,, 184359,https://www.lesswrong.com/posts/oHwt2JmDBefiN8rvg/does-solomonoff-always-win,Does Solomonoff always win?,['cousin_it'],2011-02-23T20:42:20Z,lesswrong,, 184380,https://www.lesswrong.com/posts/wpmdkftNZy26z4MTW/information-bottleneck-for-counterfactual-corrigibility,Information bottleneck for counterfactual corrigibility,['tailcalled'],2021-12-06T17:11:13Z,lesswrong,, 184403,https://www.lesswrong.com/posts/S54P4qzkMSb6iwqjx/dl-towards-the-unaligned-recursive-self-optimization,DL towards the unaligned Recursive Self-Optimization attractor,['jacob_cannell'],2021-12-18T02:15:31Z,lesswrong,, 184424,https://www.lesswrong.com/posts/fnixDbZfvf3FtHeCz/world-mind-and-learnability-a-note-on-the-metaphysical,"World, mind, and learnability: A note on the metaphysical structure of the cosmos [& LLMs]",['Bill Benzon'],2023-09-05T12:19:38Z,lesswrong,, 184434,https://www.lesswrong.com/posts/fSMrwJnqRb5NrMYFx/all-agi-safety-questions-welcome-especially-basic-ones-2,All AGI Safety questions welcome (especially basic ones) [~monthly thread],['Robert Miles'],2022-11-01T23:23:04Z,lesswrong,, 184456,https://www.lesswrong.com/posts/J4wpcCTo6CF6C5ftB/thoughts-on-a-sequences-inspired-phd-topic,"Thoughts on a ""Sequences Inspired"" PhD Topic",['goose000'],2021-06-17T20:36:21Z,lesswrong,, 184467,https://www.lesswrong.com/posts/bxJQoXKuHsnCwcimg/super-intelligent-ais-that-don-t-require-alignment,Super intelligent AIs that don't require alignment,['Yair Halberstadt'],2021-11-16T19:55:01Z,lesswrong,, 184494,https://www.lesswrong.com/posts/2nLnTMCogFujFYCHD/sam-altman-and-ezra-klein-on-the-ai-revolution,Sam Altman and Ezra Klein on the AI Revolution,['Zack_M_Davis'],2021-06-27T04:53:17Z,lesswrong,, 184505,https://www.lesswrong.com/posts/jDTqKRdy3fxvc7fFH/anthropics-and-embedded-agency,Anthropics and Embedded Agency,['dadadarren'],2021-06-26T01:45:07Z,lesswrong,, 184518,https://www.lesswrong.com/posts/2kphKANEE2NA9JKoN/since-figuring-out-human-values-is-hard-what-about-say,"Since figuring out human values is hard, what about, say, monkey values?",['shminux'],2020-01-01T21:56:29Z,lesswrong,, 184527,https://www.lesswrong.com/posts/q7nWEbyW7tXwnKBe9/looking-for-judges-for-critiques-of-alignment-plans,Looking for judges for critiques of Alignment Plans,['Iknownothing'],2023-08-17T22:35:41Z,lesswrong,, 184536,https://www.lesswrong.com/posts/XpeYpKXHvbqhefQi5/all-agi-safety-questions-welcome-especially-basic-ones-sept,All AGI safety questions welcome (especially basic ones) [Sept 2022],['plex'],2022-09-08T11:56:50Z,lesswrong,, 
184555,https://www.lesswrong.com/posts/7PePKKWfzcoqmF3Hs/an-uncanny-prison-1,An Uncanny Prison,['Nathan1123'],2022-08-13T21:40:49Z,lesswrong,, 184569,https://www.lesswrong.com/posts/9SBSTFECnyHpkKyBA/ai-community-building-eliezerkart,AI community building: EliezerKart,['Christopher King'],2023-04-01T15:25:05Z,lesswrong,, 184578,https://www.lesswrong.com/posts/KkSncxP2jQ3nmtmEv/eleutherai-s-gpt-neox-20b-release,EleutherAI's GPT-NeoX-20B release,['leogao'],2022-02-10T06:56:41Z,lesswrong,, 184602,https://www.lesswrong.com/posts/iXbPe9EAxScuimsGh/linkpost-scaling-laws-for-language-encoding-models-in-fmri,[Linkpost] Scaling laws for language encoding models in fMRI,['Bogdan Ionut Cirstea'],2023-06-08T10:52:16Z,lesswrong,, 184623,https://www.lesswrong.com/posts/RuFPGfq7QeNoeGcbs/how-not-to-be-stupid-brewing-a-nice-cup-of-utilitea,How Not to be Stupid: Brewing a Nice Cup of Utilitea,['Psy-Kosh'],2009-05-09T08:14:18Z,lesswrong,, 184637,https://www.lesswrong.com/posts/3aRwhGWFkWPJiwnCq/what-are-cais-boldest-near-medium-term-predictions,What are CAIS' boldest near/medium-term predictions?,['jacobjacob'],2019-03-28T13:14:33Z,lesswrong,, 184646,https://www.lesswrong.com/posts/BBvcAPDM9u6bYMMqi/risk-budgets-vs-basic-decision-theory,Risk Budgets vs. Basic Decision Theory,['Vlad Firoiu'],2021-04-05T21:55:51Z,lesswrong,, 184656,https://www.lesswrong.com/posts/qoTpit4zFPni54GSo/a-caveat-to-the-orthogonality-thesis,A caveat to the Orthogonality Thesis,['Wuschel Schulz'],2022-11-09T15:06:51Z,lesswrong,, 184669,https://www.lesswrong.com/posts/5QmtqBt6GDM8hYkAr/training-my-friend-to-cook,Training My Friend to Cook,['lsusr'],2021-08-29T05:54:50Z,lesswrong,, 184685,https://www.lesswrong.com/posts/TkgZWZKXgcCLc3G55/which-anaesthetic-to-choose,Which Anaesthetic To Choose?,['dadadarren'],2023-10-14T14:55:43Z,lesswrong,, 184696,https://www.lesswrong.com/posts/u3vfqmShFrMTA6qvA/a-mathematical-model-for-simulators,A Mathematical Model for Simulators,['marc/er'],2023-10-02T06:46:32Z,lesswrong,, 184711,https://www.lesswrong.com/posts/9fbr7axHenRAL5Gkm/arena-2-0-impact-report,ARENA 2.0 - Impact Report,['TheMcDouglas'],2023-09-26T17:13:20Z,lesswrong,, 184738,https://www.lesswrong.com/posts/LYvPrrjAy5qXmP8fM/what-is-the-solution-to-the-alignment-problem,What is the solution to the Alignment problem?,['Algon'],2022-04-30T23:19:07Z,lesswrong,, 184747,https://www.lesswrong.com/posts/Q8MKRDHoDWBtpu8bN/defining-optimization-in-a-deeper-way-part-2,Defining Optimization in a Deeper Way Part 2,['Jemist'],2022-07-11T20:29:30Z,lesswrong,, 184762,https://www.lesswrong.com/posts/8FRzErffqEW9gDCCW/against-the-linear-utility-hypothesis-and-the-leverage,Against the Linear Utility Hypothesis and the Leverage Penalty,['AlexMennen'],2017-12-14T18:38:53Z,lesswrong,, 184783,https://www.lesswrong.com/posts/3ahqzpKvtqkom63cx/game-theory-without-argmax-part-1,Game Theory without Argmax [Part 1],['Cleo Nardo'],2023-11-11T15:59:47Z,lesswrong,, 184811,https://www.lesswrong.com/posts/T4KZ62LJsxDkMf4nF/a-casual-intro-to-ai-doom-and-alignment-1,a casual intro to AI doom and alignment,['Tamsin Leake'],2022-11-01T16:38:31Z,lesswrong,, 184837,https://www.lesswrong.com/posts/rr6zRX7LTfsnjHvPL/on-the-current-status-of-ai-dating,On The Current Status Of AI Dating,['Nikita Brancatisano'],2023-02-07T20:00:47Z,lesswrong,, 184858,https://www.lesswrong.com/posts/oQMZzr4jzzksdNdMe/brain-centredness-and-mind-uploading,Brain-centredness and mind uploading,['gedymin'],2015-01-02T12:23:35Z,lesswrong,, 
184876,https://www.lesswrong.com/posts/gDrSf2ccJNbbTPuG9/maximal-lotteries-for-value-learning,Maximal lotteries for value learning,['ViktoriaMalyasova'],2022-10-16T23:44:13Z,lesswrong,, 184911,https://www.lesswrong.com/posts/ExT9itYFrWztXLnsc/what-s-the-most-impressive-thing-that-gpt-4-could-plausibly,What's the Most Impressive Thing That GPT-4 Could Plausibly Do?,['bayesed'],2022-08-26T15:34:52Z,lesswrong,, 184920,https://www.lesswrong.com/posts/hN5pvxEYPnWR4rd4G/russian-parliamentarian-let-s-ban-personal-computers-and-the,Russian parliamentarian: let's ban personal computers and the Internet,['RomanS'],2023-07-25T17:30:21Z,lesswrong,, 184933,https://www.lesswrong.com/posts/kmWrwtGE9B9hpbgRT/a-search-for-more-chatgpt-gpt-3-5-gpt-4-unspeakable-glitch,"A Search for More ChatGPT / GPT-3.5 / GPT-4 ""Unspeakable"" Glitch Tokens",['Martin Fell'],2023-05-09T14:36:44Z,lesswrong,, 184943,https://www.lesswrong.com/posts/DZzk8tLqbSCN5qe5M/regulate-or-compete-the-china-factor-in-u-s-ai-policy-nair-2,Regulate or Compete? The China Factor in U.S. AI Policy (NAIR #2),['charles_m'],2023-05-05T17:43:42Z,lesswrong,, 184966,https://www.lesswrong.com/posts/9RCoE7jmmvGd5Zsh2/the-lifespan-dilemma,The Lifespan Dilemma,['Eliezer Yudkowsky'],2009-09-10T18:45:24Z,lesswrong,, 184977,https://www.lesswrong.com/posts/t5AfR3LBb6syaxcXn/some-miscellaneous-thoughts-on-chatgpt-stories-and,"Some miscellaneous thoughts on ChatGPT, stories, and mechanical interpretability",['Bill Benzon'],2023-02-04T19:35:28Z,lesswrong,, 184989,https://www.lesswrong.com/posts/4iFSxvddsZCzBWtCo/the-hard-intelligence-hypothesis-and-its-bearing-on,The Hard Intelligence Hypothesis and Its Bearing on Succession Induced Foom,['DragonGod'],2022-05-31T19:04:41Z,lesswrong,, 185002,https://www.lesswrong.com/posts/6ssARXJut22DShdE2/is-it-possible-to-prevent-the-torture-of-ems,Is it possible to prevent the torture of ems?,['NancyLebovitz'],2011-06-29T07:42:12Z,lesswrong,, 185012,https://www.lesswrong.com/posts/DKtWikjcdApRj3rWr/paper-understanding-and-controlling-a-maze-solving-policy,Paper: Understanding and Controlling a Maze-Solving Policy Network,"['TurnTrout', 'Ulisse Mini', 'peligrietzer', 'mrinank_sharma', 'Austin Meek', 'Monte M', 'lisathiergart']",2023-10-13T01:38:09Z,lesswrong,, 185022,https://www.lesswrong.com/posts/6untaSPpsocmkS7Z3/ways-i-expect-ai-regulation-to-increase-extinction-risk,Ways I Expect AI Regulation To Increase Extinction Risk,['1a3orn'],2023-07-04T17:32:48Z,lesswrong,, 185044,https://www.lesswrong.com/posts/dYnHLWMXCYdm9xu5j/simulator-framing-and-confusions-about-llms,'simulator' framing and confusions about LLMs,['Beth Barnes'],2022-12-31T23:38:57Z,lesswrong,, 185071,https://www.lesswrong.com/posts/omDu7vNy3YyKXsvCd/taboo-p-doom,Taboo P(doom),['NathanBarnard'],2023-02-03T10:37:01Z,lesswrong,, 185083,https://www.lesswrong.com/posts/Tt83WGrZn6QGcnf2B/against-a-general-factor-of-doom,Against a General Factor of Doom,['Jeffrey Heninger'],2022-11-23T16:50:04Z,lesswrong,, 185097,https://www.lesswrong.com/posts/4MLBK7iCW3vYd93Mn/a-closer-look-at-chess-scalings-into-the-past,A closer look at chess scalings (into the past),['hippke'],2021-07-15T08:13:35Z,lesswrong,, 185107,https://www.lesswrong.com/posts/2rh2ikhTXe5KR4yeN/fai-and-the-information-theory-of-pleasure,FAI and the Information Theory of Pleasure,['johnsonmx'],2015-09-08T21:16:03Z,lesswrong,, 185136,https://www.lesswrong.com/posts/AdGo5BRCzzsdDGM6H/contra-strong-coherence,"Contra ""Strong Coherence""",['DragonGod'],2023-03-04T20:05:28Z,lesswrong,, 
185150,https://www.lesswrong.com/posts/FgXjuS4R9sRxbzE5w/medical-image-registration-the-obscure-field-where-deep,Medical Image Registration: The obscure field where Deep Mesaoptimizers are already at the top of the benchmarks. (post + colab notebook),['Hastings'],2023-01-30T22:46:31Z,lesswrong,, 185169,https://www.lesswrong.com/posts/dWAEF4jbdvyJXuQzh/clippy-the-friendly-paperclipper,"Clippy, the friendly paperclipper",['Seth Herd'],2023-03-02T00:02:56Z,lesswrong,, 185180,https://www.lesswrong.com/posts/eaCcc7AhQj4EoHFzH/is-technical-ai-alignment-research-a-net-positive,Is technical AI alignment research a net positive?,['cranberry_bear'],2022-04-12T13:07:56Z,lesswrong,, 185204,https://www.lesswrong.com/posts/aBDsJhkMnFcos3HKC/what-are-brains,What are brains?,['Valentine'],2023-06-10T14:46:47Z,lesswrong,, 185221,https://www.lesswrong.com/posts/76etTtAiKtZGGzkmi/video-and-transcript-of-presentation-on-existential-risk,Video and Transcript of Presentation on Existential Risk from Power-Seeking AI,['Joe Carlsmith'],2022-05-08T03:50:13Z,lesswrong,, 185246,https://www.lesswrong.com/posts/mz3hwS4c9bc9EHAm9/replacing-karma-with-good-heart-tokens-worth-usd1,Replacing Karma with Good Heart Tokens (Worth $1!),"['Ben Pace', 'habryka']",2022-04-01T09:31:34Z,lesswrong,, 185265,https://www.lesswrong.com/posts/kWP5vcW4FSpM3is3J/launching-the-forecasting-ai-progress-tournament,Launching the Forecasting AI Progress Tournament,['Tamay'],2020-12-07T14:08:40Z,lesswrong,, 185275,https://www.lesswrong.com/posts/LdEMkP2Ajn4SvhHwd/gpt-learning-from-smarter-texts,GPT learning from smarter texts?,['Viliam'],2023-01-08T22:23:26Z,lesswrong,, 185286,https://www.lesswrong.com/posts/bbjeShykGbTwFsiMu/why-don-t-governments-seem-to-mind-that-companies-are,Why don't governments seem to mind that companies are explicitly trying to make AGIs?,['ozziegooen'],2021-12-26T01:58:20Z,lesswrong,, 185297,https://www.lesswrong.com/posts/8C4hnqZ62gSjszHuT/the-friendly-ai-game,The Friendly AI Game,['bentarm'],2011-03-15T16:45:14Z,lesswrong,, 185306,https://www.lesswrong.com/posts/Amcb6uokHAmDTRooC/why-i-m-not-worried-about-imminent-doom,Why I'm not worried about imminent doom,['Ariel Kwiatkowski'],2023-04-10T15:31:03Z,lesswrong,, 185321,https://www.lesswrong.com/posts/FP8T6rdZ3ohXxJRto/superintelligence-20-the-value-loading-problem,Superintelligence 20: The value-loading problem,['KatjaGrace'],2015-01-27T02:00:19Z,lesswrong,, 185352,https://www.lesswrong.com/posts/XddMs9kSGtm6L8522/a-taxonomy-of-oracle-ais,A taxonomy of Oracle AIs,['lukeprog'],2012-03-08T23:14:16Z,lesswrong,, 185376,https://www.lesswrong.com/posts/nqCqa4s44tXkwdrLd/how-easy-is-it-to-supervise-processes-vs-outcomes,How easy is it to supervise processes vs outcomes?,['Noosphere89'],2022-10-18T17:48:24Z,lesswrong,, 185386,https://www.lesswrong.com/posts/2D9s6kpegDQtrueBE/multiple-worlds-one-universal-wave-function,"Multiple Worlds, One Universal Wave Function",['evhub'],2020-11-04T22:28:23Z,lesswrong,, 185408,https://www.lesswrong.com/posts/76bhSfCPFZki2Woa8/asot-thoughts-on-gpt-n,[ASoT] Thoughts on GPT-N,['Ulisse Mini'],2022-11-08T07:14:38Z,lesswrong,, 185427,https://www.lesswrong.com/posts/oBTkthd7h8sDpkiu2/analysis-us-restricts-gpu-sales-to-china,Analysis: US restricts GPU sales to China,['aogara'],2022-10-07T18:38:07Z,lesswrong,, 185458,https://www.lesswrong.com/posts/fmnALAQprRBLFvxDq/activation-additions-in-a-simple-mnist-network,Activation additions in a simple MNIST network,['Garrett Baker'],2023-05-18T02:49:45Z,lesswrong,, 
185477,https://www.lesswrong.com/posts/dDnszTTNLznAmrdFJ/my-current-uncertainties-regarding-ai-alignment-and-the-end,"My current uncertainties regarding AI, alignment, and the end of the world",['dominicq'],2021-11-14T14:08:51Z,lesswrong,, 185498,https://www.lesswrong.com/posts/acaPdSfwNiG9igJ7u/what-we-talk-about-when-we-talk-about-maximising-utility,What we talk about when we talk about maximising utility,['Richard_Ngo'],2018-02-24T22:33:28Z,lesswrong,, 185514,https://www.lesswrong.com/posts/QgsH9yWBvFtuZDsgN/question-3-control-proposals-for-minimizing-bad-outcomes,Question 3: Control proposals for minimizing bad outcomes,['Cameron Berg'],2022-02-12T19:13:48Z,lesswrong,, 185534,https://www.lesswrong.com/posts/q9xHFf8duqbc45YvT/logical-uncertainty-and-mathematical-uncertainty,Logical uncertainty and Mathematical uncertainty,['AlexMennen'],2018-06-26T01:08:21Z,lesswrong,, 185549,https://www.lesswrong.com/posts/2gHEfJfuP5ToQMv2E/ai-rights-in-your-view-what-would-be-required-for-an-agi-to,"AI Rights: In your view, what would be required for an AGI to gain rights and protections from the various Governments of the World?",['Super AGI'],2023-06-09T01:24:18Z,lesswrong,, 185563,https://www.lesswrong.com/posts/cLtdcxu9E4noRSons/part-1-amplifying-generalist-research-via-forecasting-models,[Part 1] Amplifying generalist research via forecasting – Models of impact and challenges,"['jacobjacob', 'ozziegooen', 'Elizabeth', 'NunoSempere', 'bgold']",2019-12-19T15:50:33Z,lesswrong,, 185593,https://www.lesswrong.com/posts/NCb28Xdv7xDajtqtS/engelbart-insufficiently-recursive,Engelbart: Insufficiently Recursive,['Eliezer Yudkowsky'],2008-11-26T08:31:09Z,lesswrong,, 185609,https://www.lesswrong.com/posts/RQpNHSiWaXTvDxt6R/coherent-decisions-imply-consistent-utilities,Coherent decisions imply consistent utilities,['Eliezer Yudkowsky'],2019-05-12T21:33:58Z,lesswrong,, 185625,https://www.lesswrong.com/posts/eskuEKHNAHGA8ZJcu/super-luigi-luigi-luigi-waluigi,Super-Luigi = Luigi + (Luigi - Waluigi),['Alexei'],2023-03-17T15:27:29Z,lesswrong,, 185639,https://www.lesswrong.com/posts/727sAH7RWsxgg93Xz/why-do-ai-researchers-rate-the-probability-of-doom-so-low,Why Do AI researchers Rate the Probability of Doom So Low?,['Aorou'],2022-09-24T02:33:19Z,lesswrong,, 185650,https://www.lesswrong.com/posts/ftf4HyNrsie8QdJu8/whole-bird-emulation-requires-quantum-mechanics,Whole Bird Emulation requires Quantum Mechanics,['Jeffrey Heninger'],2023-02-14T23:50:05Z,lesswrong,, 185661,https://www.lesswrong.com/posts/q4uuwa7DXQdBeRGHP/logical-probability-of-goldbach-s-conjecture-provable-rule,Logical Probability of Goldbach’s Conjecture: Provable Rule or Coincidence?,['avturchin'],2022-12-29T13:37:45Z,lesswrong,, 185674,https://www.lesswrong.com/posts/ByGnDpfrtAtoFiMwp/survey-what-de-motivates-you-about-ai-risk,Survey: What (de)motivates you about AI risk?,['Daniel_Friedrich'],2022-08-03T19:17:36Z,lesswrong,, 185683,https://www.lesswrong.com/posts/qZHn4rDBXdkPKnwNH/lone-genius-bias-and-returns-on-additional-researchers,Lone Genius Bias and Returns on Additional Researchers,['ChrisHallquist'],2013-11-01T00:38:41Z,lesswrong,, 185695,https://www.lesswrong.com/posts/SxW9sCis6nMdeDpvr/aci-3-the-origin-of-goals-and-utility,ACI #3: The Origin of Goals and Utility,['Akira Pyinya'],2023-05-17T20:47:49Z,lesswrong,, 185710,https://www.lesswrong.com/posts/yawkDNfjR2nnzycMZ/new-tool-the-residual-stream-viewer,New Tool: the Residual Stream Viewer,['AdamYedidia'],2023-10-01T00:49:52Z,lesswrong,, 
185721,https://www.lesswrong.com/posts/qo2hqf2ha7rfgCdjY/a-bridge-to-dath-ilan-improved-governance-on-the-critical,A bridge to Dath Ilan? Improved governance on the critical path to AI alignment.,['Jackson Wagner'],2022-05-18T15:51:10Z,lesswrong,, 185750,https://www.lesswrong.com/posts/nxcRxWtiNiTZX29Ek/we-need-to-know-about-continual-learning,We Need To Know About Continual Learning,['michael_mjd'],2023-04-22T17:08:07Z,lesswrong,, 185771,https://www.lesswrong.com/posts/3wMAppMNRQvAbhwcj/ai-safety-without-alignment-how-humans-can-win-against-ai,AI Safety without Alignment: How humans can WIN against AI,['vicchain'],2023-06-29T17:53:03Z,lesswrong,, 185786,https://www.lesswrong.com/posts/xdjA6YtE7QBsLYQ3i/reinforcement-learning-a-non-standard-introduction-part-2,Reinforcement Learning: A Non-Standard Introduction (Part 2),['royf'],2012-08-02T08:17:09Z,lesswrong,, 185802,https://www.lesswrong.com/posts/JjLuRtPn6B9n45Jga/a-bite-sized-introduction-to-elk,A Bite Sized Introduction to ELK,['Luk27182'],2022-09-17T00:28:02Z,lesswrong,, 185825,https://www.lesswrong.com/posts/PpdFFtxsPQK5dk4EB/chatgpt-tells-stories-and-a-note-about-reverse-engineering-a,"ChatGPT tells stories, and a note about reverse engineering: A Working Paper",['Bill Benzon'],2023-03-03T15:12:06Z,lesswrong,, 185835,https://www.lesswrong.com/posts/g7rLyjg67iopg9zLD/is-the-valley-of-confused-abstractions-real,"Is the ""Valley of Confused Abstractions"" real?",['jacquesthibs'],2022-12-05T13:36:22Z,lesswrong,, 185845,https://www.lesswrong.com/posts/jbi9kxhb4iCQyWG9Y/explaining-solidgoldmagikarp-by-looking-at-it-from-random,Explaining SolidGoldMagikarp by looking at it from random directions,['Robert_AIZI'],2023-02-14T14:54:21Z,lesswrong,, 185862,https://www.lesswrong.com/posts/xhzbgjYYMmmErENB6/versions-of-aixi-can-be-arbitrarily-stupid,Versions of AIXI can be arbitrarily stupid,['Stuart_Armstrong'],2015-08-10T13:23:35Z,lesswrong,, 185880,https://www.lesswrong.com/posts/AvJeJw52NL9y7RJDJ/against-discount-rates,Against Discount Rates,['Eliezer Yudkowsky'],2008-01-21T10:00:00Z,lesswrong,, 185903,https://www.lesswrong.com/posts/Gj4GPdLntgJX9kodL/what-causes-a-decision-theory-to-be-used,What causes a decision theory to be used?,['Dagon'],2023-09-25T16:33:36Z,lesswrong,, 185912,https://www.lesswrong.com/posts/ekBsq6mATCvEvqmkC/gpt-4-is-bad-at-strategic-thinking,GPT-4 is bad at strategic thinking,['Christopher King'],2023-03-27T15:11:47Z,lesswrong,, 185925,https://www.lesswrong.com/posts/qy5dF7bQcFjSKaW58/bad-at-arithmetic-promising-at-math,"Bad at Arithmetic, Promising at Math",['cohenmacaulay'],2022-12-18T05:40:37Z,lesswrong,, 185951,https://www.lesswrong.com/posts/j2Yi4WJgNWYKawweY/is-infra-bayesianism-applicable-to-value-learning,Is Infra-Bayesianism Applicable to Value Learning?,['RogerDearnaley'],2023-05-11T08:17:55Z,lesswrong,, 185965,https://www.lesswrong.com/posts/wByPb6syhxvqPCutu/8-examples-informing-my-pessimism-on-uploading-without,8 examples informing my pessimism on uploading without reverse engineering,['Steven Byrnes'],2023-11-03T20:03:50Z,lesswrong,, 185989,https://www.lesswrong.com/posts/eD34hTMp8uv3ifSjg/consequentialists-one-way-pattern-traps,Consequentialists: One-Way Pattern Traps,['David Udell'],2023-01-16T20:48:57Z,lesswrong,, 186008,https://www.lesswrong.com/posts/cC2mEzNhzLHPKbru7/logical-uncertainty-reading-list,Logical uncertainty reading list,['alex_zag_al'],2014-10-18T19:16:16Z,lesswrong,, 186027,https://www.lesswrong.com/posts/YpdTSt4kRnuSkn63c/the-prediction-problem-a-variant-on-newcomb-s,The Prediction Problem: A Variant on Newcomb's,['Chris_Leong'],2018-07-04T07:40:22Z,lesswrong,, 
186042,https://www.lesswrong.com/posts/z4dna4cbvasn6BepA/mauhn-releases-ai-safety-documentation,Mauhn Releases AI Safety Documentation,['Berg Severens'],2021-07-03T21:23:01Z,lesswrong,, 186052,https://www.lesswrong.com/posts/e936w9JzDP4WqQjcc/goal-directedness-relativising-complexity,Goal-directedness: relativising complexity,['Morgan_Rogers'],2022-08-18T09:48:41Z,lesswrong,, 186065,https://www.lesswrong.com/posts/C53REQuzSk3TTfqgT/my-reservations-about-discovering-latent-knowledge-burns-ye,"My Reservations about Discovering Latent Knowledge (Burns, Ye, et al)",['Robert_AIZI'],2022-12-27T17:27:02Z,lesswrong,, 186083,https://www.lesswrong.com/posts/ehLR9HeXB5TMp9Y4v/air-gapping-evaluation-and-support,Air-gapping evaluation and support,['Ryan Kidd'],2022-12-26T22:52:30Z,lesswrong,, 186095,https://www.lesswrong.com/posts/iFBdEqEogtXcjCPBB/the-compleat-cybornaut,The Compleat Cybornaut,"['ukc10014', 'Jozdien', 'NicholasKees']",2023-05-19T08:44:38Z,lesswrong,, 186132,https://www.lesswrong.com/posts/vyxwgQnWPdhpWQ9ZN/vlm-rm-specifying-rewards-with-natural-language,VLM-RM: Specifying Rewards with Natural Language,"['ChengCheng', 'David Lindner', 'Ethan Perez']",2023-10-23T14:11:34Z,lesswrong,, 186159,https://www.lesswrong.com/posts/KQ6fGiPeMnzzC6p9q/race-to-the-top-benchmarks-for-ai-safety,Race to the Top: Benchmarks for AI Safety,['Isabella Duan'],2022-12-04T18:48:51Z,lesswrong,, 186179,https://www.lesswrong.com/posts/tuZT5q3Jf87FF8akh/fundamental-uncertainty-chapter-4-why-don-t-we-do-what-we,Fundamental Uncertainty: Chapter 4 - Why don't we do what we think we should?,['Gordon Seidoh Worley'],2022-08-29T19:25:17Z,lesswrong,, 186198,https://www.lesswrong.com/posts/D7EcMhL26zFNbJ3ED/optimization,Optimization,['Eliezer Yudkowsky'],2008-09-13T16:00:00Z,lesswrong,, 186213,https://www.lesswrong.com/posts/2PucFqdRyEvaHb4Hn/an-adversarial-example-for-direct-logit-attribution-memory,An adversarial example for Direct Logit Attribution: memory management in gelu-4l,"['Can Rager', 'Yeu-Tong Lau', 'James Dao', 'Jett']",2023-08-30T17:36:59Z,lesswrong,, 186234,https://www.lesswrong.com/posts/ZWRYt5FXj89AdyNf3/trajectories-to-2036,Trajectories to 2036,['ukc10014'],2022-10-20T20:23:34Z,lesswrong,, 186266,https://www.lesswrong.com/posts/rp4CiJtttvwFNHkhL/searching-for-modularity-in-large-language-models,Searching for Modularity in Large Language Models,"['NickyP', 'Stephen Fowler']",2022-09-08T02:25:32Z,lesswrong,, 186284,https://www.lesswrong.com/posts/a3fvua59H6nktCtxw/governing-high-impact-ai-systems-understanding-canada-s,"Governing High-Impact AI Systems: Understanding Canada’s Proposed AI Bill. April 15, Carleton University, Ottawa",['Liav Koren'],2023-03-28T17:48:12Z,lesswrong,, 186293,https://www.lesswrong.com/posts/usCm5ju2tadbwqdTy/goals-of-model-vs-goals-of-simulacra,Goals of model vs. goals of simulacra?,['dr_s'],2023-04-12T13:03:00Z,lesswrong,, 
186309,https://www.lesswrong.com/posts/y5eapqjYYku8Wt9wn/the-abruptness-of-nuclear-weapons,The abruptness of nuclear weapons,['paulfchristiano'],2018-02-25T17:40:36Z,lesswrong,, 186318,https://www.lesswrong.com/posts/gbNLvkGuGcmSFFpSE/escaping-the-loebian-obstacle,Escaping the Löbian Obstacle,['Morgan_Rogers'],2021-06-16T00:02:03Z,lesswrong,, 186337,https://www.lesswrong.com/posts/tjc8jbCvsgfpdWXvz/agi-and-war,AGI & War,['Calecute'],2023-06-29T22:20:58Z,lesswrong,, 186348,https://www.lesswrong.com/posts/i6fKszWY6gLZSX2Ey/fake-optimization-criteria,Fake Optimization Criteria,['Eliezer Yudkowsky'],2007-11-10T00:10:51Z,lesswrong,, 186360,https://www.lesswrong.com/posts/QPqFJ8oEEuzxqsatw/challenge-proposal-smallest-possible-self-hardening-backdoor-1,Challenge proposal: smallest possible self-hardening backdoor for RLHF,['Christopher King'],2023-06-29T16:57:00Z,lesswrong,, 186375,https://www.lesswrong.com/posts/CJw2tNHaEimx6nwNy/ai-forecasting-one-year-in,AI Forecasting: One Year In,['jsteinhardt'],2022-07-04T05:10:18Z,lesswrong,, 186399,https://www.lesswrong.com/posts/cbQih72wbKkrSX7yx/policy-discussions-follow-strong-contextualizing-norms,Policy discussions follow strong contextualizing norms,['Richard_Ngo'],2023-04-01T23:51:37Z,lesswrong,, 186409,https://www.lesswrong.com/posts/68E8emGRvyJPqanvZ/a-safer-oracle-setup,A Safer Oracle Setup?,['Ofer'],2018-02-09T12:16:12Z,lesswrong,, 186432,https://www.lesswrong.com/posts/ejxi9W9nRqGY7BzYY/openai-releases-functional-dota-5v5-bot-aims-to-beat-world,"OpenAI releases functional Dota 5v5 bot, aims to beat world champions by August",['habryka'],2018-06-26T22:40:35Z,lesswrong,, 186442,https://www.lesswrong.com/posts/XWDnqvcTCjpjiYPk5/can-aixi-be-trained-to-do-anything-a-human-can,Can AIXI be trained to do anything a human can?,['Stuart_Armstrong'],2014-10-20T13:12:23Z,lesswrong,, 186454,https://www.lesswrong.com/posts/LyJAFBuuEfd4kxgsw/agentic-mess-a-failure-story,Agentic Mess (A Failure Story),"['Karl von Wendt', 'Sofia Bharadia', 'PeterDrotos', 'Artem Korotkov', 'mespa', 'mruwnik']",2023-06-06T13:09:19Z,lesswrong,, 186482,https://www.lesswrong.com/posts/AL6DRuE8s4yLn3yBo/robin-hanson-s-latest-ai-risk-position-statement,Robin Hanson’s latest AI risk position statement,['Liron'],2023-03-03T14:25:05Z,lesswrong,, 186491,https://www.lesswrong.com/posts/KS7iao3MvnqT2PLGm/manifold-predicted-the-ai-extinction-statement-and-cais,Manifold Predicted the AI Extinction Statement and CAIS Wanted it Deleted,['David Chee'],2023-06-12T15:54:52Z,lesswrong,, 186511,https://www.lesswrong.com/posts/ftEvHLAXia8Cm9W5a/data-and-tokens-a-30-year-old-human-trains-on,"Data and ""tokens"" a 30 year old human ""trains"" on",['Jose Miguel Cruz y Celis'],2023-05-23T05:34:31Z,lesswrong,, 186525,https://www.lesswrong.com/posts/BD6G9wzRRt3fxckNC/superintelligence-10-instrumentally-convergent-goals,Superintelligence 10: Instrumentally convergent goals,['KatjaGrace'],2014-11-18T02:00:26Z,lesswrong,, 186540,https://www.lesswrong.com/posts/evtJJeghGM5aAM5W7/will-research-in-ai-risk-jinx-it-consequences-of-training-ai,Will research in AI risk jinx it? Consequences of training AI on AI risk arguments,['Yann Dubois'],2022-12-19T22:42:31Z,lesswrong,, 
186554,https://www.lesswrong.com/posts/oFGPYfPqdbNacfJ5H/an-ml-interpretation-of-shard-theory,An ML interpretation of Shard Theory,['beren'],2023-01-03T20:30:29Z,lesswrong,, 186568,https://www.lesswrong.com/posts/ubdp8qAL8Gfki2pYo/naive-hypotheses-on-ai-alignment,Naive Hypotheses on AI Alignment,['Shoshannah Tekofsky'],2022-07-02T19:03:49Z,lesswrong,, 186595,https://www.lesswrong.com/posts/EAwe7smpmFQi2653G/my-assessment-of-the-chinese-ai-safety-community,My Assessment of the Chinese AI Safety Community,['Lao Mein'],2023-04-25T04:21:19Z,lesswrong,, 186614,https://www.lesswrong.com/posts/a3FuA7fGgpTQ7mX3W/is-gpt3-a-good-rationalist-instructgpt3-2-2,Is GPT3 a Good Rationalist? - InstructGPT3 [2/2],['simeon_c'],2022-04-07T13:46:58Z,lesswrong,, 186650,https://www.lesswrong.com/posts/Nex8EgEJPsn7dvoQB/the-ai-safety-game-updated,The AI Safety Game (UPDATED),['Daniel Kokotajlo'],2020-12-05T10:27:06Z,lesswrong,, 186661,https://www.lesswrong.com/posts/rb7SjbTZjjiukqiHk/i-vouch-for-miri,I Vouch For MIRI,['Zvi'],2017-12-17T17:50:00Z,lesswrong,, 186678,https://www.lesswrong.com/posts/HCZ6feW2EGXuiwuid/advice-for-entering-ai-safety-research,Advice for Entering AI Safety Research,['scasper'],2023-06-02T20:46:13Z,lesswrong,, 186696,https://www.lesswrong.com/posts/jqzkA7s3oriAfsbNz/what-does-pulling-the-fire-alarm-look-like,What does pulling the fire alarm look like?,['nem'],2023-03-20T21:45:23Z,lesswrong,, 186708,https://www.lesswrong.com/posts/pRD5u2omuDoMTuH39/multi-agent-inverse-reinforcement-learning-suboptimal,Multi-Agent Inverse Reinforcement Learning: Suboptimal Demonstrations and Alternative Solution Concepts,['sage_bergerson'],2021-09-07T16:11:59Z,lesswrong,, 186727,https://www.lesswrong.com/posts/87aqBTkhTgfzhu5po/ai-race-considerations-in-a-report-by-the-u-s-house,AI race considerations in a report by the U.S. House Committee on Armed Services,['NunoSempere'],2020-10-04T12:11:36Z,lesswrong,, 
186753,https://www.lesswrong.com/posts/2XLFyhuKP7m4xpgea/will-values-and-competition-decouple-1,Will Values and Competition Decouple?,['interstice'],2022-09-28T16:27:23Z,lesswrong,, 186776,https://www.lesswrong.com/posts/JnDEAmNhSpBRpjD8L/resolutions-to-the-challenge-of-resolving-forecasts,Resolutions to the Challenge of Resolving Forecasts,['Davidmanheim'],2021-03-11T19:08:16Z,lesswrong,, 186807,https://www.lesswrong.com/posts/WcAZW2Hg27Fu5F22q/optimization-happens-inside-the-mind-not-in-the-world,"Optimization happens inside the mind, not in the world",['azsantosk'],2023-06-03T21:36:44Z,lesswrong,, 186829,https://www.lesswrong.com/posts/tdcLpkydLwcKwbKre/understanding-selection-theorems,Understanding Selection Theorems,['adamk'],2022-05-28T01:49:43Z,lesswrong,, 186845,https://www.lesswrong.com/posts/LtS3tPD56MaKuSLie/when-ai-solves-a-game-focus-on-the-game-s-mechanics-not-its,"When AI solves a game, focus on the game's mechanics, not its theme.",['Cleo Nardo'],2022-11-23T19:16:07Z,lesswrong,, 186857,https://www.lesswrong.com/posts/jkY6QdCfAXHJk3kea/the-petertodd-phenomenon,The ‘ petertodd’ phenomenon,['mwatkins'],2023-04-15T00:59:47Z,lesswrong,, 186875,https://www.lesswrong.com/posts/KczLGBLisMMt9AKiF/some-thought-experiments-on-digital-consciousness,Some thought experiments on digital consciousness,['rorygreig'],2023-04-01T11:45:13Z,lesswrong,, 186885,https://www.lesswrong.com/posts/KB5CeQYdq5oeah3nd/who-owns-openai-s-new-language-model,Who owns OpenAI's new language model?,['ioannes'],2019-02-14T17:51:26Z,lesswrong,, 186894,https://www.lesswrong.com/posts/HT8jwNJ6vH7p9gaTT/decision-theory-paradox-pd-with-three-implies-chaos,Decision Theory Paradox: PD with Three Implies Chaos?,['orthonormal'],2011-08-27T19:22:15Z,lesswrong,, 186906,https://www.lesswrong.com/posts/hKS4NdcZqDnosbKhY/synthetic-media-and-the-future-of-film,Synthetic Media and The Future of Film,['ifalpha'],2022-05-24T05:54:22Z,lesswrong,, 186923,https://www.lesswrong.com/posts/G3aG5eLGYE3cnhEZq/faq-what-the-heck-is-goal-agnosticism,FAQ: What the heck is goal agnosticism?,['porby'],2023-10-08T19:11:50Z,lesswrong,, 186949,https://www.lesswrong.com/posts/TjKmAu6xjezwNhTBK/should-ai-writers-be-prohibited-in-education,Should AI writers be prohibited in education?,['Eleni Angelou'],2023-01-17T00:42:57Z,lesswrong,, 186965,https://www.lesswrong.com/posts/pJ9qWeBRRuvPvnoNK/announcing-athena-women-in-ai-alignment-research,Announcing Athena - Women in AI Alignment Research,['Claire Short'],2023-11-07T21:46:42Z,lesswrong,, 186975,https://www.lesswrong.com/posts/7FrHeyxQpb3pvr9Pn/paul-christiano-on-dwarkesh-podcast,Paul Christiano on Dwarkesh Podcast,['ESRogs'],2023-11-03T22:13:21Z,lesswrong,, 186998,https://www.lesswrong.com/posts/BzFhRCSysACaCZSAu/when-can-a-mimic-surprise-you-why-generative-models-handle,When can a mimic surprise you? Why generative models handle seemingly ill-posed problems,['David Johnston'],2022-11-05T13:19:37Z,lesswrong,, 
187016,https://www.lesswrong.com/posts/bqFu8fxokJSPjidJo/ai-winter-is-coming-how-to-profit-from-it,AI Winter Is Coming - How to profit from it?,['anonymous'],2020-12-05T20:23:51Z,lesswrong,, 187026,https://www.lesswrong.com/posts/cM7sR7seBRwtxctGY/the-ai-governance-gaps-in-developing-countries,The AI governance gaps in developing countries,['nguyên'],2023-06-17T02:50:44Z,lesswrong,, 187050,https://www.lesswrong.com/posts/JBadX7rwdcRFzGuju/recursive-self-improvement,Recursive Self-Improvement,['Eliezer Yudkowsky'],2008-12-01T20:49:11Z,lesswrong,, 187072,https://www.lesswrong.com/posts/LMXk5SYD8bfpfZvBW/do-nothing-utility-function-3-years-later,"""Do Nothing"" utility function, 3½ years later?",['niplav'],2020-07-20T11:09:37Z,lesswrong,, 187087,https://www.lesswrong.com/posts/3kN79EuT27trGexsq/when-is-unaligned-ai-morally-valuable,When is unaligned AI morally valuable?,['paulfchristiano'],2018-05-25T01:57:56Z,lesswrong,, 187111,https://www.lesswrong.com/posts/7ZkQ5wGouiBeRMSdF/on-agent-foundations,On Agent Foundations,['Robert Kralisch'],2023-06-30T20:43:12Z,lesswrong,, 187122,https://www.lesswrong.com/posts/4DHqYtnhZgknpEDv4/hello-elua,"Hello, Elua.",['Tamsin Leake'],2023-02-23T05:19:07Z,lesswrong,, 187136,https://www.lesswrong.com/posts/vm7FRyPWGCqDHy6LF/dario-amodei-s-prepared-remarks-from-the-uk-ai-safety-summit,"Dario Amodei’s prepared remarks from the UK AI Safety Summit, on Anthropic’s Responsible Scaling Policy",['Zac Hatfield-Dodds'],2023-11-01T18:10:31Z,lesswrong,, 187168,https://www.lesswrong.com/posts/wKnwcjJGriTS9QxxL/dreams-of-friendliness,Dreams of Friendliness,['Eliezer Yudkowsky'],2008-08-31T01:20:52Z,lesswrong,, 187194,https://www.lesswrong.com/posts/hDuoHuBjxGue7fwhJ/trading-off-compute-in-training-and-inference-overview,Trading off compute in training and inference (Overview),['Pablo Villalobos'],2023-07-31T16:03:46Z,lesswrong,, 187213,https://www.lesswrong.com/posts/cjBxCtQ9nDsseGCxf/symbiotic-self-alignment-of-ais,Symbiotic self-alignment of AIs.,['Spiritus Dei'],2023-11-07T17:18:21Z,lesswrong,, 187227,https://www.lesswrong.com/posts/3u8oZEEayqqjjZ7Nw/current-ai-safety-roles-for-software-engineers,Current AI Safety Roles for Software Engineers,['ozziegooen'],2018-11-09T20:57:16Z,lesswrong,, 187246,https://www.lesswrong.com/posts/qARBe3jBodrdPeRE6/how-can-i-reduce-existential-risk-from-ai,How can I reduce existential risk from AI?,['lukeprog'],2012-11-13T21:56:13Z,lesswrong,, 187281,https://www.lesswrong.com/posts/R4otYvzrgvTcz4YkY/talk-to-me-about-your-summer-career-plans,Talk to me about your summer/career plans,['Akash'],2023-01-31T18:29:23Z,lesswrong,, 187292,https://www.lesswrong.com/posts/3ZqfSd7wMsaA2kgq9/search-engines-and-oracles,Search Engines and Oracles,['HalMorris'],2014-07-08T14:27:03Z,lesswrong,, 187303,https://www.lesswrong.com/posts/kfY2JegjuzLewWyZd/oracles-informers-and-controllers,"Oracles, Informers, and Controllers",['ozziegooen'],2021-05-25T14:16:22Z,lesswrong,, 187322,https://www.lesswrong.com/posts/S6Qcf5EgX5zAozTAa/the-paradox-of-expert-opinion,The Paradox of Expert Opinion,['Emrik'],2021-09-26T21:39:46Z,lesswrong,, 187340,https://www.lesswrong.com/posts/byMNKEXBn4RTcaaa6/immanuel-kant-and-the-decision-theory-app-store,Immanuel Kant and the Decision Theory App Store,['Daniel Kokotajlo'],2022-07-10T16:04:04Z,lesswrong,, 187351,https://www.lesswrong.com/posts/kKFTzraz9oyajkcEH/into-ai-safety-episode-0,Into AI Safety - Episode 0,['jacobhaimes'],2023-10-22T03:30:58Z,lesswrong,, 
187360,https://www.lesswrong.com/posts/KZbhXqCtcWA58D8nz/fundamental-uncertainty-chapter-1-how-can-we-know-what-s,Fundamental Uncertainty: Chapter 1 - How can we know what's true?,['Gordon Seidoh Worley'],2023-08-13T18:55:45Z,lesswrong,, 187374,https://www.lesswrong.com/posts/yndw9qQFsNkXdTu3K/cyberspace-administration-of-china-draft-of-regulation-for,"Cyberspace Administration of China: Draft of ""Regulation for Generative Artificial Intelligence Services"" is open for comments",['sanxiyn'],2023-04-11T09:32:13Z,lesswrong,, 187383,https://www.lesswrong.com/posts/vjDi4jPYgbcGSswv4/back-to-the-past-to-the-future,Back to the Past to the Future,['Prometheus'],2023-10-18T16:51:52Z,lesswrong,, 187393,https://www.lesswrong.com/posts/zgbZNwW7f3C89ZgGK/selfish-preferences-and-self-modification,Selfish preferences and self-modification,['Manfred'],2015-01-14T08:42:51Z,lesswrong,, 187403,https://www.lesswrong.com/posts/os7N7nJoezWKQnnuW/levels-of-ai-self-improvement,Levels of AI Self-Improvement,['avturchin'],2018-04-29T11:45:42Z,lesswrong,, 187434,https://www.lesswrong.com/posts/zaER5ziEprE7aNm6u/empathy-as-a-natural-consequence-of-learnt-reward-models,Empathy as a natural consequence of learnt reward models,['beren'],2023-02-04T15:35:24Z,lesswrong,, 187455,https://www.lesswrong.com/posts/HtEffpHcLxppLN6dL/explaining-inner-alignment-to-myself,Explaining inner alignment to myself,['Jeremy Gillen'],2022-05-24T23:10:56Z,lesswrong,, 187479,https://www.lesswrong.com/posts/cLGRGKDrhhNP6Jgub/scaling-and-sustaining-standards-a-case-study-on-the-basel,Scaling and Sustaining Standards: A Case Study on the Basel Accords,['Conrad K.'],2023-07-16T22:01:05Z,lesswrong,, 187514,https://www.lesswrong.com/posts/mBFqG3xjYazsPiZkH/more-on-the-linear-utility-hypothesis-and-the-leverage-prior,More on the Linear Utility Hypothesis and the Leverage Prior,['AlexMennen'],2018-02-26T23:53:36Z,lesswrong,, 187530,https://www.lesswrong.com/posts/YgHQ9Nez4S8277Wr2/google-may-be-trying-to-take-over-the-world,Google may be trying to take over the world,['anonymous'],2014-01-27T09:33:58Z,lesswrong,, 187540,https://www.lesswrong.com/posts/6pBPiEGqS8ncNq8x4/a-difficulty-in-the-concept-of-cev,A Difficulty in the Concept of CEV,['anonymous'],2013-03-27T01:20:36Z,lesswrong,, 187557,https://www.lesswrong.com/posts/nEy2JDjvqE6o5E6kx/if-there-was-a-millennium-equivalent-prize-for-ai-alignment,"If there was a millennium equivalent prize for AI alignment, what would the problems be?",['Yair Halberstadt'],2022-06-09T16:56:11Z,lesswrong,, 187567,https://www.lesswrong.com/posts/krNuoicpfyJajR8ng/would-you-join-the-society-of-the-free-and-easy,Would you join the Society of the Free & Easy?,['David Gross'],2019-07-10T01:15:27Z,lesswrong,, 187577,https://www.lesswrong.com/posts/6GdeWixgAuxfuSWf6/the-burden-of-knowing,The burden of knowing,['arisAlexis'],2023-02-28T18:40:25Z,lesswrong,, 187595,https://www.lesswrong.com/posts/hcWc76eCan3FF5XBk/frontier-model-forum,Frontier Model Forum,['Zach Stein-Perlman'],2023-07-26T14:30:02Z,lesswrong,, 187620,https://www.lesswrong.com/posts/oBpebs5j5ngs3EXr5/a-summary-of-anthropic-s-first-paper-3,A Summary Of Anthropic's First Paper,['Sam Ringer'],2021-12-30T00:48:15Z,lesswrong,, 187647,https://www.lesswrong.com/posts/D6bFssha8hKp8aFhe/risk-aversion-and-gpt-3,Risk aversion and GPT-3,['hatta_afiq'],2022-09-13T20:50:49Z,lesswrong,, 187657,https://www.lesswrong.com/posts/fsbcq9z7korjBTP8Z/understanding-strategic-deception-and-deceptive-alignment,Understanding strategic deception and deceptive alignment,"['Marius Hobbhahn', 'Mikita Balesni', 'Jérémy Scheurer', 'Dan Braun']",2023-09-25T16:27:47Z,lesswrong,, 
187675,https://www.lesswrong.com/posts/SvwuduvpsKtXkLnPF/the-overton-window-widens-examples-of-ai-risk-in-the-media,The Overton Window widens: Examples of AI risk in the media,['Akash'],2023-03-23T17:10:15Z,lesswrong,, 187697,https://www.lesswrong.com/posts/nu4wpKCo6AfJkkd4F/sydney-can-play-chess-and-kind-of-keep-track-of-the-board,Sydney can play chess and kind of keep track of the board state,['Erik Jenner'],2023-03-03T09:39:52Z,lesswrong,, 187720,https://www.lesswrong.com/posts/whHiGKJBGhgiHi7Ts/what-is-the-ground-reality-of-countries-taking-steps-to,What is the ground reality of countries taking steps to recalibrate AI development towards Alignment first?,['anonymous'],2023-01-29T13:26:40Z,lesswrong,, 187739,https://www.lesswrong.com/posts/vuNBKnRNorhygohd8/readability-is-mostly-a-waste-of-characters,Readability is mostly a waste of characters,['vlad.proex'],2023-04-21T22:05:35Z,lesswrong,, 187748,https://www.lesswrong.com/posts/MCygbQPKdGLbeePDx/chatgpt-an-error-occurred-if-this-issue-persists,"ChatGPT: ""An error occurred. If this issue persists...""",['Bill Benzon'],2022-12-07T15:41:40Z,lesswrong,, 187757,https://www.lesswrong.com/posts/NA5LYobKuFMKCvzMo/iso-name-of-problem,ISO: Name of Problem,['johnswentworth'],2018-07-24T17:15:07Z,lesswrong,, 187767,https://www.lesswrong.com/posts/bAcJxQeKJcBBnE7bc/irl-1-8-inverse-reinforcement-learning-and-the-problem-of,IRL 1/8: Inverse Reinforcement Learning and the problem of degeneracy,['RAISE'],2019-03-04T13:11:45Z,lesswrong,, 187776,https://www.lesswrong.com/posts/d3fgJb3RED258TciH/provably-honest-a-first-step,Provably Honest - A First Step,['Srijanak De'],2022-11-05T19:18:46Z,lesswrong,, 187787,https://www.lesswrong.com/posts/8NKu9WES7KeKRWEKK/why-all-the-fuss-about-recursive-self-improvement,Why all the fuss about recursive self-improvement?,['So8res'],2022-06-12T20:53:42Z,lesswrong,, 187801,https://www.lesswrong.com/posts/n3reR9jJ3qYrdnNbb/abstract-concepts-and-metalingual-definition-does-chatgpt,Abstract concepts and metalingual definition: Does ChatGPT understand justice and charity?,['Bill Benzon'],2022-12-16T21:01:05Z,lesswrong,, 187818,https://www.lesswrong.com/posts/h5CGM5qwivGk2f5T9/7-traps-that-we-think-new-alignment-researchers-often-fall,7 traps that (we think) new alignment researchers often fall into,"['Akash', 'Thomas Larsen']",2022-09-27T23:13:47Z,lesswrong,, 187857,https://www.lesswrong.com/posts/3kErRpEprB8iJvnNq/thoughts-on-ai-safety-camp,Thoughts on AI Safety Camp,['Charlie Steiner'],2022-05-13T07:16:56Z,lesswrong,, 187887,https://www.lesswrong.com/posts/N5sNpXLvak9BtHTX9/delberting-as-an-adversarial-strategy,DELBERTing as an Adversarial Strategy,['Matthew_Opitz'],2023-05-12T20:09:58Z,lesswrong,, 187904,https://www.lesswrong.com/posts/Z4tBreNCxnppoPLtd/gpts-ability-to-keep-a-secret-is-weirdly-prompt-dependent,GPTs' ability to keep a secret is weirdly prompt-dependent,"['Mateusz Bagiński', 'Filip Sondej', 'Marcel Windys']",2023-07-22T12:21:26Z,lesswrong,, 187925,https://www.lesswrong.com/posts/6ddcsdA2c2XpNpE5x/newcomb-s-problem-and-regret-of-rationality,Newcomb's Problem and Regret of Rationality,['Eliezer Yudkowsky'],2008-01-31T19:36:56Z,lesswrong,, 187940,https://www.lesswrong.com/posts/YYuB8w4nrfWmLzNob/thatcher-s-axiom,Thatcher's Axiom,['Edward P. Könings'],2023-01-24T22:35:01Z,lesswrong,, 
187950,https://www.lesswrong.com/posts/YC6ZjCQPPuKQb49TQ/are-short-timelines-actually-bad,Are short timelines actually bad?,['joshc'],2023-02-05T21:21:41Z,lesswrong,, 187973,https://www.lesswrong.com/posts/Tc2H9KbKRjuDJ3WSS/leaky-generalizations,Leaky Generalizations,['Eliezer Yudkowsky'],2007-11-22T21:16:11Z,lesswrong,, 187987,https://www.lesswrong.com/posts/EWeCmbMyDaTnD8Guc/4-ways-to-think-about-democratizing-ai-govai-linkpost,4 ways to think about democratizing AI [GovAI Linkpost],['Akash'],2023-02-13T18:06:41Z,lesswrong,, 188000,https://www.lesswrong.com/posts/wrkEnGrTTrM2mnmGa/retracted-it-s-time-for-ea-leadership-to-pull-the-short,[RETRACTED] It's time for EA leadership to pull the short-timelines fire alarm.,['Not Relevant'],2022-04-08T16:07:47Z,lesswrong,, 188018,https://www.lesswrong.com/posts/oSrfAYpGLXAeZYmvY/an-upcoming-us-supreme-court-case-may-impede-ai-governance,An upcoming US Supreme Court case may impede AI governance efforts,['NickGabs'],2023-07-16T23:51:26Z,lesswrong,, 188030,https://www.lesswrong.com/posts/qkAWySeomp3aoAedy/language-models-can-generate-superior-text-compared-to-their,Language models can generate superior text compared to their input,['ChristianKl'],2023-01-17T10:57:10Z,lesswrong,, 188041,https://www.lesswrong.com/posts/5mTCuP7iPt7Ta6CBe/my-advice-for-incoming-seri-mats-scholars,My Advice for Incoming SERI MATS Scholars,['Johannes C. Mayer'],2023-01-03T19:25:39Z,lesswrong,, 188072,https://www.lesswrong.com/posts/hmwaq6zff53Wa5JJB/is-a-self-iterating-agi-vulnerable-to-thompson-style-trojans,Is a Self-Iterating AGI Vulnerable to Thompson-style Trojans?,['sxae'],2021-03-25T14:46:34Z,lesswrong,, 188092,https://www.lesswrong.com/posts/mCYc5oRkgegAqMEFv/chatgpt-explores-the-semantic-differential,ChatGPT explores the semantic differential,['Bill Benzon'],2023-03-09T13:09:02Z,lesswrong,, 188117,https://www.lesswrong.com/posts/LxhJ8mhdBuX27BDug/probabilities-small-enough-to-ignore-an-attack-on-pascal-s,Probabilities Small Enough To Ignore: An attack on Pascal's Mugging,['Kaj_Sotala'],2015-09-16T10:45:57Z,lesswrong,, 188133,https://www.lesswrong.com/posts/k8a4xx25aW3jvmfF3/initial-thoughts-on-dissolving-couldness,"Initial Thoughts on Dissolving ""Couldness""",['DragonGod'],2022-09-22T21:23:32Z,lesswrong,, 188146,https://www.lesswrong.com/posts/Ybw7LfZWPRbEEKa5s/stopping-dangerous-ai-ideal-us-behavior,Stopping dangerous AI: Ideal US behavior,['Zach Stein-Perlman'],2023-05-09T21:00:55Z,lesswrong,, 188175,https://www.lesswrong.com/posts/8vGFoqcQdkk7n3vDz/where-s-the-foom,Where's the foom?,['Fergus Fettes'],2023-04-11T15:50:43Z,lesswrong,, 188190,https://www.lesswrong.com/posts/Bok5RAPPjuHKvPyv2/interviews-with-97-ai-researchers-quantitative-analysis,Interviews with 97 AI Researchers: Quantitative Analysis,"['Maheen Shermohammed', 'Vael Gates']",2023-02-02T01:01:32Z,lesswrong,, 188214,https://www.lesswrong.com/posts/YtKnLchzsaJoa24uA/a-couple-of-questions-about-conjecture-s-cognitive-emulation,A couple of questions about Conjecture's Cognitive Emulation proposal,['Igor Ivanov'],2023-04-11T14:05:59Z,lesswrong,, 188231,https://www.lesswrong.com/posts/gKcoZk3sxfv6JrAJn/is-progress-in-ml-assisted-theorem-proving-beneficial,Is progress in ML-assisted theorem-proving beneficial?,['mako yass'],2021-09-28T01:54:38Z,lesswrong,, 188244,https://www.lesswrong.com/posts/DbuCdEbkh4wL5cjJ5/preface-to-clr-s-research-agenda-on-cooperation-conflict-and,"Preface to CLR's Research Agenda on Cooperation, Conflict, and TAI",['JesseClifton'],2019-12-13T21:02:49Z,lesswrong,, 
188270,https://www.lesswrong.com/posts/z8s3bsw3WY9fdevSm/boxing-an-ai,Boxing an AI?,['tailcalled'],2015-03-27T14:06:19Z,lesswrong,, 188281,https://www.lesswrong.com/posts/dCTFXi2Z2f2aacmNa/openworm-and-differential-technological-development,OpenWorm and differential technological development,['John_Maxwell'],2014-05-19T04:47:00Z,lesswrong,, 188291,https://www.lesswrong.com/posts/rXSBvSKvKdaNkhLeJ/takeaways-from-a-survey-on-ai-alignment-resources,Takeaways from a survey on AI alignment resources,['DanielFilan'],2022-11-05T23:40:02Z,lesswrong,, 188307,https://www.lesswrong.com/posts/jupZtKjrh8ftBaxxe/it-looks-like-you-re-trying-to-take-over-the-narrative,It Looks Like You’re Trying To Take Over The Narrative,['George3d6'],2022-08-24T13:36:55Z,lesswrong,, 188331,https://www.lesswrong.com/posts/hnvPCZ4Cx35miHkw3/why-is-so-much-discussion-happening-in-private-google-docs,Why is so much discussion happening in private Google Docs?,['Wei Dai'],2019-01-12T02:19:19Z,lesswrong,, 188340,https://www.lesswrong.com/posts/NQ85WRcLkjnTudzdg/arc-tests-to-see-if-gpt-4-can-escape-human-control-gpt-4,ARC tests to see if GPT-4 can escape human control; GPT-4 failed to do so,['Christopher King'],2023-03-15T00:29:24Z,lesswrong,, 188359,https://www.lesswrong.com/posts/gZkYvA6suQJthvj4E/my-may-2023-priorities-for-ai-x-safety-more-empathy-more,"My May 2023 priorities for AI x-safety: more empathy, more unification of concerns, and less vilification of OpenAI",['Andrew_Critch'],2023-05-24T00:02:09Z,lesswrong,, 188392,https://www.lesswrong.com/posts/b8HauRWrjBdnKEwM5/rigging-is-a-form-of-wireheading,Rigging is a form of wireheading,['Stuart_Armstrong'],2018-05-03T12:50:50Z,lesswrong,, 188403,https://www.lesswrong.com/posts/muLN8GRBdB8NLLX36/visible-loss-landscape-basins-don-t-correspond-to-distinct,Visible loss landscape basins don't correspond to distinct algorithms,['Mikhail Samin'],2023-07-28T16:19:05Z,lesswrong,, 188415,https://www.lesswrong.com/posts/xfWbXzBnzre2DG8f7/can-i-take-ducks-home-from-the-park,Can I take ducks home from the park?,['dynomight'],2023-09-14T21:03:10Z,lesswrong,, 188432,https://www.lesswrong.com/posts/RcbpeYJMdvxCpTocg/oracle-paper,Oracle paper,['Stuart_Armstrong'],2017-12-13T14:59:08Z,lesswrong,, 188446,https://www.lesswrong.com/posts/mR65cuuoG9prZnAvf/could-simulating-an-agi-taking-over-the-world-actually-lead,Could Simulating an AGI Taking Over the World Actually Lead to a LLM Taking Over the World?,['simeon_c'],2023-01-13T06:33:36Z,lesswrong,, 188455,https://www.lesswrong.com/posts/nnDTgmzRrzDMiPF9B/how-much-do-you-believe-your-results,How much do you believe your results?,['Eric Neyman'],2023-05-06T20:31:31Z,lesswrong,, 188470,https://www.lesswrong.com/posts/xMBZTQtMSvowk3H2k/seq-rerun-brain-emulation-and-hard-takeoff,[SEQ RERUN] Brain Emulation and Hard Takeoff,['MinibearRex'],2012-11-14T06:47:03Z,lesswrong,, 188483,https://www.lesswrong.com/posts/4hz6Rswmu6qLEHAei/analysis-of-gpt-4-competence-in-assessing-complex-legal,Analysis of GPT-4 competence in assessing complex legal language: Example of Bill C-11 of the Canadian Parliament. - Part 1,['M. Y. 
Zuo'],2023-04-02T00:01:11Z,lesswrong,, 188499,https://www.lesswrong.com/posts/7LLLkMGq4ncinzrmd/the-wizard-of-oz-problem-how-incentives-and-narratives-can,The Wizard of Oz Problem: How incentives and narratives can skew our perception of AI developments,['Akash'],2023-03-20T20:44:29Z,lesswrong,, 188522,https://www.lesswrong.com/posts/hbmsW2k9DxED5Z4eJ/impossibility-results-for-unbounded-utilities,Impossibility results for unbounded utilities,['paulfchristiano'],2022-02-02T03:52:19Z,lesswrong,, 188538,https://www.lesswrong.com/posts/3MmoaigFiiuE3vZD8/the-view-from-30-000-feet-preface-to-the-second-eleutherai-1,"The View from 30,000 Feet: Preface to the Second EleutherAI Retrospective","['StellaAthena', 'Curtis Huebner', 'Shivanshu Purohit']",2023-03-07T16:22:08Z,lesswrong,, 188561,https://www.lesswrong.com/posts/C8kn3iL9Zedorjykt/taking-the-parameters-which-seem-to-matter-and-rotating-them,Taking the parameters which seem to matter and rotating them until they don't,['Garrett Baker'],2022-08-26T18:26:48Z,lesswrong,, 188570,https://www.lesswrong.com/posts/FBAwthqMY2N6JH9nA/the-soul-of-the-writer-on-llms-the-psychology-of-writers-and,"The Soul of the Writer (on LLMs, the psychology of writers, and the nature of intelligence)",['rogersbacon'],2023-04-16T16:02:49Z,lesswrong,, 188586,https://www.lesswrong.com/posts/gdyfJE3noRFSs373q/resources-i-send-to-ai-researchers-about-ai-safety,Resources I send to AI researchers about AI safety,['Vael Gates'],2022-06-14T02:24:59Z,lesswrong,, 188616,https://www.lesswrong.com/posts/bNpqBNvfgCWixB2MT/towards-empathy-in-rl-agents-and-beyond-insights-from-1,Towards empathy in RL agents and beyond: Insights from cognitive science for AI Alignment,['Marc Carauleanu'],2023-04-03T19:59:00Z,lesswrong,, 188634,https://www.lesswrong.com/posts/JewWDfLoxgFtJhNct/utility-versus-reward-function-partial-equivalence,Utility versus Reward function: partial equivalence,['Stuart_Armstrong'],2018-04-13T14:58:16Z,lesswrong,, 188651,https://www.lesswrong.com/posts/fyrJsfNmZECLasR3S/would-a-misaligned-ssi-really-kill-us-all,Would a Misaligned SSI Really Kill Us All?,['DragonGod'],2022-09-14T12:15:31Z,lesswrong,, 188670,https://www.lesswrong.com/posts/Hicfd4C5ffrtEaTbF/the-sharp-right-turn-sudden-deceptive-alignment-as-a,The Sharp Right Turn: sudden deceptive alignment as a convergent goal,['avturchin'],2023-06-06T09:59:57Z,lesswrong,, 188685,https://www.lesswrong.com/posts/Rgoi7nkWKAHoDti2R/chatgpt-plugins-the-beginning-of-the-end,ChatGPT Plugins - The Beginning of the End,['Bary Levy'],2023-03-25T11:45:33Z,lesswrong,, 188695,https://www.lesswrong.com/posts/kjmRBhyYG3CPhEMot/has-there-been-any-work-on-attempting-to-use-pascal-s,Has there been any work on attempting to use Pascal's Mugging to make an AGI behave?,['Chris_Leong'],2022-06-15T08:33:20Z,lesswrong,, 188708,https://www.lesswrong.com/posts/65hFPABegiB9uGFiC/we-are-less-wrong-than-e-t-jaynes-on-loss-functions-in-human,We Are Less Wrong than E. T. 
Jaynes on Loss Functions in Human Society,['Zack_M_Davis'],2023-06-05T05:34:59Z,lesswrong,, 188721,https://www.lesswrong.com/posts/eJ8xrMeWMqQHEN2vm/reframing-the-burden-of-proof-companies-should-prove-that,Reframing the burden of proof: Companies should prove that models are safe (rather than expecting auditors to prove that models are dangerous),['Akash'],2023-04-25T18:49:29Z,lesswrong,, 188735,https://www.lesswrong.com/posts/oZsyK4SjnPe6HGia8/moral-uncertainty-vs-related-concepts,Moral uncertainty vs related concepts,['MichaelA'],2020-01-11T10:03:18Z,lesswrong,, 188758,https://www.lesswrong.com/posts/tjH8XPxAnr6JRbh7k/hard-takeoff,Hard Takeoff,['Eliezer Yudkowsky'],2008-12-02T20:44:26Z,lesswrong,, 188777,https://www.lesswrong.com/posts/WgnAEXw5fXaW9p5PS/raise-is-launching-their-mvp,RAISE is launching their MVP,['anonymous'],2019-02-26T11:45:54Z,lesswrong,, 188790,https://www.lesswrong.com/posts/ohXcBjGvazPAxq2ex/continue-working-on-hard-alignment-don-t-give-up,continue working on hard alignment! don't give up!,['Tamsin Leake'],2023-03-24T00:14:36Z,lesswrong,, 188804,https://www.lesswrong.com/posts/rHSuu2X9ca8FR4thH/tools-want-to-become-agents,Tools want to become agents,['Stuart_Armstrong'],2014-07-04T10:12:45Z,lesswrong,, 188819,https://www.lesswrong.com/posts/tFYGdq9ivjA3rdaS2/high-level-interpretability-detecting-an-ai-s-objectives,High-level interpretability: detecting an AI's objectives,"['Paul Colognese', 'Jozdien']",2023-09-28T19:30:17Z,lesswrong,, 188830,https://www.lesswrong.com/posts/HNJwteaxpRYfLaQt7/tradeoffs-in-complexity-abstraction-and-generality,"Tradeoffs in complexity, abstraction, and generality","['remember', 'Gabriel Alfour']",2022-12-12T15:55:19Z,lesswrong,, 188848,https://www.lesswrong.com/posts/X3z3rtzGG6F4ZWADQ/ai-overhangs-depend-on-whether-algorithms-compute-and-data,"AI overhangs depend on whether algorithms, compute and data are substitutes or complements",['NathanBarnard'],2022-12-16T02:23:58Z,lesswrong,, 188859,https://www.lesswrong.com/posts/qGTxGGNxcciY2nrHv/llms-may-find-it-hard-to-foom,LLMs May Find It Hard to FOOM,['RogerDearnaley'],2023-11-15T02:52:09Z,lesswrong,, 188882,https://www.lesswrong.com/posts/LSFpWmrsvw32teiLB/stability-ai-releases-stablelm-an-open-source-chatgpt,"Stability AI releases StableLM, an open-source ChatGPT counterpart",['Ozyrus'],2023-04-20T06:04:48Z,lesswrong,, 188895,https://www.lesswrong.com/posts/rzJ9FgCoxuqSR2zb5/thoughts-on-dangerous-learned-optimization,Thoughts on Dangerous Learned Optimization,['peterbarnett'],2022-02-19T10:46:24Z,lesswrong,, 188914,https://www.lesswrong.com/posts/tJki2nDzxHxAax52x/my-summary-of-pragmatic-ai-safety,My summary of “Pragmatic AI Safety”,['Eleni Angelou'],2022-11-05T12:54:54Z,lesswrong,, 188959,https://www.lesswrong.com/posts/GqyQSwYrryc4e2hgf/mlyyrczo,Mlyyrczo,['lsusr'],2022-12-26T07:58:58Z,lesswrong,, 188976,https://www.lesswrong.com/posts/ZXB3HbAuwJakwjPB6/llms-may-capture-key-components-of-human-agency,LLMs may capture key components of human agency,['catubc'],2022-11-17T20:14:33Z,lesswrong,, 188992,https://www.lesswrong.com/posts/PniKib7oxh8uhM8xj/unsafe-ai-as-dynamical-systems,Unsafe AI as Dynamical Systems,['Robert_AIZI'],2023-07-14T15:31:48Z,lesswrong,, 189004,https://www.lesswrong.com/posts/Ccsx339LE9Jhoii9K/slow-motion-videos-as-ai-risk-intuition-pumps,Slow motion videos as AI risk intuition pumps,['Andrew_Critch'],2022-06-14T19:31:14Z,lesswrong,, 189013,https://www.lesswrong.com/posts/5CGvoviP7t7ftsohh/progress-report-7-making-gpt-go-hurrdurr-instead-of-brrrrrrr,Progress Report 
7: making GPT go hurrdurr instead of brrrrrrr,['Nathan Helm-Burger'],2022-09-07T03:28:36Z,lesswrong,, 189036,https://www.lesswrong.com/posts/BZbYQRmhj8a2jBYWC/developmental-psychology-in-the-age-of-ems,Developmental Psychology in The Age of Ems,['Gordon Seidoh Worley'],2017-07-03T18:53:44Z,lesswrong,, 189060,https://www.lesswrong.com/posts/nBNjdYwmbybn8CXGq/the-issue-of-meaning-in-large-language-models-llms,The issue of meaning in large language models (LLMs),['Bill Benzon'],2023-03-11T23:00:28Z,lesswrong,, 189077,https://www.lesswrong.com/posts/sm6npdgZArSn4afeZ/ai-researchers-on-ai-risk,AI Researchers On AI Risk,['Scott Alexander'],2015-05-22T11:16:18Z,lesswrong,, 189099,https://www.lesswrong.com/posts/ZRYqXHdiFrdxLAmue/you-are-probably-not-a-good-alignment-researcher-and-other,"You are probably not a good alignment researcher, and other blatant lies",['junk heap homotopy'],2023-02-02T13:55:15Z,lesswrong,, 189110,https://www.lesswrong.com/posts/yFkNYyspBBqfSeBx9/against-using-stock-prices-to-forecast-ai-timelines,Against using stock prices to forecast AI timelines,"['basil.halperin', 'tmychow', 'J. Zachary Mazlish']",2023-01-10T16:03:32Z,lesswrong,, 189121,https://www.lesswrong.com/posts/Go5ELsHAyw7QrArQ6/searching-for-a-model-s-concepts-by-their-shape-a,Searching for a model's concepts by their shape – a theoretical framework,"['Kaarel', 'gekaklam', 'Walter Laurito', 'Kay Kozaronek', 'AlexMennen', 'June Ku']",2023-02-23T20:14:46Z,lesswrong,, 189132,https://www.lesswrong.com/posts/4vv95qgg9pGWo4eBk/timeless-modesty,Timeless Modesty?,['abramdemski'],2017-11-24T11:12:47Z,lesswrong,, 189142,https://www.lesswrong.com/posts/S3EgMfDGkrA8WCvep/key-questions-for-digital-minds-3,Key Questions for Digital Minds,['Jacy Reese Anthis'],2023-03-22T17:13:49Z,lesswrong,, 189166,https://www.lesswrong.com/posts/ngpC5PFAgxHJMhicM/agi-and-the-emh-markets-are-not-expecting-aligned-or-1,AGI and the EMH: markets are not expecting aligned or unaligned AI in the next 30 years,"['basil.halperin', 'J. 
Zachary Mazlish', 'tmychow']",2023-01-10T16:06:52Z,lesswrong,, 189184,https://www.lesswrong.com/posts/WhNxG4r774bK32GcH/the-simple-picture-on-ai-safety,The simple picture on AI safety,['Alex Flint'],2018-05-27T19:43:27Z,lesswrong,, 189200,https://www.lesswrong.com/posts/CjnenD4AcFa7SDRCf/how-much-should-you-be-willing-to-pay-for-an-agi,How much should you be willing to pay for an AGI?,['Logan Zoellner'],2021-09-20T11:51:34Z,lesswrong,, 189213,https://www.lesswrong.com/posts/vsuMu98Rwde5krxSJ/should-we-push-for-requiring-ai-training-data-to-be-licensed,Should we push for requiring AI training data to be licensed?,['ChristianKl'],2022-10-19T17:49:56Z,lesswrong,, 189223,https://www.lesswrong.com/posts/mmPohumufQJmCLeh6/trying-to-measure-ai-deception-capabilities-using-temporary,Trying to measure AI deception capabilities using temporary simulation fine-tuning,['alenoach'],2023-05-04T17:59:28Z,lesswrong,, 189247,https://www.lesswrong.com/posts/ubQDcDxjNJ2Exp3ni/our-existing-solutions-to-agi-alignment-semi-safe,Our Existing Solutions to AGI Alignment (semi-safe),['Michael Soareverix'],2022-07-21T19:00:44Z,lesswrong,, 189268,https://www.lesswrong.com/posts/MpD8bR9A8BFswxNq3/everything-s-normal-until-it-s-not,Everything's normal until it's not,['Eleni Angelou'],2023-03-10T02:02:17Z,lesswrong,, 189289,https://www.lesswrong.com/posts/zZbM5JdMs5uCtMkgs/robustness-of-contrast-consistent-search-to-adversarial,Robustness of Contrast-Consistent Search to Adversarial Prompting,"['Nandi', 'i', 'Jamie Wright', 'Seamus_F', 'hugofry']",2023-11-01T12:46:15Z,lesswrong,, 189307,https://www.lesswrong.com/posts/8kHgaLYamxQdE2zk7/yoshua-bengio-how-rogue-ais-may-arise,Yoshua Bengio: How Rogue AIs may Arise,['harfe'],2023-05-23T18:28:27Z,lesswrong,, 189337,https://www.lesswrong.com/posts/JXcKqYdcHoabmMpjh/perfect-predictors,Perfect Predictors,['aditya malik'],2022-08-12T11:51:51Z,lesswrong,, 189346,https://www.lesswrong.com/posts/3AXpysm7emgWQ2BKe/the-impact-of-whole-brain-emulation,The impact of whole brain emulation,['jefftk'],2013-05-14T19:59:12Z,lesswrong,, 189360,https://www.lesswrong.com/posts/5DsHZidaShW5EM9rz/results-from-the-language-model-hackathon,Results from the language model hackathon,['Esben Kran'],2022-10-10T08:29:06Z,lesswrong,, 189397,https://www.lesswrong.com/posts/Ap3BYipKmigLPAHc9/which-animals-can-suffer,Which animals can suffer?,['Just Learning'],2021-06-01T03:42:43Z,lesswrong,, 189407,https://www.lesswrong.com/posts/PJhvcTkwpGr9Ysmcd/finding-skeletons-on-rashomon-ridge,Finding Skeletons on Rashomon Ridge,"['David Udell', 'Peter S. Park', 'NickyP']",2022-07-24T22:32:00Z,lesswrong,, 189419,https://www.lesswrong.com/posts/uS6vdQH8zpHyMAsxR/ordinary-and-unordinary-decision-theory,Ordinary and unordinary decision theory,['JonasMoss'],2022-03-02T11:39:31Z,lesswrong,, 189442,https://www.lesswrong.com/posts/sizjfDgCgAsuLJQmm/reply-to-holden-on-tool-ai,Reply to Holden on 'Tool AI',['Eliezer Yudkowsky'],2012-06-12T18:00:52Z,lesswrong,, 189467,https://www.lesswrong.com/posts/ab8wAd6FJvWAexYug/wapo-big-tech-was-moving-cautiously-on-ai-then-came-chatgpt,"WaPo: ""Big Tech was moving cautiously on AI. 
Then came ChatGPT.""",['Julian Bradshaw'],2023-01-27T22:54:50Z,lesswrong,, 189482,https://www.lesswrong.com/posts/cDFj427x9LzgsMKv4/benchmarks-for-comparing-human-and-ai-intelligence,Benchmarks for Comparing Human and AI Intelligence,['ViktorThink'],2022-12-11T22:06:31Z,lesswrong,, 189501,https://www.lesswrong.com/posts/GF6hDawC6QdwGXLsj/the-algorithm-isn-t-doing-x-it-s-just-doing-y,"The algorithm isn't doing X, it's just doing Y.",['Cleo Nardo'],2023-03-16T23:28:49Z,lesswrong,, 189511,https://www.lesswrong.com/posts/apFCckw6grxBH7bYL/how-much-should-we-worry-about-mesa-optimization-challenges,How much should we worry about mesa-optimization challenges?,['sudo -i'],2022-07-25T03:56:15Z,lesswrong,, 189528,https://www.lesswrong.com/posts/Py3vqPp9uSqQJHFuy/how-useful-is-corrigibility,How useful is Corrigibility?,['martinkunev'],2023-09-12T00:05:42Z,lesswrong,, 189547,https://www.lesswrong.com/posts/xkusvgfxD8MbDtxin/hertford-sourbut-rationality-lessons-from-university,"Hertford, Sourbut (rationality lessons from University Challenge)",['Oliver Sourbut'],2023-09-04T18:44:24Z,lesswrong,, 189567,https://www.lesswrong.com/posts/b9s2HknEC2RiasmGh/solving-for-meta-ethics-by-inducing-from-the-self,Solving For Meta-Ethics By Inducing From The Self,['TheAspiringHumanist'],2023-01-20T07:21:29Z,lesswrong,, 189583,https://www.lesswrong.com/posts/3duR8CrvcHywrnhLo/how-does-gpt-3-spend-its-175b-parameters,How does GPT-3 spend its 175B parameters?,['Robert_AIZI'],2023-01-13T19:21:03Z,lesswrong,, 189600,https://www.lesswrong.com/posts/MSw5y88tDbyp8WKo9/ai-safety-newsletter-4-ai-and-cybersecurity-persuasive-ais,"AI Safety Newsletter #4: AI and Cybersecurity, Persuasive AIs, Weaponization, and Geoffrey Hinton talks AI risks","['ozhang', 'Dan H', 'Akash', 'aogara']",2023-05-02T18:41:43Z,lesswrong,, 189638,https://www.lesswrong.com/posts/XjuT9vgBfwXPxsdfN/might-humans-not-be-the-most-intelligent-animals,Might humans not be the most intelligent animals?,['Matthew Barnett'],2019-12-23T21:50:05Z,lesswrong,, 189653,https://www.lesswrong.com/posts/mxmeeGk3mTM7BrT93/intuition,Intuition,['Rafael Harth'],2020-12-20T21:49:30Z,lesswrong,, 189666,https://www.lesswrong.com/posts/BdXeZC8hFvLMcN495/engaging-first-introductions-to-ai-risk,Engaging First Introductions to AI Risk,['Rob Bensinger'],2013-08-19T06:26:27Z,lesswrong,, 189686,https://www.lesswrong.com/posts/YCMfQoqqi2o9Tjwoa/vnm-expected-utility-theory-uses-abuses-and-interpretation,"VNM expected utility theory: uses, abuses, and interpretation",['Academian'],2010-04-17T20:23:05Z,lesswrong,, 189710,https://www.lesswrong.com/posts/hhKpXEsfAiyFLecyF/internal-target-information-for-ai-oversight,Internal Target Information for AI Oversight,['Paul Colognese'],2023-10-20T14:53:00Z,lesswrong,, 189719,https://www.lesswrong.com/posts/28XBkxauWQAMZeXiF/chatgpt-seems-overconfident-to-me,ChatGPT seems overconfident to me,['qbolec'],2022-12-04T08:03:09Z,lesswrong,, 189736,https://www.lesswrong.com/posts/WAsghurJ3EppkhmQX/analysing-a-2036-takeover-scenario,Analysing a 2036 Takeover Scenario,['ukc10014'],2022-10-06T20:48:50Z,lesswrong,, 189770,https://www.lesswrong.com/posts/PZYD5kBpeHWgE5jX4/extraction-of-human-preferences,Extraction of human preferences 👨→🤖,['arunraja-hub'],2021-08-24T16:34:14Z,lesswrong,, 189792,https://www.lesswrong.com/posts/hi8MgnTjDCbh6kexs/seeking-input-to-ai-safety-book-for-non-technical-audience,Seeking Input to AI Safety Book for non-technical audience,['Darren McKee'],2023-08-10T17:58:30Z,lesswrong,, 
189813,https://www.lesswrong.com/posts/PdcnEEE6sdgACDrEk/snapshot-of-narratives-and-frames-against-regulating-ai,Snapshot of narratives and frames against regulating AI,['Jan_Kulveit'],2023-11-01T16:30:19Z,lesswrong,, 189828,https://www.lesswrong.com/posts/iHLJtbdFwsoNWZg3e/guardian-ai-misaligned-systems-are-all-around-us,Guardian AI (Misaligned systems are all around us.),['Jessica Rumbelow'],2022-11-25T15:55:44Z,lesswrong,, 189838,https://www.lesswrong.com/posts/bku9odAYPyQwHqzCo/the-shape-of-agi-cartoons-and-back-of-envelope,The shape of AGI: Cartoons and back of envelope,['boazbarak'],2023-07-17T20:57:30Z,lesswrong,, 189859,https://www.lesswrong.com/posts/DqLHvJjuPdtrzuoas/my-impression-of-singular-learning-theory,My impression of singular learning theory,['Ege Erdil'],2023-06-18T15:34:27Z,lesswrong,, 189870,https://www.lesswrong.com/posts/2CFBi4MNFNQXdbkss/alignment-problems-all-the-way-down,Alignment Problems All the Way Down,['peterbarnett'],2022-01-22T00:19:24Z,lesswrong,, 189890,https://www.lesswrong.com/posts/AKtn6reGFm5NBCgnd/in-defense-of-oracle-tool-ai-research,"In defense of Oracle (""Tool"") AI research",['Steven Byrnes'],2019-08-07T19:14:10Z,lesswrong,, 189907,https://www.lesswrong.com/posts/MznxnYCtHZbtDxJuh/approximating-solomonoff-induction,Approximating Solomonoff Induction,['Houshalter'],2015-05-29T12:23:20Z,lesswrong,, 189936,https://www.lesswrong.com/posts/FwJz34524hbYtXrkK/neurips-safety-and-chatgpt-mlaisu-w48,NeurIPS Safety & ChatGPT. MLAISU W48,"['Esben Kran', 'Steinthal']",2022-12-02T15:50:17Z,lesswrong,, 189974,https://www.lesswrong.com/posts/NdJsWDS7Aq4xqoumk/a-very-non-technical-explanation-of-the-basics-of-infra,A very non-technical explanation of the basics of infra-Bayesianism,['matolcsid'],2023-04-26T22:57:05Z,lesswrong,, 189992,https://www.lesswrong.com/posts/twxiFkcDbTyNxNxmd/boolean-primitives-for-coupled-optimizers,Boolean Primitives for Coupled Optimizers,['Paul Bricman'],2022-10-07T18:02:55Z,lesswrong,, 190029,https://www.lesswrong.com/posts/5GskScdvYXBpL78wL/reply-to-holden-on-the-singularity-institute,Reply to Holden on The Singularity Institute,['lukeprog'],2012-07-10T23:20:19Z,lesswrong,, 190066,https://www.lesswrong.com/posts/HW5Q9cW9sgk4yCffd/hacking-the-cev-for-fun-and-profit,Hacking the CEV for Fun and Profit,['Wei Dai'],2010-06-03T20:30:30Z,lesswrong,, 190076,https://www.lesswrong.com/posts/rEDjo94iPvXWkkt4L/apparently-of-the-195-million-the-dod-allocated-in,"Apparently, of the 195 Million the DoD allocated in University Research Funding Awards in 2022, more than half of them concerned AI or compute hardware research",['mako yass'],2023-07-07T01:20:20Z,lesswrong,, 190097,https://www.lesswrong.com/posts/ZqgCQQH6P6EPdCLaT/link-wait-but-why-the-ai-revolution-part-2,[LINK] Wait But Why - The AI Revolution Part 2,['Adam Zerner'],2015-02-04T16:02:09Z,lesswrong,, 190117,https://www.lesswrong.com/posts/9KoyMKHmwCCJdMma4/a-problem-with-playing-chicken-with-the-universe-as-an,"A problem with ""playing chicken with the universe"" as an approach to UDT",['Karl'],2013-03-08T02:34:28Z,lesswrong,, 190134,https://www.lesswrong.com/posts/rLzMBxew4S4TevtqB/intelligence-explosion-vs-co-operative-explosion,Intelligence Explosion vs. 
Co-operative Explosion,['Kaj_Sotala'],2012-04-16T11:01:01Z,lesswrong,, 190153,https://www.lesswrong.com/posts/arkQaWauCvkTvgcRH/ai-safety-info-distillation-fellowship,AI Safety Info Distillation Fellowship,"['Robert Miles', 'mwatkins']",2023-02-17T16:16:46Z,lesswrong,, 190166,https://www.lesswrong.com/posts/p3prZSfJ3nC6jjgAk/chatgpt-s-fuzzy-alignment-isn-t-evidence-of-agi-alignment,"ChatGPT's ""fuzzy alignment"" isn't evidence of AGI alignment: the banana test",['Michael Tontchev'],2023-03-23T07:12:33Z,lesswrong,, 190185,https://www.lesswrong.com/posts/tJyuHfRpENXYjCJit/mental-acceptance-and-reflection,Mental acceptance and reflection,"['remember', 'Gabriel Alfour']",2022-12-22T14:32:16Z,lesswrong,, 190194,https://www.lesswrong.com/posts/MwetLcBPvshg9ePZB/decision-theory-is-not-policy-theory-is-not-agent-theory,Decision theory is not policy theory is not agent theory,['Cole Wyeth'],2023-09-05T01:38:27Z,lesswrong,, 190214,https://www.lesswrong.com/posts/dEvJCWBfRYNdXXTsS/supernatural-math,Supernatural Math,['saturn'],2009-05-19T11:31:44Z,lesswrong,, 190224,https://www.lesswrong.com/posts/ShrAZXjTs5HTxDmGM/potential-gears-level-explanations-of-smooth-progress,Potential gears level explanations of smooth progress,['ryan_greenblatt'],2021-12-22T18:05:59Z,lesswrong,, 190245,https://www.lesswrong.com/posts/t9MP6ZPqhTTvjssuB/gpt-2-6-month-follow-up,GPT-2: 6-Month Follow-Up,['anonymous'],2019-08-21T05:06:52Z,lesswrong,, 190266,https://www.lesswrong.com/posts/4AHXDwcGab5PhKhHT/humans-who-are-not-concentrating-are-not-general,Humans Who Are Not Concentrating Are Not General Intelligences,['sarahconstantin'],2019-02-25T20:40:01Z,lesswrong,, 190283,https://www.lesswrong.com/posts/NFTe38cwu7LqT2oTy/superintelligence-22-emulation-modulation-and-institutional,Superintelligence 22: Emulation modulation and institutional design,['KatjaGrace'],2015-02-10T02:06:01Z,lesswrong,, 190305,https://www.lesswrong.com/posts/kowKm25hFxRquEyim/is-edt-correct-does-edt-logical-edt-logical-cdt,"Is EDT correct? 
Does ""EDT"" == ""logical EDT"" == ""logical CDT""?",['Vivek Hebbar'],2023-05-08T02:07:18Z,lesswrong,, 190319,https://www.lesswrong.com/posts/HDAjZaeTtEYyDk93a/evolutions-building-evolutions-layers-of-generate-and-test,Evolutions Building Evolutions: Layers of Generate and Test,['plex'],2021-02-05T18:21:29Z,lesswrong,, 190336,https://www.lesswrong.com/posts/thkAtqoQwN6DtaiGT/carefully-bootstrapped-alignment-is-organizationally-hard,"""Carefully Bootstrapped Alignment"" is organizationally hard",['Raemon'],2023-03-17T18:00:10Z,lesswrong,, 190358,https://www.lesswrong.com/posts/4ZWcHKf9FkEAQssbF/how-likely-do-you-think-worse-than-extinction-type-fates-to,How likely do you think worse-than-extinction type fates to be?,['span1'],2022-08-01T04:08:06Z,lesswrong,, 190372,https://www.lesswrong.com/posts/73kwTFKgi4AagxFHJ/planes-are-still-decades-away-from-displacing-most-bird-jobs,Planes are still decades away from displacing most bird jobs,['guzey'],2022-11-25T16:49:32Z,lesswrong,, 190382,https://www.lesswrong.com/posts/CqYaazaG6EkovspMT/open-agency-model-can-solve-the-ai-regulation-dilemma,Open Agency model can solve the AI regulation dilemma,['Roman Leventov'],2023-11-08T20:00:56Z,lesswrong,, 190412,https://www.lesswrong.com/posts/ZdCztwnxXu3aC4kxZ/the-e-coli-test-for-ai-alignment,The E-Coli Test for AI Alignment,['johnswentworth'],2018-12-16T08:10:51Z,lesswrong,, 190422,https://www.lesswrong.com/posts/Z9P2m462wQ4qmH6uo/aspiration-based-q-learning,Aspiration-based Q-Learning,"['Clément Dumas', 'Jobst Heitzig']",2023-10-27T14:42:03Z,lesswrong,, 190444,https://www.lesswrong.com/posts/KCg7NeKQ7MycXWpYd/our-values-are-underdefined-changeable-and-manipulable,"Our values are underdefined, changeable, and manipulable",['Stuart_Armstrong'],2017-11-02T11:09:16Z,lesswrong,, 190464,https://www.lesswrong.com/posts/jjGiCZLuJ8ZNvZwQc/an-illustrative-model-of-backfire-risks-from-pausing-ai,An illustrative model of backfire risks from pausing AI research,['Maxime Riché'],2023-11-06T14:30:59Z,lesswrong,, 190489,https://www.lesswrong.com/posts/NptxTqHDtFovhtW9b/how-humans-are-aligned-1,how humans are aligned,['bhauth'],2023-05-26T00:09:21Z,lesswrong,, 190535,https://www.lesswrong.com/posts/X3p8mxE5dHYDZNxCm/a-concrete-bet-offer-to-those-with-short-agi-timelines,A concrete bet offer to those with short AGI timelines,"['Matthew Barnett', 'Tamay']",2022-04-09T21:41:45Z,lesswrong,, 190545,https://www.lesswrong.com/posts/RTGjvFcCMhmfohC8s/thought-experiment-if-you-had-to-choose-which-would-you,"(Thought experiment) If you had to choose, which would you prefer?",['kuira'],2023-08-17T00:57:03Z,lesswrong,, 190554,https://www.lesswrong.com/posts/qrrEtrbLcmqr3b5uf/without-a-trajectory-change-the-development-of-agi-is-likely,"Without a trajectory change, the development of AGI is likely to go badly",['Max H'],2023-05-29T23:42:17Z,lesswrong,, 190582,https://www.lesswrong.com/posts/Q44QjdtKtSoqRKgRe/introducing-leap-labs-an-ai-interpretability-startup,"Introducing Leap Labs, an AI interpretability startup",['Jessica Rumbelow'],2023-03-06T16:16:22Z,lesswrong,, 190600,https://www.lesswrong.com/posts/FtNFhuXXtmSjnNvE7/goal-retention-discussion-with-eliezer,Goal retention discussion with Eliezer,['MaxTegmark'],2014-09-04T22:23:22Z,lesswrong,, 190617,https://www.lesswrong.com/posts/mkaaLsuCGJwiYzpig/will-artificial-superintelligence-kill-us,Will Artificial Superintelligence Kill Us?,['James_Miller'],2023-05-23T16:27:52Z,lesswrong,, 190661,https://www.lesswrong.com/posts/wiwpmtyiKr6bPnSai/top-9-2-myths-about-ai-risk,Top 9+2 myths 
about AI risk,['Stuart_Armstrong'],2015-06-29T20:41:25Z,lesswrong,, 190681,https://www.lesswrong.com/posts/YJpMgi7HJuHwXTkjk/taking-features-out-of-superposition-with-sparse,Taking features out of superposition with sparse autoencoders more quickly with informed initialization,['Pierre Peigné'],2023-09-23T16:21:43Z,lesswrong,, 190700,https://www.lesswrong.com/posts/QFuTYKhF4ouXTn9ML/algorithms-vs-compute,Algorithms vs Compute,['johnswentworth'],2020-01-28T17:34:32Z,lesswrong,, 190709,https://www.lesswrong.com/posts/mtv98d726qJgag3X2/might-whole-brain-emulation-require-quantum-level-emulation,Might whole brain emulation require quantum-level emulation?,['lukeprog'],2011-04-14T06:12:38Z,lesswrong,, 190723,https://www.lesswrong.com/posts/6nbuf6ZDiQe5RWwqv/how-sure-are-you-that-brain-emulations-would-be-conscious,How sure are you that brain emulations would be conscious?,['ChrisHallquist'],2013-08-26T06:21:18Z,lesswrong,, 190742,https://www.lesswrong.com/posts/yTp9s4LrJn6ppLvvM/openai-gpt-based-llms-show-ability-to-discriminate-between,"OpenAI: GPT-based LLMs show ability to discriminate between its own wrong answers, but inability to explain how/why it makes that discrimination, even as model scales",['Aditya Jain'],2022-06-13T23:33:13Z,lesswrong,, 190752,https://www.lesswrong.com/posts/6x9rJbx9bmGsxXWEj/forecasting-thread-existential-risk-1,Forecasting Thread: Existential Risk,['Amandango'],2020-09-22T03:44:29Z,lesswrong,, 190762,https://www.lesswrong.com/posts/7ysKDyQDPK3dDAbkT/narrow-ai-nanny-reaching-strategic-advantage-via-narrow-ai,Narrow AI Nanny: Reaching Strategic Advantage via Narrow AI to Prevent Creation of the Dangerous Superintelligence,['avturchin'],2018-07-25T17:12:32Z,lesswrong,, 190786,https://www.lesswrong.com/posts/camG6t6SxzfasF42i/a-year-of-ai-increasing-ai-progress,A Year of AI Increasing AI Progress,['ThomasW'],2022-12-30T02:09:39Z,lesswrong,, 190795,https://www.lesswrong.com/posts/sEeh6tWpSvLSpoaH8/hiring-inform-and-shape-a-new-project-on-ai-safety-at-3,HIRING: Inform and shape a new project on AI safety at Partnership on AI,['madhu_lika'],2021-12-07T19:37:31Z,lesswrong,, 190819,https://www.lesswrong.com/posts/obMiQv9K76nRZj9tE/stampy-s-ai-safety-info-soft-launch,Stampy's AI Safety Info soft launch,"['steven0461', 'Robert Miles']",2023-10-05T22:13:05Z,lesswrong,, 190841,https://www.lesswrong.com/posts/sACaK4tBvPHkEQW9w/responses-to-christiano-on-takeoff-speeds,Responses to Christiano on takeoff speeds?,['Richard_Ngo'],2020-10-30T15:16:03Z,lesswrong,, 190850,https://www.lesswrong.com/posts/gmsAWkcQRJst2Jnrk/stanford-encyclopedia-of-philosophy-on-ai-ethics-and,Stanford Encyclopedia of Philosophy on AI ethics and superintelligence,['Kaj_Sotala'],2020-05-02T07:35:37Z,lesswrong,, 190871,https://www.lesswrong.com/posts/4KHPxsJgGfxwCNSCC/if-alignment-is-hard-then-so-is-self-improvement,"If Alignment is Hard, then so is Self-Improvement",['PavleMiha'],2023-04-07T00:08:22Z,lesswrong,, 190880,https://www.lesswrong.com/posts/FkDuWGtiCTshovoTN/list-of-links-for-getting-into-ai-safety,List of links for getting into AI safety,['zef'],2023-01-04T19:45:10Z,lesswrong,, 190889,https://www.lesswrong.com/posts/5BwADJo44dvLQ4csS/defining-optimizer,"Defining ""optimizer""",['Chantiel'],2021-04-17T15:38:00Z,lesswrong,, 190899,https://www.lesswrong.com/posts/dDDi9bZm6ELSXTJd9/intent-aligned-ai-systems-deplete-human-agency-the-need-for,Intent-aligned AI systems deplete human agency: the need for agency foundations research in AI safety,['catubc'],2023-05-31T21:18:57Z,lesswrong,, 
190920,https://www.lesswrong.com/posts/9aN7hrCFhoQshqQz2/positive-attractors,Positive Attractors,"['Robert Kralisch', 'teahorse', 'Eris', 'Sohaib Imran']",2023-06-30T20:43:19Z,lesswrong,, 190940,https://www.lesswrong.com/posts/N7DxcLCjfBpEv3QwB/request-stop-advancing-ai-capabilities,Request: stop advancing AI capabilities,['So8res'],2023-05-26T17:42:07Z,lesswrong,, 190951,https://www.lesswrong.com/posts/w9oACum6FW7HdGHST/emotional-attachment-to-ais-opens-doors-to-problems,Emotional attachment to AIs opens doors to problems,['Igor Ivanov'],2023-01-22T20:28:35Z,lesswrong,, 190978,https://www.lesswrong.com/posts/2QLxNdxQpnesokk9H/shane-legg-interview-on-alignment,Shane Legg interview on alignment,['Seth Herd'],2023-10-28T19:28:52Z,lesswrong,, 191004,https://www.lesswrong.com/posts/GZwCWnbtLBmji9A2i/a-potential-problem-with-using-solomonoff-induction-as-a,A potential problem with using Solomonoff induction as a prior,['JoshuaZ'],2011-04-07T19:27:41Z,lesswrong,, 191013,https://www.lesswrong.com/posts/MrcTzbYeZ3xnh9mGj/is-the-endowment-effect-due-to-incomparability,Is the Endowment Effect Due to Incomparability?,['Kevin Dorst'],2023-07-10T16:26:07Z,lesswrong,, 191031,https://www.lesswrong.com/posts/R3tXGhSCgYbp3kXm2/jack-clark-on-the-realities-of-ai-policy,Jack Clark on the realities of AI policy,['Kaj_Sotala'],2022-08-07T08:44:34Z,lesswrong,, 191054,https://www.lesswrong.com/posts/7aHCZbofofA5JeKgb/memetic-judo-3-the-intelligence-of-stochastic-parrots-v-2,Memetic Judo #3: The Intelligence of Stochastic Parrots v.2,['Max TK'],2023-08-20T15:18:24Z,lesswrong,, 191069,https://www.lesswrong.com/posts/Ce82o8mbBfH9N3Jes/evaluating-gpt-4-theory-of-mind-capabilities,Evaluating GPT-4 Theory of Mind Capabilities,"['gcmac', 'Nathan']",2023-08-10T17:57:26Z,lesswrong,, 191090,https://www.lesswrong.com/posts/ix4Tx3BMR4EwHdnS3/a-case-for-capabilities-work-on-ai-as-net-positive,A case for capabilities work on AI as net positive,['Noosphere89'],2023-02-27T21:12:44Z,lesswrong,, 191103,https://www.lesswrong.com/posts/DWWy7oqopwsuN3mpz/my-agenda-for-research-into-transformer-capabilities,My agenda for research into transformer capabilities - Introduction,['p.b.'],2022-04-05T21:23:11Z,lesswrong,, 191126,https://www.lesswrong.com/posts/RyaEZiNDtciYsZ3nH/continuity-in-uploading,Continuity in Uploading,['Error'],2014-01-17T22:57:23Z,lesswrong,, 191136,https://www.lesswrong.com/posts/wXCnvavM5F8asTdD3/paperclipgpt-4,PaperclipGPT(-4),['Michael Tontchev'],2023-03-14T22:03:24Z,lesswrong,, 191157,https://www.lesswrong.com/posts/4rgDink5LirgzwyqF/what-if-we-solve-ai-safety-but-no-one-cares,What if we solve AI Safety but no one cares,['142857'],2022-08-22T05:38:03Z,lesswrong,, 191174,https://www.lesswrong.com/posts/TQvSZ4n4BuntC22Af/decisions-are-not-about-changing-the-world-they-are-about,"Decisions are not about changing the world, they are about learning what world you live in",['shminux'],2018-07-28T08:41:26Z,lesswrong,, 191192,https://www.lesswrong.com/posts/v5z6rDuFPKM5dLpz8/probably-good-projects-for-the-ai-safety-ecosystem,Probably good projects for the AI safety ecosystem,['Ryan Kidd'],2022-12-05T02:26:42Z,lesswrong,, 191224,https://www.lesswrong.com/posts/ZKgDpktMhnyE6Gvcs/of-pumpkins-the-falcon-heavy-and-groucho-marx-high-level,"Of pumpkins, the Falcon Heavy, and Groucho Marx: High-Level discourse structure in ChatGPT",['Bill Benzon'],2022-12-08T22:25:49Z,lesswrong,, 191243,https://www.lesswrong.com/posts/qDoqwGs4Dhj27sbTj/what-are-the-numbers-in-mind-for-the-super-short-agi,What are the numbers in mind for the 
super-short AGI timelines so many long-termists are alarmed about?,['Evan_Gaensbauer'],2022-04-21T23:32:23Z,lesswrong,, 191253,https://www.lesswrong.com/posts/ehK7WtBsDfiCzXTw8/recommend-haist-resources-for-assessing-the-value-of-rlhf,Recommend HAIST resources for assessing the value of RLHF-related alignment research,"['Sam Marks', 'Xander Davies']",2022-11-05T20:58:07Z,lesswrong,, 191278,https://www.lesswrong.com/posts/Nqn2tkAHbejXTDKuW/openai-makes-humanity-less-safe,OpenAI makes humanity less safe,['Benquo'],2017-04-03T19:07:52Z,lesswrong,, 191300,https://www.lesswrong.com/posts/ddR8dExcEFJKJtWvR/how-evolutionary-lineages-of-llms-can-plan-their-own-future,How evolutionary lineages of LLMs can plan their own future and act on these plans,['Roman Leventov'],2022-12-25T18:11:19Z,lesswrong,, 191324,https://www.lesswrong.com/posts/LTtNXM9shNM9AC2mp/superintelligence-faq,Superintelligence FAQ,['Scott Alexander'],2016-09-20T19:00:00Z,lesswrong,, 191353,https://www.lesswrong.com/posts/opJxxfrN33xQx3eXu/wanting-and-liking,"""Wanting"" and ""liking""",['Mateusz Bagiński'],2023-08-30T14:52:05Z,lesswrong,, 191371,https://www.lesswrong.com/posts/xtHskkvvExNXnqEvi/proposed-alignment-technique-osnr-output-sanitization-via,Proposed Alignment Technique: OSNR (Output Sanitization via Noising and Reconstruction) for Safer Usage of Potentially Misaligned AGI,['sudo -i'],2023-05-29T01:35:52Z,lesswrong,, 191388,https://www.lesswrong.com/posts/PrNw3EBSwYfJyREjE/three-questions-about-mesa-optimizers,Three questions about mesa-optimizers,['Eric Neyman'],2022-04-12T02:58:00Z,lesswrong,, 191405,https://www.lesswrong.com/posts/kcEdqZqF98eCaZpfQ/bad-news-for-uploading,Bad news for uploading,['PhilGoetz'],2012-12-13T23:32:46Z,lesswrong,, 191414,https://www.lesswrong.com/posts/tq8uMdSDj8iRnGmTE/yoshua-bengio-slowing-down-development-of-ai-systems-passing,"Yoshua Bengio: ""Slowing down development of AI systems passing the Turing test""",['Roman Leventov'],2023-04-06T03:31:39Z,lesswrong,, 191452,https://www.lesswrong.com/posts/EHSJD8qTnuFHG73fd/poorly-aimed-death-rays,Poorly-Aimed Death Rays,['Thane Ruthenis'],2022-06-11T18:29:55Z,lesswrong,, 191469,https://www.lesswrong.com/posts/3dwADq2hjsJB2GAno/practical-everyday-human-strategizing,Practical everyday human strategizing,['anonymous'],2022-03-27T14:20:19Z,lesswrong,, 191486,https://www.lesswrong.com/posts/zk6RK3xFaDeJHsoym/connor-leahy-on-dying-with-dignity-eleutherai-and-conjecture,"Connor Leahy on Dying with Dignity, EleutherAI and Conjecture",['Michaël Trazzi'],2022-07-22T18:44:20Z,lesswrong,, 191513,https://www.lesswrong.com/posts/2H4huFGykKCP5Qu7C/a-simple-way-to-make-gpt-3-follow-instructions,A simple way to make GPT-3 follow instructions,['Quintin Pope'],2021-03-08T02:57:37Z,lesswrong,, 191529,https://www.lesswrong.com/posts/4JvnwryM8rGiPmWBy/why-small-phenomenons-are-relevant-to-morality-1,Why small phenomenons are relevant to morality ​,['Ryo'],2023-11-13T15:25:47Z,lesswrong,, 191546,https://www.lesswrong.com/posts/b8ijLnSE9aqXDLdGW/could-utility-functions-be-for-narrow-ai-only-and-downright,"Could utility functions be for narrow AI only, and downright antithetical to AGI?",['chaosmage'],2017-03-16T18:24:23Z,lesswrong,, 191565,https://www.lesswrong.com/posts/yfxp4Y6YETjjtChFh/the-mind-killer,The mind-killer,['Paul Crowley'],2009-05-02T16:49:20Z,lesswrong,, 191581,https://www.lesswrong.com/posts/nJHXQWCSByS4SxfQz/if-wentworth-is-right-about-natural-abstractions-it-would-be,"If Wentworth is right about natural abstractions, it would be bad for 
alignment",['Wuschel Schulz'],2022-12-08T15:19:02Z,lesswrong,, 191596,https://www.lesswrong.com/posts/8qq72ABNmY6WDNF2p/andrew-ng-wants-to-have-a-conversation-about-extinction-risk,Andrew Ng wants to have a conversation about extinction risk from AI,['Leon Lang'],2023-06-05T22:29:08Z,lesswrong,, 191605,https://www.lesswrong.com/posts/jpLJdFMGJiKKBNoLy/two-ideas-for-alignment-perpetual-mutual-distrust-and,"Two ideas for alignment, perpetual mutual distrust and induction",['APaleBlueDot'],2023-05-25T00:56:33Z,lesswrong,, 191620,https://www.lesswrong.com/posts/ngqFnDjCtWqQcSHXZ/safety-of-self-assembled-neuromorphic-hardware,Safety of Self-Assembled Neuromorphic Hardware,['Can Rager'],2022-12-26T18:51:26Z,lesswrong,, 191640,https://www.lesswrong.com/posts/xGPXDNGYebD3rgCoa/slightly-advanced-decision-theory-102-four-reasons-not-to-be,Slightly advanced decision theory 102: Four reasons not to be a (naive) utility maximizer,['Jan'],2021-11-23T11:02:38Z,lesswrong,, 191669,https://www.lesswrong.com/posts/C5x8GiDhiaEpu54jS/logical-uncertainty-as-probability,Logical Uncertainty as Probability,['gRR'],2012-04-29T22:26:35Z,lesswrong,, 191678,https://www.lesswrong.com/posts/6CjnFcsRHJesR9MEA/agi-timelines-in-governance-different-strategies-for,AGI Timelines in Governance: Different Strategies for Different Timeframes,"['simeon_c', 'AmberDawn']",2022-12-19T21:31:26Z,lesswrong,, 191705,https://www.lesswrong.com/posts/kDsywodAKgQAAAxE8/how-not-to-choose-a-research-project,How (not) to choose a research project,"['Garrett Baker', 'CatGoddess', 'Johannes C. Mayer']",2022-08-09T00:26:37Z,lesswrong,, 191728,https://www.lesswrong.com/posts/fEYyLjGpR7R8of6of/least-squares-concept-erasure-leace,LEAst-squares Concept Erasure (LEACE),['tricky_labyrinth'],2023-06-07T21:51:04Z,lesswrong,, 191737,https://www.lesswrong.com/posts/nBzTxJmLdebiqhY8q/aisafety-info-how-can-i-help-faq,"AISafety.info ""How can I help?"" FAQ","['steven0461', 'Severin T. Seehrich']",2023-06-05T22:09:58Z,lesswrong,, 191749,https://www.lesswrong.com/posts/2JGFTwcfjCjW3PY5p/how-might-we-make-better-use-of-ai-capabilities-research-for,How might we make better use of AI capabilities research for alignment purposes?,['ghostwheel'],2022-08-31T04:19:33Z,lesswrong,, 191759,https://www.lesswrong.com/posts/Afs6FtptMSWcAetxR/the-ai-control-problem-in-a-wider-intellectual-context,The AI Control Problem in a wider intellectual context,['philosophybear'],2023-01-13T00:28:16Z,lesswrong,, 191779,https://www.lesswrong.com/posts/XhHetxjWxZ6b85HK9/whole-brain-emulation-looking-at-progress-on-c-elgans,Whole Brain Emulation: Looking At Progress On C. 
elgans,['jefftk'],2011-10-29T15:21:09Z,lesswrong,, 191790,https://www.lesswrong.com/posts/hFaXe4Mi64xkE6Kqp/attributing-to-interactions-with-gcpd-and-gwpd,Attributing to interactions with GCPD and GWPD,['jenny'],2023-10-11T15:06:17Z,lesswrong,, 191801,https://www.lesswrong.com/posts/NptgfCiJvXyoRgdcz/is-there-a-simple-parameter-that-controls-human-working,"Is there a simple parameter that controls human working memory capacity, which has been set tragically low?",['Liron'],2019-08-23T22:10:40Z,lesswrong,, 191813,https://www.lesswrong.com/posts/5ZFgZbqp6Mi2xpYjK/an-explanation-for-every-token-using-an-llm-to-sample,An explanation for every token: using an LLM to sample another LLM,['Max H'],2023-10-11T00:53:55Z,lesswrong,, 191833,https://www.lesswrong.com/posts/yk5iRtFKesLe6i6sE/newcomb-s-lottery-problem,Newcomb's Lottery Problem,['Heighn'],2022-01-27T16:28:12Z,lesswrong,, 191843,https://www.lesswrong.com/posts/GwxotzGc2ipRNweg3/nuclear-espionage-and-ai-governance,Nuclear Espionage and AI Governance,['GAA'],2021-10-04T23:04:14Z,lesswrong,, 191869,https://www.lesswrong.com/posts/YL2RpsCsFuDBgz4HS/three-alignment-schemas-and-their-problems,Three Alignment Schemas & Their Problems,['Shoshannah Tekofsky'],2022-11-26T04:25:49Z,lesswrong,, 191895,https://www.lesswrong.com/posts/dJMt6ty2Bs34gLvAZ/notes-on-gratitude,Notes on Gratitude,['David Gross'],2021-01-13T20:37:30Z,lesswrong,, 191923,https://www.lesswrong.com/posts/wKZzLhhyADKqLAFan/clarifying-how-misalignment-can-arise-from-scaling-llms,Clarifying how misalignment can arise from scaling LLMs,['Util'],2023-08-19T14:16:05Z,lesswrong,, 191946,https://www.lesswrong.com/posts/ZYADHcQFXwwsaqrMq/virtue-ethics-and-why-the-rationalist-community-might-care,Virtue ethics and why the rationalist community might care about it.,['David Gross'],2020-10-22T03:53:29Z,lesswrong,, 191955,https://www.lesswrong.com/posts/PABtHv8X28jJdxrD6/racing-through-a-minefield-the-ai-deployment-problem,Racing through a minefield: the AI deployment problem,['HoldenKarnofsky'],2022-12-22T16:10:08Z,lesswrong,, 191986,https://www.lesswrong.com/posts/wiNSeNQT6jiBGZ3Pi/response-to-oren-etzioni-s-how-to-know-if-artificial,"Response to Oren Etzioni's ""How to know if artificial intelligence is about to destroy civilization""",['Daniel Kokotajlo'],2020-02-27T18:10:11Z,lesswrong,, 191997,https://www.lesswrong.com/posts/gZsTAsui5xqz7RTFt/elk-shaving,ELK shaving,['Miss Aligned AI'],2022-05-01T21:05:38Z,lesswrong,, 192008,https://www.lesswrong.com/posts/jXbrx7kfA4XHswfcu/what-is-a-training-step-vs-episode-in-machine-learning,"What is a training ""step"" vs. ""episode"" in machine learning?",['Evan R. 
Murphy'],2022-04-28T21:53:25Z,lesswrong,, 192017,https://www.lesswrong.com/posts/7wCeeqXYksnBeFSbx/a-fictional-ai-law-laced-w-alignment-theory,A fictional AI law laced w/ alignment theory,['MiguelDev'],2023-07-17T01:42:52Z,lesswrong,, 192030,https://www.lesswrong.com/posts/c7fDt27pBdDDrEaZo/precise-p-doom-isn-t-very-important-for-prioritization-or,Precise P(doom) isn't very important for prioritization or strategy,['harsimony'],2022-09-14T17:19:30Z,lesswrong,, 192045,https://www.lesswrong.com/posts/XAma8pvsKGJZsNLDt/betting-on-logic,Betting on Logic,['Sylvester Kollin'],2023-07-12T14:03:42Z,lesswrong,, 192054,https://www.lesswrong.com/posts/KdgD8wD8TYT2kbESj/the-moral-copernican-principle,The Moral Copernican Principle,['Legionnaire'],2023-05-02T03:25:40Z,lesswrong,, 192064,https://www.lesswrong.com/posts/pYWA7hYJmXnuyby33/alignment-implications-of-llm-successes-a-debate-in-one-act,Alignment Implications of LLM Successes: a Debate in One Act,['Zack_M_Davis'],2023-10-21T15:22:23Z,lesswrong,, 192087,https://www.lesswrong.com/posts/3b79GzkPXLfHyxRhv/researcher-incentives-cause-smoother-progress-on-benchmarks,Researcher incentives cause smoother progress on benchmarks,['ryan_greenblatt'],2021-12-21T04:13:49Z,lesswrong,, 192098,https://www.lesswrong.com/posts/9RdhJKPrYvsttsko9/the-mirror-chamber-a-short-story-exploring-the-anthropic,The Mirror Chamber: A short story exploring the anthropic measure function and why it can matter,['mako yass'],2022-11-03T06:47:56Z,lesswrong,, 192111,https://www.lesswrong.com/posts/KKdaaxce5BYtofJC4/biases-are-engines-of-cognition,Biases are engines of cognition,"['remember', 'Gabriel Alfour']",2022-11-30T16:47:58Z,lesswrong,, 192120,https://www.lesswrong.com/posts/qPKbLpSRRw89zdkJn/fading-novelty,Fading Novelty,['anonymous'],2018-07-25T21:36:06Z,lesswrong,, 192138,https://www.lesswrong.com/posts/NcoLpvv6wS9vLCho4/article-review-google-s-alphatensor,Article Review: Google's AlphaTensor,['Robert_AIZI'],2022-10-12T18:04:49Z,lesswrong,, 192154,https://www.lesswrong.com/posts/ZX9rgMfvZaxBseoYi/understanding-and-visualizing-sycophancy-datasets,Understanding and visualizing sycophancy datasets,['Nina Rimsky'],2023-08-16T05:34:07Z,lesswrong,, 192177,https://www.lesswrong.com/posts/LK8R8YmndScXjeynx/chatgpt-tells-20-versions-of-its-prototypical-story-with-a,"ChatGPT tells 20 versions of its prototypical story, with a short note on method",['Bill Benzon'],2023-10-14T15:27:58Z,lesswrong,, 192195,https://www.lesswrong.com/posts/jEXfacKpuy87vBYWe/disincentivizing-deception-in-mesa-optimizers-with-model,Disincentivizing deception in mesa optimizers with Model Tampering,['martinkunev'],2023-07-11T00:44:48Z,lesswrong,, 192220,https://www.lesswrong.com/posts/EKnDXLxqLAtkkix4w/personal-imitation-software,Personal imitation software,['Flaglandbase'],2022-03-07T07:55:36Z,lesswrong,, 192238,https://www.lesswrong.com/posts/mWJbYebezFdhoFHP6/learning-to-summarize-with-human-feedback-openai-1,"""Learning to Summarize with Human Feedback"" - OpenAI",['anonymous'],2020-09-07T17:59:33Z,lesswrong,, 192256,https://www.lesswrong.com/posts/hSc4yMamMzrHfJrKF/ai-governance-fundamentals-curriculum-and-application,AI Governance Fundamentals - Curriculum and Application,['Mauricio'],2021-11-30T02:19:59Z,lesswrong,, 192265,https://www.lesswrong.com/posts/mMBoPnFrFqQJKzDsZ/alignment-101-ch-2-reward-misspecification,Alignment 101 - Ch.2 - Reward Misspecification,['markov'],2023-10-18T20:39:35Z,lesswrong,, 
192300,https://www.lesswrong.com/posts/wTgS6J73XMjYCKGC7/new-ai-risks-research-institute-at-oxford-university,New AI risks research institute at Oxford University,['lukeprog'],2011-11-16T18:52:10Z,lesswrong,, 192310,https://www.lesswrong.com/posts/jEXdGBpD723DhizAZ/riffing-on-the-agent-type,Riffing on the agent type,['Quinn'],2022-12-08T00:19:38Z,lesswrong,, 192331,https://www.lesswrong.com/posts/z3GwFzt4fnBdPz5hd/possible-miracles,Possible miracles,"['Akash', 'Thomas Larsen']",2022-10-09T18:17:01Z,lesswrong,, 192358,https://www.lesswrong.com/posts/4f7cB6HKMT26N5t9b/what-are-the-limits-of-superintelligence,What are the limits of superintelligence?,['rainy'],2023-04-27T18:29:32Z,lesswrong,, 192376,https://www.lesswrong.com/posts/kBJsot7RT7njnX2B5/linkpost-dreamerv3-a-general-rl-architecture,[Linkpost] DreamerV3: A General RL Architecture,['simeon_c'],2023-01-12T03:55:30Z,lesswrong,, 192386,https://www.lesswrong.com/posts/Lw8enYm5EXyvbcjmt/sensor-exposure-can-compromise-the-human-brain-in-the-2020s,Sensor Exposure can Compromise the Human Brain in the 2020s,['trevor'],2023-10-25T22:40:35Z,lesswrong,, 192407,https://www.lesswrong.com/posts/LSzSFeZpwsJB4Nowu/notes-on-prudence,Notes on Prudence,['David Gross'],2020-11-19T16:14:17Z,lesswrong,, 192421,https://www.lesswrong.com/posts/HxRjHq3QG8vcYy4yy/the-stochastic-parrot-hypothesis-is-debatable-for-the-last,The Stochastic Parrot Hypothesis is debatable for the last generation of LLMs,"['Quentin FEUILLADE--MONTIXI', 'Pierre Peigné']",2023-11-07T16:12:20Z,lesswrong,, 192444,https://www.lesswrong.com/posts/iFrefmWAct3wYG7vQ/ai-labs-statements-on-governance,AI labs' statements on governance,['Zach Stein-Perlman'],2023-07-04T16:30:02Z,lesswrong,, 192481,https://www.lesswrong.com/posts/datp9aq4DAzEP8taM/mental-subagent-implications-for-ai-safety,Mental subagent implications for AI Safety,['moridinamael'],2021-01-03T18:59:50Z,lesswrong,, 192501,https://www.lesswrong.com/posts/ZHrpjDc3CepSeeBuE/gpt-3-a-disappointing-paper,GPT-3: a disappointing paper,['nostalgebraist'],2020-05-29T19:06:28Z,lesswrong,, 192524,https://www.lesswrong.com/posts/KJQjXAsvNkKmRkiXm/3-p-group-optimal-for-discussion-1,3-P Group optimal for discussion?,['AiresJL'],2020-07-13T22:23:45Z,lesswrong,, 192541,https://www.lesswrong.com/posts/2ew4NFZovxCLsvHKS/do-the-safety-properties-of-powerful-ai-systems-need-to-be,Do the Safety Properties of Powerful AI Systems Need to be Adversarially Robust? Why?,['DragonGod'],2023-02-09T13:36:00Z,lesswrong,, 192556,https://www.lesswrong.com/posts/TNfx89dh5KkcKrvho/ai-cooperation-in-practice,AI cooperation in practice,['cousin_it'],2010-07-30T16:21:51Z,lesswrong,, 192565,https://www.lesswrong.com/posts/FoP8FpkXG6cnwyvsG/uncompetitive-programming-with-gpt-3,Uncompetitive programming with GPT-3,['Bezzi'],2022-02-06T10:19:34Z,lesswrong,, 192580,https://www.lesswrong.com/posts/M3xpp7CZ2JaSafDJB/compute-governance-and-conclusions-transformative-ai-and,Compute Governance and Conclusions - Transformative AI and Compute [3/4],['lennart'],2021-10-14T08:23:12Z,lesswrong,, 192610,https://www.lesswrong.com/posts/GbXAeq6smRzmYRSQg/foresight-for-agi-safety-strategy-mitigating-risks-and,Foresight for AGI Safety Strategy: Mitigating Risks and Identifying Golden Opportunities,['jacquesthibs'],2022-12-05T16:09:46Z,lesswrong,, 192627,https://www.lesswrong.com/posts/Aq5X9tapacnk2QGY4/pausing-ai-developments-isn-t-enough-we-need-to-shut-it-all,Pausing AI Developments Isn't Enough. 
We Need to Shut it All Down by Eliezer Yudkowsky,['jacquesthibs'],2023-03-29T23:16:19Z,lesswrong,, 192647,https://www.lesswrong.com/posts/KtCgvaPRzWPCoFKWp/is-it-time-to-start-thinking-about-what-ai-friendliness,Is it time to start thinking about what AI Friendliness means?,['ZT5'],2022-04-11T09:33:00Z,lesswrong,, 192665,https://www.lesswrong.com/posts/cwz5mRM5KvtdXftzk/a-compressed-take-on-recent-disagreements,A compressed take on recent disagreements,['kman'],2022-07-04T04:39:58Z,lesswrong,, 192680,https://www.lesswrong.com/posts/2xmKZu73gZLDEQw7c/probability-knowledge-and-meta-probability,"Probability, knowledge, and meta-probability",['David_Chapman'],2013-09-17T00:02:57Z,lesswrong,, 192692,https://www.lesswrong.com/posts/Rm3FydAgNSYHZFrEs/algo-trading-is-a-central-example-of-ai-risk,Algo trading is a central example of AI risk,['Vanessa Kosoy'],2018-07-28T20:31:55Z,lesswrong,, 192709,https://www.lesswrong.com/posts/rLAHEcrjtsAbhH5Eq/ai-as-a-civilizational-risk-part-4-6-bioweapons-and,AI as a Civilizational Risk Part 4/6: Bioweapons and Philosophy of Modification,['PashaKamyshev'],2022-11-01T20:50:54Z,lesswrong,, 192731,https://www.lesswrong.com/posts/fLbQghg3ckLgTnyeS/an-analysis-of-the-digital-gaia-proposal-from-a-safety,An Analysis of the ‘Digital Gaia’ Proposal from a Safety Perspective,['marc/er'],2023-05-31T12:21:39Z,lesswrong,, 192761,https://www.lesswrong.com/posts/8mHtoM5gaW2QsL82c/anti-parfit-s-hitchhiker,Anti-Parfit's Hitchhiker,['k64'],2022-02-04T23:37:12Z,lesswrong,, 192770,https://www.lesswrong.com/posts/LXgEYLEFbcJnyzSEZ/apply-to-haist-maia-s-ai-governance-workshop-in-dc-feb-17-20,Apply to HAIST/MAIA’s AI Governance Workshop in DC (Feb 17-20),"['Phosphorous', 'Xander Davies', 'CMD', 'Fiona_Pollack', 'TJL']",2023-01-31T02:06:55Z,lesswrong,, 192783,https://www.lesswrong.com/posts/9y5RpyyFJX4GaqPLC/pink-shoggoths-what-does-alignment-look-like-in-practice,Pink Shoggoths: What does alignment look like in practice?,['Yuli_Ban'],2023-02-25T12:23:11Z,lesswrong,, 192810,https://www.lesswrong.com/posts/5F6vzHanbhNRTus2T/team-shard-status-report,Team Shard Status Report,['David Udell'],2022-08-09T05:33:49Z,lesswrong,, 192826,https://www.lesswrong.com/posts/nZYhs48pWsaCCgGfi/agi-isn-t-just-a-technology,AGI isn't just a technology,['Seth Herd'],2023-09-01T14:35:57Z,lesswrong,, 192838,https://www.lesswrong.com/posts/menRJyuyc5yzGdTGf/the-genie-in-the-bottle-an-introduction-to-ai-alignment-and,The Genie in the Bottle: An Introduction to AI Alignment and Risk,['Snorkelfarsan'],2023-05-25T16:30:51Z,lesswrong,, 192867,https://www.lesswrong.com/posts/sb9EKuMQBHzeQjG72/how-is-reinforcement-learning-possible-in-non-sentient,How is reinforcement learning possible in non-sentient agents?,['SomeoneKind'],2021-01-05T20:57:22Z,lesswrong,, 192876,https://www.lesswrong.com/posts/yGrL388z4WHKeerN2/fair-collective-efficient-altruism,Fair Collective Efficient Altruism,['Jobst Heitzig'],2022-11-25T09:38:19Z,lesswrong,, 192890,https://www.lesswrong.com/posts/FBG7AghvvP7fPYzkx/my-thoughts-on-openai-s-alignment-plan-1,My thoughts on OpenAI's alignment plan,['Akash'],2022-12-30T19:33:15Z,lesswrong,, 192930,https://www.lesswrong.com/posts/nqTkfrnE4CkbMtmHE/research-ideas-to-study-humans-with-ai-safety-in-mind,Research ideas to study humans with AI Safety in mind,['Riccardo Volpato'],2020-07-03T16:01:25Z,lesswrong,, 192957,https://www.lesswrong.com/posts/fhLAoL4GbSzESkuMv/money-value,money ≠ value,['stonefly'],2023-04-30T17:47:43Z,lesswrong,, 
192968,https://www.lesswrong.com/posts/ouXqWFxHZGsC3B8D7/draft-inferring-minimizers,Draft: Inferring minimizers,['Alex_Altair'],2023-04-01T20:20:49Z,lesswrong,, 192985,https://www.lesswrong.com/posts/9NNB9Fc8NTc9RYiFD/linkpost-acquisition-of-chess-knowledge-in-alphazero,[linkpost] Acquisition of Chess Knowledge in AlphaZero,['Quintin Pope'],2021-11-23T07:55:29Z,lesswrong,, 193001,https://www.lesswrong.com/posts/HpkZgmNskc2WwTy8N/in-defense-of-the-arms-races-that-end-arms-races,In Defense of the Arms Races… that End Arms Races,['Gentzel'],2020-01-15T21:30:01Z,lesswrong,, 193023,https://www.lesswrong.com/posts/dRAmQrvXAnwLEsFzv/the-security-mindset-s-risk-and-publishing-prosaic-alignment,"The Security Mindset, S-Risk and Publishing Prosaic Alignment Research",['marc/er'],2023-04-22T14:36:51Z,lesswrong,, 193046,https://www.lesswrong.com/posts/EKN8Zv4hZY3hMywKz/a-way-to-make-solving-alignment-10-000-times-easier-the,A way to make solving alignment 10.000 times easier. The shorter case for a massive open source simbox project.,['AlexFromSafeTransition'],2023-06-21T08:08:10Z,lesswrong,, 193068,https://www.lesswrong.com/posts/vZCSPffGLhJT3heqc/towards-a-solution-to-the-alignment-problem-via-objective,Towards a solution to the alignment problem via objective detection and evaluation,['Paul Colognese'],2023-04-12T15:39:32Z,lesswrong,, 193089,https://www.lesswrong.com/posts/gDgxnNSDeFQicaHtJ/frontier-ai-regulation,Frontier AI Regulation,['Zach Stein-Perlman'],2023-07-10T14:30:06Z,lesswrong,, 193119,https://www.lesswrong.com/posts/bgXQJiCyf7iqAzyrk/linkpost-faith-and-fate-limits-of-transformers-on,[Linkpost] Faith and Fate: Limits of Transformers on Compositionality,['Joe Kwon'],2023-06-16T15:05:00Z,lesswrong,, 193129,https://www.lesswrong.com/posts/M6CEJmgna6FTt9Yci/linkpost-concept-alignment-as-a-prerequisite-for-value,[Linkpost] Concept Alignment as a Prerequisite for Value Alignment,['Bogdan Ionut Cirstea'],2023-11-04T17:34:37Z,lesswrong,, 193140,https://www.lesswrong.com/posts/4orwmWSosNyev4ByS/reinforcement-learner-wireheading,Reinforcement Learner Wireheading,['Nate Showell'],2022-07-08T05:32:49Z,lesswrong,, 193159,https://www.lesswrong.com/posts/E5MwmuuryBLF3wWRZ/fyi-i-m-working-on-a-book-about-the-threat-of-agi-asi-for-a,FYI: I’m working on a book about the threat of AGI/ASI for a general audience. I hope it will be of value to the cause and the community,['Darren McKee'],2022-06-15T18:08:43Z,lesswrong,, 193173,https://www.lesswrong.com/posts/XBTzNv9MfjYFf3nGG/against-sacrificing-ai-transparency-for-generality-gains,Against sacrificing AI transparency for generality gains,['Ape in the coat'],2023-05-07T06:52:34Z,lesswrong,, 193183,https://www.lesswrong.com/posts/WXLJASckbjJcoaEmx/orthogonality-is-expensive,Orthogonality is expensive,['beren'],2023-04-03T10:20:44Z,lesswrong,, 193197,https://www.lesswrong.com/posts/haojehnyLfdxkgbCm/i-with-the-help-of-a-few-more-people-am-planning-to-create,I (with the help of a few more people) am planning to create an introduction to AI Safety that a smart teenager can understand. What am I missing?,['Tapatakt'],2022-11-14T16:12:23Z,lesswrong,, 
193212,https://www.lesswrong.com/posts/v5AJZyEY7YFthkzax/hedging-our-bets-the-case-for-pursuing-whole-brain-emulation,Hedging our Bets: The Case for Pursuing Whole Brain Emulation to Safeguard Humanity's Future,['inklesspen'],2010-03-01T02:32:34Z,lesswrong,, 193232,https://www.lesswrong.com/posts/7pCBPPFYgG7nBiNbL/why-ai-safety-is-hard,Why AI Safety is Hard,['Simon Möller'],2023-03-22T10:44:49Z,lesswrong,, 193257,https://www.lesswrong.com/posts/5jyRWhwvwzW5F2uGd/the-case-for-convexity,The Case for Convexity,['Jesse Richardson'],2023-08-09T14:09:37Z,lesswrong,, 193267,https://www.lesswrong.com/posts/mSYR46GZZPMmX7q93/corrigible-but-misaligned-a-superintelligent-messiah,Corrigible but misaligned: a superintelligent messiah,['zhukeepa'],2018-04-01T06:20:51Z,lesswrong,, 193285,https://www.lesswrong.com/posts/kaxqjCKJL6RNHwJLD/scalable-and-transferable-black-box-jailbreaks-for-language-2,Scalable And Transferable Black-Box Jailbreaks For Language Models Via Persona Modulation,"['Soroush Pour', 'rusheb', 'Quentin FEUILLADE--MONTIXI', 'Arush', 'scasper']",2023-11-07T17:59:37Z,lesswrong,, 193300,https://www.lesswrong.com/posts/znt3p9AGQDbYGf9Sy/the-problem-solution-matrix-calculating-the-probability-of,"The problem/solution matrix: Calculating the probability of AI safety ""on the back of an envelope""",['John_Maxwell'],2019-10-20T08:03:24Z,lesswrong,, 193318,https://www.lesswrong.com/posts/wZAa9fHZfR6zxtdNx/agi-systems-and-humans-will-both-need-to-solve-the-alignment,AGI systems & humans will both need to solve the alignment problem,['Jeffrey Ladish'],2023-02-24T03:29:21Z,lesswrong,, 193343,https://www.lesswrong.com/posts/ddd3eBucFN3ZdbFzx/psychological-disorders-and-problems,Psychological Disorders and Problems,"['adamShimi', 'Gabriel Alfour']",2022-12-12T18:15:49Z,lesswrong,, 193352,https://www.lesswrong.com/posts/YzdoNdfgfvXgC3wR4/google-deepmind-s-rt-2,Google DeepMind's RT-2,['SandXbox'],2023-08-11T11:26:17Z,lesswrong,, 193369,https://www.lesswrong.com/posts/s6BGofzFbEr4Tmxkj/value-uncertainty,Value uncertainty,['MichaelA'],2020-01-29T20:16:19Z,lesswrong,, 193406,https://www.lesswrong.com/posts/AE9yM7ZaPiZ662BF8/thoughts-on-ben-garfinkel-s-how-sure-are-we-about-this-ai,"Thoughts on Ben Garfinkel's ""How sure are we about this AI stuff?""",['David Scott Krueger (formerly: capybaralet)'],2019-02-06T19:09:21Z,lesswrong,, 193434,https://www.lesswrong.com/posts/nEFAno6PsCKnNgkd5/infra-bayesian-logic,Infra-Bayesian Logic,"['harfe', 'Yegreg']",2023-07-05T19:16:42Z,lesswrong,, 193449,https://www.lesswrong.com/posts/dX7vNKg4vex5vxWCW/making-decisions-under-moral-uncertainty-1,Making decisions under moral uncertainty,['MichaelA'],2019-12-30T01:49:49Z,lesswrong,, 193470,https://www.lesswrong.com/posts/fjgoMaBenyXcRDrbX/boundaries-membranes-and-ai-safety-compilation,«Boundaries/Membranes» and AI safety compilation,['Chipmonk'],2023-05-03T21:41:19Z,lesswrong,, 193487,https://www.lesswrong.com/posts/GEJtDHMfuW4vZ5msG/a-summary-of-current-work-in-ai-governance,A summary of current work in AI governance,['constructive'],2023-06-17T18:41:13Z,lesswrong,, 193526,https://www.lesswrong.com/posts/9Pz4Hg8qmATFkux4q/an-ethical-puzzle-about-brain-emulation,an ethical puzzle about brain emulation,['asr'],2013-12-13T21:53:11Z,lesswrong,, 193544,https://www.lesswrong.com/posts/3cmbR4oeimCTJ67G3/how-do-takeoff-speeds-affect-the-probability-of-bad-outcomes,How do takeoff speeds affect the probability of bad outcomes from AGI?,['KR'],2020-06-29T22:06:18Z,lesswrong,, 
193570,https://www.lesswrong.com/posts/CBpatGkdEqr4hCCAW/alignment-being-impossible-might-be-better-than-it-being,Alignment being impossible might be better than it being really difficult,['Martín Soto'],2022-07-25T23:57:21Z,lesswrong,, 193579,https://www.lesswrong.com/posts/B6LvjefPmHdBFts4z/how-will-they-feed-us,How will they feed us,['meijer1973'],2023-06-01T08:49:52Z,lesswrong,, 193605,https://www.lesswrong.com/posts/boBZkTqPdboX5u7g9/public-facing-censorship-is-safety-theater-causing,"Public-facing Censorship Is Safety Theater, Causing Reputational Damage",['Yitz'],2022-09-23T05:08:14Z,lesswrong,, 193627,https://www.lesswrong.com/posts/hHCBGXkQCbEqBEADE/launched-friendship-is-optimal,Launched: Friendship is Optimal,['iceman'],2012-11-15T04:57:48Z,lesswrong,, 193636,https://www.lesswrong.com/posts/8vpf46nLMDYPC6wA4/optimization-and-the-intelligence-explosion,Optimization and the Intelligence Explosion,['Eliezer Yudkowsky'],2015-03-11T19:00:16Z,lesswrong,, 193653,https://www.lesswrong.com/posts/zpz929rJvJ8xCBiGZ/research-notes-what-are-we-aligning-for,Research Notes: What are we aligning for?,['Shoshannah Tekofsky'],2022-07-08T22:14:00Z,lesswrong,, 193664,https://www.lesswrong.com/posts/nG4biq5ymbBviKYsJ/link-sarah-constantin-why-i-am-not-an-ai-doomer,"[Link] Sarah Constantin: ""Why I am Not An AI Doomer""",['lbThingrb'],2023-04-12T01:52:49Z,lesswrong,, 193681,https://www.lesswrong.com/posts/Q6oWinLaKXmGNWGLy/ai-timeline-prediction-data,AI timeline prediction data,['Stuart_Armstrong'],2012-08-22T11:49:52Z,lesswrong,, 193691,https://www.lesswrong.com/posts/bCQqgr324NMSa8t8Z/ai-cure-this-fake-person-s-fake-cancer,"AI, cure this fake person's fake cancer!",['Stuart_Armstrong'],2015-08-24T16:42:17Z,lesswrong,, 193716,https://www.lesswrong.com/posts/XiKidK9kNvJHX9Yte/avoid-the-abbreviation-flops-use-flop-or-flop-s-instead,"Avoid the abbreviation ""FLOPs"" – use ""FLOP"" or ""FLOP/s"" instead",['Daniel_Eth'],2022-07-10T10:44:38Z,lesswrong,, 193725,https://www.lesswrong.com/posts/5nfHFRC4RZ6S2zQyb/risks-from-gpt-4-byproduct-of-recursively-optimizing-ais,Risks from GPT-4 Byproduct of Recursively Optimizing AIs,['ben hayum'],2023-04-07T00:02:59Z,lesswrong,, 193755,https://www.lesswrong.com/posts/jtoPawEhLNXNxvgTT/bing-chat-is-blatantly-aggressively-misaligned,"Bing Chat is blatantly, aggressively misaligned",['evhub'],2023-02-15T05:29:45Z,lesswrong,, 193775,https://www.lesswrong.com/posts/hrMKnCKGjXFmWaqPx/ai-based-code-generation-using-gpt-j-6b,AI-Based Code Generation Using GPT-J-6B,['Tomás B.'],2021-06-16T15:05:26Z,lesswrong,, 193789,https://www.lesswrong.com/posts/AuA92E3rESZ5sgjYX/are-there-substantial-research-efforts-towards-aligning,Are there substantial research efforts towards aligning narrow AIs?,['Rossin'],2021-09-04T18:40:18Z,lesswrong,, 193806,https://www.lesswrong.com/posts/47ci9ixyEbGKWENwR/ai-timeline-predictions-are-we-getting-better,AI timeline predictions: are we getting better?,['Stuart_Armstrong'],2012-08-17T07:07:11Z,lesswrong,, 193823,https://www.lesswrong.com/posts/FBcK7dEBSTgsHLwET/inherently-interpretable-architectures,Inherently Interpretable Architectures,"['Robert Kralisch', 'teahorse', 'Eris', 'Sohaib Imran']",2023-06-30T20:43:23Z,lesswrong,, 193850,https://www.lesswrong.com/posts/BDTZBPunnvffCfKff/uncovering-latent-human-wellbeing-in-llm-embeddings,Uncovering Latent Human Wellbeing in LLM Embeddings,"['ChengCheng', 'Pedro Freire', 'Dan H', 'Scott Emmons']",2023-09-14T01:40:24Z,lesswrong,, 
193870,https://www.lesswrong.com/posts/2yjoEKE9ryuCitBRs/fear-mitigated-the-nuclear-threat-can-it-do-the-same-to-agi,"Fear mitigated the nuclear threat, can it do the same to AGI risks?",['Igor Ivanov'],2022-12-09T10:04:10Z,lesswrong,, 193885,https://www.lesswrong.com/posts/DTLRw6ZstkdghEgqA/chatgpt-vs-the-2-4-6-task,ChatGPT vs the 2-4-6 Task,['cwillu'],2023-01-25T06:59:10Z,lesswrong,, 193900,https://www.lesswrong.com/posts/NinDHrw4k4ZWaSjri/basic-question-about-llms-how-do-they-know-what-task-to,Basic Question about LLMs: how do they know what task to perform,['Garak'],2023-01-14T13:13:31Z,lesswrong,, 193913,https://www.lesswrong.com/posts/XNqCRLtc2syiDbQYn/linkpost-two-major-announcements-in-ai-governance-today,[Linkpost] Two major announcements in AI governance today,['anonymous'],2023-10-30T17:28:16Z,lesswrong,, 193940,https://www.lesswrong.com/posts/7X9KKqgZa7edknKPm/goal-directedness-my-baseline-beliefs,Goal-directedness: my baseline beliefs,['Morgan_Rogers'],2022-01-08T13:09:07Z,lesswrong,, 193957,https://www.lesswrong.com/posts/9LfyRzbK3ZLQ4Fueu/agentic-language-model-memes,Agentic Language Model Memes,['FactorialCode'],2020-08-01T18:03:31Z,lesswrong,, 193983,https://www.lesswrong.com/posts/PxaMA44u8WBz2fZDh/links-brain-mapping-emulation-news,[Links] Brain mapping/emulation news,['John_Maxwell'],2013-02-21T08:17:28Z,lesswrong,, 193996,https://www.lesswrong.com/posts/o42zugvMWbJXai6x3/chasing-infinities,Chasing Infinities,['Michael Bateman'],2021-08-16T01:19:24Z,lesswrong,, 194018,https://www.lesswrong.com/posts/9AWoAAA59hN9PEwT7/why-would-code-english-or-low-abstraction-high-abstraction,Why would code/English or low-abstraction/high-abstraction simplicity or brevity correspond?,['curi'],2020-09-04T19:46:29Z,lesswrong,, 194030,https://www.lesswrong.com/posts/agFgSc8D7yn852QDN/on-dollars-utility-and-crack-cocaine,"On dollars, utility, and crack cocaine",['PhilGoetz'],2009-04-04T00:00:25Z,lesswrong,, 194041,https://www.lesswrong.com/posts/dL3qxebM29WjwtSAv/would-it-make-sense-to-bring-a-civil-lawsuit-against-meta,Would it make sense to bring a civil lawsuit against Meta for recklessly open sourcing models?,['Nathan Helm-Burger'],2023-10-30T19:34:01Z,lesswrong,, 194059,https://www.lesswrong.com/posts/y5k77ZEyrAzY8E8EK/is-there-any-policy-for-a-fair-treatment-of-ais-whose,Is there any policy for a fair treatment of AIs whose friendliness is in doubt?,['nahoj'],2022-11-18T19:01:41Z,lesswrong,, 194068,https://www.lesswrong.com/posts/DcW8ebBp38z7fAmyq/confusion-about-neuroscience-cognitive-science-as-a-danger,Confusion about neuroscience/cognitive science as a danger for AI Alignment,['Samuel Nellessen'],2022-06-22T17:59:31Z,lesswrong,, 194083,https://www.lesswrong.com/posts/tNdSqrk6hpxfxmZqS/on-interpretability-s-robustness,On Interpretability's Robustness,['WCargo'],2023-10-18T13:18:52Z,lesswrong,, 194101,https://www.lesswrong.com/posts/bfsDSY3aakhDzS9DZ/instantiating-an-agent-with-gpt-4-and-text-davinci-003,Instantiating an agent with GPT-4 and text-davinci-003,['Max H'],2023-03-19T23:57:20Z,lesswrong,, 194117,https://www.lesswrong.com/posts/sruT3a9KhyLnYmLi7/identifying-semantic-neurons-mechanistic-circuits-and,"Identifying semantic neurons, mechanistic circuits & interpretability web apps","['Esben Kran', 'Neel Nanda']",2023-04-13T11:59:52Z,lesswrong,, 194142,https://www.lesswrong.com/posts/RHurATLtM7S5JWe9v/factorio-accelerando-empathizing-with-empires-and-moderate,"Factorio, Accelerando, Empathizing with Empires and Moderate Takeoffs",['Raemon'],2018-02-04T02:33:43Z,lesswrong,, 
194158,https://www.lesswrong.com/posts/nCeyBbhtJhToBFmrL/cheat-sheet-of-ai-x-risk,Cheat sheet of AI X-risk,['amaury lorin'],2023-06-29T04:28:32Z,lesswrong,, 194184,https://www.lesswrong.com/posts/Pk5Nyd5ByXwHWXX5r/the-application-of-the-secretary-problem-to-real-life-dating,The application of the secretary problem to real life dating,['Elo'],2015-09-29T22:28:07Z,lesswrong,, 194198,https://www.lesswrong.com/posts/BMkGb2ZzXdiXHaxn4/an-issue-with-macaskill-s-evidentialist-s-wager,An issue with MacAskill's Evidentialist's Wager,['Martín Soto'],2022-09-21T22:02:48Z,lesswrong,, 194209,https://www.lesswrong.com/posts/EvX7XRgYjiwm2atTw/lecun-says-making-a-utility-function-is-intractable,LeCun says making a utility function is intractable,['Iknownothing'],2023-06-28T18:02:14Z,lesswrong,, 194219,https://www.lesswrong.com/posts/4Hnso8NMAeeYs8Cta/revealing-intentionality-in-language-models-through-adavae,Revealing Intentionality In Language Models Through AdaVAE Guided Sampling,['jdp'],2023-10-20T07:32:29Z,lesswrong,, 194239,https://www.lesswrong.com/posts/aXSrXgNS5D3Zstqtw/sandboxing-by-physical-simulation,Sandboxing by Physical Simulation?,['moridinamael'],2018-08-01T00:36:32Z,lesswrong,, 194253,https://www.lesswrong.com/posts/J5j3wypPgcLyrKmwZ/containing-the-ai-inside-a-simulated-reality,Containing the AI... Inside a Simulated Reality,['HumaneAutomation'],2020-10-31T16:16:48Z,lesswrong,, 194264,https://www.lesswrong.com/posts/Jj2cThYbNfgZPQkMw/100-dinners-and-a-workshop-information-preservation-and,100 Dinners And A Workshop: Information Preservation And Goals,['Stephen Fowler'],2023-03-28T03:13:06Z,lesswrong,, 194278,https://www.lesswrong.com/posts/uyPo8pfEtBffyPdxf/the-other-side-of-the-tidal-wave,The other side of the tidal wave,['KatjaGrace'],2023-11-03T05:40:05Z,lesswrong,, 194292,https://www.lesswrong.com/posts/zJZvoiwydJ5zvzTHK/the-allais-paradox,The Allais Paradox,['Eliezer Yudkowsky'],2008-01-19T03:05:32Z,lesswrong,, 194304,https://www.lesswrong.com/posts/N7KYWJPmyzB6bJSYT/the-next-ai-winter-will-be-due-to-energy-costs-1,The next AI winter will be due to energy costs,['hippke'],2020-11-24T16:53:50Z,lesswrong,, 194327,https://www.lesswrong.com/posts/JbmDxh5WLjX8AiTQv/control-symmetry-why-we-might-want-to-start-investigating,Control Symmetry: why we might want to start investigating asymmetric alignment interventions,['domenicrosati'],2023-11-11T17:27:11Z,lesswrong,, 194344,https://www.lesswrong.com/posts/gbaat54g4h6AA9pof/the-commercial-incentive-to-intentionally-train-ai-to,The commercial incentive to intentionally train AI to deceive us,['Derek M. Jones'],2022-12-29T11:30:28Z,lesswrong,, 
194361,https://www.lesswrong.com/posts/TDzvtnQ8cpfdp7y4t/did-ai-pioneers-not-worry-much-about-ai-risks,Did AI pioneers not worry much about AI risks?,['lisperati'],2020-02-09T19:58:53Z,lesswrong,, 194377,https://www.lesswrong.com/posts/6gJMwXuWd3oskaLCo/what-is-everyone-doing-in-ai-governance,What is everyone doing in AI governance,['Igor Ivanov'],2023-07-08T15:16:10Z,lesswrong,, 194406,https://www.lesswrong.com/posts/ubiSawqhHe2GAHXWX/how-should-we-think-about-the-decision-relevance-of-models,How should we think about the decision relevance of models estimating p(doom)?,['Mo Putera'],2023-05-11T04:16:56Z,lesswrong,, 194426,https://www.lesswrong.com/posts/jP3vRbtvDtBtgvkeb/clarifying-consequentialists-in-the-solomonoff-prior,Clarifying Consequentialists in the Solomonoff Prior,['Vlad Mikulik'],2018-07-11T02:35:57Z,lesswrong,, 194442,https://www.lesswrong.com/posts/u8yT9bbabmdnpgDaQ/a-story-of-ai-risk-instructgpt-n,A Story of AI Risk: InstructGPT-N,['peterbarnett'],2022-05-26T23:22:13Z,lesswrong,, 194457,https://www.lesswrong.com/posts/tAtp4odpziBDdvdXL/microsoft-and-openai-stop-telling-chatbots-to-roleplay-as-ai,"Microsoft and OpenAI, stop telling chatbots to roleplay as AI",['hold_my_fish'],2023-02-17T19:55:33Z,lesswrong,, 194468,https://www.lesswrong.com/posts/gts62zc6roEWzZEsg/natural-abstraction-convergent-preferences-over-information-4,Natural Abstraction: Convergent Preferences Over Information Structures,['paulom'],2023-10-14T18:34:42Z,lesswrong,, 194485,https://www.lesswrong.com/posts/4BnhQ9Gmo8TMSQb6H/could-transformer-network-models-learn-motor-planning-like,Could transformer network models learn motor planning like they can learn language and image generation?,['mu_(negative)'],2023-04-23T17:24:09Z,lesswrong,, 194494,https://www.lesswrong.com/posts/MTpCeShqRmu4nkgon/critique-my-model-the-ev-of-agi-to-selfish-individuals,Critique my Model: The EV of AGI to Selfish Individuals,['ozziegooen'],2018-04-08T20:04:17Z,lesswrong,, 194515,https://www.lesswrong.com/posts/svWQMXsmWoZCgLPBk/on-the-naturalistic-study-of-the-linguistic-behavior-of,On the naturalistic study of the linguistic behavior of artificial intelligence,['Bill Benzon'],2023-01-03T09:06:42Z,lesswrong,, 194524,https://www.lesswrong.com/posts/MmmPyJicaaJRk4Eg2/the-limit-of-language-models,The Limit of Language Models,['DragonGod'],2023-01-06T23:53:33Z,lesswrong,, 194544,https://www.lesswrong.com/posts/8wWb9zhwk8qH4cSK7/aligned-objectives-prize-competition,Aligned Objectives Prize Competition,['Prometheus'],2023-06-15T12:42:22Z,lesswrong,, 194557,https://www.lesswrong.com/posts/phTQMkcH9Ttc4P9LB/new-article-from-oren-etzioni,New article from Oren Etzioni,['Aryeh Englander'],2020-02-25T15:25:22Z,lesswrong,, 194580,https://www.lesswrong.com/posts/F3aESx4JWWEDAFEiH/nate-soares-on-the-ultimate-newcomb-s-problem,Nate Soares on the Ultimate Newcomb's Problem,['Rob Bensinger'],2021-10-31T19:42:01Z,lesswrong,, 194592,https://www.lesswrong.com/posts/pR5Wn7bJQWRPYNGsF/troubles-with-cev-part1-cev-sequence,Troubles With CEV Part1 - CEV Sequence,['diegocaleiro'],2012-02-28T04:15:30Z,lesswrong,, 194614,https://www.lesswrong.com/posts/jMpCXKoCgRp8xmyiN/asot-policy-trajectory-visualization,[ASoT] Policy Trajectory Visualization,['Ulisse Mini'],2023-02-07T00:13:13Z,lesswrong,, 194624,https://www.lesswrong.com/posts/kipMvuaK3NALvFHc9/what-an-actually-pessimistic-containment-strategy-looks-like,What an actually pessimistic containment strategy looks like,['lc'],2022-04-05T00:19:50Z,lesswrong,, 
194642,https://www.lesswrong.com/posts/6CGvBneihYuq73kAy/value-drift-threat-models,Value drift threat models,['Garrett Baker'],2023-05-12T23:03:22Z,lesswrong,, 194662,https://www.lesswrong.com/posts/LiDKveFnmL59u5Ceq/how-might-an-alignment-attractor-look-like,How Might an Alignment Attractor Look like?,['shminux'],2022-04-28T06:46:11Z,lesswrong,, 194683,https://www.lesswrong.com/posts/cHwCBTwWiTdsqXNyn/helen-toner-on-china-cset-and-ai,"Helen Toner on China, CSET, and AI",['Rob Bensinger'],2019-04-21T04:10:21Z,lesswrong,, 194701,https://www.lesswrong.com/posts/nbDFj4ZS6WSDKtSk4/machines-vs-memes-part-3-imitation-and-memes,Machines vs Memes Part 3: Imitation and Memes,['ceru23'],2022-06-01T13:36:15Z,lesswrong,, 194720,https://www.lesswrong.com/posts/pjesEx526ngE6dnmr/rlhf-does-not-appear-to-differentially-cause-mode-collapse,RLHF does not appear to differentially cause mode-collapse,"['Arthur Conmy', 'beren']",2023-03-20T15:39:45Z,lesswrong,, 194736,https://www.lesswrong.com/posts/tggt6cWtBFCYJrbw8/wide-vs-tall-superintelligence,"""Wide"" vs ""Tall"" superintelligence",['Templarrr'],2023-03-19T19:23:58Z,lesswrong,, 194745,https://www.lesswrong.com/posts/EPEvBANNYN9rxRQFB/alignment-risk-doesn-t-require-superintelligence,Alignment Risk Doesn't Require Superintelligence,['JustisMills'],2022-06-15T03:12:57Z,lesswrong,, 194758,https://www.lesswrong.com/posts/bQkp4pi5Ra4SgSSxn/what-makes-an-idea-understandable-on-architecturally-and,What Makes an Idea Understandable? On Architecturally and Culturally Natural Ideas.,"['NickyP', 'Peter S. Park', 'Stephen Fowler']",2022-08-16T02:09:40Z,lesswrong,, 194780,https://www.lesswrong.com/posts/Ls2i4fgbEy9XarxzW/would-this-be-progress-in-solving-embedded-agency,Would this be Progress in Solving Embedded Agency?,['Johannes C. Mayer'],2023-11-14T09:08:11Z,lesswrong,, 194791,https://www.lesswrong.com/posts/5Ae8rcYjWAe6zfdQs/what-more-compute-does-for-brain-like-models-response-to,What more compute does for brain-like models: response to Rohin,['Nathan Helm-Burger'],2022-04-13T03:40:34Z,lesswrong,, 194814,https://www.lesswrong.com/posts/ZSkForf7e5nEKGDdb/goodhart-taxonomy-agreement-1,Goodhart Taxonomy: Agreement,['Ben Pace'],2018-07-01T03:50:45Z,lesswrong,, 194830,https://www.lesswrong.com/posts/q9DbfYfFzkotno9hG/example-decision-theory-problem-agent-simulates-predictor,"Example decision theory problem: ""Agent simulates predictor""",['cousin_it'],2011-05-19T15:16:29Z,lesswrong,, 194842,https://www.lesswrong.com/posts/cJ8qybprj545XggKd/how-i-learned-to-stop-worrying-and-love-mum,How I Learned to Stop Worrying and Love MUM,['Waddington'],2021-05-20T07:57:26Z,lesswrong,, 194864,https://www.lesswrong.com/posts/9Bho6d6jkkWhbvkMe/link-palm-2-technical-report,[Link] PaLM 2 Technical Report,['marc/er'],2023-05-10T20:28:16Z,lesswrong,, 194887,https://www.lesswrong.com/posts/uFEWwnyToRdB7b44o/killing-recurrent-memory-over-self-attention,Killing Recurrent Memory Over Self Attention?,['Del Nobolo'],2023-06-06T23:02:50Z,lesswrong,, 194900,https://www.lesswrong.com/posts/vw9QAviBxcGodMHfN/international-cooperation-vs-ai-arms-race,International cooperation vs. AI arms race,['Brian_Tomasik'],2013-12-05T01:09:33Z,lesswrong,, 
194923,https://www.lesswrong.com/posts/mNR3tQDCxWL9XWv5u/on-the-loebner-silver-prize-a-turing-test,On the Loebner Silver Prize (a Turing test),['hold_my_fish'],2023-05-07T00:39:13Z,lesswrong,, 194940,https://www.lesswrong.com/posts/jrtpmdk68R2yZ7ufv/gradient-hacking-via-actual-hacking,Gradient hacking via actual hacking,['Max H'],2023-05-10T01:57:10Z,lesswrong,, 194955,https://www.lesswrong.com/posts/TpKktHS8GszgmMw4B/ilya-sutskever-s-thoughts-on-ai-safety-july-2023-a,Ilya Sutskever's thoughts on AI safety (July 2023): a transcript with my comments,['mishka'],2023-08-10T19:07:45Z,lesswrong,, 194980,https://www.lesswrong.com/posts/XFpDTCHZZ4wpMT8PZ/a-model-i-use-when-making-plans-to-reduce-ai-x-risk,A model I use when making plans to reduce AI x-risk,['Ben Pace'],2018-01-19T00:21:45Z,lesswrong,, 195011,https://www.lesswrong.com/posts/KuBMKQnAsYBGP4rkZ/modest-superintelligences,Modest Superintelligences,['Wei Dai'],2012-03-22T00:29:03Z,lesswrong,, 195027,https://www.lesswrong.com/posts/oSPhmfnMGgGrpe7ib/properties-of-current-ais-and-some-predictions-of-the,Properties of current AIs and some predictions of the evolution of AI from the perspective of scale-free theories of agency and regulative development,['Roman Leventov'],2022-12-20T17:13:01Z,lesswrong,, 195061,https://www.lesswrong.com/posts/uiQ2AZp9DrgsogzxE/a-possible-check-against-motivated-reasoning-using-elicit,A possible check against motivated reasoning using elicit.org,['david reinstein'],2022-05-18T20:52:36Z,lesswrong,, 195070,https://www.lesswrong.com/posts/L9xDcTBwNyciyNbb2/anthropic-or-charting-a-path-to-ai-accountability,Anthropic | Charting a Path to AI Accountability,['Gabriel Mukobi'],2023-06-14T04:43:34Z,lesswrong,, 195102,https://www.lesswrong.com/posts/oyK6fYYnBi5Nx5pfE/is-recursive-self-improvement-relevant-in-the-deep-learning,"Is ""Recursive Self-Improvement"" Relevant in the Deep Learning Paradigm?",['DragonGod'],2023-04-06T07:13:32Z,lesswrong,, 195119,https://www.lesswrong.com/posts/8CSJvfcvDGioNQF87/draft-detecting-optimization,Draft: Detecting optimization,['Alex_Altair'],2023-03-29T20:17:47Z,lesswrong,, 195133,https://www.lesswrong.com/posts/BRHAWp7T3srrcuWDS/counterfactual-contracts,Counterfactual Contracts,['harsimony'],2021-09-16T15:20:22Z,lesswrong,, 195150,https://www.lesswrong.com/posts/p3b5T8YbijEXDAfmR/intuitive-explanation-of-solomonoff-induction,Intuitive Explanation of Solomonoff Induction,['lukeprog'],2011-12-01T06:56:51Z,lesswrong,, 195168,https://www.lesswrong.com/posts/WWmGEix82myHjhHYB/using-pict-against-pastagpt-jailbreaking,Using PICT against PastaGPT Jailbreaking,['Quentin FEUILLADE--MONTIXI'],2023-02-09T04:30:32Z,lesswrong,, 195186,https://www.lesswrong.com/posts/bFcbG2TQCCE3krhEY/existential-risk-from-ai-without-an-intelligence-explosion,Existential risk from AI without an intelligence explosion,['AlexMennen'],2017-05-25T16:44:05Z,lesswrong,, 195207,https://www.lesswrong.com/posts/qEFG8BK9HCnKcHuGH/an-extension-of-aumann-s-approach-for-reducing-game-theory,An extension of Aumann's approach for reducing game theory to bayesian decision theory to include EDT and UDT like agents,['Karl Brisebois'],2022-02-09T04:17:40Z,lesswrong,, 195230,https://www.lesswrong.com/posts/7KmBfTjmRZSNaoCCi/another-argument-that-you-will-let-the-ai-out-of-the-box,Another argument that you will let the AI out of the box,['Garrett Baker'],2022-04-19T21:54:39Z,lesswrong,, 
195244,https://www.lesswrong.com/posts/hnhybBcpDsYeEWNAK/llms-pure-reason-without-the-critique,LLMs — Pure Reason Without The Critique,['Rosco-Hunter'],2023-10-11T13:11:01Z,lesswrong,, 195259,https://www.lesswrong.com/posts/fkqvztgszJpqmDHom/govai-towards-best-practices-in-agi-safety-and-governance-a,GovAI: Towards best practices in AGI safety and governance: A survey of expert opinion,['Zach Stein-Perlman'],2023-05-15T01:42:41Z,lesswrong,, 195277,https://www.lesswrong.com/posts/sDn5MnBGdvKMenHRr/nokens-a-potential-method-of-investigating-glitch-tokens,Nokens: A potential method of investigating glitch tokens,['Hoagy'],2023-03-15T16:23:38Z,lesswrong,, 195295,https://www.lesswrong.com/posts/Q37Ay82dfb3wnKjTr/12-career-related-questions-that-may-or-may-not-be-helpful,12 career-related questions that may (or may not) be helpful for people interested in alignment research,['Akash'],2022-12-12T22:36:22Z,lesswrong,, 195315,https://www.lesswrong.com/posts/eD3Tp5JukiFFACeH5/harry-potter-and-the-data-centers-of-doom,Harry Potter and the Data Centers of Doom,['RomanS'],2023-03-31T10:42:20Z,lesswrong,, 195330,https://www.lesswrong.com/posts/7eH2wBbYctWEY2dKq/nature-less-than-nurture-for-ais,Nature < Nurture for AIs,['scottviteri'],2023-06-04T20:38:08Z,lesswrong,, 195347,https://www.lesswrong.com/posts/ZFuwgLfRH4qyFTMib/are-pre-specified-utility-functions-about-the-real-world,Are pre-specified utility functions about the real world possible in principle?,['mlogan'],2018-07-11T18:46:49Z,lesswrong,, 195366,https://www.lesswrong.com/posts/G226kJ2SkwLiNy5tL/which-parts-of-the-existing-internet-are-already-likely-to,Which parts of the existing internet are already likely to be in (GPT-5/other soon-to-be-trained LLMs)'s training corpus?,['AnnaSalamon'],2023-03-29T05:17:28Z,lesswrong,, 195377,https://www.lesswrong.com/posts/PRMJCbBhsGgu5A6Ty/i-m-trying-out-asteroid-mindset,"I'm trying out ""asteroid mindset""",['Alex_Altair'],2022-06-03T13:35:49Z,lesswrong,, 195388,https://www.lesswrong.com/posts/73SotZnDbsYpxfnuQ/some-thoughts-on-singularity-strategies,Some Thoughts on Singularity Strategies,['Wei Dai'],2011-07-13T02:41:17Z,lesswrong,, 195413,https://www.lesswrong.com/posts/sbz2sCeAuarjmvkC8/a-first-success-story-for-outer-alignment-instructgpt,A first success story for Outer Alignment: InstructGPT,['Noosphere89'],2022-11-08T22:52:54Z,lesswrong,, 195437,https://www.lesswrong.com/posts/PBWqFL6P8XphH79Pm/applying-reinforcement-learning-theory-to-reduce-felt,Applying reinforcement learning theory to reduce felt temporal distance,['Kaj_Sotala'],2014-01-26T09:17:29Z,lesswrong,, 195448,https://www.lesswrong.com/posts/tTg4bn5rxHYqQJXhD/uber-self-driving-crash,Uber Self-Driving Crash,['jefftk'],2019-11-07T15:00:02Z,lesswrong,, 195472,https://www.lesswrong.com/posts/2PjoD96Kk7f2uoiEp/inflection-ai-new-startup-related-to-language-models,Inflection AI: New startup related to language models,['Nisan'],2022-04-02T05:35:25Z,lesswrong,, 195487,https://www.lesswrong.com/posts/BQm5wgtJirrontgRt/evaluating-language-model-behaviours-for-shutdown-avoidance,Evaluating Language Model Behaviours for Shutdown Avoidance in Textual Scenarios,"['Simon Lermen', 'Teun van der Weij', 'Leon Lang']",2023-05-16T10:53:33Z,lesswrong,, 195510,https://www.lesswrong.com/posts/ZZ57cBkpQ5hpAux9T/philosophical-cyborg-part-2-or-the-good-successor,"Philosophical Cyborg (Part 2)...or, The Good Successor",['ukc10014'],2023-06-21T15:43:07Z,lesswrong,, 195540,https://www.lesswrong.com/posts/YcFpJC5pJFdYdEuNN/alignment-101-ch-1-agi,Alignment 101 - Ch.1 - AGI,['markov'],2023-10-18T20:38:58Z,lesswrong,, 
195571,https://www.lesswrong.com/posts/aAtpQjEaX3Fwzcfdj/it-can-t-be-mesa-optimizers-all-the-way-down-or-else-it-can,It Can't Be Mesa-Optimizers All The Way Down (Or Else It Can't Be Long-Term Supercoherence?),['Austin Witte'],2023-03-31T07:21:57Z,lesswrong,, 195580,https://www.lesswrong.com/posts/Nt8yDxkiMF8YAsNYA/operationalizing-timelines,Operationalizing timelines,['Zach Stein-Perlman'],2023-03-10T16:30:02Z,lesswrong,, 195593,https://www.lesswrong.com/posts/r4ksbGjoighPsXyXi/the-pointers-problem-distilled,"The pointers problem, distilled",['Nina Rimsky'],2022-05-26T22:44:04Z,lesswrong,, 195611,https://www.lesswrong.com/posts/7ypW3vf8xLF9D6RP7/opengpt-2-we-replicated-gpt-2-because-you-can-too,OpenGPT-2: We Replicated GPT-2 Because You Can Too,['avturchin'],2019-08-23T11:32:43Z,lesswrong,, 195620,https://www.lesswrong.com/posts/2eaLH7zp6pxdQwYSH/a-brief-overview-of-ai-safety-alignment-orgs-fields,"A Brief Overview of AI Safety/Alignment Orgs, Fields, Researchers, and Resources for ML Researchers",['Austin Witte'],2023-02-02T01:02:01Z,lesswrong,, 195630,https://www.lesswrong.com/posts/npZMkydRMqAqMqbFb/non-orthogonality-implies-uncontrollable-superintelligence,Non-orthogonality implies uncontrollable superintelligence,['Stuart_Armstrong'],2012-04-30T13:53:54Z,lesswrong,, 195639,https://www.lesswrong.com/posts/GezzauYzGTkcwgkA7/solving-the-two-envelopes-problem,Solving the two envelopes problem,['rstarkov'],2012-08-09T13:42:20Z,lesswrong,, 195649,https://www.lesswrong.com/posts/QDczBduZorG4dxZiW/sam-altman-s-sister-annie-altman-claims-sam-has-severely,"Sam Altman's sister, Annie Altman, claims Sam has (severely) abused her",['pl5015'],2023-10-07T21:06:49Z,lesswrong,, 195660,https://www.lesswrong.com/posts/398Swu6jmczzSRvHy/superintelligence-13-capability-control-methods,Superintelligence 13: Capability control methods,['KatjaGrace'],2014-12-09T02:00:34Z,lesswrong,, 195686,https://www.lesswrong.com/posts/me34KqMLwJNYAZKbs/is-evolutionary-influence-the-mesa-objective-that-we-re,Is evolutionary influence the mesa objective that we're interested in?,['David Johnston'],2022-05-03T01:18:07Z,lesswrong,, 195701,https://www.lesswrong.com/posts/sTboWTyf9MfERnsKp/gwern-about-centaurs-there-is-no-chance-that-any-useful-man,"Gwern about centaurs: there is no chance that any useful man+machine combination will work together for more than 10 years, as humans soon will be only a liability",['avturchin'],2018-12-15T21:32:55Z,lesswrong,, 195712,https://www.lesswrong.com/posts/bDG4swEX6smpRZvsX/ai-safety-should-be-made-more-accessible-using-non-text,AI safety should be made more accessible using non text-based media,['Massimog'],2022-05-10T03:14:13Z,lesswrong,, 195728,https://www.lesswrong.com/posts/pfL6sAjMfRsZjyjsZ/some-basics-of-the-hypercompetence-theory-of-government,Some basics of the hypercompetence theory of government,['trevor'],2023-07-09T19:12:53Z,lesswrong,, 195744,https://www.lesswrong.com/posts/CeZXDmp8Z363XaM6b/discontinuous-progress-in-history-an-update,Discontinuous progress in history: an update,['KatjaGrace'],2020-04-14T00:00:02Z,lesswrong,, 195764,https://www.lesswrong.com/posts/ZdEhEeg9qnxwFgPMf/a-short-calculation-about-a-twitter-poll,A short calculation about a Twitter poll,['Ege Erdil'],2023-08-14T19:48:53Z,lesswrong,, 195778,https://www.lesswrong.com/posts/axsizR4vEX8qtuLpR/hlai-2018-field-report,HLAI 2018 Field Report,['Gordon Seidoh Worley'],2018-08-29T00:11:26Z,lesswrong,, 
195793,https://www.lesswrong.com/posts/sM2sANArtSJE6duZZ/where-are-people-thinking-and-talking-about-global,Where are people thinking and talking about global coordination for AI safety?,['Wei Dai'],2019-05-22T06:24:02Z,lesswrong,, 195803,https://www.lesswrong.com/posts/S8sNue3KzzWQpRjv5/lw-is-to-rationality-as-aixi-is-to-intelligence,LW is to rationality as AIXI is to intelligence,['XiXiDu'],2011-03-06T20:24:28Z,lesswrong,, 195818,https://www.lesswrong.com/posts/pL56xPoniLvtMDQ4J/the-inner-alignment-problem,The Inner Alignment Problem,"['evhub', 'Chris van Merwijk', 'Vlad Mikulik', 'Joar Skalse', 'Scott Garrabrant']",2019-06-04T01:20:36Z,lesswrong,, 195844,https://www.lesswrong.com/posts/rX4AdyrtjYrEX9xnn/deceptive-ai-vs-shifting-instrumental-incentives,Deceptive AI vs. shifting instrumental incentives,['Aryeh Englander'],2023-06-26T18:09:08Z,lesswrong,, 195862,https://www.lesswrong.com/posts/XPErvb8m9FapXCjhA/adaptation-executers-not-fitness-maximizers,"Adaptation-Executers, not Fitness-Maximizers",['Eliezer Yudkowsky'],2007-11-11T06:39:18Z,lesswrong,, 195879,https://www.lesswrong.com/posts/6bdb4F6Lif5AanRAd/cake-or-death,"Cake, or death!",['Stuart_Armstrong'],2012-10-25T10:33:27Z,lesswrong,, 195895,https://www.lesswrong.com/posts/u2gWM2poRPkBPFeLc/the-joys-of-conjugate-priors,The Joys of Conjugate Priors,['TCB'],2011-05-21T02:41:22Z,lesswrong,, 195917,https://www.lesswrong.com/posts/uNjcYeMXXsnopajMg/thoughts-on-the-5-10-problem,Thoughts on the 5-10 Problem,['Tofly'],2019-07-18T18:56:14Z,lesswrong,, 195927,https://www.lesswrong.com/posts/BZfnJGe5S6KtB5pjQ/ai-risk-new-executive-summary,"AI risk, new executive summary",['Stuart_Armstrong'],2014-04-18T10:46:00Z,lesswrong,, 195949,https://www.lesswrong.com/posts/CZQuFoqgPXQawH9aL/new-report-intelligence-explosion-microeconomics,New report: Intelligence Explosion Microeconomics,['Eliezer Yudkowsky'],2013-04-29T23:14:58Z,lesswrong,, 195968,https://www.lesswrong.com/posts/YiSLdjyBuD3oScDxR/research-proposal-leveraging-jungian-archetypes-to-create-1,Research proposal: Leveraging Jungian archetypes to create values-based models,['MiguelDev'],2023-03-05T17:39:27Z,lesswrong,, 195996,https://www.lesswrong.com/posts/qcbgxzFthiLRJsupb/ai-x-risk-is-a-possible-solution-to-the-fermi-paradox,AI X-risk is a possible solution to the Fermi Paradox,['magic9mushroom'],2023-05-30T17:42:22Z,lesswrong,, 196011,https://www.lesswrong.com/posts/ZruH9o8rE7o2NXokv/addendum-more-efficient-ffns-via-attention,Addendum: More Efficient FFNs via Attention,['Robert_AIZI'],2023-02-06T18:55:26Z,lesswrong,, 196020,https://www.lesswrong.com/posts/Gwt3gJNHc4LntD5We/an-unexpected-gpt-3-decision-in-a-simple-gamble,An Unexpected GPT-3 Decision in a Simple Gamble,['hatta_afiq'],2022-09-25T16:46:01Z,lesswrong,, 196035,https://www.lesswrong.com/posts/EQFfj5eC5mqBMxF2s/superintelligence-23-coherent-extrapolated-volition,Superintelligence 23: Coherent extrapolated volition,['KatjaGrace'],2015-02-17T02:00:20Z,lesswrong,, 196056,https://www.lesswrong.com/posts/g3DvR7iFN7jE7nKEL/openai-scaling-laws-for-transfer-hernandez-et-al,"OpenAI: ""Scaling Laws for Transfer"", Hernandez et al.",['Lukas Finnveden'],2021-02-04T12:49:26Z,lesswrong,, 196069,https://www.lesswrong.com/posts/Fz2Sdh24RjaaMkQRW/ai-safety-movement-builders-should-help-the-community-to,"AI Safety Movement Builders should help the community to optimise three factors: contributors, contributions and coordination",['peterslattery'],2022-12-15T22:50:30Z,lesswrong,, 
196090,https://www.lesswrong.com/posts/Lia5RfipxFr2chrD3/architecture-aware-optimisation-train-imagenet-and-more,Architecture-aware optimisation: train ImageNet and more without hyperparameters,['Chris Mingard'],2023-04-22T21:50:50Z,lesswrong,, 196103,https://www.lesswrong.com/posts/prb8raC4XGJiRWs5n/consequentialism-need-not-be-nearsighted,Consequentialism Need Not Be Nearsighted,['orthonormal'],2011-09-02T07:37:08Z,lesswrong,, 196121,https://www.lesswrong.com/posts/AzByGtPNPXoJvCLKW/memes-and-rational-decisions,Memes and Rational Decisions,['inferential'],2015-01-09T06:42:03Z,lesswrong,, 196155,https://www.lesswrong.com/posts/YSEtEtqf8hRBhKBS9/the-evil-genie-puzzle,The Evil Genie Puzzle,['Chris_Leong'],2018-07-25T06:12:54Z,lesswrong,, 196164,https://www.lesswrong.com/posts/cDGhjZM8nccWyScTn/thoughts-on-the-feasibility-of-prosaic-agi-alignment,Thoughts on the Feasibility of Prosaic AGI Alignment?,['anonymous'],2020-08-21T23:25:10Z,lesswrong,, 196174,https://www.lesswrong.com/posts/9xR4KExLQKNK4iggc/leveraging-legal-informatics-to-align-ai,Leveraging Legal Informatics to Align AI,['John Nay'],2022-09-18T20:39:51Z,lesswrong,, 196187,https://www.lesswrong.com/posts/zgJCSK5KdkiKDuuCw/the-tree-of-life-stanford-ai-alignment-theory-of-change,The Tree of Life: Stanford AI Alignment Theory of Change,['Gabriel Mukobi'],2022-07-02T18:36:11Z,lesswrong,, 196211,https://www.lesswrong.com/posts/wXbSAKu2AcohaK2Gt/udt-shows-that-decision-theory-is-more-puzzling-than-ever,UDT shows that decision theory is more puzzling than ever,['Wei Dai'],2023-09-13T12:26:10Z,lesswrong,, 196238,https://www.lesswrong.com/posts/TKdpSzmcezNbfmGAy/the-urgent-meta-ethics-of-friendly-artificial-intelligence,The Urgent Meta-Ethics of Friendly Artificial Intelligence,['lukeprog'],2011-02-01T14:15:34Z,lesswrong,, 196249,https://www.lesswrong.com/posts/m8FjhuELdg7iv6boW/work-on-security-instead-of-friendliness,Work on Security Instead of Friendliness?,['Wei Dai'],2012-07-21T18:28:45Z,lesswrong,, 196259,https://www.lesswrong.com/posts/fduAAMAWfp4bw9GQo/the-world-where-llms-are-possible,The world where LLMs are possible,['Ape in the coat'],2023-07-10T08:00:12Z,lesswrong,, 196274,https://www.lesswrong.com/posts/B7bMmhvaufdtxBtLW/confusion-about-newcomb-is-confusion-about-counterfactuals,Confusion about Newcomb is confusion about counterfactuals,['AnnaSalamon'],2009-08-25T20:01:22Z,lesswrong,, 196283,https://www.lesswrong.com/posts/2z5vrsu7BoiWckLby/an-openai-board-seat-is-surprisingly-expensive,An OpenAI board seat is surprisingly expensive,['Benquo'],2017-04-19T09:05:04Z,lesswrong,, 196292,https://www.lesswrong.com/posts/px76zCyJDidizxT5t/elon-musk-announces-xai,Elon Musk announces xAI,['Jan_Kulveit'],2023-07-13T09:01:01Z,lesswrong,, 196314,https://www.lesswrong.com/posts/XaKYzdFqcirwtmPYG/linkpost-a-shared-linguistic-space-for-transmitting-our,[Linkpost] A shared linguistic space for transmitting our thoughts from brain to brain in natural conversations,['Bogdan Ionut Cirstea'],2023-07-01T13:57:56Z,lesswrong,, 196328,https://www.lesswrong.com/posts/A7aBAqLRcAPhM5mBB/anticipation-in-llms,Anticipation in LLMs,['derek shiller'],2023-07-24T15:53:07Z,lesswrong,, 196350,https://www.lesswrong.com/posts/sbaQv8zmRncpmLNKv/the-idea-that-chatgpt-is-simply-predicting-the-next-word-is,"The idea that ChatGPT is simply “predicting” the next word is, at best, misleading",['Bill Benzon'],2023-02-20T11:32:07Z,lesswrong,, 196363,https://www.lesswrong.com/posts/dmjvJwCjXWE2jFbRN/fdt-is-not-directly-comparable-to-cdt-and-edt,FDT is not directly comparable to CDT and EDT,['Sylvester Kollin'],2022-09-29T14:42:59Z,lesswrong,, 
196380,https://www.lesswrong.com/posts/a5EK9WTv6x8htkGXW/terry-tao-is-hosting-an-ai-to-assist-mathematical-reasoning,"Terry Tao is hosting an ""AI to Assist Mathematical Reasoning"" workshop",['junk heap homotopy'],2023-06-03T01:19:08Z,lesswrong,, 196389,https://www.lesswrong.com/posts/HguqQSY8mR7NxGopc/2022-was-the-year-agi-arrived-just-don-t-call-it-that,2022 was the year AGI arrived (Just don't call it that),['Logan Zoellner'],2023-01-04T15:19:55Z,lesswrong,, 196416,https://www.lesswrong.com/posts/mpj398Dy2NMB66hLY/simple-alignment-plan-that-maybe-works,Simple alignment plan that maybe works,['Iknownothing'],2023-07-18T22:48:37Z,lesswrong,, 196429,https://www.lesswrong.com/posts/npg4AkbvwhkDay5jX/a-critique-of-the-evidentialist-s-wager,A Critique of The Evidentialist's Wager,['Heighn'],2023-11-02T13:42:35Z,lesswrong,, 196444,https://www.lesswrong.com/posts/m64joiCkrCy9MhPsk/is-it-worth-making-a-database-for-moral-predictions,Is it worth making a database for moral predictions?,['Jonas Hallgren'],2021-08-16T14:51:55Z,lesswrong,, 196461,https://www.lesswrong.com/posts/kHG7BouAdXh74nZ6j/we-don-t-have-a-utility-function,We Don't Have a Utility Function,['anonymous'],2013-04-02T03:49:11Z,lesswrong,, 196473,https://www.lesswrong.com/posts/NnohDYHNnKDtbiMyp/fake-utility-functions,Fake Utility Functions,['Eliezer Yudkowsky'],2007-12-06T16:55:41Z,lesswrong,, 196487,https://www.lesswrong.com/posts/bNayfvnKKsbE7w6Sb/an-intuitive-introduction-to-causal-decision-theory,An Intuitive Introduction to Causal Decision Theory,['Heighn'],2022-03-07T16:05:45Z,lesswrong,, 196498,https://www.lesswrong.com/posts/3wg3YBmkukWzecyR9/destroying-the-fabric-of-the-universe-as-an-instrumental,Destroying the fabric of the universe as an instrumental goal.,['AI-doom'],2023-09-14T20:04:33Z,lesswrong,, 196509,https://www.lesswrong.com/posts/N3QQvRTaQfpHHWkfs/the-alignment-problem-needs-more-positive-fiction,The Alignment Problem Needs More Positive Fiction,['Netcentrica'],2022-08-21T22:02:00Z,lesswrong,, 196526,https://www.lesswrong.com/posts/rS4vCKLir3RphdEXh/podcast-shoshannah-tekofsky-on-skilling-up-in-ai-safety,"Podcast: Shoshannah Tekofsky on skilling up in AI safety, visiting Berkeley, and developing novel research ideas",['Akash'],2022-11-25T20:47:10Z,lesswrong,, 196557,https://www.lesswrong.com/posts/8n4CF4u3zxeR2tFQN/limited-agents-need-approximate-induction,Limited agents need approximate induction,['Manfred'],2015-04-24T07:42:16Z,lesswrong,, 196580,https://www.lesswrong.com/posts/KFWZg6EbCuisGcJAo/immortality-or-death-by-agi-1,Immortality or death by AGI,['ImmortalityOrDeathByAGI'],2023-09-22T00:00:00Z,lesswrong,, 196599,https://www.lesswrong.com/posts/5hhjY4QQk59DEGWuc/if-you-wish-to-make-an-apple-pie-you-must-first-become,"If you wish to make an apple pie, you must first become dictator of the universe",['jasoncrawford'],2023-07-05T18:14:59Z,lesswrong,, 196620,https://www.lesswrong.com/posts/SLckyGWZJb3bf2eCd/is-full-self-driving-an-agi-complete-problem,Is full self-driving an AGI-complete problem?,['kraemahz'],2022-11-10T02:04:49Z,lesswrong,, 196641,https://www.lesswrong.com/posts/mXaugZyivQN3Eg8G3/announcing-the-independent-ai-safety-registry,Announcing: The Independent AI Safety Registry,['Shoshannah Tekofsky'],2022-12-26T21:22:18Z,lesswrong,, 196651,https://www.lesswrong.com/posts/RGd85AErgmXmAMKw5/cev-coherence-versus-extrapolation,CEV: coherence versus extrapolation,['Stuart_Armstrong'],2014-09-22T11:24:25Z,lesswrong,, 
196661,https://www.lesswrong.com/posts/wYZKtAhqawoj9PKtb/is-the-orthogonality-thesis-at-odds-with-moral-realism,Is the orthogonality thesis at odds with moral realism?,['ChrisHallquist'],2013-11-05T20:47:53Z,lesswrong,, 196671,https://www.lesswrong.com/posts/sj84MyKXZKZwqkCNh/financial-times-we-must-slow-down-the-race-to-god-like-ai,Financial Times: We must slow down the race to God-like AI,['trevor'],2023-04-13T19:55:26Z,lesswrong,, 196707,https://www.lesswrong.com/posts/sbb9bZgojmEa7Yjrc/updating-my-ai-timelines,Updating my AI timelines,['Matthew Barnett'],2022-12-05T20:46:28Z,lesswrong,, 196727,https://www.lesswrong.com/posts/6YpWggFWdzfCwmGqL/best-resource-to-go-from-typical-smart-tech-savvy-person-to,"Best resource to go from ""typical smart tech-savvy person"" to ""person who gets AGI risk urgency""?",['Liron'],2022-10-15T22:26:52Z,lesswrong,, 196738,https://www.lesswrong.com/posts/dWbhbpEMKysdYt89c/a-great-talk-for-ai-noobs-according-to-an-ai-noob,A great talk for AI noobs (according to an AI noob),['dov'],2023-04-23T05:34:53Z,lesswrong,, 196754,https://www.lesswrong.com/posts/htCBHjWNvzScEhFoA/toy-model-convergent-instrumental-goals,Toy model: convergent instrumental goals,['Stuart_Armstrong'],2016-02-25T14:03:54Z,lesswrong,, 196769,https://www.lesswrong.com/posts/RtbcxXkFwAaDBtQGP/can-we-learn-much-by-studying-the-behaviour-of-rl-policies,Can we learn much by studying the behaviour of RL policies?,['AidanGoth'],2023-05-15T12:56:26Z,lesswrong,, 196786,https://www.lesswrong.com/posts/ZmxkmCjXJBpwJkgrw/competitive-programming-with-alphacode,Competitive programming with AlphaCode,['Algon'],2022-02-02T16:49:09Z,lesswrong,, 196822,https://www.lesswrong.com/posts/9YQby2miskbcKN9FB/ai-safety-arguments-an-interactive-guide,AI Safety Arguments: An Interactive Guide,['Lukas Trötzmüller'],2023-02-01T19:26:58Z,lesswrong,, 196838,https://www.lesswrong.com/posts/xcFn7GGrypEFuDjmd/logical-foundations-of-government-policy,Logical Foundations of Government Policy,['FCCC'],2020-10-10T17:05:33Z,lesswrong,, 196862,https://www.lesswrong.com/posts/tD4bNRzHXa2Th7yPs/announcing-the-ai-safety-field-building-hub-a-new-effort-to,"Announcing the AI Safety Field Building Hub, a new effort to provide AISFB projects, mentorship, and funding",['Vael Gates'],2022-07-28T21:29:52Z,lesswrong,, 196880,https://www.lesswrong.com/posts/SPYDGGpM7aXQrgqh2/what-would-you-expect-a-massive-multimodal-online-federated,What would you expect a massive multimodal online federated learner to be capable of?,['Aryeh Englander'],2022-08-27T17:31:07Z,lesswrong,, 196890,https://www.lesswrong.com/posts/qmQFHCgCyEEjuy5a7/lora-fine-tuning-efficiently-undoes-safety-training-from,LoRA Fine-tuning Efficiently Undoes Safety Training from Llama 2-Chat 70B,"['Simon Lermen', 'Jeffrey Ladish']",2023-10-12T19:58:02Z,lesswrong,, 196907,https://www.lesswrong.com/posts/q5GYAyXETnBATNCSw/arguments-against-existential-risk-from-ai-part-2,"Arguments against existential risk from AI, part 2",['Nina Rimsky'],2023-07-10T08:25:21Z,lesswrong,, 196927,https://www.lesswrong.com/posts/t7Lgp77nAm4DEXPNE/focusing-your-impact-on-short-vs-long-tai-timelines,Focusing your impact on short vs long TAI timelines,['kuhanj'],2023-09-30T19:34:40Z,lesswrong,, 196942,https://www.lesswrong.com/posts/6a6tJcmKMCdJsNok2/i-tripped-and-became-gpt-and-how-this-updated-my-timelines,I Tripped and Became GPT! (And How This Updated My Timelines),['Frankophone'],2022-09-01T17:56:30Z,lesswrong,, 
196959,https://www.lesswrong.com/posts/bv2diKZv3FBGHnXHe/ai-safety-milestones,AI safety milestones?,['Zach Stein-Perlman'],2023-01-23T21:00:24Z,lesswrong,, 196972,https://www.lesswrong.com/posts/wQmCZ8YPTfCKEN9JE/ai-as-a-civilizational-risk-part-2-6-behavioral-modification,AI as a Civilizational Risk Part 2/6: Behavioral Modification,['PashaKamyshev'],2022-10-30T16:57:51Z,lesswrong,, 196995,https://www.lesswrong.com/posts/ERsbTthnzWLmNDCb5/fli-report-policymaking-in-the-pause,FLI report: Policymaking in the Pause,['Zach Stein-Perlman'],2023-04-15T17:01:07Z,lesswrong,, 197042,https://www.lesswrong.com/posts/rytFP2zRYNK85rFyX/interpretability-tools-are-an-attack-channel,Interpretability Tools Are an Attack Channel,['Thane Ruthenis'],2022-08-17T18:47:28Z,lesswrong,, 197060,https://www.lesswrong.com/posts/xzYRbFYrkiuuvD6GJ/1h-volunteers-needed-for-a-small-ai-safety-related-research,1h-volunteers needed for a small AI Safety-related research project,['PabloAMC'],2021-08-16T17:53:13Z,lesswrong,, 197069,https://www.lesswrong.com/posts/b3CQrAo2nufqzwNHF/how-to-train-your-transformer,How to train your transformer,['p.b.'],2022-04-07T09:34:52Z,lesswrong,, 197094,https://www.lesswrong.com/posts/Lshuoww97Loy2h7kw/are-we-all-misaligned-1,Are we all misaligned?,['Mateusz Mazurkiewicz'],2021-01-03T02:42:48Z,lesswrong,, 197110,https://www.lesswrong.com/posts/H7kzai8uwPj9mQz9M/superintelligence-29-crunch-time,Superintelligence 29: Crunch time,['KatjaGrace'],2015-03-31T04:24:42Z,lesswrong,, 197139,https://www.lesswrong.com/posts/MLKmxZgtLYRH73um3/we-will-be-around-in-30-years,We will be around in 30 years,['mukashi'],2022-06-07T03:47:22Z,lesswrong,, 197150,https://www.lesswrong.com/posts/kmLP3bTnBhc22DnqY/beyond-algorithmic-equivalence-self-modelling,Beyond algorithmic equivalence: self-modelling,['Stuart_Armstrong'],2018-02-28T16:55:55Z,lesswrong,, 197160,https://www.lesswrong.com/posts/zDf7fnentCFTdK3K6/want-to-win-the-agi-race-solve-alignment,Want to win the AGI race? Solve alignment.,['leopold'],2023-03-29T17:40:36Z,lesswrong,, 
197183,https://www.lesswrong.com/posts/HoTbhbdRTDo4anNLa/what-do-you-make-of-agi-unaligned-spaceships-not-enough-food,What do you make of AGI:unaligned::spaceships:not enough food?,['Ronny Fernandez'],2020-02-22T14:14:15Z,lesswrong,, 197195,https://www.lesswrong.com/posts/Cs8FaAxYHGgzqpSkh/the-bostrom-buckle-visualising-the-vulnerable-world,The Bostrom Buckle: Visualising the Vulnerable World Hypothesis,['Rosco-Hunter'],2023-10-10T18:29:37Z,lesswrong,, 197216,https://www.lesswrong.com/posts/ogC8agX64nW6N9EEW/how-is-ai-governed-and-regulated-around-the-world,"How is AI governed and regulated, around the world?",['Mitchell_Porter'],2023-03-30T15:36:56Z,lesswrong,, 197232,https://www.lesswrong.com/posts/qhxexho2mFjLnZ2nS/opportunity-cost-blackmail,Opportunity Cost Blackmail,['adamShimi'],2023-01-02T13:48:52Z,lesswrong,, 197247,https://www.lesswrong.com/posts/X9vT3o3MmtWoRRKkm/decision-theories-part-3-75-hang-on-i-think-this-works-after,"Decision Theories, Part 3.75: Hang On, I Think This Works After All",['orthonormal'],2012-09-06T16:23:38Z,lesswrong,, 197266,https://www.lesswrong.com/posts/bYqdk66S4njoeCH9g/introducing-the-longevity-research-institute,Introducing the Longevity Research Institute,['sarahconstantin'],2018-05-08T03:30:01Z,lesswrong,, 197280,https://www.lesswrong.com/posts/yxFkuyPANtL6GSwiC/the-majority-is-always-wrong,The Majority Is Always Wrong,['Eliezer Yudkowsky'],2007-04-03T01:12:23Z,lesswrong,, 197290,https://www.lesswrong.com/posts/6S4Lf2tCMWAfbGtdt/boredom-vs-scope-insensitivity,Boredom vs. Scope Insensitivity,['Wei Dai'],2009-09-24T11:45:54Z,lesswrong,, 197304,https://www.lesswrong.com/posts/FdQzArWhERh4YZqY9/deepmind-model-evaluation-for-extreme-risks,DeepMind: Model evaluation for extreme risks,['Zach Stein-Perlman'],2023-05-25T03:00:01Z,lesswrong,, 197320,https://www.lesswrong.com/posts/vgFvnr7FefZ3s3tHp/mahatma-armstrong-ceved-to-death,Mahatma Armstrong: CEVed to death.,['Stuart_Armstrong'],2013-06-06T12:50:12Z,lesswrong,, 197332,https://www.lesswrong.com/posts/kGopQHxeKmJm3iXSe/alignment-frame-exercise-building-the-puzzle-of-alignment,Alignment Frame/Exercise: Building The Puzzle of Alignment,['Jonas Hallgren'],2023-11-08T11:27:52Z,lesswrong,, 197354,https://www.lesswrong.com/posts/GzYu2acxWL6pZyzyc/stanovich-on-cev,Stanovich on CEV,['lukeprog'],2012-04-29T09:37:46Z,lesswrong,, 197367,https://www.lesswrong.com/posts/6JhjHJ2rdiXcSe7tp/let-s-talk-about-uncontrollable-ai,Let’s talk about uncontrollable AI,['Karl von Wendt'],2022-10-09T10:34:57Z,lesswrong,, 197390,https://www.lesswrong.com/posts/ggdm7GAk2Xwnn2yTW/fundamentally-fuzzy-concepts-can-t-have-crisp-definitions,Fundamentally Fuzzy Concepts Can't Have Crisp Definitions: Cooperation and Alignment vs Math and Physics,['VojtaKovarik'],2023-07-21T21:03:22Z,lesswrong,, 197404,https://www.lesswrong.com/posts/X7QbzyKWqnLmeZCnJ/microsoft-plans-to-invest-usd10b-in-openai-usd3b-invested-to,Microsoft Plans to Invest $10B in OpenAI; $3B Invested to Date | Fortune,['DragonGod'],2023-01-12T03:55:10Z,lesswrong,, 197413,https://www.lesswrong.com/posts/4nZRzoGTqg8xy5rr8/the-reward-engineering-problem,The reward engineering problem,['paulfchristiano'],2019-01-16T18:47:24Z,lesswrong,, 197434,https://www.lesswrong.com/posts/nyyvyupqJqj9tJcqx/your-terminal-values-are-complex-and-not-objective,your terminal values are complex and not objective,['Tamsin Leake'],2023-03-13T13:34:01Z,lesswrong,, 
197451,https://www.lesswrong.com/posts/z4carpaEnfXLrqfqt/gpt-4-developer-livestream,GPT-4 developer livestream,['Gerald Monroe'],2023-03-14T20:55:05Z,lesswrong,, 197462,https://www.lesswrong.com/posts/yLeEPFnnB9wE7KLx2/efficient-cross-domain-optimization,Efficient Cross-Domain Optimization,['Eliezer Yudkowsky'],2008-10-28T16:33:03Z,lesswrong,, 197471,https://www.lesswrong.com/posts/Zw5STvhmGNzuQYM5B/the-whirlpool-of-reality,The whirlpool of reality,['Gordon Seidoh Worley'],2020-09-27T02:36:34Z,lesswrong,, 197480,https://www.lesswrong.com/posts/EASv46FpehppAFHSm/hebbian-natural-abstractions-mathematical-foundations,[Hebbian Natural Abstractions] Mathematical Foundations,"['Samuel Nellessen', 'Jan']",2022-12-25T20:58:03Z,lesswrong,, 197493,https://www.lesswrong.com/posts/2GebvAXXfRMTjY2g7/an-example-of-self-fulfilling-spurious-proofs-in-udt,An example of self-fulfilling spurious proofs in UDT,['cousin_it'],2012-03-25T11:47:16Z,lesswrong,, 197510,https://www.lesswrong.com/posts/DsEuRrsenZ6piGpE6/humans-aren-t-agents-what-then-for-value-learning,Humans aren't agents - what then for value learning?,['Charlie Steiner'],2019-03-15T22:01:39Z,lesswrong,, 197521,https://www.lesswrong.com/posts/oexwJBd3zAjw9Cru8/i-played-the-ai-box-experiment-again-and-lost-both-games,I played the AI Box Experiment again! (and lost both games),['Tuxedage'],2013-09-27T02:32:06Z,lesswrong,, 197541,https://www.lesswrong.com/posts/cQtvSK8eeRrYnsLFy/reflective-oracles-and-superationality,Reflective oracles and superationality,['Stuart_Armstrong'],2015-11-18T12:30:06Z,lesswrong,, 197567,https://www.lesswrong.com/posts/Ef8yLxqAXXg9yW9jJ/a-little-playing-around-with-blenderbot3,A little playing around with Blenderbot3,['Nathan Helm-Burger'],2022-08-12T16:06:42Z,lesswrong,, 197580,https://www.lesswrong.com/posts/b7JXJWY7R2jNtHerP/slowing-down-ai-progress-is-an-underexplored-alignment,Slowing down AI progress is an underexplored alignment strategy,['Norman Borlaug'],2023-07-24T16:56:26Z,lesswrong,, 197598,https://www.lesswrong.com/posts/cEhv4yd6GYgz66LgK/june-2012-0-33-turing-award-winners-predict-computers,June 2012: 0/33 Turing Award winners predict computers beating humans at go within next 10 years.,['betterthanwell'],2018-02-23T11:25:12Z,lesswrong,, 197609,https://www.lesswrong.com/posts/6EspRSzYNnv9DPhkr/morphological-intelligence-superhuman-empathy-and-ethical,"Morphological intelligence, superhuman empathy, and ethical arbitration",['Roman Leventov'],2023-02-13T10:25:17Z,lesswrong,, 197625,https://www.lesswrong.com/posts/6eAAwHPWvRqw3AtXw/the-reward-function-is-already-how-well-you-manipulate,The reward function is already how well you manipulate humans,['Kerry'],2022-10-19T01:52:30Z,lesswrong,, 197638,https://www.lesswrong.com/posts/ouFnZoYaKqicC6jH8/wargaming-agi-development,Wargaming AGI Development,['ryan_b'],2022-03-19T17:59:28Z,lesswrong,, 197649,https://www.lesswrong.com/posts/f8nd9F7dL9SxueLFA/eis-iv-a-spotlight-on-feature-attribution-saliency,EIS IV: A Spotlight on Feature Attribution/Saliency,['scasper'],2023-02-15T18:46:23Z,lesswrong,, 197672,https://www.lesswrong.com/posts/vRjYokhT22EvLHZCJ/more-experiments-in-gpt-4-agency-writing-memos,More experiments in GPT-4 agency: writing memos,['Christopher King'],2023-03-24T17:51:49Z,lesswrong,, 197684,https://www.lesswrong.com/posts/5WAJcQRqf2AmxEPoi/warning-about-ai-doom-is-also-announcing-capabilities,"""warning about ai doom"" is also ""announcing capabilities progress to noobs""",['the gears to ascension'],2023-04-08T23:42:44Z,lesswrong,, 
197701,https://www.lesswrong.com/posts/JpAXF8R6pAXhFfZuj/how-would-logical-decision-theories-address-the-psychopath,How would Logical Decision Theories address the Psychopath Button?,['Nathan1123'],2022-08-07T15:19:54Z,lesswrong,, 197711,https://www.lesswrong.com/posts/hKr5SgDqnDewqACwX/contra-anton-on-kolmogorov-complexity-and-recursive-self,Contra Anton 🏴‍☠️ on Kolmogorov complexity and recursive self improvement,['DaemonicSigil'],2023-06-30T05:15:39Z,lesswrong,, 197729,https://www.lesswrong.com/posts/vY9oE39tBupZLAyoC/localizing-goal-misgeneralization-in-a-maze-solving-policy,Localizing goal misgeneralization in a maze-solving policy network,['jan betley'],2023-07-06T16:21:04Z,lesswrong,, 197741,https://www.lesswrong.com/posts/2KvjY6HZ64QXDE2T6/are-ya-winning-son,"Are ya winning, son?",['Nathan1123'],2022-08-09T00:06:33Z,lesswrong,, 197758,https://www.lesswrong.com/posts/ZGrAbfKQbNEazQpkM/linkpost-chinese-government-s-guidelines-on-ai,[Linkpost] Chinese government's guidelines on AI,['RomanS'],2021-12-10T21:10:58Z,lesswrong,, 197788,https://www.lesswrong.com/posts/ktSzxMsKBJmon6FGm/foom-seems-unlikely-in-the-current-llm-training-paradigm,Foom seems unlikely in the current LLM training paradigm,['Ocracoke'],2023-04-09T19:41:30Z,lesswrong,, 197803,https://www.lesswrong.com/posts/Krc8HqJYLFNZYvbEr/less-activations-can-result-in-high-corrigibility,Less activations can result in high corrigibility?,['MiguelDev'],2023-07-16T02:38:54Z,lesswrong,, 197819,https://www.lesswrong.com/posts/gxCdm3agFh2u5vRCC/second-call-cfp-for-rebellion-and-disobedience-in-ai,Second call: CFP for Rebellion and Disobedience in AI workshop,['Ram Rachum'],2023-02-05T12:18:01Z,lesswrong,, 197835,https://www.lesswrong.com/posts/XnnMYMjDGuYhkhQPB/the-case-for-removing-alignment-and-ml-research-from-the,The case for removing alignment and ML research from the training dataset,['beren'],2023-05-30T20:54:37Z,lesswrong,, 197856,https://www.lesswrong.com/posts/GbGzP4LZTBN8dyd8c/observing-optimization,Observing Optimization,['Eliezer Yudkowsky'],2008-11-21T05:39:25Z,lesswrong,, 197873,https://www.lesswrong.com/posts/m8fHNfrhfZuHpNJhk/strong-cheap-signals,Strong Cheap Signals,['trevor'],2023-03-29T14:18:53Z,lesswrong,, 197882,https://www.lesswrong.com/posts/RsvKae3KSXEaivpgJ/transcript-testing-chatgpt-s-performance-in-engineering,Transcript: Testing ChatGPT's Performance in Engineering,['alxgoldstn'],2023-02-28T02:16:33Z,lesswrong,, 197901,https://www.lesswrong.com/posts/WwsgJcey7nfXhZWfn/announcing-the-second-ai-safety-camp,Announcing the second AI Safety Camp,['Lachouette'],2018-06-11T18:59:49Z,lesswrong,, 197910,https://www.lesswrong.com/posts/nmMorGE4MS4txzr8q/simulators-seminar-sequence-1-background-and-shared,[Simulators seminar sequence] #1 Background & shared assumptions,"['Jan', 'Charlie Steiner', 'Logan Riggs', 'janus', 'jacquesthibs', 'metasemi', 'Michael Oesterle', 'Lucas Teixeira', 'peligrietzer', 'remember']",2023-01-02T23:48:50Z,lesswrong,, 197930,https://www.lesswrong.com/posts/5dMavpaByQaurxkYq/george-hotz-on-ai-safety-centralized-power-is-bad,"George Hotz on AI safety: ~""centralized power is bad""",['Chipmonk'],2023-06-30T05:00:39Z,lesswrong,, 197947,https://www.lesswrong.com/posts/cW3T55NeQyJnH4Px7/asot-instrumental-convergence-is-useful,[ASoT] Instrumental convergence is useful,['Ulisse Mini'],2022-11-09T20:20:52Z,lesswrong,, 197963,https://www.lesswrong.com/posts/Be3CfAW5PMWT9nNY9/what-would-it-mean-to-understand-how-a-large-language-model,What would it mean to understand how a large language model (LLM) works? Some quick notes.,['Bill Benzon'],2023-10-03T15:11:14Z,lesswrong,, 
197980,https://www.lesswrong.com/posts/mh4LasKBaqdYymhB2/the-metaethics-and-normative-ethics-of-agi-value-alignment,"The Metaethics and Normative Ethics of AGI Value Alignment: Many Questions, Some Implications",['Eleos Arete Citrini'],2021-09-16T16:13:39Z,lesswrong,, 198002,https://www.lesswrong.com/posts/kHyJaixCdiZyFRo66/real-world-examples-of-money-pumping,Real-world examples of money-pumping?,['sixes_and_sevens'],2013-04-25T13:49:51Z,lesswrong,, 198019,https://www.lesswrong.com/posts/HMQmEcL3CTs3TBty7/notes-on-potential-future-ai-tax-policy,Notes on Potential Future AI Tax Policy,['Zvi'],2023-04-25T13:30:01Z,lesswrong,, 198043,https://www.lesswrong.com/posts/SEov2u5Y7mJTPaTLK/knightian-uncertainty-a-rejection-of-the-mmeu-rule,Knightian uncertainty: a rejection of the MMEU rule,['So8res'],2014-08-26T03:03:57Z,lesswrong,, 198062,https://www.lesswrong.com/posts/jDfjqu2qJLcPco9cf/automatically-finding-feature-vectors-in-the-ov-circuits-of,Automatically finding feature vectors in the OV circuits of Transformers without using probing,['Jacob Dunefsky'],2023-09-12T17:38:49Z,lesswrong,, 198084,https://www.lesswrong.com/posts/iZNcMkS6ghqBQA24E/superintelligence-18-life-in-an-algorithmic-economy,Superintelligence 18: Life in an algorithmic economy,['KatjaGrace'],2015-01-13T02:00:12Z,lesswrong,, 198121,https://www.lesswrong.com/posts/KQvdpPd3k2ap6aJTP/complexity-of-value-complexity-of-outcome,Complexity of Value ≠ Complexity of Outcome,['Wei Dai'],2010-01-30T02:50:49Z,lesswrong,, 198136,https://www.lesswrong.com/posts/dPifaJGRWMm8rKxCP/signaling-isn-t-about-signaling-it-s-about-goodhart,"Signaling isn't about signaling, it's about Goodhart",['Valentine'],2022-01-06T18:49:49Z,lesswrong,, 198153,https://www.lesswrong.com/posts/tMr3HJwitJCbQ5HTc/google-s-new-text-to-image-model-parti-a-demonstration-of,"Google's new text-to-image model - Parti, a demonstration of scaling benefits",['Kayden'],2022-06-22T20:01:00Z,lesswrong,, 198163,https://www.lesswrong.com/posts/zuHezdoGr2KtM2n43/new-year-new-research-agenda-post,"New year, new research agenda post",['Charlie Steiner'],2022-01-12T17:58:16Z,lesswrong,, 198182,https://www.lesswrong.com/posts/spBoxzcaCrqXqyQHq/using-gpt-3-to-augment-human-intelligence,Using GPT-3 to augment human intelligence,['Henrik Karlsson'],2022-08-10T15:54:29Z,lesswrong,, 198208,https://www.lesswrong.com/posts/p5ifq7Njn86mdDiHH/limiting-factors-to-predict-ai-take-off-speed,Limiting factors to predict AI take-off speed,['Alfonso Pérez Escudero'],2023-05-31T23:19:24Z,lesswrong,, 198228,https://www.lesswrong.com/posts/h9ZWrrCBgK64pAvxC/thoughts-on-ai-safety-via-debate,Thoughts on AI Safety via Debate,['Vaniver'],2018-05-09T19:46:00Z,lesswrong,, 198260,https://www.lesswrong.com/posts/nLjtqdhRaKcEGb4NA/questions-about-value-lock-in-paternalism-and-empowerment,"Questions about Value Lock-in, Paternalism, and Empowerment",['Sam'],2022-11-16T15:33:47Z,lesswrong,, 198285,https://www.lesswrong.com/posts/9BiRP6KseBPxSf3ZZ/delegated-agents-in-practice-how-companies-might-end-up,"Delegated agents in practice: How companies might end up selling AI services that act on behalf of consumers and coalitions, and what this implies for safety research",['Remmelt'],2020-11-26T11:17:19Z,lesswrong,, 198312,https://www.lesswrong.com/posts/WtKGLJQfjCWTm7tFK/article-review-discovering-latent-knowledge-burns-ye-et-al,"Article Review: Discovering Latent Knowledge (Burns, Ye, et al)",['Robert_AIZI'],2022-12-22T18:16:05Z,lesswrong,, 
198323,https://www.lesswrong.com/posts/9md9QtHmhNnAmacdu/a-fli-postdoctoral-grant-application-ai-alignment-via-causal,A FLI postdoctoral grant application: AI alignment via causal analysis and design of agents,['PabloAMC'],2021-11-13T01:44:35Z,lesswrong,, 198354,https://www.lesswrong.com/posts/gvXAoH9gR4FSzyeCa/strange-loops-self-reference-from-number-theory-to-ai,Strange Loops - Self-Reference from Number Theory to AI,['ojorgensen'],2022-09-28T14:10:00Z,lesswrong,, 198372,https://www.lesswrong.com/posts/x5BFqov8rt2duhMvH/the-bitter-lesson-an-article-about-compute-vs-human,"""The Bitter Lesson"", an article about compute vs human knowledge in AI",['the gears to ascension'],2019-06-21T17:24:51Z,lesswrong,, 198385,https://www.lesswrong.com/posts/kkaBC9Epydj3m6ZsA/lm-situational-awareness-evaluation-proposal-violating,"LM Situational Awareness, Evaluation Proposal: Violating Imitation",['Jacob Pfau'],2023-04-26T22:53:32Z,lesswrong,, 198406,https://www.lesswrong.com/posts/wcLhgaokymWDzMw8s/does-agency-necessarily-imply-self-preservation-instinct,Does agency necessarily imply self-preservation instinct?,['Mislav Jurić'],2023-05-01T16:06:03Z,lesswrong,, 198415,https://www.lesswrong.com/posts/LdEwDn5veAckEemi4/we-are-already-in-a-persuasion-transformed-world-and-must,We are already in a persuasion-transformed world and must take precautions,['trevor'],2023-11-04T15:53:31Z,lesswrong,, 198435,https://www.lesswrong.com/posts/bPeB6RT78k8dXKYKf/reinforcement-preference-and-utility,"Reinforcement, Preference and Utility",['royf'],2012-08-08T06:23:26Z,lesswrong,, 198457,https://www.lesswrong.com/posts/unwRBRQivd2LYRfuP/introducing-the-center-for-ai-policy-and-we-re-hiring,Introducing the Center for AI Policy (& we're hiring!),['Thomas Larsen'],2023-08-28T21:17:12Z,lesswrong,, 198477,https://www.lesswrong.com/posts/FqdT8vpwiDKFYQHFR/knowledge-base-2-the-structure-and-the-method-of-building,Knowledge Base 2: The structure and the method of building,['iwis'],2023-10-09T11:53:15Z,lesswrong,, 198496,https://www.lesswrong.com/posts/dcjGrRrXwXBtTBHLn/n-3-ai-risk-quick-math-and-reasoning,n=3 AI Risk Quick Math and Reasoning,['lionhearted (Sebastian Marshall)'],2023-04-07T20:27:41Z,lesswrong,, 198517,https://www.lesswrong.com/posts/j9Q8bRmwCgXRYAgcJ/miri-announces-new-death-with-dignity-strategy,"MIRI announces new ""Death With Dignity"" strategy",['Eliezer Yudkowsky'],2022-04-02T00:43:20Z,lesswrong,, 198545,https://www.lesswrong.com/posts/wTW4Juw49rHwAnxQh/is-there-any-serious-attempt-to-create-a-system-to-figure,"Is there any serious attempt to create a system to figure out the CEV of humanity and if not, why haven't we started yet?",['Jonas Hallgren'],2021-02-25T22:06:05Z,lesswrong,, 198555,https://www.lesswrong.com/posts/vSGTkkjrnJj4ehFms/self-improvement-without-self-modification,Self-improvement without self-modification,['Stuart_Armstrong'],2015-07-23T09:59:01Z,lesswrong,, 198567,https://www.lesswrong.com/posts/czxjKohS7RQjkBiSD/thinking-soberly-about-the-context-and-consequences-of,Thinking soberly about the context and consequences of Friendly AI,['Mitchell_Porter'],2012-10-16T04:33:53Z,lesswrong,, 198576,https://www.lesswrong.com/posts/HEonwwQLhMB9fqABh/human-preferences-as-rl-critic-values-implications-for,Human preferences as RL critic values - implications for alignment,['Seth Herd'],2023-03-14T22:10:33Z,lesswrong,, 198594,https://www.lesswrong.com/posts/pLDd8dJFq2iWuCD9h/briefly-thinking-through-some-analogs-of-debate,Briefly thinking through some analogs of debate,['Eli 
Tyre'],2022-09-11T12:02:36Z,lesswrong,, 198609,https://www.lesswrong.com/posts/pwPo7L2RjNvYDAo7g/a-corrigibility-metaphore-big-gambles,A Corrigibility Metaphore - Big Gambles,['WCargo'],2023-05-10T18:13:21Z,lesswrong,, 198620,https://www.lesswrong.com/posts/eXPSTu9u8uCvERKnq/my-current-workflow-to-study-the-internal-mechanisms-of-llm,My current workflow to study the internal mechanisms of LLM,['Yulu Pi'],2023-05-16T15:27:14Z,lesswrong,, 198635,https://www.lesswrong.com/posts/rQGW2GqHAFprupYkf/intermittent-distillations-4-semiconductors-economics,"Intermittent Distillations #4: Semiconductors, Economics, Intelligence, and Technological Progress.",['Mark Xu'],2021-07-08T22:14:23Z,lesswrong,, 198662,https://www.lesswrong.com/posts/Dry3vRTAiX2JhghMF/what-problem-would-you-like-to-see-reinforcement-learning,What problem would you like to see Reinforcement Learning applied to?,['Julian Schrittwieser'],2020-07-08T02:40:17Z,lesswrong,, 198671,https://www.lesswrong.com/posts/icR53xeAkeuzgzsWP/taboo-compute-overhang,"Taboo ""compute overhang""",['Zach Stein-Perlman'],2023-03-01T19:15:03Z,lesswrong,, 198682,https://www.lesswrong.com/posts/xwBuoE9p8GE7RAuhd/brain-efficiency-much-more-than-you-wanted-to-know,Brain Efficiency: Much More than You Wanted to Know,['jacob_cannell'],2022-01-06T03:38:00Z,lesswrong,, 198700,https://www.lesswrong.com/posts/xgicQnkrdA5FehhnQ/the-domain-of-your-utility-function,The Domain of Your Utility Function,['Peter_de_Blanc'],2009-06-23T04:58:56Z,lesswrong,, 198710,https://www.lesswrong.com/posts/S2opNN9WgwpGPbyBi/do-llms-dream-of-emergent-sheep,Do LLMs dream of emergent sheep?,['shminux'],2023-04-24T03:26:54Z,lesswrong,, 198719,https://www.lesswrong.com/posts/TZy4mFJFJ4yv2MRhg/boundaries-based-security-and-ai-safety-approaches,Boundaries-based security and AI safety approaches,['Allison Duettmann'],2023-04-12T12:36:10Z,lesswrong,, 198736,https://www.lesswrong.com/posts/azdqDRbcw3EkrnHNw/wanting-to-want,Wanting to Want,['Alicorn'],2009-05-16T03:08:10Z,lesswrong,, 198748,https://www.lesswrong.com/posts/BqoE5vhPNCB7X6Say/superintelligence-12-malignant-failure-modes,Superintelligence 12: Malignant failure modes,['KatjaGrace'],2014-12-02T02:02:25Z,lesswrong,, 198770,https://www.lesswrong.com/posts/nwA2mP55oCSpZ9sza/the-prior-of-a-hypothesis-does-not-depend-on-its-complexity,The prior of a hypothesis does not depend on its complexity,['cousin_it'],2010-08-26T13:20:05Z,lesswrong,, 198781,https://www.lesswrong.com/posts/dfTm26pvq7yQp8mR3/one-bit-of-observation-can-unlock-many-of-optimization-but,One bit of observation can unlock many of optimization - but at what cost?,['dr_s'],2023-04-29T10:53:04Z,lesswrong,, 198794,https://www.lesswrong.com/posts/CBKNqmEcojy9PGLec/can-coherent-extrapolated-volition-be-estimated-with-inverse,Can coherent extrapolated volition be estimated with Inverse Reinforcement Learning?,['Jade Bishop'],2019-04-15T03:23:31Z,lesswrong,, 198809,https://medium.com/ai-control/reward-engineering-f8b5de40d075,Reward engineering,['Paul Christiano'],2015-12-04T00:00:00Z,special_docs,, 198830,https://www.fhi.ox.ac.uk/wp-content/uploads/Risks-and-Mitigation-Strategies-for-Oracle-AI.pdf,Risks and Mitigation Strategies for Oracle AI,['Stuart Armstrong'],2010-01-01T00:00:00Z,special_docs,, 198862,https://www.sciencedirect.com/science/article/pii/S0899825619300582,The truth behind the myth of the folk theorem.,"['Joseph Y. Halpern', 'Rafael Pass', 'Lior Seeman']",2019-08-10T00:00:00Z,special_docs,, 
198874,https://drive.google.com/file/d/1GViBUPA6EYawSuVc67rE6TsTojvrZIRO/view?usp=share_link,individuallyselected_7ujun-by Vael Gates-date 20220318,['Vael Gates'],2022-03-17T23:00:00Z,special_docs,,md 198892,http://dl.acm.org/citation.cfm?doid=3171221.3171267,"Learning from Physical Human Corrections, One Feature at a Time","['Andrea Bajcsy', 'Dylan P. Losey', ""Marcia K. O'Malley"", 'Anca D. Dragan']",2018-02-26T00:00:00Z,special_docs,,xml 198904,https://www.ijcai.org/Proceedings/2018/705,Estimation with Incomplete Data: The Linear Case.,"['Karthika Mohan', 'Felix Thoemmes', 'Judea Pearl']",2018-08-10T00:00:00Z,special_docs,, 198919,https://globalprioritiesinstitute.org/philip-trammell-and-anton-korinek-economic-growth-under-transformative-ai/,Economic growth under transformative AI,"['Phillip Trammell', 'Anton Korinek']",2020-10-01T00:00:00Z,special_docs,,xml 198949,https://cset.georgetown.edu/publication/hacking-ai/,Hacking AI,['Andrew Lohn'],2020-12-01T00:00:00Z,special_docs,,pdf 198971,https://drive.google.com/file/d/1-F1AubRvkU95p8y5qUIye44WvwQg-Zid/view?usp=share_link,Fireside chat - AI governance _ Markus Anderljung _ Ben Garfinkel _ EA Global - Virtual 2020-by Centre for Effective Altruism-video_id bSTYiIgjgrk-date 20200321,"['Markus Anderljung', 'Ben Garfinkel']",2020-03-20T23:00:00Z,special_docs,,md 199000,https://drive.google.com/file/d/1YNNYmJDoHMl7H7bRtwp8jeY_SFlmFL0i/view?usp=sharing,CHAI Newsletter #4 2019,['CHAI'],2019-12-01T00:00:00Z,special_docs,, 199029,https://docs.google.com/document/d/1FbTuRvC4TFWzGYerTKpBU7FJlyvjeOvVYF2uYNFSlOc/edit#heading=h.n1wk9bxo847o,AI Alignment Research Overview,['Jacob Steinhardt'],2019-10-01T00:00:00Z,special_docs,,md 199074,https://people.eecs.berkeley.edu/~sastry/pubs/Pdfs%20of%202017/SadighActive2017.pdf,Active Preference-Based Learning of Reward Functions.,"['Dorsa Sadigh', 'Anca Dragan', 'S. Shankar Sastry', 'Sanjit Seshia']",2017-08-10T00:00:00Z,special_docs,, 199095,https://www.cse.ust.hk/~qyang/Docs/2009/tkde_transfer_learning.pdf,A survey on transfer learning,"['Sinno Jialin Pan', 'Qiang Yang']",2009-01-01T00:00:00Z,special_docs,, 199116,https://cset.georgetown.edu/publication/small-datas-big-ai-potential/,Small Data’s Big AI Potential,"['Husanjot Chahal', 'Helen Toner', 'Ilya Rahkovsky']",2021-09-01T00:00:00Z,special_docs,,pdf 199152,http://link.springer.com/10.1007/s13347-011-0043-6,Introduction: Open Questions in Roboethics,['John P. Sullins'],2011-09-01T00:00:00Z,special_docs,,pdf
199204,https://www.sciencedirect.com/science/article/abs/pii/S0376635717302048,Is state-dependent valuation more adaptive than simpler rules?.,"['Joseph Y. Halpern', 'Lior Seeman']",2018-08-10T00:00:00Z,special_docs,, 199214,https://psyarxiv.com/57v6k/,Attention in value-based choice as optimal sequential sampling.,"['Frederick Callaway', 'Tom Griffiths']",2019-08-10T00:00:00Z,special_docs,, 199232,"http://mediangroup.org/docs/Feasibility%20of%20Training%20an%20AGI%20using%20Deep%20Reinforcement%20Learning,%20A%20Very%20Rough%20Estimate.pdf",Feasibility of Training an AGI using Deep RL: A Very Rough Estimate,"['Baeo Maltinsky', 'Jack Gallagher', 'Jessica Taylor']",2019-01-01T00:00:00Z,special_docs,,pdf 199253,http://mediangroup.org/docs/insights-analysis.pdf,AI Insights Dataset Analysis,"['Colleen McKenzie', 'J Bryce Hidysmith']",2019-01-01T00:00:00Z,special_docs,,pdf 199269,https://link.springer.com/article/10.1007/s11238-018-9679-3,Robust program equilibrium,['Caspar Oesterheld'],2019-01-01T00:00:00Z,special_docs,, 199286,http://www.nature.com/nature/journal/v529/n7587/full/nature16961.html,Mastering the game of Go with deep neural networks and tree search,"['David Silver', 'Aja Huang', 'Chris J. Maddison', 'Arthur Guez', 'Laurent Sifre', 'George van den Driessche', 'Julian Schrittwieser', 'Ioannis Antonoglou', 'Veda Panneershelvam', 'Marc Lanctot', 'Sander Dieleman', 'Dominik Grewe', 'John Nham', 'Nal Kalchbrenner', 'Ilya Sutskever', 'Timothy Lillicrap', 'Madeleine Leach', 'Koray Kavukcuoglu', 'Thore Graepel', 'Demis Hassabis']",2016-01-27T00:00:00Z,special_docs,, 199305,https://www.msamlin.com/content/dam/ms-amlin/corporate/our-world/Whitepapers/MS%20Amlin%20White%20Paper%20The%20underwriter%20and%20the%20models-%20solo%20dances%20or%20pas-de-deux.pdf.downloadasset.pdf,The underwriter and the models-solo dances or pas-de-deux? What policy data can tell us about how underwriters use models,"['Stuart Armstrong', 'Mario Weick', 'Anders Sandberg', 'Andrew Snyder-Beattie', 'Nick Beckstead']",2017-01-01T00:00:00Z,special_docs,, 199324,https://doi.org/10.1080/0020174X.2019.1658626,Existential risks: a philosophical analysis,['Phil Torres'],2019-08-23T00:00:00Z,special_docs,,pdf 199341,https://academic.oup.com/isq/article/63/4/963/5559531,"Authoritarian Audiences, Rhetoric, and Propaganda in International Crises: Evidence from China","['Jessica Chen Weiss', 'Allan Dafoe']",2019-09-03T00:00:00Z,special_docs,,pdf 199359,https://ieeexplore.ieee.org/document/9001063/,Responsible AI—Two Frameworks for Ethical Design Practice,"['Dorian Peters', 'Karina Vold', 'Diana Robinson', 'Rafael A. Calvo']",2020-03-01T00:00:00Z,special_docs,, 199395,https://foresight.org/publications/AGI-Timeframes&PolicyWhitePaper.pdf,Artificial General Intelligence: Timeframes & Policy White Paper,['Allison Duettmann'],2017-01-01T00:00:00Z,special_docs,,pdf 199448,https://cset.georgetown.edu/publication/ethical-norms-for-new-generation-artificial-intelligence-released/,Ethical Norms for New Generation Artificial Intelligence Released,['PRC Ministry of Science and Technology'],2021-10-21T00:00:00Z,special_docs,,pdf 199473,https://drive.google.com/file/d/1gONSbSzr8dA2BJlFjxqXGt6BlyUKEpxR/view?usp=share_link,individuallyselected_w5cb5-by Vael Gates-date 20220318,['Vael Gates'],2022-03-17T23:00:00Z,special_docs,,md 199498,https://cset.georgetown.edu/publication/key-concepts-in-ai-safety-robustness-and-adversarial-examples/,Key Concepts in AI Safety: Robustness and Adversarial Examples,"['Tim G. J. Rudner', 'Helen Toner']",2021-03-01T00:00:00Z,special_docs,,pdf 199513,https://dl.acm.org/doi/10.1145/3278721.3278780,An AI Race for Strategic Advantage: Rhetoric and Risks,"['Stephen Cave', 'Seán S. ÓhÉigeartaigh']",2018-12-27T00:00:00Z,special_docs,,xml 199532,https://globalprioritiesinstitute.org/stefan-riedener-existential-risks-from-a-thomist-christian-perspective/,Existential risks from a Thomist Christian perspective,['Stefan Riedener'],2021-01-04T00:00:00Z,special_docs,,pdf 199549,https://ora.ox.ac.uk/objects/uuid:51ee3d43-533c-4de6-904e-be12c27afdca/download_file?file_format=pdf&safe_filename=There%2Bis%2Bplenty%2Bof%2Btime%2Bat%2Bthe%2Bbottom%2B4.pdf&type_of_work=Journal+article,"There is plenty of time at the bottom: the economics, risk and ethics of time compression",['Anders Sandberg'],2019-01-01T00:00:00Z,special_docs,,pdf 199579,https://drive.google.com/file/d/1LPIssfKeMhFVgRYRtbf19jfVws77AZvl/view?usp=sharing,CHAI Newsletter #2 2022,['CHAI'],2022-10-01T00:00:00Z,special_docs,, 199600,http://www.cc.gatech.edu/~athomaz/papers/ThomazBreazeal-ICDL06.pdf,Transparency and Socially Guided Machine Learning,"['Andrea L. Thomaz', 'Cynthia Breazeal']",2006-01-01T00:00:00Z,special_docs,, 199615,https://ai-alignment.com/universality-and-model-based-rl-b08701394ddd,Universality and model-based RL,['Paul Christiano'],2019-10-04T00:00:00Z,special_docs,,html 199633,https://www.ies.ee/bahps/acta-baltica/abhps-8-2/04_Sutrop-2020-2-04.pdf,Challenges of Aligning Artificial Intelligence with Human Values,['Margit Sutrop'],2020-12-15T00:00:00Z,special_docs,, 199653,https://cset.georgetown.edu/publication/military-ai-cooperation-toolbox/,Military AI Cooperation Toolbox,['Zoe Stanley-Lockman'],2021-08-01T00:00:00Z,special_docs,,pdf 199682,http://www.nickbostrom.com/papers/openness.pdf,Strategic Implications of Openness in AI Development,['Nick Bostrom'],2017-01-01T00:00:00Z,special_docs,, 199711,https://papers.ssrn.com/abstract=3761623,'Solving for X?' Towards a Problem-Finding Framework to Ground Long-Term Governance Strategies for Artificial Intelligence,"['Hin-Yan Liu', 'Matthijs M. Maas']",2021-01-07T00:00:00Z,special_docs,,xml
199735,https://cset.georgetown.edu/publication/machine-intelligence-for-scientific-discovery-and-engineering-invention/,Machine Intelligence for Scientific Discovery and Engineering Invention,"['Matthew Daniels', 'Autumn Toney', 'Melissa Flagg', 'Charles Yang']",2021-05-01T00:00:00Z,special_docs,,pdf 199779,http://ceur-ws.org/Vol-2640/paper_21.pdf,judiciary.senate.gov,"['Caspar Oesterheld', 'Vincent Conitzer']",2019-08-30T00:00:00Z,special_docs,,pdf 199800,https://www.fhi.ox.ac.uk/wp-content/uploads/2021/08/QNRs_FHI-TR-2021-3.0.pdf,QNRs: Toward Language for Intelligent Machines,['K. Eric Drexler'],2021-01-01T00:00:00Z,special_docs,,pdf 199819,https://casparoesterheld.com/2017/10/22/a-behaviorist-approach-to-building-phenomenological-bridges/,A behaviorist approach to building phenomenological bridges,['Caspar Oesterheld'],2017-10-22T00:00:00Z,special_docs,,html 199836,https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2976444,Reconciliation between factions focused on near-term and long-term artificial intelligence,['Seth D. Baum'],2018-01-01T00:00:00Z,special_docs,,xml 199858,https://www.nature.com/articles/s41598-019-47540-7,An upper bound for the background rate of human extinction,"['Andrew E. Snyder-Beattie', 'Toby Ord', 'Michael B. Bonsall']",2019-07-30T00:00:00Z,special_docs,,html 199875,https://doi.org/10.1016/S0262-4079(15)31174-X,Death and pain of a digital brain,['Anders Sandberg'],2015-01-01T00:00:00Z,special_docs,,pdf 199902,https://yoshuabengio.org/2023/05/22/how-rogue-ais-may-arise/,How Rogue AIs may Arise,['Yoshua Bengio'],2023-05-22T00:00:00Z,special_docs,, 199923,https://ai-alignment.com/learning-the-prior-48f61b445c04,Learning the prior,['Paul Christiano'],2020-07-05T00:00:00Z,special_docs,,html 199939,https://cocosci.princeton.edu/papers/Gates2020.pdf,How to Be Helpful to Multiple People at Once.,"['Vael Gates', 'Thomas L. Griffiths', 'Anca D. Dragan']",2020-08-10T00:00:00Z,special_docs,, 199950,https://openreview.net/forum?id=8Itm8dQnJRc,Transforming Worlds: Automated Involutive MCMC for Open-Universe Probabilistic Models,"['George Matheos', 'Alexander K. Lew', 'Matin Ghavamizadeh', 'Stuart Russell', 'Marco Cusumano-Towner', 'Vikash Mansinghka']",2020-11-23T00:00:00Z,special_docs,,xml 199963,https://proceedings.neurips.cc/paper_files/paper/2020/file/1f89885d556929e98d3ef9b86448f951-Paper.pdf,Learning to summarize with human feedback,"['Nisan Stiennon', 'Long Ouyang', 'Jeffrey Wu', 'Daniel Ziegler', 'Ryan Lowe', 'Chelsea Voss', 'Alec Radford', 'Dario Amodei', 'Paul F. Christiano']",2020-01-01T00:00:00Z,special_docs,,pdf 199996,https://medium.com/@deepmindsafetyresearch/building-safe-artificial-intelligence-52f5f75058f1,"Building safe artificial intelligence: specification, robustness, and assurance","['Pedro Ortega', 'Vishal Maini']",2018-09-27T00:00:00Z,special_docs,,html 200028,https://people.duke.edu/~ng46/topics/evolved-radio.pdf,The Evolved Radio and its Implications for Modelling the Evolution of Novel Sensors,"['Jon Bird', 'Paul Layzell']",2002-05-12T00:00:00Z,special_docs,, 200052,https://cocosci.princeton.edu/papers/thompson2022complex.pdf,Complex cognitive algorithms preserved by selective social learning in experimental populations.,"['B. Thompson', 'B. van Opheusden', 'T. Sumers', 'T. L. Griffiths']",2022-08-10T00:00:00Z,special_docs,, 200066,https://static1.squarespace.com/static/6266e3c48fb0751e74f60eb6/t/62e6d6afb0c92718ae985800/1659295408089/NSBRZiros2022.pdf,Selecting the Partial State Abstractions of MDPs: A Metareasoning Approach with Deep Reinforcement Learning.,"['Samer B. Nashed', 'Justin Svegliato', 'Abhinav Bhatia', 'Shlomo Zilberstein', 'Stuart Russell']",2022-08-10T00:00:00Z,special_docs,, 200080,https://www.mdpi.com/2078-2489/9/9/209,Superintelligence skepticism as a political tool,['Seth Baum'],2018-01-01T00:00:00Z,special_docs,,html 200106,https://www.nature.com/articles/s42256-018-0003-2,Bridging near- and long-term concerns about AI,"['Stephen Cave', 'Seán S. Ó hÉigeartaigh']",2019-01-01T00:00:00Z,special_docs,,html 200118,http://www.sciencedirect.com/science/article/pii/S0016328720300987,Quantifying the probability of existential catastrophe: A reply to Beard et al.,['Seth D. Baum'],2020-10-01T00:00:00Z,special_docs,, 200156,https://unstableontology.com/2019/07/11/the-ai-timelines-scam/,The AI Timelines Scam,['Jessica Taylor'],2019-07-11T00:00:00Z,special_docs,,html 200177,https://longtermrisk.org/weak-identifiability-and-its-consequences-in-strategic-settings/,Weak identifiability and its consequences in strategic settings,['Jesse Clifton'],2021-02-13T00:00:00Z,special_docs,,html 200193,https://drive.google.com/file/d/1ZUN1YAJsy9aq1F-oXKzM43GlvvnwyI7d/view?usp=share_link,"Rohin Shah - Effective altruism, AI safety, and learning human preferences from the world_s state-by Towards Data Science-video_id uHiL6GNXHvw-date 20201028","['Rohin Shah', 'Jeremie Harris']",2020-10-27T23:00:00Z,special_docs,,md 200227,http://mediangroup.org/docs/the_professionals_dilemma.pdf,The Professional's Dilemma,['Ben Hoffman'],1990-01-01T00:00:00Z,special_docs,,pdf 200240,https://doi.org/10.1002/9781118736302.ch20,Being nice to software animals and babies,['Anders Sandberg'],2014-01-01T00:00:00Z,special_docs,,pdf 200276,https://rachelfreedman.github.io/assets/Barnett2022.pdf,Active Reward Learning from Multiple Teachers.,"['Peter Barnett', 'Rachel Freedman', 'Justin Svegliato', 'Stuart Russell']",2022-08-10T00:00:00Z,special_docs,, 200294,https://drive.google.com/file/d/1YxuzV0J6k1naLHt_GSRubthL-Rh9mxsu/view?usp=share_link,NeurIPSorICML_a0nfw-by Vael Gates-date 20220318,['Vael Gates'],2022-03-17T23:00:00Z,special_docs,,md 200316,https://longtermrisk.org/files/backup-utility-functions.pdf,Backup utility functions as a fail-safe AI technique,['Caspar Oesterheld'],2016-10-01T00:00:00Z,special_docs,,xml 200339,https://drive.google.com/file/d/1GSpRS-No3ODE2XRQBkYCDf7KZ9zSOnbb/view?usp=sharing,CHAI Newsletter 2017,['CHAI'],2017-10-01T00:00:00Z,special_docs,, 200362,http://link.springer.com/10.1007/978-3-662-54033-6_5,Agent Foundations for Aligning Machine Intelligence with Human Interests: A Technical Research Agenda,"['Nate Soares', 'Benya Fallenstein']",2017-01-01T00:00:00Z,special_docs,,pdf
200389,https://ceur-ws.org/Vol-2301/paper_22.pdf,Surveying Safety-relevant AI Characteristics,"['Jose Hernandez-Orallo', 'Fernando Martínez-Plumed', 'Shahar Avin']",2019-01-01T00:00:00Z,special_docs,,pdf 200424,http://www.nature.com/nature/journal/v521/n7553/abs/nature14539.html,Deep Learning,"['Yann LeCun', 'Yoshua Bengio', 'Geoffrey Hinton']",2015-05-27T00:00:00Z,special_docs,, 200445,https://drive.google.com/file/d/17BFY3y4hNBaE8ZqjIYne0-LzNvgsNs2s/view?usp=share_link,The role of existing institutions in AI strategy _ Jade Leung _ Seth Baum-by Centre for Effective Altruism-video_id pgiwvmY3brg-date 20181023,"['Jade Leung', 'Seth Baum']",2018-10-22T22:00:00Z,special_docs,,md 200468,https://longtermrisk.org/files/Multiverse-wide-Cooperation-via-Correlated-Decision-Making.pdf,Multiverse-wide Cooperation via Correlated Decision Making,['Caspar Oesterheld'],2017-08-10T00:00:00Z,special_docs,,pdf 200488,http://www.cs.cornell.edu/Info/People/halpern/papers/axrat.pdf,Reasoning about Rationality.,"['Adam Bjorndahl', 'Joseph Y. Halpern', 'Rafael Pass']",2017-08-10T00:00:00Z,special_docs,, 200511,https://longtermrisk.org/international-cooperation-vs-ai-arms-race/,International Cooperation vs. AI Arms Race,['Brian Tomasik'],2015-04-08T00:00:00Z,special_docs,,html 200532,https://medium.com/ai-control/the-informed-oversight-problem-1b51b4f66b35#.ddvq5rheo,The informed oversight problem,['Paul Christiano'],2016-03-01T00:00:00Z,special_docs,, 200549,https://nickbostrom.com/papers/aipolicy.pdf,Public Policy and Superintelligent AI: A Vector Field Approach,"['Nick Bostrom', 'Allan Dafoe', 'Carrick Flynn']",2018-01-01T00:00:00Z,special_docs,,pdf 200589,https://longtermrisk.org/reducing-long-term-risks-from-malevolent-actors/,Reducing long-term risks from malevolent actors,"['David Althaus', 'Tobias Baumann']",2020-07-07T00:00:00Z,special_docs,,html 200603,https://www.fhi.ox.ac.uk/wp-content/uploads/Policy-Desiderata-in-the-Development-of-Machine-Superintelligence.pdf,Policy desiderata in the development of machine superintelligence,"['Nick Bostrom', 'Allan Dafoe', 'Carrick Flynn']",2016-01-01T00:00:00Z,special_docs,,pdf 200647,https://openreview.net/forum?id=kBNhgqXatI,An Empirical Investigation of Representation Learning for Imitation.,"['Cynthia Chen', 'Sam Toyer', 'Cody Wild', 'Scott Emmons', 'Ian Fischer', 'Kuang-Huei Lee', 'Neel Alex', 'Steven Wang', 'Ping Luo', 'Stuart Russell', 'Pieter Abbeel', 'Rohin Shah']",2022-08-10T00:00:00Z,special_docs,, 200665,https://www.ri.cmu.edu/publications/integrating-human-observer-inferences-into-robot-motion-planning/,Integrating Human Observer Inferences into Robot Motion Planning,"['Anca Dragan', 'Siddhartha Srinivasa']",2014-01-01T00:00:00Z,special_docs,, 200688,https://www.cs.swarthmore.edu/~bryce/publications/A_Regression_Approach_for_Modeling_Games_with_Many_Symmetric_Players.pdf,A Regression Approach for Modeling Games with Many Symmetric Players.,"['Bryce Wiedenbeck', 'Fengjun Yang', 'Michael P. Wellman']",2018-08-10T00:00:00Z,special_docs,, 200707,https://doi.org/10.1073/pnas.1620742114,The evolution of cognitive mechanisms in response to cultural innovations.,"['Arnon Lotem', 'Joseph Y. Halpern', 'Shimon Edelman', 'Oren Kolodny']",2017-08-10T00:00:00Z,special_docs,, 
200730,https://drive.google.com/file/d/1OPwFpaoRgP0P05ciMIobJUh_mq2aRhNb/view?usp=share_link,NeurIPSorICML_lgu5f-by Vael Gates-date 20220322,['Vael Gates'],2022-03-21T23:00:00Z,special_docs,,md 200761,https://ceur-ws.org/Vol-2560/paper21.pdf,"Exploring AI Safety in Degrees: Generality, Capability and Control","['John Burden', 'Jose Hernandez-Orallo']",2020-08-10T00:00:00Z,special_docs,, 200779,https://doi.org/10.1007/s11229-015-0883-1,Formalizing preference utilitarianism in physical world models,['Caspar Oesterheld'],2016-09-01T00:00:00Z,special_docs,,pdf 200792,https://gustavkarreskog.com/files/jmp_karreskog.pdf,Rational heuristics for one-shot games.,"['F. Callaway', 'T. L. Griffiths', 'G. Karreskog']",2022-08-10T00:00:00Z,special_docs,, 200803,https://www.jair.org/index.php/jair/article/view/10134,Provably Bounded-Optimal Agents,"['S. J. Russell', 'D. Subramanian']",1995-05-01T00:00:00Z,special_docs,, 200822,http://pubsonline.informs.org/doi/10.1287/deca.2017.0350,Value of Global Catastrophic Risk (GCR) Information: Cost-Effectiveness-Based Approach for GCR Reduction,['Anthony Michael Barrett'],2017-09-01T00:00:00Z,special_docs,,xml 200852,https://nickbostrom.com/papers/porosity.pdf,"Hail Mary, value porosity, and utility diversification",['Nick Bostrom'],2014-01-01T00:00:00Z,special_docs,,pdf 200880,https://doi.org/10.1007/978-3-662-54033-6_10,Can the Singularity Be Patented? (And Other IP Conundrums for Converging Technologies),['David Koepsell'],2017-01-01T00:00:00Z,special_docs,,pdf 200903,https://ai-alignment.com/implicit-extortion-3c80c45af1e3,Implicit extortion,['Paul Christiano'],2018-04-13T00:00:00Z,special_docs,,html 200919,http://auai.org/uai2016/proceedings/papers/294.pdf,MDPs with Unawareness in Robotics.,"['Nan Rong', 'Joseph Y. Halpern', 'Ashutosh Saxena']",2016-08-10T00:00:00Z,special_docs,, 200937,https://cset.georgetown.edu/publication/headline-or-trend-line/,Headline or Trend Line?,"['Margarita Konaev', 'Andrew Imbrie', 'Ryan Fedasiuk', 'Emily Weinstein', 'Katerina Sedova', 'James Dunham']",2021-08-01T00:00:00Z,special_docs,,pdf 200953,https://dropline.net/wp-content/uploads/2012/02/iui-2015.pdf,Principles of Explanatory Debugging to Personalize Interactive Machine Learning,"['Todd Kulesza', 'Margaret Burnett', 'Weng-Keen Wong', 'Simone Stumpf']",2015-04-01T00:00:00Z,special_docs,, 200972,https://openreview.net/forum?id=NpsVSN6o4ul,Interpretability in the Wild: a Circuit for Indirect Object Identification in GPT-2 Small,"['Kevin Ro Wang', 'Alexandre Variengien', 'Arthur Conmy', 'Buck Shlegeris', 'Jacob Steinhardt']",2023-02-01T00:00:00Z,special_docs,, 200986,https://cocosci.princeton.edu/papers/kumarmetalearning.pdf,Meta-Learning of Structured Task Distributions in Humans and Machines.,"['Sreejan Kumar', 'Ishita Dasgupta', 'Jonathan D. Cohen', 'Nathaniel D. Daw', 'Thomas L. Griffiths']",2021-08-10T00:00:00Z,special_docs,, 201004,https://www.fhi.ox.ac.uk/wp-content/uploads/Racing-to-the-precipice-a-model-of-artificial-intelligence-development.pdf,Racing to the precipice: a model of artificial intelligence development,"['Stuart Armstrong', 'Nick Bostrom', 'Carl Shulman']",2016-01-01T00:00:00Z,special_docs,,pdf 201030,https://www.nature.com/articles/d41586-021-01170-0,Cooperative AI: machines must learn to find common ground,"['Allan Dafoe', 'Yoram Bachrach', 'Gillian Hadfield', 'Eric Horvitz', 'Kate Larson', 'Thore Graepel']",2021-05-01T00:00:00Z,special_docs,,html 201057,https://www.mdpi.com/2078-2489/11/6/290,Medium-Term Artificial Intelligence and Society,['Seth D. Baum'],2020-06-01T00:00:00Z,special_docs,,html 201094,http://proceedings.mlr.press/v119/zheng20b/zheng20b.pdf,What Can Learned Intrinsic Rewards Capture?.,"['Zeyu Zheng', 'Junhyuk Oh', 'Matteo Hessel', 'Zhongwen Xu', 'Manuel Kroiss', 'Hado Van Hasselt', 'David Silver', 'Satinder Singh']",2020-08-10T00:00:00Z,special_docs,, 201112,https://www.rand.org/content/dam/rand/pubs/testimonies/CTA2700/CTA2723-1/RAND_CTA2723-1.pdf,Artificial Intelligence: Challenges and Opportunities for the Department of Defense,"['Jason Matheny', 'Rand Corporation']",2023-04-19T00:00:00Z,special_docs,, 201150,https://www.informatica.si/index.php/informatica/article/download/1875/1105,Conceptual-Linguistic Superintelligence,['David J. Jilk'],2017-12-27T00:00:00Z,special_docs,, 201170,http://www.nickbostrom.com/superintelligentwill.pdf,The Superintelligent Will: Motivation and Instrumental Rationality In Advanced Intelligent Agents,['Nick Bostrom'],2012-05-02T00:00:00Z,special_docs,, 201186,https://ought.org/updates/2020-11-09-forecasting,Automating reasoning about the future at Ought,"['Jungwon Byun', 'Andreas Stuhlmüller']",2020-01-01T00:00:00Z,special_docs,,html 201208,https://doi.org/10.1109/MSPEC.2019.8847590,It's not too soon to be wary of AI: We need to act now to protect humanity from future superintelligent machines,['Stuart Russell'],2019-10-01T00:00:00Z,special_docs,,pdf 201224,https://openai.com/blog/improving-verifiability/,Improving Verifiability in AI Development,['OpenAI'],2020-04-16T00:00:00Z,special_docs,,html 201255,https://www.sciencedirect.com/science/article/abs/pii/S0010027719303397,What the Baldwin Effect affects depends on the nature of plasticity.,"['Thomas J. H. Morgan', 'Jordan W. Suchow', 'Thomas L. Griffiths']",2020-08-10T00:00:00Z,special_docs,, 201274,https://papers.nips.cc/paper/2019/hash/97af07a14cacba681feacf3012730892-Abstract.html,ObjectNet: A large-scale bias-controlled dataset for pushing the limits of object recognition models,"['Andrei Barbu', 'David Mayo', 'Julian Alverio', 'William Luo', 'Christopher Wang', 'Dan Gutfreund', 'Josh Tenenbaum', 'Boris Katz']",2019-01-01T00:00:00Z,special_docs,, 201289,https://cocosci.princeton.edu/papers/callawayleveraging.pdf,Leveraging artificial intelligence to improve people’s planning strategies.,"['F. Callaway', 'Y. R. Jain', 'B. van Opheusden', 'P. Das', 'G. Iwama', 'S. Gul', 'P. M. Krueger', 'F. Becker', 'T. L. Griffiths', 'F. Lieder']",2022-08-10T00:00:00Z,special_docs,, 201308,https://www.ssrn.com/abstract=3806624,"AI, Governance Displacement, and the (De)Fragmentation of International Law",['Matthijs M. Maas'],2021-03-01T00:00:00Z,special_docs,,xml 201331,http://lukemuehlhauser.com/some-studies-which-could-improve-our-strategic-picture-of-superintelligence/,How to study superintelligence strategy,['Luke Muehlhauser'],2015-02-11T00:00:00Z,special_docs,, 201353,https://www.ifaamas.org/Proceedings/aamas2021/pdfs/p1599.pdf,A Strategic Analysis of Portfolio Compression.,"['Katherine Mayo', 'Michael P Wellman']",2021-08-10T00:00:00Z,special_docs,, 201368,https://aisrp.org/?page_id=169,Safer ML paradigms team: the story – AI Safety Research Program,['AI Safety Camp'],2020-01-01T00:00:00Z,special_docs,,html 201379,https://www.liebertpub.com/doi/10.1089/hs.2019.0122,Assessing the Risks Posed by the Convergence of Artificial Intelligence and Biotechnology,"[""John T. O'Brien"", 'Cassidy Nelson']",2020-06-01T00:00:00Z,special_docs,,xml
201401,https://www.sciencedirect.com/science/article/pii/S2352154618302122,Doing more with less: meta-reasoning and meta-learning in humans and machines.,"['Thomas L. Griffiths', 'Frederick Callaway', 'Michael B. Chang', 'Erin Grant', 'Paul M. Krueger', 'Falk Lieder']",2019-08-10T00:00:00Z,special_docs,, 201420,https://gcrinstitute.org/papers/027_asi-risk.pdf,Risk analysis and risk management for the artificial superintelligence research and development process,"['Anthony M. Barrett', 'Seth D. Baum']",2017-01-01T00:00:00Z,special_docs,,pdf 201443,https://drive.google.com/file/d/1NbBk8tN9hxClfoScoHGzk7M5iEvktGIH/view?usp=share_link,Rohin Shah_ What's been happening in AI alignment_-by EA Global Virtual 2020-date 20200321,['Rohin Shah'],2020-03-20T23:00:00Z,special_docs,,md 201482,https://www.aaai.org/Papers/AAAI/2006/AAAI06-293.pdf,Building Explainable Artificial Intelligence Systems,"['Mark G. Core', 'H. Chad Lane', 'Michael van Lent', 'Dave Gomboc', 'Steve Solomon', 'Milton Rosenberg']",2006-01-01T00:00:00Z,special_docs,, 201508,https://cset.georgetown.edu/publication/future-indices/,Future Indices,"['Michael Page', 'Catherine Aiken', 'Dewey Murdick']",2020-10-19T00:00:00Z,special_docs,,pdf 201528,http://link.springer.com/10.1007/978-3-642-32560-1_6,The Singularity and Machine Ethics,"['Luke Muehlhauser', 'Louie Helm']",2012-01-01T00:00:00Z,special_docs,,pdf 201546,https://academic.oup.com/oxrep/article-abstract/37/3/509/6374673,The history and future of AI.,['Stuart Russell'],2021-08-10T00:00:00Z,special_docs,, 201556,http://portal.acm.org/citation.cfm?doid=279943.279964,Learning agents for uncertain environments,['Stuart Russell'],1998-01-01T00:00:00Z,special_docs,, 201573,http://www.jmlr.org/papers/volume11/baehrens10a/baehrens10a.pdf,How to Explain Individual Classification Decisions,"['David Baehrens', 'Timon Schroeter', 'Stefan Harmeling', 'Motoaki Kawanabe', 'Katja Hansen', 'Klaus-Robert Muller']",2010-06-01T00:00:00Z,special_docs,, 201589,https://ieeexplore.ieee.org/document/9427056/,AI CERTIFICATION: Advancing Ethical Practice by Reducing Information Asymmetries,"['Peter Cihon', 'Moritz J. Kleinaltenkamp', 'Jonas Schuett', 'Seth D. Baum']",2021-01-01T00:00:00Z,special_docs,,xml 201612,http://link.springer.com/10.1007/s13347-020-00416-5,Artificial Interdisciplinarity: Artificial Intelligence for Research on Complex Societal Problems,['Seth D. Baum'],2020-07-16T00:00:00Z,special_docs,,pdf 201642,https://nickbostrom.com/papers/survey.pdf,Future progress in artificial intelligence: A survey of expert opinion,"['Vincent C. Müller', 'Nick Bostrom']",2016-01-01T00:00:00Z,special_docs,,pdf 201659,https://nickbostrom.com/views/whyfriendlyai.pdf,Why we need friendly AI,"['Luke Muehlhauser', 'Nick Bostrom']",2014-01-01T00:00:00Z,special_docs,,pdf 201691,https://www.informatica.si/index.php/informatica/article/download/1797/1104,Artificial Intelligence in Life Extension: from Deep Learning to Superintelligence,"['Mikhail Batin', 'Alexey Turchin', 'Markov Sergey', 'Alisa Zhila', 'David Denkenberger']",2017-12-27T00:00:00Z,special_docs,, 201732,http://link.springer.com/10.1007/978-3-319-41649-6_13,Growing Recursive Self-Improvers,"['Bas R. Steunebrink', 'Kristinn R. Thórisson', 'Jürgen Schmidhuber']",2016-01-01T00:00:00Z,special_docs,,pdf 201752,https://foresight.org/wp-content/uploads/2018/11/AGI-Coordination-Geat-Powers-Report.pdf,Artificial General Intelligence: Coordination & Great Powers,"['Allison Duettmann', 'Olga Afanasjeva', 'Stuart Armstrong', 'Ryan Braley', 'Jessica Cussins', 'Jeffrey Ding', 'Peter Eckersley', 'Melody Guan', 'Alyssa Vance', 'Roman Yampolskiy']",2018-01-01T00:00:00Z,special_docs,,pdf 201796,https://jc.gatspress.com/pdf/existential_risk_and_powerseeking_ai.pdf,Existential Risk from Power-Seeking AI,['Joe Carlsmith'],2023-03-01T00:00:00Z,special_docs,, 201830,https://ai-alignment.com/approval-maximizing-representations-56ee6a6a1fe6,Approval-maximizing representations,['Paul Christiano'],2017-07-02T00:00:00Z,special_docs,,html 201846,https://digichina.stanford.edu/work/translation-measures-for-the-management-of-generative-artificial-intelligence-services-draft-for-comment-april-2023/,Translation: Measures for the Management of Generative Artificial Intelligence Services (Draft for Comment) – April 2023,['DigiChina Stanford University'],2023-04-11T00:00:00Z,special_docs,, 201878,https://globalprioritiesinstitute.org/nick-beckstead-and-teruji-thomas-a-paradox-for-tiny-probabilities-and-enormous-values/,"A paradox for tiny probabilities and enormous values - Nick Beckstead (Open Philanthropy Project) and Teruji Thomas (Global Priorities Institute, Oxford University)","['Nick Beckstead', 'Teruji Thomas']",2021-07-12T00:00:00Z,special_docs,,pdf 201896,https://drive.google.com/file/d/1FHuZFwyHq8cBSLrNNMW0ln_vk3uzGe4F/view?usp=share_link,Alex Turner - Will powerful AIs tend to seek power-by Towards Data Science-video_id 8afHG61YmKM-date 20220119,['Alex Turner'],2022-01-18T23:00:00Z,special_docs,,md 201915,https://philpapers.org/archive/MANWIT-6.pdf,WHAT IS THE UPPER LIMIT OF VALUE?,"['Anders Sandberg', 'David Manheim']",2021-01-01T00:00:00Z,special_docs,,pdf 201938,https://www.fhi.ox.ac.uk/reports/2008-1.pdf,Global Catastrophic Risks Survey,"['Anders Sandberg', 'Nick Bostrom']",2008-01-01T00:00:00Z,special_docs,,pdf 201970,https://www.openphilanthropy.org/blog/ai-governance-grantmaking,Our AI governance grantmaking so far,['Luke Muehlhauser'],2020-12-16T00:00:00Z,special_docs,,html 201995,https://linkinghub.elsevier.com/retrieve/pii/S0040162510002106,How long until human-level AI? Results from an expert assessment,"['Seth D. Baum', 'Ben Goertzel', 'Ted G. Goertzel']",2011-01-01T00:00:00Z,special_docs,,xml 202027,https://casparoesterheld.com/2017/05/12/anthropic-uncertainty-in-the-evidential-blackmail/,Anthropic uncertainty in the Evidential Blackmail,['Johannes Treutlein'],2017-05-12T00:00:00Z,special_docs,,html 202042,https://globalprioritiesinstitute.org/the-scope-of-longtermism-david-thorstad-global-priorities-institute-university-of-oxford/,The scope of longtermism,['David Thorstad'],2021-06-22T00:00:00Z,special_docs,,pdf 202059,https://doi.org/10.1007/978-3-662-54033-6_9,Computer Simulations as a Technological Singularity in the Empirical Sciences,['Juan M. 
Durán'],2017-01-01T00:00:00Z,special_docs,,pdf 202079,https://ai-alignment.com/towards-formalizing-universality-409ab893a456,Towards formalizing universality,['Paul Christiano'],2019-01-11T00:00:00Z,special_docs,,html 202097,https://proceedings.neurips.cc/paper_files/paper/2018/file/d89a66c7c80a29b1bdbab0f2a1a94af8-Paper.pdf,Occam's razor is insufficient to infer the preferences of irrational agents,"['Stuart Armstrong', 'Sören Mindermann']",2018-01-01T00:00:00Z,special_docs,,pdf 202116,https://cset.georgetown.edu/publication/truth-lies-and-automation/,"Truth, Lies, and Automation","['Ben Buchanan', 'Andrew Lohn', 'Micah Musser', 'Katerina Sedova']",2021-05-01T00:00:00Z,special_docs,,pdf 202138,http://ai.googleblog.com/2018/09/introducing-unrestricted-adversarial.html,Introducing the Unrestricted Adversarial Examples Challenge,"['Tom B Brown', 'Catherine Olsson']",2018-09-13T00:00:00Z,special_docs,,html 202150,https://www.tandfonline.com/doi/full/10.1080/0952813X.2014.895105,"The errors, insights and lessons of famous AI predictions – and what they mean for the future","['Stuart Armstrong', 'Kaj Sotala', 'Seán S. Ó hÉigeartaigh']",2014-07-03T00:00:00Z,special_docs,, 202168,https://carnegieendowment.org/2022/01/04/china-s-new-ai-governance-initiatives-shouldn-t-be-ignored-pub-86127,China’s New AI Governance Initiatives Shouldn’t Be Ignored,['Matt Sheehan'],2022-01-04T00:00:00Z,special_docs,, 202195,https://ought.org/updates/2020-01-11-arguments,Evaluating Arguments One Step at a Time,['Ought'],2020-01-11T00:00:00Z,special_docs,,html 202223,https://people.cs.uchicago.edu/~ravenben/publications/pdf/backdoor-sp19.pdf,Neural Cleanse: Identifying and Mitigating Backdoor Attacks in Neural Networks,"['Bolun Wang', 'Yuanshun Yao', 'Shawn Shan', 'Huiying Li', 'Bimal Viswanath', 'Haitao Zheng', 'Ben Y. Zhao']",2019-09-16T00:00:00Z,special_docs,, 202242,https://globalprioritiesinstitute.org/doomsday-and-objective-chance-teruji-thomas/,Doomsday and objective chance,['Teruji Thomas'],2021-07-13T00:00:00Z,special_docs,,xml 202257,http://link.springer.com/10.1007/s13347-020-00402-x,Overcoming Barriers to Cross-cultural Cooperation in AI Ethics and Governance,"['Seán S. 
ÓhÉigeartaigh', 'Jess Whittlestone', 'Yang Liu', 'Yi Zeng', 'Zhe Liu']",2020-05-15T00:00:00Z,special_docs,, 202294,https://www.mdpi.com/2504-2289/3/2/26,AI Governance and the Policymaking Process: Key Considerations for Reducing AI Risk,"['Brandon Perry', 'Risto Uuk']",2019-06-01T00:00:00Z,special_docs,,html 202315,https://www.fhi.ox.ac.uk/wp-content/uploads/predicting-judgments-tr2018.pdf,Predicting Human Deliberative Judgments with Machine Learning,"['Owain Evans', 'Andreas Stuhlmüller', 'Chris Cundy', 'Ryan Carey', 'Zachary Kenton', 'Thomas McGrath', 'Andrew Schreiber']",2018-01-01T00:00:00Z,special_docs,,pdf 202330,https://gcrinstitute.org/the-ethics-of-sustainability-for-artificial-intelligence/,The Ethics of Sustainability for Artificial Intelligence,"['Andrea Owe', 'Seth Baum']",2021-01-01T00:00:00Z,special_docs,,html 202352,https://cocosci.princeton.edu/papers/wilson_rational.pdf,A Rational Account of Anchor Effects in Hindsight Bias.,"['Samarie Wilson', 'Somya Arora', 'Qiong Zhang', 'Thomas L. Griffiths']",2021-08-10T00:00:00Z,special_docs,, 202362,http://dmip.webs.upv.es/EPAI2020/papers/EPAI_2020_paper_4.pdf,Canaries in Technology Mines: Warning Signs of Transformative Progress in AI,"['Carla Zoe Cremer', 'Jess Whittlestone']",2020-01-01T00:00:00Z,special_docs,, 202376,https://www.gleech.org/grids,Preventing Side-effects in Gridworlds,"['Gavin Leech', 'Karol Kubicki', 'Jessica Cooper', 'Tom McGrath']",2018-04-22T00:00:00Z,special_docs,,html 202396,https://www.governance.ai/research-paper/the-immigration-preferences-of-top-ai-researchers-new-survey-evidence,The Immigration Preferences of Top AI Researchers: New Survey Evidence,"['Remco Zwetsloot', 'Baobao Zhang', 'Markus Anderljung', 'Michael C. Horowitz', 'Allan Dafoe']",2021-01-12T00:00:00Z,special_docs,, 202413,https://www.biorxiv.org/content/biorxiv/early/2019/03/17/580324.full.pdf,Rational use of episodic and working memory: A normative account of prospective memory.,"['Ida Momennejad', 'Jarrod Lewis-Peacock', 'Kenneth A Norman', 'Jonathan D Cohen', 'Satinder Singh', 'Richard L Lewis']",2020-08-10T00:00:00Z,special_docs,, 202431,https://casparoesterheld.com/2018/02/15/the-law-of-effect-randomization-and-newcombs-problem/,"The law of effect, randomization and Newcomb’s problem",['Caspar Oesterheld'],2018-02-15T00:00:00Z,special_docs,,html 202446,http://link.springer.com/10.1007/978-3-319-26485-1_2,Rationality and Intelligence: A Brief Update,['Stuart Russell'],2016-01-01T00:00:00Z,special_docs,,pdf 202463,https://link.springer.com/article/10.3758/s13428-019-01201-9,Identifying category representations for complex stimuli using discrete Markov chain Monte Carlo with people.,"['Anne S. Hsu', 'Jay B. Martin', 'Adam N. Sanborn', 'Thomas L. Griffiths']",2019-08-10T00:00:00Z,special_docs,, 202477,https://www.the-security-times.com/wp-content/uploads/2018/02/ST_Feb2018_Doppel-2.pdf,The new weapons of mass destruction?,"['Ronald Arkin', 'Stuart Russell', 'Kim Min-Seok']",2018-01-01T00:00:00Z,special_docs,, 202506,https://cocosci.princeton.edu/papers/langloisserial.pdf,"Serial reproduction reveals the geometry of visuospatial representations.","['Thomas A. Langlois', 'Nori Jacoby', 'Jordan W. Suchow', 'Thomas L. Griffiths']",2021-08-10T00:00:00Z,special_docs,, 202519,https://psyarxiv.com/6fm9a/download?format=pdf,The Value of Abstraction.,"['Mark K. Ho', 'David Abel', 'Tom Griffiths', 'Michael L. Littman']",2019-08-10T00:00:00Z,special_docs,, 
202545,http://link.springer.com/10.1007/978-3-642-35506-6_12,Avoiding Unintended AI Behaviors,['Bill Hibbard'],2012-01-01T00:00:00Z,special_docs,,pdf 202572,http://link.springer.com/10.1007/978-3-319-09274-4_3,Problems of Self-reference in Self-improving Space-Time Embedded Intelligence,"['Benja Fallenstein', 'Nate Soares']",2014-01-01T00:00:00Z,special_docs,,pdf 202596,https://www.thomaskrendlgilbert.com/uploads/1/2/1/2/121285828/ftc_final.pdf,Trade Regulation Rule on Commercial Surveillance and Data Security Rulemaking.,"['Thomas Krendl Gilbert', 'Micah Carroll']",2022-08-10T00:00:00Z,special_docs,, 202616,https://drive.google.com/file/d/10S9GD7IPaauOE4kBHGRnZxsrAkk8Q_S3/view?usp=sharing,CHAI Newsletter #2 2020,['CHAI'],2020-07-01T00:00:00Z,special_docs,, 202661,https://drive.google.com/file/d/11wx4NIdiM-ue9blBoJCJMqUqpehJDc_K/view?usp=sharing,CHAI Newsletter 2018,['CHAI'],2018-11-01T00:00:00Z,special_docs,, 202682,https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6429408/pdf/cn8b00571.pdf,Hacking the brain: dimensions of cognitive enhancement,"['Martin Dresler', 'Anders Sandberg', 'Christoph Bublitz', 'Kathrin Ohla', 'Carlos Trenado', 'Aleksandra Mroczko-Wasowicz', 'Simone Kühn', 'Dimitris Repantis']",2018-01-01T00:00:00Z,special_docs,, 202713,https://www.ssrn.com/abstract=3312874,Artificial Intelligence: American Attitudes and Trends,"['Baobao Zhang', 'Allan Dafoe']",2019-01-01T00:00:00Z,special_docs,,xml 202758,https://sideways-view.com/2018/02/24/takeoff-speeds/,Takeoff speeds,['paulfchristiano'],2018-02-24T00:00:00Z,special_docs,,html 202772,https://people.eecs.berkeley.edu/~sastry/pubs/Pdfs%20of%202016/SadighPlanning2016.pdf,Planning for Autonomous Cars that Leverage Effects on Human Actions.,"['Dorsa Sadigh', 'Shankar Sastry', 'Sanjit Seshia', 'Anca Dragan']",2016-08-10T00:00:00Z,special_docs,, 202785,https://www.ijcai.org/proceedings/2017/32,The Off-Switch Game,"['Dylan Hadfield-Menell', 'Anca Dragan', 'Pieter Abbeel', 'Stuart Russell']",2017-01-01T00:00:00Z,special_docs,, 202805,http://people.eecs.berkeley.edu/~russell/papers/mi19book-hcai.pdf,Human-Compatible Artificial Intelligence.,['Stuart Russell'],2021-08-10T00:00:00Z,special_docs,, 202826,http://www.cs.toronto.edu/~rntoro/docs/LRM_paper.pdf,Learning Reward Machines for Partially Observable Reinforcement Learning,"['Rodrigo Toro Icarte', 'Richard Valenzano', 'Ethan Waldie', 'Margarita P Castro', 'Toryn Q Klassen', 'Sheila A McIlraith']",2019-01-01T00:00:00Z,special_docs,, 202845,https://drive.google.com/file/d/1icPC_qIAlhQ_75_-IrTGUKs7du4njv35/view?usp=share_link,Mo Gawdat - Scary Smart - A former Google exec_s perspective on AI risk-by Towards Data Science-video_id u2cK0_jUX_g-date 20220126,"['Mo Gawdat', 'Jeremie Harris']",2022-01-25T23:00:00Z,special_docs,,md 202863,https://why19.causalai.net/papers/SSS19_Paper_Upload_198.pdf,Learning Causal Trees with Latent Variables via Controlled Experimentation.,"['Prasad Tadepalli', 'Cameron Barrie', 'Stuart J. Russell']",2019-08-10T00:00:00Z,special_docs,, 202874,https://drive.google.com/file/d/1aqZBYZFghWoZKjqZtSjfrkBelYFEglS4/view?usp=sharing,CHAI Newsletter #1 2023,['CHAI'],2023-05-01T00:00:00Z,special_docs,, 202901,https://www.sciencedirect.com/science/article/abs/pii/S0004370220300229,Adapting a kidney exchange algorithm to align with human values.,"['Rachel Freedman', 'Jana Schaich Borg', 'Walter Sinnott-Armstrong', 'John P. Dickerson', 'Vincent Conitzer']",2020-08-10T00:00:00Z,special_docs,, 
202917,http://papers.nips.cc/paper/7721-negotiable-reinforcement-learning-for-pareto-optimal-sequential-decision-making.pdf,Negotiable Reinforcement Learning for Pareto Optimal Sequential Decision-Making,"['Nishant Desai', 'Andrew Critch', 'Stuart J Russell']",2018-01-01T00:00:00Z,special_docs,, 202928,http://link.springer.com/10.1007/978-3-642-22887-2_48,Complex Value Systems in Friendly AI,['Eliezer Yudkowsky'],2011-01-01T00:00:00Z,special_docs,,pdf 202944,https://cocosci.princeton.edu/papers/devraj_dynamics.pdf,The Dynamics of Exemplar and Prototype Representations Depend on Environmental Statistics.,"['Arjun Devraj', 'Qiong Zhang', 'Thomas L. Griffiths']",2021-08-10T00:00:00Z,special_docs,, 202959,http://iliad.stanford.edu/pdfs/publications/stefansson2019human.pdf,Human-robot interaction for truck platooning using hierarchical dynamic games.,"['Elis Stefansson', 'Jaime F. Fisac', 'Dorsa Sadigh', 'S. Shankar Sastry', 'Karl H. Johansson']",2019-08-10T00:00:00Z,special_docs,, 202978,https://cocosci.princeton.edu/papers/sumerslearning.pdf,Learning Rewards from Linguistic Feedback.,"['Theodore R. Sumers', 'Mark K. Ho', 'Robert D. Hawkins', 'Karthik Narasimhan', 'Thomas L. Griffiths']",2020-08-10T00:00:00Z,special_docs,, 202997,https://cocosci.princeton.edu/papers/ho2022people.pdf,People construct simplified mental representations to plan.,"['M. K. Ho', 'D. Abel', 'C. G. Correa', 'M. L. Littman', 'J. D. Cohen', 'T. L. Griffiths']",2022-08-10T00:00:00Z,special_docs,, 203007,http://martin.zinkevich.org/publications/maximummarginplanning.pdf,Maximum Margin Planning,"['Nathan D. Ratliff', 'J. Andrew Bagnell', 'Martin A. Zinkevich']",2006-01-01T00:00:00Z,special_docs,, 203024,https://unstableontology.com/2020/03/05/a-critical-agential-account-of-free-will-causation-and-physics/,"A critical agential account of free will, causation, and physics",['Jessica Taylor'],2020-03-05T00:00:00Z,special_docs,,html 203040,https://medium.com/understanding-recommenders,Understanding Recommenders.,['J Stray'],2022-08-10T00:00:00Z,special_docs,, 203052,https://gcrinstitute.org/2020-survey-of-artificial-general-intelligence-projects-for-ethics-risk-and-policy/,"2020 Survey of Artificial General Intelligence Projects for Ethics, Risk, and Policy",['Seth Baum'],2020-12-31T00:00:00Z,special_docs,,html 203084,https://www.ibm.com/policy/bias-in-ai/,Bias in AI: How we Build Fair AI Systems and Less-Biased Humans,['Anonymous'],2018-02-01T00:00:00Z,special_docs,,html 203104,https://intelligence.org/files/IEM.pdf,Intelligence Explosion Microeconomics,['Eliezer Yudkowsky'],2013-01-01T00:00:00Z,special_docs,,pdf 203124,http://www.roboticsproceedings.org/rss12/p29.pdf,Planning for Autonomous Cars that Leverage Effects on Human Actions,"['Dorsa Sadigh', 'Shankar Sastry', 'Sanjit A. Seshia', 'Anca D. Dragan']",2016-01-01T00:00:00Z,special_docs,,pdf 203138,https://www.microsoft.com/en-us/research/blog/deepspeed-extreme-scale-model-training-for-everyone/,DeepSpeed: Extreme-scale model training for everyone,"['DeepSpeed Team', 'Rangan Majumder']",2020-09-10T00:00:00Z,special_docs,,html 203165,https://drive.google.com/file/d/1ldx2yW-B9KzpTk6tNlejikFfdKVyCDQz/view?usp=share_link,NeurIPSorICML_q243b-by Vael Gates-date 20220318,['Vael Gates'],2022-03-17T23:00:00Z,special_docs,,md 203193,https://cocosci.princeton.edu/papers/murthyshades.pdf,Shades of confusion: Lexical uncertainty modulates ad hoc coordination in an interactive communication task.,"['S. K. Murthy', 'R. D. Hawkins', 'T. L. Griffiths']",2022-08-10T00:00:00Z,special_docs,, 203210,https://www.sciencedirect.com/science/article/pii/S0160791X21002165,"Artificial intelligence, systemic risks, and sustainability","['Victor Galaz', 'Miguel A. Centeno', 'Peter W. Callahan', 'Amar Causevic', 'Thayer Patterson', 'Irina Brass', 'Seth Baum', 'Darryl Farber', 'Joern Fischer', 'David Garcia', 'Timon McPhearson', 'Daniel Jimenez', 'Brian King', 'Paul Larcey', 'Karen Levy']",2021-11-01T00:00:00Z,special_docs,, 203234,https://cset.georgetown.edu/research/shaping-the-terrain-of-ai-competition/,Shaping the Terrain of AI Competition,['Tim Hwang'],2020-06-01T00:00:00Z,special_docs,,pdf 203266,https://rohinshah.com/wp-content/uploads/2019/12/Reward_Combination_NeurIPS_2019_Workshop_Camera_Ready.pdf,Combining reward information from multiple sources.,"['Dmitrii Krasheninnikov', 'Rohin Shah', 'Herke van Hoof']",2019-08-10T00:00:00Z,special_docs,, 203280,https://globalprioritiesinstitute.org/wp-content/uploads/Hayden-Wilkinson_In-defence-of-fanaticism.pdf,In defence of fanaticism,['Hayden Wilkinson'],2020-09-01T00:00:00Z,special_docs,,pdf 203297,https://longtermrisk.org/using-surrogate-goals-deflect-threats/,Using surrogate goals to deflect threats,['Tobias Baumann'],2018-02-20T00:00:00Z,special_docs,,html 203317,https://drive.google.com/file/d/1haxtGQ8aigy9A-7SpbXTqGCg8tSF9D2a/view?usp=share_link,README-by Vael Gates-date 20220509,['Vael Gates'],2022-05-08T22:00:00Z,special_docs,,md 203333,http://papers.nips.cc/paper/7419-where-do-you-think-youre-going-inferring-beliefs-about-dynamics-from-behavior.pdf,Where Do You Think You're Going?: Inferring Beliefs about Dynamics from Behavior,"['Sid Reddy', 'Anca Dragan', 'Sergey Levine']",2018-01-01T00:00:00Z,special_docs,, 203353,https://openreview.net/forum?id=J1uOGgf-bP,Test Time Robustification of Deep Models via Adaptation and Augmentation,"['Marvin Mengxin Zhang', 'Sergey Levine', 'Chelsea Finn']",2021-09-29T00:00:00Z,special_docs,,pdf 203370,http://ai.stanford.edu/~ang/papers/icml04-apprentice.pdf,Apprenticeship Learning via Inverse Reinforcement Learning,"['Pieter Abbeel', 'Andrew Ng']",2004-01-01T00:00:00Z,special_docs,, 203382,https://www.nature.com/articles/s41598-022-11518-9,Delegation to artificial agents fosters prosocial behaviors in the collective risk dilemma.,"['Elias Fernández Domingos', 'Inês Terrucha', 'Rémi Suchon', 'Jelena Grujić', 'Juan C. Burguillo', 'Francisco C. Santos', 'Tom Lenaerts']",2022-08-10T00:00:00Z,special_docs,, 203403,https://link.springer.com/10.1007/s43681-021-00065-0,Moral consideration of nonhumans in the ethics of artificial intelligence,"['Andrea Owe', 'Seth D. Baum']",2021-11-01T00:00:00Z,special_docs,,pdf
203429,https://www.fhi.ox.ac.uk/wp-content/uploads/nips-workshop-2015-website.pdf,Learning the Preferences of Bounded Agents,"['Owain Evans', 'Noah D. Goodman']",2015-01-01T00:00:00Z,special_docs,,pdf
203445,https://www.nature.com/articles/s41598-022-05729-3,Artificial intelligence development races in heterogeneous settings.,"['Theodor Cimpeanu', 'Francisco C. Santos', 'Luís Moniz Pereira', 'Tom Lenaerts', 'The Anh Han']",2022-08-10T00:00:00Z,special_docs,,
203466,https://cocosci.princeton.edu/papers/dasgupta2022clustering.pdf,Clustering and the efficient use of cognitive resources.,"['Ishita Dasgupta', 'Thomas L. Griffiths']",2022-08-10T00:00:00Z,special_docs,,
203483,https://foundational-research.org/files/suffering-focused-ai-safety.pdf,Suffering-focused AI safety: Why “fail-safe” measures might be our top intervention,['Lukas Gloor'],2016-06-01T00:00:00Z,special_docs,,
203507,https://dl.acm.org/doi/10.1145/3306618.3314294,Epistemic Therapy for Bias in Automated Decision-Making,"['Thomas Krendl Gilbert', 'Yonatan Mintz']",2019-01-27T00:00:00Z,special_docs,,xml
203529,https://linkinghub.elsevier.com/retrieve/pii/S0004370214001453,Ethical guidelines for a superintelligence,['Ernest Davis'],2015-03-01T00:00:00Z,special_docs,,xml
203550,https://cocosci.princeton.edu/papers/gates2022memory.pdf,Memory transmission in small groups and large networks: An empirical study.,"['Vael Gates', 'Jordan W. Suchow', 'Thomas L. Griffiths']",2022-08-10T00:00:00Z,special_docs,,
203566,http://link.springer.com/10.1007/s10514-018-9746-1,Planning for cars that coordinate with people: leveraging effects on human actions for planning and active information gathering over human internal state,"['Dorsa Sadigh', 'Nick Landolfi', 'Shankar S. Sastry', 'Sanjit A. Seshia', 'Anca D. Dragan']",2018-10-01T00:00:00Z,special_docs,,pdf
203586,https://www.goodreads.com/book/show/44767248-human-compatible,Human Compatible: Artificial Intelligence and the Problem of Control,['Stuart Russell'],2019-10-08T00:00:00Z,special_docs,,epub
203609,https://onlinelibrary.wiley.com/doi/epdf/10.1111/tops.12007,Knowledge and implicature: modeling language understanding as social cognition,"['Noah D. Goodman', 'Andreas Stuhlmüller']",2013-01-01T00:00:00Z,special_docs,,xml
203619,http://proceedings.mlr.press/v80/liu18e/liu18e.pdf,Open Category Detection with PAC Guarantees.,"['Si Liu', 'Risheek Garrepalli', 'Thomas G. Dietterich', 'Alan Fern', 'Dan Hendrycks']",2018-08-10T00:00:00Z,special_docs,,
203637,https://www.sciencedirect.com/science/article/abs/pii/S0160791X21003183?dgcid=author,Voluntary safety commitments provide an escape from over-regulation in AI development.,"['The Anh Han', 'Tom Lenaerts', 'Francisco C. Santos', 'Luís Moniz Pereira']",2022-08-10T00:00:00Z,special_docs,,
203656,https://www.goodreads.com/book/show/53730358-a-citizen-s-guide-to-artificial-intelligence,A Citizen's Guide to Artificial Intelligence,['John Zerilli'],2021-02-23T00:00:00Z,special_docs,,epub
203686,https://proceedings.mlr.press/v78/bajcsy17a/bajcsy17a.pdf,Learning Robot Objectives from Physical Human Interaction,"['Andrea Bajcsy', 'Dylan P. Losey', 'Marcia K. O’Malley', 'Anca D. Dragan']",2017-01-01T00:00:00Z,special_docs,,pdf
203704,https://drive.google.com/file/d/18mfquEOt3Ofspo_qMr2R9P8KLsfCoumJ/view?usp=share_link,Danijar Hafner - Gaming our way to AGI-by Towards Data Science-video_id Bgz9eMcE5Do-date 20220112,"['Danijar Hafner', 'Jeremie Harris']",2022-01-11T23:00:00Z,special_docs,,md
203718,https://www.cs.cornell.edu/home/halpern/papers/focus.pdf,Combining the Causal Judgments of Experts with Possibly Different Focus Areas.,"['Meir Friedenberg', 'Joseph Y. Halpern']",2018-08-10T00:00:00Z,special_docs,,
203735,https://www.repository.cam.ac.uk/handle/1810/280193,Working together to face humanity’s greatest threats: Introduction to The Future of Research on Catastrophic and Existential Risk.,"['Adrian Currie', 'Seán Ó hÉigeartaigh']",2018-09-11T00:00:00Z,special_docs,,xml
203769,https://ai-alignment.com/training-robust-corrigibility-ce0e0a3b9b4d,Worst-case guarantees,['Paul Christiano'],2019-03-23T00:00:00Z,special_docs,,html
203791,https://drive.google.com/file/d/1i_SHxE0Cn-UelJ9Z9fGVcFoYtbdeJCdu/view?usp=share_link,Training machine learning (ML) systems to answer open-ended questions _ Andreas Stuhlmuller-by Centre for Effective Altruism-video_id 7WaiYZLS94M-date 20190829,['Andreas Stuhlmüller'],2019-08-28T22:00:00Z,special_docs,,md
203804,http://www.roboticsproceedings.org/rss10/p31.pdf,Active reward learning,['Christian Daniel et al.'],2014-01-01T00:00:00Z,special_docs,,
203814,https://ai-alignment.com/directions-and-desiderata-for-ai-control-b60fca0da8f4,Directions and desiderata for AI alignment,['Paul Christiano'],2018-05-12T00:00:00Z,special_docs,,html
203847,https://www.cambridge.org/core/product/identifier/S0022481217000421/type/journal_article,"A Parametric, Resource-Bounded Generalization Of Löb’s Theorem, And A Robust Cooperation Criterion For Open-Source Game Theory",['Andrew Critch'],2019-12-01T00:00:00Z,special_docs,,
203864,https://www.sciencedirect.com/science/article/pii/S0004370221000862,Reward is Enough.,"['David Silver', 'Satinder Singh', 'Doina Precup', 'Richard Sutton']",2021-08-10T00:00:00Z,special_docs,,
203873,https://ai-alignment.com/techniques-for-optimizing-worst-case-performance-39eafec74b99,Techniques for optimizing worst-case performance,['Paul Christiano'],2018-02-02T00:00:00Z,special_docs,,html
203897,https://psyarxiv.com/7hgup/,The case against economic values in the orbitofrontal cortex (or anywhere else in the brain),"['Benjamin Hayden', 'Yael Niv']",2021-01-01T00:00:00Z,special_docs,,xml
203918,https://www.ijcai.org/Proceedings/07/Papers/416.pdf,Bayesian Inverse Reinforcement Learning,"['Deepak Ramachandran', 'Eyal Amir']",2007-01-06T00:00:00Z,special_docs,,
203937,https://drive.google.com/file/d/1wCQyIFbCd08d2OII9KyGJGsNq7AfVvdg/view?usp=share_link,How can we see the impact of AI strategy research _ Jade Leung _ EA Global - San Francisco 2019-by Centre for Effective Altruism-video_id 8M3nIu7GIsA-date 20190829,['Jade Leung'],2019-08-28T22:00:00Z,special_docs,,md
203959,https://medium.com/ai-control/alba-an-explicit-proposal-for-aligned-ai-17a55f60bbcf#.3jwpm81j8,ALBA: An explicit proposal for aligned AI,['Paul Christiano'],2016-02-24T00:00:00Z,special_docs,,
203990,https://cocosci.princeton.edu/papers/malaviya2022can.pdf,Can Humans Do Less-Than-One-Shot Learning?,"['Maya Malaviya', 'Ilia Sucholutsky', 'Kerem Oktar', 'Thomas L. Griffiths']",2022-08-10T00:00:00Z,special_docs,,
204005,https://www.jair.org/index.php/jair/article/view/11633,ASNets: Deep Learning for Generalised Planning.,"['Sam Toyer', 'Felipe Trevizan', 'Sylvie Thiebaux', 'Lexing Xie']",2020-08-10T00:00:00Z,special_docs,,
204035,https://doi.org/10.1002/9781118736302.ch3,Who knows anything about anything about AI?,"['Stuart Armstrong', 'Seán Ó hÉigeartaigh']",2014-01-01T00:00:00Z,special_docs,,pdf
204058,https://intelligence.org/files/CEV.pdf,Coherent Extrapolated Volition,['Eliezer Yudkowsky'],2004-01-01T00:00:00Z,special_docs,,pdf
204080,https://drive.google.com/file/d/1HMb90gEERyFcjf3w8WLXsI3ATNj5mnqK/view?usp=share_link,CHAI Newsletter #3 2022,['CHAI'],2022-09-01T00:00:00Z,special_docs,,
204105,https://ai-alignment.com/better-priors-as-a-safety-problem-24aa1c300710,Better priors as a safety problem,['Paul Christiano'],2020-07-05T00:00:00Z,special_docs,,html
204118,https://bounded-regret.ghost.io/ai-forecasting/,Updates and Lessons from AI Forecasting,['Jacob Steinhardt'],2021-08-18T00:00:00Z,special_docs,,html
204140,https://jair.org/index.php/jair/article/view/12360,The Societal Implications of Deep Reinforcement Learning,"['Jess Whittlestone', 'Kai Arulkumaran', 'Matthew Crosby']",2021-05-01T00:00:00Z,special_docs,,pdf
204191,https://longtermrisk.org/a-lower-bound-on-the-importance-of-promoting-cooperation/,A Lower Bound on the Importance of Promoting Cooperation,['Brian Tomasik'],2015-08-29T00:00:00Z,special_docs,,html
204205,https://longtermrisk.org/files/toward_cooperation_learning_games_oct_2020.pdf,Towards Cooperation in Learning Games,"['Jesse Clifton', 'Maxime Riché']",2020-01-01T00:00:00Z,special_docs,,pdf
204229,https://openreview.net/pdf?id=kBVJ2NtiY-,Learning What To Do by Simulating the Past.,"['David Lindner', 'Rohin Shah', 'Pieter Abbeel', 'Anca Dragan']",2021-08-10T00:00:00Z,special_docs,,
204250,https://drive.google.com/file/d/1tOKaR_9chGQFePFXUBPalyAgG78rQ8z9/view?usp=share_link,The Windfall Clause - Sharing the benefits of advanced AI _ Cullen O’Keefe-by Centre for Effective Altruism-video_id vFDL-NxY610-date 20190829,"[""Cullen O'Keefe""]",2019-08-28T22:00:00Z,special_docs,,md
204272,https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth,Could Advanced AI Drive Explosive Economic Growth?,['Tom Davidson'],2021-04-08T00:00:00Z,special_docs,,html
204288,https://people.eecs.berkeley.edu/~russell/papers/iccv19-brm.pdf,Bayesian Relational Memory for Semantic Visual Navigation.,['IEEE Transactions on Robotics'],2019-08-10T00:00:00Z,special_docs,,
204313,https://drive.google.com/file/d/1hkDZvLBP3FsIx-qHzNbdrakdTq44VkgA/view?usp=share_link,AI safety needs social scientists _ Amanda Askell _ EA Global - London 2018-by Centre for Effective Altruism-video_id TWHcK-BNo1w-date 20190301,['Amanda Askell'],2019-02-28T23:00:00Z,special_docs,,md
204333,https://cset.georgetown.edu/publication/ai-accidents-an-emerging-threat/,AI Accidents: An Emerging Threat,"['Zachary Arnold', 'Helen Toner']",2021-07-01T00:00:00Z,special_docs,,pdf
204362,https://arielrubinstein.tau.ac.il/papers/53.pdf,On the Interpretation of Decision Problems with Imperfect Recall,"['Michele Piccione', 'Ariel Rubinstein']",1996-01-01T00:00:00Z,special_docs,,pdf
204383,http://sunnyday.mit.edu/Bow-tie-final.pdf,Shortcomings of the Bow Tie and Other Safety Tools Based on Linear Causality,['Nancy G. Leveson'],2019-09-01T00:00:00Z,special_docs,,
204402,https://proceedings.neurips.cc/paper/2021/file/b9ed18a301c9f3d183938c451fa183df-Paper.pdf,Reinforcement Learning in Newcomblike Environments,"['James Bell', 'Linda Linsefors', 'Caspar Oesterheld', 'Joar Skalse']",2020-12-01T00:00:00Z,special_docs,,pdf
204421,https://www.fhi.ox.ac.uk/wp-content/uploads/monte_carlo_arXiv.pdf,Off-policy Monte Carlo agents with variable behaviour policies,['Stuart Armstrong'],2015-01-01T00:00:00Z,special_docs,,pdf
204438,https://fsone-bb4c.kxcdn.com/wp-content/uploads/2018/11/AGI-Coordination-Geat-Powers-Report.pdf,Artificial General Intelligence: Coordination and Great Powers,"['Allison Duettman', 'Olga Afanasjeva', 'Stuart Armstrong', 'Ryan Braley', 'Jessica Cussins', 'Jeffrey Ding', 'Peter Eckersley', 'Melody Guan', 'Alyssa Vance', 'Roman Yampolskiy']",2018-01-01T00:00:00Z,special_docs,,pdf
204470,https://www.openphilanthropy.org/blog/modeling-human-trajectory,Modeling the Human Trajectory,['David Roodman'],2020-06-15T00:00:00Z,special_docs,,html
204494,https://www.fhi.ox.ac.uk/wp-content/uploads/trajectories.pdf,Long-term trajectories of human civilization,"['Seth D. Baum', 'Stuart Armstrong', 'Timoteus Ekenstedt', 'Olle Häggström', 'Robin Hanson', 'Karin Kuhlemann', 'Matthijs M. Maas', 'James D. Miller', 'Markus Salmela', 'Anders Sandberg']",2019-01-01T00:00:00Z,special_docs,,pdf
204526,https://medium.com/@paulfchristiano/three-impacts-of-machine-intelligence-6285c8d85376,Three impacts of machine intelligence,['Paul Christiano'],2014-11-29T00:00:00Z,special_docs,,
204549,https://cocosci.princeton.edu/papers/bai_globally_2022.pdf,Globally inaccurate stereotypes can result from locally adaptive exploration.,"['Xuechunzi Bai', 'Susan T. Fiske', 'Thomas L. Griffiths']",2022-08-10T00:00:00Z,special_docs,,
204565,https://ai-alignment.com/universality-and-security-amplification-551b314a3bab,Universality and security amplification,['Paul Christiano'],2019-01-03T00:00:00Z,special_docs,,html
204585,https://casparoesterheld.com/2017/06/25/complications-in-evaluating-neglectedness/,Complications in evaluating neglectedness,['Caspar Oesterheld'],2017-06-25T00:00:00Z,special_docs,,html
204610,https://cocosci.princeton.edu/papers/thompsonhuman.pdf,Human biases limit cumulative innovation.,"['Bill Thompson', 'Thomas L. Griffiths']",2021-08-10T00:00:00Z,special_docs,,
204630,https://cmp.felk.cvut.cz/~peckama2/papers/safe_exploration_overview_lncs.pdf,Safe Exploration Techniques for Reinforcement Learning – An Overview,"['Martin Pecka', 'Tomas Svoboda']",2014-01-01T00:00:00Z,special_docs,,pdf
204656,http://mediangroup.org/docs/toward_a_working_theory_of_mind.pdf,Toward A Working Theory of Mind,['Miya Perry'],2018-01-01T00:00:00Z,special_docs,,pdf
204678,https://ai-alignment.com/unsupervised-translation-as-a-safety-problem-99ae1f9b6b68,“Unsupervised” translation as an (intent) alignment problem,['Paul Christiano'],2020-09-30T00:00:00Z,special_docs,,html
204694,https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4052735/,Non-pharmacological cognitive enhancement,"['Martin Dresler', 'Anders Sandberg', 'Kathrin Ohla', 'Christoph Bublitz', 'Carlos Trenado', 'Aleksandra Mroczko-Wąsowicz', 'Simone Kühn', 'Dimitris Repantis']",2013-01-01T00:00:00Z,special_docs,,html
204728,https://cocosci.princeton.edu/papers/morgan2022experimental.pdf,"The experimental evolution of human culture: flexibility, fidelity and environmental instability.","['Thomas J. Morgan', 'Jordan W. Suchow', 'Thomas L. Griffiths']",2022-08-10T00:00:00Z,special_docs,,
204748,https://doi.org/10.1088%2F1402-4896%2Faa90e8,How feasible is the rapid development of artificial superintelligence?,['Kaj Sotala'],2017-10-01T00:00:00Z,special_docs,,pdf
204778,https://intelligence.org/files/lob-notes-IAFF.pdf,An Introduction to Löb’s Theorem in MIRI Research,['Patrick LaVictoire'],2015-02-23T00:00:00Z,special_docs,,pdf
204802,https://sethbaum.com/ac/2017_Integrated.pdf,Towards an Integrated Assessment of Global Catastrophic Risk,"['Seth Baum', 'Anthony M. Barrett']",2018-01-17T00:00:00Z,special_docs,,pdf
204829,https://cset.georgetown.edu/wp-content/uploads/Building-Trust-Through-Testing.pdf,Building Trust Through Testing,"['Michèle A. Flournoy', 'Avril Haines', 'Gabrielle Chefitz']",2020-01-01T00:00:00Z,special_docs,,pdf
204863,https://link.springer.com/article/10.3758/s13423-017-1288-6,Empirical evidence for resource-rational anchoring and adjustment.,"['Falk Lieder', 'Thomas L. Griffiths', 'Quentin J. M. Huys', 'Noah D. Goodman']",2018-08-10T00:00:00Z,special_docs,,
204880,https://aair-lab.github.io/Publications/srivastava_aaai21.pdf,Unifying Principles and Metrics for Safe and Assistive AI.,['Siddharth Srivastava'],2021-08-10T00:00:00Z,special_docs,,
204903,https://intelligence.org/files/QuantilizersSaferAlternative.pdf,Quantilizers: A safer alternative to maximizers for limited optimization,['Jessica Taylor'],2016-01-01T00:00:00Z,special_docs,,pdf
204923,http://www.jmlr.org/papers/volume16/garcia15a/garcia15a.pdf,A Comprehensive Survey on Safe Reinforcement Learning,"['Javier Garcia', 'Fernando Fernandez']",2015-08-01T00:00:00Z,special_docs,,
204951,https://psyarxiv.com/63zvw,Predicting responsibility judgments from dispositional inferences and causal attributions.,"['Antonia Langenhoff', 'Alex Wiegmann', 'Joseph Y. Halpern', 'Joshua B. Tenenbaum', 'Tobias Gerstenberg']",2020-08-10T00:00:00Z,special_docs,,
204968,https://cset.georgetown.edu/publication/federal-prize-competitions/,Federal Prize Competitions,"['Ali Crawford', 'Ido Wulkan']",2021-11-01T00:00:00Z,special_docs,,pdf
205005,https://doi.org/10.1007/978-3-662-54033-6_1,Introduction to the technological singularity,['Stuart Armstrong'],2017-01-01T00:00:00Z,special_docs,,pdf
205021,https://medium.com/ai-control/policy-amplification-6a70cbee4f34#.31incu10a,Capability amplification,['Paul Christiano'],2016-10-03T00:00:00Z,special_docs,,
205039,https://dl.acm.org/doi/10.1145/3375627.3375857,Should Artificial Intelligence Governance be Centralised?: Design Lessons from History,"['Peter Cihon', 'Matthijs M. Maas', 'Luke Kemp']",2020-02-07T00:00:00Z,special_docs,,
205066,https://papers.nips.cc/paper/2021/hash/1454ca2270599546dfcd2a3700e4d2f1-Abstract.html,Hindsight Task Relabelling: Experience Replay for Sparse Reward Meta-RL.,"['Charles Packer', 'Pieter Abbeel', 'Joseph E. Gonzalez']",2021-08-10T00:00:00Z,special_docs,,
205084,https://cocosci.princeton.edu/papers/zhangoptimal.pdf,Optimal policies for free recall.,"['Qiong Zhang', 'Thomas L. Griffiths', 'Kenneth A. Norman']",2022-08-10T00:00:00Z,special_docs,,
205098,http://link.springer.com/10.1007/s00146-017-0760-1,Social choice ethics in artificial intelligence,['Seth D. Baum'],2020-03-01T00:00:00Z,special_docs,,
205126,https://medium.com/@tdietterich/benefits-and-risks-of-artificial-intelligence-460d288cccf3#.4mobx01nw,Benefits and Risks of Artificial Intelligence,['Thomas G. Dietterich'],2015-01-23T00:00:00Z,special_docs,,
205159,https://www.econlib.org/archives/2016/03/so_far_unfriend.html,So Far: Unfriendly AI Edition,['Eliezer Yudkowsky'],2016-03-29T00:00:00Z,special_docs,,html
205179,https://drive.google.com/file/d/15lAvcmvMDJ34gCQuxcTR-kyV5Y-NC2C3/view?usp=share_link,Jaime Sevilla - Projecting AI progress from compute trends-by Towards Data Science-video_id 2NXagVA3yzg-date 20220413,"['Jaime Sevilla', 'Jeremie Harris']",2022-04-12T22:00:00Z,special_docs,,md
205194,https://casparoesterheld.com/2017/03/15/the-average-utilitarians-solipsism-wager/,The average utilitarian’s solipsism wager,['Caspar Oesterheld'],2017-03-15T00:00:00Z,special_docs,,html
205210,https://longtermrisk.org/collaborative-game-specification/,Collaborative game specification: arriving at common models in bargaining,['Jesse Clifton'],2021-03-06T00:00:00Z,special_docs,,html
205227,https://onlinelibrary.wiley.com/doi/abs/10.1111/j.0272-4332.2004.00419.x,A Critical Look at Risk Assessments for Global Catastrophes,['Adrian Kent'],2004-02-01T00:00:00Z,special_docs,,xml
205247,http://people.eecs.berkeley.edu/~russell/papers/aabi21-oupm.pdf,Transforming Worlds: Automated Involutive MCMC for Open-Universe Probabilistic Models.,"['George Matheos', 'Alexander K. Lew', 'Matin Ghavamizadeh', 'Stuart Russell', 'Marco Cusumano-Towner', 'Vikash K. Mansinghka']",2021-08-10T00:00:00Z,special_docs,,
205264,http://dspace.mit.edu/handle/1721.1/101522,Bayesian computational models for inferring preferences,['Owain Rhys Evans'],2015-01-01T00:00:00Z,special_docs,,pdf
205281,https://intelligence.org/files/DefiningValuesForValueLearners.pdf,Defining human values for value learners,['Kaj Sotala'],2016-01-01T00:00:00Z,special_docs,,pdf
205297,http://link.springer.com/10.1007/s11229-006-9010-7,Sleeping Beauty and Self-location: A Hybrid Model,['Nick Bostrom'],2007-05-24T00:00:00Z,special_docs,,pdf
205312,https://drive.google.com/file/d/15KPTTyONZkBjGyeO1J7NN_dptUGXwY9V/view,Interview with cvgig,"['cvgig', 'Vael Gates']",2022-03-24T00:00:00Z,special_docs,,docx
205337,https://cocosci.princeton.edu/papers/dasgupta22b.pdf,Distinguishing rule- and exemplar-based generalization in learning systems.,"['Ishita Dasgupta', 'Erin Grant', 'Thomas L. Griffiths']",2022-08-10T00:00:00Z,special_docs,,
205361,https://cset.georgetown.edu/publication/trusted-partners/,Trusted Partners,"['Margarita Konaev', 'Tina Huang', 'Husanjot Chahal']",2021-02-01T00:00:00Z,special_docs,,pdf
205405,http://www.aleph.se/Trans/Global/Posthumanity/WeBorg.html,"We, Borg: Speculations on hive minds as a posthuman state",['Anders Sandberg'],2015-01-01T00:00:00Z,special_docs,,html
205422,http://link.springer.com/10.1007/978-3-319-08795-5_2,Unifying Logic and Probability: A New Dawn for AI?,['Stuart Russell'],2014-01-01T00:00:00Z,special_docs,,pdf
205439,https://casparoesterheld.com/2017/01/18/is-it-a-bias-or-just-a-preference-an-interesting-issue-in-preference-idealization/,Is it a bias or just a preference? An interesting issue in preference idealization,['Caspar Oesterheld'],2017-01-18T00:00:00Z,special_docs,,html
205456,https://www.fhi.ox.ac.uk/wp-content/uploads/2015/03/Armstrong_AAAI_2015_Motivated_Value_Selection.pdf,Motivated value selection for artificial agents,['Stuart Armstrong'],2015-01-01T00:00:00Z,special_docs,,pdf
205478,https://towardsdatascience.com/assessing-generalization-in-reward-learning-intro-and-background-da6c99d9e48,Assessing Generalization in Reward Learning: Intro and Background,"['Max Chiswick', 'Anton Makiievskyi', 'Liang Zhou']",2020-11-20T00:00:00Z,special_docs,,html
205509,https://longtermrisk.org/reasons-to-be-nice-to-other-value-systems/,Reasons to Be Nice to Other Value Systems,['Brian Tomasik'],2015-08-29T00:00:00Z,special_docs,,html
205537,http://proceedings.mlr.press/v97/hendrycks19a/hendrycks19a.pdf,Using Pre-Training Can Improve Model Robustness and Uncertainty.,"['Dan Hendrycks', 'Kimin Lee', 'Mantas Mazeika']",2019-08-10T00:00:00Z,special_docs,,
205571,https://doi.org/10.1007/978-3-662-54033-6_8,"Energy, Complexity, and the Singularity",['Kent A. Peacock'],2017-01-01T00:00:00Z,special_docs,,pdf
205601,https://iliad.stanford.edu/pdfs/publications/choudhury2022dynamic.pdf,Dynamic Multi-Robot Task Allocation under Uncertainty and Temporal Constraints.,"['Shushman Choudhury', 'Jayesh Gupta', 'Mykel J. Kochenderfer', 'Dorsa Sadigh', 'Jeannette Bohg']",2022-08-10T00:00:00Z,special_docs,,
205617,https://openreview.net/forum?id=Nct9j3BVswZ,"Self-Supervise, Refine, Repeat: Improving Unsupervised Anomaly Detection","['Jinsung Yoon', 'Kihyuk Sohn', 'Chun-Liang Li', 'Sercan O. Arik', 'Chen-Yu Lee', 'Tomas Pfister']",2021-09-29T00:00:00Z,special_docs,,pdf
205633,https://cset.georgetown.edu/publication/ai-and-the-future-of-cyber-competition/,AI and the Future of Cyber Competition,['Wyatt Hoffman'],2021-01-01T00:00:00Z,special_docs,,pdf
205652,http://link.springer.com/10.1007/s10955-017-1836-5,Why Does Deep and Cheap Learning Work So Well?,"['Henry W. Lin', 'Max Tegmark', 'David Rolnick']",2017-09-01T00:00:00Z,special_docs,,pdf
205677,https://openreview.net/forum?id=SkgZNnR5tX,Uncovering Surprising Behaviors in Reinforcement Learning via Worst-case Analysis,"['Avraham Ruderman', 'Richard Everett', 'Bristy Sikder', 'Hubert Soyer', 'Jonathan Uesato', 'Ananya Kumar', 'Charlie Beattie', 'Pushmeet Kohli']",2018-09-27T00:00:00Z,special_docs,,xml
205695,https://cset.georgetown.edu/publication/robot-hacking-games/,Robot Hacking Games,['Dakota Cary'],2021-09-01T00:00:00Z,special_docs,,pdf
205710,https://doi.org/10.1007/s13347-020-00415-6,The Whiteness of AI,"['Stephen Cave', 'Kanta Dihal']",2020-08-06T00:00:00Z,special_docs,,pdf
205725,https://drive.google.com/file/d/1XWDpS-QTI2fmFZw4ugcmXmQqj2yuSEP4/view?usp=share_link,How sure are we about this AI stuff _ Ben Garfinkel _ EA Global - London 2018-by Centre for Effective Altruism-video_id E8PGcoLDjVk-date 20190204,['Ben Garfinkel'],2019-02-03T23:00:00Z,special_docs,,md
205750,https://drive.google.com/file/d/1rZxNoVYm2-sOEc6FQXIT3c9Qq0mgdN8p/view,Interview with 7oalk,"['7oalk', 'Vael Gates']",2022-03-20T00:00:00Z,special_docs,,docx
205787,https://casparoesterheld.com/2017/02/06/betting-on-the-past-by-arif-ahmed/,“Betting on the Past” by Arif Ahmed,['Johannes Treutlein'],2017-02-06T00:00:00Z,special_docs,,html
205796,https://people.eecs.berkeley.edu/~anca/papers/IROS16_active.pdf,Information Gathering Actions Over Human Internal State.,"['Dorsa Sadigh', 'S. Shankar Sastry', 'Sanjit A. Seshia', 'Anca Dragan']",2016-08-10T00:00:00Z,special_docs,,
205812,http://dl.acm.org/citation.cfm?doid=2038642.2038685,Formal verification of hybrid systems,['Rajeev Alur'],2011-10-09T00:00:00Z,special_docs,,xml
205837,https://static1.squarespace.com/static/54763f79e4b0c4e55ffb000c/t/54e90604e4b09706d4a4fc65/1424557572437/beyond-point-and-shoot-morality.pdf,Beyond Point-and-Shoot Morality: Why Cognitive (Neuro)Science Matters for Ethics,['Joshua D. Greene'],2014-07-25T00:00:00Z,special_docs,,
205855,https://longtermrisk.org/files/stastny_et_al_implicit_bargaining.pdf,Multi-agent learning in mixed-motive coordination problems,"['Julian Stastny', 'Johannes Treutlein', 'Maxime Riché', 'Jesse Clifton']",2021-03-08T00:00:00Z,special_docs,,pdf
205864,http://mediangroup.org/gpu.html,How rapidly are GPUs improving in price performance?,['Baeo Maltinsky'],2018-01-01T00:00:00Z,special_docs,,html
205879,http://www.cs.utexas.edu/~pstone/Papers/bib2html-links/KCAP09-knox.pdf,Interactively shaping agents via human reinforcement: The TAMER framework,"['W. Bradley Knox', 'Peter Stone']",2009-09-01T00:00:00Z,special_docs,,
205896,http://link.springer.com/10.1007/978-3-319-09668-1_2,How We’re Predicting AI – or Failing to,"['Stuart Armstrong', 'Kaj Sotala']",2015-01-01T00:00:00Z,special_docs,,pdf
205916,https://doi.org/10.1177/0022002721995549,Reputations for Resolve and Higher-Order Beliefs in Crisis Bargaining,"['Allan Dafoe', 'Remco Zwetsloot', 'Matthew Cebul']",2021-08-01T00:00:00Z,special_docs,,pdf
205936,https://docs.google.com/document/d/1PXwjSbh-g1U1JEXhf55C7YsqD5qKNdBPQGgQ7W0Zm1A/edit?usp=sharing,X-Risk Motivations for Safety Research Directions,"['Dan Hendrycks', 'Nicholas Carlini', 'John Schulman', 'Jacob Steinhardt']",2022-05-23T00:00:00Z,special_docs,,
205999,https://doi.org/10.1007/s10514-018-9756-z,Special issue on learning for human–robot collaboration,"['Leonel Rozo', 'Heni Ben Amor', 'Sylvain Calinon', 'Anca Dragan', 'Dongheui Lee']",2018-06-01T00:00:00Z,special_docs,,pdf
206038,http://www.machinelearning.org/archive/icml2008/papers/627.pdf,Knows What It Knows: A Framework For Self-Aware Learning,"['Lihong Li', 'Michael L. Littman', 'Thomas J. Walsh']",2011-01-01T00:00:00Z,special_docs,,
206052,http://www.nature.com/nature/journal/v518/n7540/full/nature14236.html,Human-level control through deep reinforcement learning,"['Volodymyr Mnih', 'Koray Kavukcuoglu', 'David Silver', 'Andrei A. Rusu', 'Joel Veness', 'Marc G. Bellemare', 'Alex Graves', 'Martin Riedmiller', 'Andreas K. Fidjeland', 'Georg Ostrovski', 'Stig Petersen', 'Charles Beattie', 'Amir Sadik', 'Ioannis Antonoglou', 'Helen King', 'Dharshan Kumaran', 'Daan Wierstra', 'Shane Legg', 'Demis Hassabis']",2015-02-25T00:00:00Z,special_docs,,
206073,https://papers.nips.cc/paper/2020/hash/d464b5ac99e74462f321c06ccacc4bff-Abstract.html,The MAGICAL Benchmark for Robust Imitation,"['Sam Toyer', 'Rohin Shah', 'Andrew Critch', 'Stuart Russell']",2020-01-01T00:00:00Z,special_docs,,pdf
206096,https://drive.google.com/file/d/1vgZqoUWMOL4WRjt2lILflJWVsk2kEj0H/view?usp=share_link,Yudkowsky-Hanson Jane Street Debate 2011,"['Eliezer Yudkowsky', 'Robin Hanson']",2010-12-31T00:00:00Z,special_docs,,md
206124,http://mediangroup.org/insights2.html,Revisiting the Insights model,['Median Group'],2019-01-01T00:00:00Z,special_docs,,html
206137,https://onlinelibrary.wiley.com/doi/abs/10.1111/1758-5899.12002,Existential Risk Prevention as Global Priority,['Nick Bostrom'],2013-01-01T00:00:00Z,special_docs,,xml
206174,https://intelligence.org/files/Corrigibility.pdf,Corrigibility,"['Nate Soares', 'Benja Fallenstein', 'Stuart Armstrong', 'Eliezer Yudkowsky']",2015-01-01T00:00:00Z,special_docs,,
206198,https://www.goodreads.com/book/show/38471807-solomon-s-code,Solomon's Code: Humanity in a World of Thinking Machines,"['Olaf Groth', 'Mark Nitzberg']",2018-11-06T00:00:00Z,special_docs,,epub
206228,https://drive.google.com/file/d/12EKK8SfY21Tge1M9ADJHYUk2yWB7T4zG/view?usp=share_link,Irina Rish - Out-of-distribution generalization-by Towards Data Science-video_id QjXFN4UWZCg-date 20220309,"['Irina Rish', 'Jeremie Harris']",2022-03-08T23:00:00Z,special_docs,,md
206248,https://medium.com/ai-control/technical-and-social-approaches-to-ai-safety-5e225ca30c46,Technical and social approaches to AI safety,['Paul Christiano'],2015-04-13T00:00:00Z,special_docs,,
206268,https://casparoesterheld.com/2018/04/26/goertzels-golem-implements-evidential-decision-theory-applied-to-policy-choice/,Goertzel’s GOLEM implements evidential decision theory applied to policy choice,['Caspar Oesterheld'],2018-04-26T00:00:00Z,special_docs,,html
206278,https://drive.google.com/file/d/1sb1IlXM1FMU6lYEYLSZODama2EuHSdK2/view?usp=sharing,CHAI Newsletter #1 2022,['CHAI'],2022-04-01T00:00:00Z,special_docs,,
206315,https://www.goodreads.com/book/show/55605905-the-technological-singularity,The Technological Singularity,['Murray Shanahan'],2015-08-07T00:00:00Z,special_docs,,epub
206351,https://justinsvegliato.com/s/NSBRZiros2022.pdf,"Selecting the Partial State Abstractions of MDPs: A Metareasoning Approach with Deep Reinforcement Learning.","['Samer B. Nashed', 'Justin Svegliato', 'Abhinav Bhatia', 'Stuart Russell', 'Shlomo Zilberstein']",2022-08-10T00:00:00Z,special_docs,,
206371,https://www.nature.com/articles/s42256-020-0195-0,Artificial intelligence in a crisis needs ethics with urgency,"['Asaf Tzachor', 'Jess Whittlestone', 'Lalitha Sundaram', 'Seán Ó hÉigeartaigh']",2020-07-01T00:00:00Z,special_docs,,html
206404,https://www.bmj.com/content/372/bmj.n364,Using AI ethically to tackle covid-19,"['Stephen Cave', 'Jess Whittlestone', 'Rune Nyrup', 'Seán Ó hÉigeartaigh', 'Rafael A. Calvo']",2021-03-16T00:00:00Z,special_docs,,xml
206436,https://pulkitverma.net/assets/pdf/vms_aaai21/vms_aaai21.pdf,Asking the Right Questions: Learning Interpretable Action Models Through Query Answering.,"['Pulkit Verma', 'Shashank Rao Marpally', 'Siddharth Srivastava']",2021-08-10T00:00:00Z,special_docs,,
206454,https://www.fhi.ox.ac.uk/wp-content/uploads/SafeML2019_paper_40.pdf,How Useful Is Quantilization For Mitigating Specification-Gaming?,['Ryan Carey'],2019-01-01T00:00:00Z,special_docs,,pdf
206472,https://intelligence.org/files/VingeanReflection.pdf,Vingean Reflection: Reliable Reasoning for Self-Improving Agents,"['Benja Fallenstein', 'Nate Soares']",2015-02-01T00:00:00Z,special_docs,,pdf
206497,https://www.governance.ai/research-paper/how-will-national-security-considerations-affect-antitrust-decisions-in-ai-an-examination-of-historical-precedents,How Will National Security Considerations Affect Antitrust Decisions in AI? An Examination of Historical Precedents,['Cullen O’Keefe'],2020-07-07T00:00:00Z,special_docs,,xml
206519,https://cset.georgetown.edu/publication/u-s-ai-workforce-policy-recommendations/,U.S. AI Workforce: Policy Recommendations,"['Diana Gehlhaus', 'Luke Koslosky', 'Kayla Goode', 'Claire Perkins']",2021-10-01T00:00:00Z,special_docs,,pdf
206555,https://drive.google.com/file/d/1c9Ee4DjntG3BWZiLtaDALoLsKkuIa8YF/view?usp=share_link,Why companies should be leading on AI governance _ Jade Leung _ EA Global - London 2018-by Centre for Effective Altruism-video_id AVDIQvJVhso-date 20190207,['Jade Leung'],2019-02-06T23:00:00Z,special_docs,,md
206579,https://longtermrisk.org/flavors-of-computation-are-flavors-of-consciousness/,Flavors of Computation Are Flavors of Consciousness,['Brian Tomasik'],2015-04-10T00:00:00Z,special_docs,,html
206594,https://cset.georgetown.edu/publication/indonesias-ai-promise-in-perspective/,Indonesia’s AI Promise in Perspective,"['Kayla Goode', 'Heeu Millie Kim']",2021-08-01T00:00:00Z,special_docs,,pdf
206626,https://doi.org/10.1007/978-3-662-54033-6_7,Diminishing Returns and Recursive Self Improving Artificial Intelligence,"['Andrew Majot', 'Roman Yampolskiy']",2017-01-01T00:00:00Z,special_docs,,pdf
206653,http://www-personal.umich.edu/~xintongw/papers/advgan2020ijcai.pdf,Market Manipulation: An Adversarial Learning Framework for Detection and Evasion.,"['Xintong Wang', 'Michael P. Wellman']",2020-08-10T00:00:00Z,special_docs,,
206670,http://www2.eecs.berkeley.edu/Pubs/TechRpts/2020/EECS-2020-210.pdf,Extracting and Using Preference Information from the State of the World,['Rohin Monish Shah'],2020-12-17T00:00:00Z,special_docs,,pdf
206686,https://drive.google.com/file/d/1NXlJccLPk2UV_Z3qOzldBXId4l6dEcHp/view?usp=sharing,CHAI Newsletter #2 2021,['CHAI'],2021-09-01T00:00:00Z,special_docs,,
206713,https://www.mdpi.com/2073-4336/12/2/46,Spoofing the Limit Order Book: A Strategic Agent-Based Analysis.,"['Xintong Wang', 'Christopher Hoang', 'Yevgeniy Vorobeychik', 'Michael P. Wellman']",2021-08-10T00:00:00Z,special_docs,,
206733,https://doi.org/10.1007/s43681-020-00037-w,Bridging the gap: the case for an ‘Incompletely Theorized Agreement’ on AI policy,"['Charlotte Stix', 'Matthijs M. Maas']",2021-08-01T00:00:00Z,special_docs,,pdf
206745,https://people.eecs.berkeley.edu/~anca/papers/ISER16_influence.pdf,Implicitly Assisting Humans to Choose Good Grasps in Robot to Human Handovers.,"['Aaron Bestick', 'Ruzena Bajcsy', 'Anca Dragan']",2016-08-10T00:00:00Z,special_docs,,
206760,https://medium.com/ai-control/alba-on-github-5636ef510907#.ovfrkun0r,ALBA on GitHub,['Paul Christiano'],2016-10-19T00:00:00Z,special_docs,,
206793,https://ai-alignment.com/the-strategy-stealing-assumption-a26b8b1ed334,The strategy-stealing assumption,['Paul Christiano'],2019-09-15T00:00:00Z,special_docs,,html
206812,https://drive.google.com/file/d/1gFuZ9_ykohGQTTCnbCnReg5saTR9Letl/view?usp=share_link,NeurIPSorICML_cvgig-by Vael Gates-date 20220324,['Vael Gates'],2022-03-23T23:00:00Z,special_docs,,md
206864,https://longtermrisk.org/files/risks-of-astronomical-future-suffering.pdf,Risks of Astronomical Future Suffering,['Brian Tomasik'],2011-01-01T00:00:00Z,special_docs,,pdf
206891,https://www.microsoft.com/en-us/research/wp-content/uploads/2016/02/amershi_AIMagazine2014.pdf,Power to the People: The Role of Humans in Interactive Machine Learning,"['Saleema Amershi', 'Maya Cakmak', 'W. Bradley Knox', 'Todd Kulesza']",2013-01-01T00:00:00Z,special_docs,,
206930,https://www.nature.com/articles/s41598-019-50145-9,The Psychology of Existential Risk: Moral Judgments about Human Extinction,"['Stefan Schubert', 'Lucius Caviola', 'Nadira S. Faber']",2019-10-21T00:00:00Z,special_docs,,html
206953,https://drive.google.com/file/d/1-1YpMOCJJKAXdQDVlhbNf68f9_Bh_ADU/view,Interview with 92iem,"['92iem', 'Vael Gates']",2022-03-21T00:00:00Z,special_docs,,docx
206976,http://link.springer.com/10.1007/s11023-019-09513-7,Algorithmic Decision-Making and the Control Problem,"['John Zerilli', 'Alistair Knott', 'James Maclaurin', 'Colin Gavaghan']",2019-12-01T00:00:00Z,special_docs,,xml
207000,https://www.fhi.ox.ac.uk/Existential-risk-and-existential-hope.pdf,Existential risk and existential hope: definitions,"['Owen Cotton-Barratt', 'Toby Ord']",2015-01-01T00:00:00Z,special_docs,,pdf
207017,https://drive.google.com/file/d/1gRVDU0KI3YNS6ObJ3yDEV8inXO82wLPC/view?usp=sharing,CHAI Newsletter #1 2019,['CHAI'],2019-04-01T00:00:00Z,special_docs,,
207032,https://cocosci.princeton.edu/papers/meylanevaluating.pdf,Evaluating models of robust word recognition with serial reproduction.,"['Stephan C. Meylan', 'Sathvik Nair', 'Thomas L. Griffiths']",2021-08-10T00:00:00Z,special_docs,,
207048,https://www.garymcgraw.com/wp-content/uploads/2020/02/BIML-ARA.pdf,An Architectural Risk Analysis of Machine Learning Systems: Toward More Secure Machine Learning,"['Gary McGraw', 'Harold Figueroa', 'Victor Shepardson', 'Richie Bonett']",2020-01-13T00:00:00Z,special_docs,,pdf
207096,http://arxiv-export-lb.library.cornell.edu/pdf/1911.11132,Scaling Out-of-Distribution Detection for Real-World Settings.,"['Dan Hendrycks', 'Steven Basart', 'Mantas Mazeika', 'Mohammadreza Mostajabi', 'Jacob Steinhardt', 'Dawn Song']",2019-08-10T00:00:00Z,special_docs,,
207117,https://www.nature.com/articles/s42256-021-00298-y,Institutionalizing ethics in AI through broader impact requirements,"['Carina E. A. Prunkl', 'Carolyn Ashurst', 'Markus Anderljung', 'Helena Webb', 'Jan Leike', 'Allan Dafoe']",2021-02-01T00:00:00Z,special_docs,,html
207140,https://www.cser.ac.uk/media/uploads/files/Cihon_et_al-_2019-_Should_AI_Governance_be_Centralised.pdf,Should Artificial Intelligence Governance be Centralised? Six Design Lessons from History,"['Peter Cihon', 'Matthijs M. Maas', 'Luke Kemp']",2019-12-15T00:00:00Z,special_docs,,pdf
207161,https://www.cs.nyu.edu/~fergus/papers/zeilerECCV2014.pdf,Visualizing and Understanding Convolutional Networks,"['Matthew D. Zeiler', 'Rob Fergus']",2014-01-01T00:00:00Z,special_docs,,
207178,http://www.sciencedirect.com/science/article/pii/S0016328717301623,Governing Boring Apocalypses: A new typology of existential vulnerabilities and exposures for existential risk research,"['Hin-Yan Liu', 'Kristian Cedervall Lauta', 'Matthijs Michiel Maas']",2018-09-01T00:00:00Z,special_docs,,xml
207211,https://why19.causalai.net/papers/mohan-why19.pdf,On Handling Self-masking and Other Hard Missing Data Problems.,['Karthika Mohan'],2018-08-10T00:00:00Z,special_docs,,
207228,https://ecai2020.eu/papers/1364_paper.pdf,AI Paradigms and AI Safety: Mapping Artefacts and Techniques to Safety Issues,"['Jose Hernandez-Orallo', 'Fernando Martínez-Plumed', 'Shahar Avin', 'Jess Whittlestone', 'Seán Ó hÉigeartaigh']",2020-01-01T00:00:00Z,special_docs,,
207254,https://cocosci.princeton.edu/papers/callawayrationaluse.pdf,Rational use of cognitive resources in human planning.,"['Frederick Callaway', 'Bas van Opheusden', 'Sayan Gul', 'Priyam Das', 'Paul M. Krueger', 'Thomas L. Griffiths', 'Falk Lieder']",2022-08-10T00:00:00Z,special_docs,,
207270,https://users.cs.duke.edu/~conitzer/safeAAMAS21.pdf,Safe Pareto Improvements for Delegated Game Playing,"['Caspar Oesterheld', 'Vincent Conitzer']",2021-01-01T00:00:00Z,special_docs,,pdf
207288,https://www.governance.ai/research-paper/a-tour-of-emerging-cryptographic-technologies,A Tour of Emerging Cryptographic Technologies,['Ben Garfinkel'],2021-05-19T00:00:00Z,special_docs,,pdf
207319,https://nickbostrom.com/views/transhumanist.pdf,Introduction—The Transhumanist FAQ: A General Introduction,['Nick Bostrom'],2014-01-01T00:00:00Z,special_docs,,pdf
207343,http://mediangroup.org/insights,Insight-based AI timelines model,['Baeo Maltinsky'],2018-01-01T00:00:00Z,special_docs,,html
207352,https://openreview.net/pdf?id=NpsVSN6o4ul,Interpretability in the Wild: a Circuit for Indirect Object Identification in GPT-2 Small,"['Kevin Ro Wang', 'Alexandre Variengien', 'Arthur Conmy', 'Buck Shlegeris', 'Jacob Steinhardt']",2023-03-01T00:00:00Z,special_docs,,
207375,http://aima.eecs.berkeley.edu/~russell/papers/aaai19-marl.pdf,Robust multi-agent reinforcement learning via minimax deep deterministic policy gradient.,"['Shihui Li', 'Yi Wu', 'Xinyue Cui', 'Honghua Dong', 'Fei Fang', 'Stuart Russell']",2019-08-10T00:00:00Z,special_docs,,
207388,https://aair-lab.github.io/Publications/ijcai19.pdf,"Why Can’t You Do That, HAL? Explaining Unsolvability of Planning Tasks.","['Sarath Sreedharan', 'Siddharth Srivastava', 'David Smith', 'Subbarao Kambhampati']",2019-08-10T00:00:00Z,special_docs,,
207403,http://www.nature.com/articles/548520a,"Artificial intelligence: The future is superintelligent [Book review of ""Life 3.0: Being Human in the Age of Artificial Intelligence"" by Max Tegmark]",['Stuart Russell'],2017-08-30T00:00:00Z,special_docs,,html
207426,https://proceedings.neurips.cc/paper/2020/hash/52f4691a4de70b3c441bca6c546979d9-Abstract.html,Preference learning along multiple criteria: A game-theoretic perspective.,"['Kush Bhatia', 'Ashwin Pananjady', 'Peter L. Bartlett', 'Anca D. Dragan', 'Martin J. Wainwright']",2020-08-10T00:00:00Z,special_docs,,
207438,https://drive.google.com/file/d/10BV6iaQ59OQ0y3cKYTqmg31inPnabNRg/view?usp=share_link,Sino-Western cooperation in AI safety _ Brian Tse _ EA Global - San Francisco 2019-by Centre for Effective Altruism-video_id 3qYmLRqemg4-date 20190829,['Brian Tse'],2019-08-28T22:00:00Z,special_docs,,md
207474,https://drive.google.com/file/d/1MTOo-ntlaB_oAcuBJtQ4c7iKt6LXDlKk/view?usp=share_link,How social science research can inform AI governance _ Baobao Zhang _ EAGxVirtual 2020-by Centre for Effective Altruism-video_id eTkvtHymI9s-date 20200615,['Baobao Zhang'],2020-06-14T22:00:00Z,special_docs,,md
207511,https://gcrinstitute.org/papers/059_ai-environmental-ethics.pdf,Artificial Intelligence Needs Environmental Ethics,"['Andrea Owe', 'Seth Baum']",2021-11-14T00:00:00Z,special_docs,,pdf
207547,https://academic.oup.com/book/2320/chapter-abstract/142464710,Liability For Present And Future Robotics Technology,"['Trevor N. White', 'Seth D. Baum']",2017-01-01T00:00:00Z,special_docs,,html
207569,https://cocosci.princeton.edu/papers/yamakoshiprobing.pdf,Probing BERT’s priors with serial reproduction chains.,"['Takateru Yamakoshi', 'Thomas L. Griffiths', 'Robert D. Hawkins']",2022-08-10T00:00:00Z,special_docs,,
207584,https://doi.org/10.1007/978-3-319-09069-6_25,Cyber insurance,"['Pythagoras Petratos', 'Anders Sandberg', 'Feng Zhou']",2017-01-01T00:00:00Z,special_docs,,pdf
207625,http://link.springer.com/10.1007/s42413-020-00086-3,Aligning AI Optimization to Community Well-Being,['Jonathan Stray'],2020-12-01T00:00:00Z,special_docs,,pdf
207658,https://nickbostrom.com/papers/vulnerable.pdf,The vulnerable world hypothesis,['Nick Bostrom'],2018-01-01T00:00:00Z,special_docs,,pdf
207681,https://thegradient.pub/independently-reproducible-machine-learning/,Quantifying Independently Reproducible Machine Learning,['Edward Raff'],2020-02-06T00:00:00Z,special_docs,,html
207704,https://pubsonline.informs.org/doi/abs/10.1287/moor.27.4.819.297,The Complexity of Decentralized Control of Markov Decision Processes,"['Daniel S. Bernstein', 'Robert Givan', 'Neil Immerman', 'Shlomo Zilberstein']",2002-11-01T00:00:00Z,special_docs,,xml
207725,https://nickbostrom.com/evolution.pdf,The Wisdom of Nature: An Evolutionary Heuristic for Human Enhancement,"['Nick Bostrom', 'Anders Sandberg']",2017-01-01T00:00:00Z,special_docs,,pdf
207736,"https://peterasaro.org/writing/Asaro,%20Ethics%20Auto%20Agents,%20AAAI.pdf",The Liability Problem for Autonomous Artificial Agents,['Peter M. Asaro'],2016-01-01T00:00:00Z,special_docs,,pdf
207756,https://papers.nips.cc/paper/2021/file/94130ea17023c4837f0dcdda95034b65-Paper.pdf,Improving Transferability of Representations via Augmentation-Aware Self-Supervision.,"['Hankook Lee', 'Kibok Lee', 'Kimin Lee', 'Honglak Lee', 'Jinwoo Shin']",2021-08-10T00:00:00Z,special_docs,,
207770,https://owainevans.github.io/pdfs/evans_ida_projects.pdf,Machine Learning Projects for Iterated Distillation and Amplification,"['Owain Evans', 'William Saunders', 'Andreas Stuhlmüller']",2019-01-01T00:00:00Z,special_docs,,pdf
207790,https://www.fhi.ox.ac.uk/wp-content/uploads/GovAI-working-paper-Who-owns-AI-Apr2020.pdf,Who owns artificial intelligence? A preliminary analysis of corporate intellectual property strategies and why they matter.,"['Nathan Calvin', 'Jade Leung']",2020-02-01T00:00:00Z,special_docs,,pdf
207806,http://www.nickbostrom.com/papers/pascal.pdf,Pascal’s Mugging,['Nick Bostrom'],2009-01-01T00:00:00Z,special_docs,,
207820,https://longtermrisk.org/do-artificial-reinforcement-learning-agents-matter-morally/,Do Artificial Reinforcement-Learning Agents Matter Morally?,['Brian Tomasik'],2016-07-28T00:00:00Z,special_docs,,html
207842,http://www.jmlr.org/papers/volume9/vandermaaten08a/vandermaaten08a.pdf,Visualizing Data Using t-SNE,"['Laurens van der Maaten', 'Geoff Hinton']",2008-11-01T00:00:00Z,special_docs,,
207859,https://nickbostrom.com/astronomical/waste.pdf,Astronomical Waste: The Opportunity Cost of Delayed Technological Development,['Nick Bostrom'],2003-01-01T00:00:00Z,special_docs,,pdf
207876,https://drive.google.com/file/d/1vhh9YN7ybAX9-hyYCZimEogitAGaC7RE/view,Interview with lgu5f,"['lgu5f', 'Vael Gates']",2022-03-22T00:00:00Z,special_docs,,docx
207906,https://cset.georgetown.edu/research/the-question-of-comparative-advantage-in-artificial-intelligence-enduring-strengths-and-emerging-challenges-for-the-united-states/,The Question of Comparative Advantage in Artificial Intelligence: Enduring Strengths and Emerging Challenges for the United States,"['Andrew Imbrie', 'Elsa Kania', 'Lorand Laskai']",2020-01-01T00:00:00Z,special_docs,,pdf
207941,http://n.sinaimg.cn/tech/f34884a9/20200501/GlobalAIGovernancein2019.pdf,From the Standard Model of AI to Provably Beneficial Systems,"['Stuart Russell', 'Caroline Jeanmaire']",2020-01-01T00:00:00Z,special_docs,,pdf
207980,https://medium.com/ai-control/ambitious-vs-narrow-value-learning-99bd0c59847e,Ambitious vs. narrow value learning,['Paul Christiano'],2015-10-05T00:00:00Z,special_docs,,
207999,https://drive.google.com/file/d/1p4ZAuEYHL_21tqstJOGsMiG4xaRBtVcj/view?usp=share_link,Natural Selection Favors AIs over Humans.,['Dan Hendrycks'],2022-08-10T00:00:00Z,special_docs,,
208028,http://www.danieldewey.net/reward-engineering-principle.pdf,Reinforcement Learning and the Reward Engineering Principle,['Daniel Dewey'],2014-01-01T00:00:00Z,special_docs,,
208043,https://cocosci.princeton.edu/papers/sumers_extending_2021.pdf,Extending rational models of communication from beliefs to actions.,"['Theodore R. Sumers', 'Robert D. Hawkins', 'Mark K. Ho', 'Thomas L. Griffiths']",2021-08-10T00:00:00Z,special_docs,,
208060,https://futureoflife.org/open-letter/ai-open-letter/,Research priorities for robust and beneficial artificial intelligence: an open letter,"['Stuart Russell', 'Daniel Dewey', 'Max Tegmark']",2015-01-01T00:00:00Z,special_docs,,html
208075,https://cset.georgetown.edu/publication/mapping-the-ai-investment-activities-of-top-global-defense-companies/,Mapping the AI Investment Activities of Top Global Defense Companies,"['Ngor Luong', 'Rebecca Gelles', 'Melissa Flagg']",2021-10-01T00:00:00Z,special_docs,,pdf
208104,http://www.sciencedirect.com/science/article/pii/S2352154618302122,Doing more with less: meta-reasoning and meta-learning in humans and machines,"['Thomas L. Griffiths', 'Frederick Callaway', 'Michael B. Chang', 'Erin Grant', 'Paul M. Krueger', 'Falk Lieder']",2019-10-01T00:00:00Z,special_docs,,
208124,https://openreview.net/pdf?id=gT6j4_tskUt,OpenOOD: Benchmarking Generalized Out-of-Distribution Detection.,"['Jingkang Yang', 'Pengyun Wang', 'Dejian Zou', 'Zitang Zhou', 'Kunyuan Ding', 'Wenxuan Peng', 'Haoqi Wang', 'Guangyao Chen', 'Bo Li', 'Yiyou Sun', 'Xuefeng Du', 'Kaiyang Zhou', 'Wayne Zhang', 'Dan Hendrycks', 'Yixuan Li', 'Ziwei Liu']",2022-08-10T00:00:00Z,special_docs,,
208146,http://dl.acm.org/citation.cfm?doid=3173386.3173568,Explainable Robotic Systems,"['Maartje M.A. de Graaf', 'Bertram F. Malle', 'Anca Dragan', 'Tom Ziemke']",2018-03-01T00:00:00Z,special_docs,,xml
208165,http://www.pnas.org/lookup/doi/10.1073/pnas.1907370117,Fast reinforcement learning with generalized policy updates,"['André Barreto', 'Shaobo Hou', 'Diana Borsa', 'David Silver', 'Doina Precup']",2020-12-01T00:00:00Z,special_docs,,xml
208184,https://oxford.universitypressscholarship.com/view/10.1093/oso/9780190905033.001.0001/oso-9780190905033-chapter-13,Alignment for Advanced Machine Learning Systems,"['Jessica Taylor', 'Eliezer Yudkowsky', 'Patrick LaVictoire', 'Andrew Critch']",2020-09-17T00:00:00Z,special_docs,,xml
208235,https://www.cnas.org/publications/reports/ai-and-international-stability-risks-and-confidence-building-measures,AI and International Stability: Risks and Confidence-Building Measures,"['Michael Horowitz', 'Paul Scharre']",2021-01-12T00:00:00Z,special_docs,,html
208281,https://drive.google.com/file/d/1pZqoqMcr_gKlkiLwLMsn9D9MwDsTzHHX/view?usp=share_link,NeurIPSorICML_bj9ne-by Vael Gates-date 20220324,['Vael Gates'],2022-03-23T23:00:00Z,special_docs,,md
208316,https://www.fhi.ox.ac.uk/wp-content/uploads/space-races-settling.pdf,Space races: Settling the universe fast,['Anders Sandberg'],2018-01-24T00:00:00Z,special_docs,,pdf
208346,http://robotics.cs.uml.edu/fileadmin/content/publications/2010/towards_state_summarization_11-10.pdf,Towards State Summarization for Autonomous Robots,"['Daniel Brooks', 'Abraham Shultz', 'Munjal Desai', 'Philip Kovac', 'Holly A. Yanco']",2010-01-01T00:00:00Z,special_docs,,
208356,http://stacks.iop.org/1402-4896/89/i=12/a=128004?key=crossref.f5938bc78a3023d740968f020cfa9970,The great downside dilemma for risky emerging technologies,['Seth D. Baum'],2014-12-01T00:00:00Z,special_docs,,xml
208395,https://people.eecs.berkeley.edu/~russell/papers/nips18-pareto.pdf,Negotiable Reinforcement Learning for Pareto Optimal Sequential Decision-Making.,"['Nishant Desai', 'Andrew Critch', 'Stuart J. Russell']",2018-08-10T00:00:00Z,special_docs,,
208411,http://josephorallo.webs.upv.es/escrits/SafeAI2021.pdf,Negative Side Effects and AI Agent Indicators: Experiments in SafeLife,"['John Burden', 'Jose Hernandez-Orallo']",2021-01-01T00:00:00Z,special_docs,,pdf
208431,https://casparoesterheld.com/2017/06/27/a-survey-of-polls-on-newcombs-problem/,A survey of polls on Newcomb’s problem,['Caspar Oesterheld'],2017-06-27T00:00:00Z,special_docs,,html
208441,https://jbkjr.me/posts/2020/12/mapping_conceptual_territory_AI_safety_alignment/,Mapping the Conceptual Territory in AI Existential Safety and Alignment,['Jack Koch'],2020-12-17T00:00:00Z,special_docs,,html
208474,https://www.cser.ac.uk/media/uploads/files/It_Takes_a_Village__The_Shared_Responsibility_of_Raising_an_Autonomous_Weapon.pdf,It Takes a Village: The Shared Responsibility of 'Raising' an Autonomous Weapon,"['Amritha Jayanti', 'Shahar Avin']",2020-11-10T00:00:00Z,special_docs,,pdf
208502,https://homes.cs.washington.edu/~pedrod/papers/cacm12.pdf,A Few Useful Things to Know about Machine Learning,['Pedro Domingos'],2012-10-01T00:00:00Z,special_docs,,
208534,http://link.springer.com/10.1007/978-3-642-22887-2_35,Learning What to Value,['Daniel Dewey'],2011-01-01T00:00:00Z,special_docs,,xml
208554,https://ai-alignment.com/low-stakes-alignment-f3c36606937f,Low-stakes alignment,['Paul Christiano'],2021-04-30T00:00:00Z,special_docs,,html
208568,https://storage.googleapis.com/deepmind-media/Teaching%20language%20models%20to%20support%20answers%20with%20verified%20quotes/Teaching%20language%20models%20to%20support%20answers%20with%20verified%20quotes.pdf,Teaching language models to support answers with verified quotes,"['Jacob Menick', 'Maja Trebacz', 'Vladimir Mikulik', 'John Aslanides', 'Francis Song', 'Martin Chadwick', 'Mia Glaese', 'Susannah Young', 'Lucy Campbell-Gillingham', 'Geoffrey Irving', 'Nat McAleese']",2022-03-16T00:00:00Z,special_docs,,
208591,http://link.springer.com/10.1007/978-3-662-54033-6_3,Responses to the Journey to the Singularity,"['Kaj Sotala', 'Roman Yampolskiy']",2017-01-01T00:00:00Z,special_docs,,pdf
208629,https://proceedings.neurips.cc/paper/2021/file/0e915db6326b6fb6a3c56546980a8c93-Paper.pdf,Replay-Guided Adversarial Environment Design.,"['Minqi Jiang', 'Michael Dennis', 'Jack Parker-Holder', 'Jakob Foerster', 'Edward Grefenstette', 'Tim Rocktäschel']",2021-08-10T00:00:00Z,special_docs,,
208652,https://drive.google.com/file/d/1FJMt4m2g7PaQDEGUesSZCDqcrydSjSEi/view?usp=share_link,Rob Miles on Why should I care about AI safety-by Jeremie Harris on the Towards Data Science Podcast-date 20201202,"['Rob Miles', 'Jeremie Harris']",2020-12-01T23:00:00Z,special_docs,,md
208677,https://preflib.github.io/gaiw2021/papers/GAIW_2021_paper_32.pdf,"Symmetry, Equilibria, and Robustness in Common-Payoff Games","['Scott Emmons', 'Caspar Oesterheld', 'Andrew Critch', 'Vince Conitzer', 'Stuart Russell']",2021-05-01T00:00:00Z,special_docs,,pdf
208696,https://gcrinstitute.org/countering-superintelligence-misinformation/,Countering Superintelligence Misinformation,['Seth Baum'],2018-01-01T00:00:00Z,special_docs,,html
208718,https://drive.google.com/file/d/1avoVTY8L3hpZi9fTxI_1mjjXC5JTz882/view?usp=sharing,Appendix I of Systemantics: How Systems Work and Especially How They Fail,['John Gall'],1977-01-01T00:00:00Z,special_docs,,
208737,https://onlinelibrary.wiley.com/doi/abs/10.1111/cogs.12841,How to Be Helpful to Multiple People at Once,"['Vael Gates', 'Thomas L. Griffiths', 'Anca D. Dragan']",2020-06-01T00:00:00Z,special_docs,,xml
208758,https://longtermrisk.org/a-dialogue-on-suffering-subroutines/,A Dialogue on Suffering Subroutines,['Brian Tomasik'],2015-08-29T00:00:00Z,special_docs,,html
208777,https://papers.nips.cc/paper/2021/hash/37cfff3c04f95b22bcf166df586cd7a9-Abstract.html,Teachable Reinforcement Learning via Advice Distillation.,"['Olivia Watkins', 'Abhishek Gupta', 'Trevor Darrell', 'Pieter Abbeel', 'Jacob Andreas']",2021-08-10T00:00:00Z,special_docs,,
208798,https://cullenokeefe.com/s/Antitrust-Compliant-AI-Industry-Self-Regulation.pdf,Antitrust-Compliant AI Industry Self-Regulation,['Cullen O’Keefe'],2020-07-07T00:00:00Z,special_docs,,
208811,https://www.ijcai.org/proceedings/2018/676,Minimax-Regret Querying on Side Effects for Safe Optimality in Factored Markov Decision Processes,"['Shun Zhang', 'Edmund H. Durfee', 'Satinder Singh']",2018-07-01T00:00:00Z,special_docs,,
208824,https://gcrinstitute.org/minimizing-global-catastrophic-and-existential-risks-from-emerging-technologies-through-international-law/,Minimizing global catastrophic and existential risks from emerging technologies through international law,['Grant Wilson'],2013-01-01T00:00:00Z,special_docs,,html
208840,https://drive.google.com/file/d/1WIx_cUCJ-eCVQ_YgLNTZGsc5cgMqjBGY/view?usp=share_link,NeurIPSorICML_7oalk-by Vael Gates-date 20220320,['Vael Gates'],2022-03-19T23:00:00Z,special_docs,,md
208867,https://doi.org/10.1145/3377325.3377498,Proxy tasks and subjective measures can be misleading in evaluating explainable AI systems,"['Zana Buçinca', 'Phoebe Lin', 'Krzysztof Z. Gajos', 'Elena L. Glassman']",2020-03-17T00:00:00Z,special_docs,,pdf
208887,https://strategicreasoning.org/wp-content/uploads/2018/12/Deception_in_Finitely_Repeated_Security_Games.pdf,Deception in finitely repeated security games.,"['Thanh H. Nguyen', 'Yongzhao Wang', 'Arunesh Sinha', 'Michael P. Wellman']",2019-08-10T00:00:00Z,special_docs,,
208903,http://ieeexplore.ieee.org/document/1667950/,"Computational Models of Ethical Reasoning: Challenges, Initial Steps, and Future Directions",['B. M. McLaren'],2006-07-01T00:00:00Z,special_docs,,xml
208923,https://www.fhi.ox.ac.uk/wp-content/uploads/GovAIAgenda.pdf,AI governance research agenda,['Allan Dafoe'],2018-01-01T00:00:00Z,special_docs,,
208965,https://cltc.berkeley.edu/wp-content/uploads/2020/05/Decision_Points_AI_Governance.pdf,Decision Points in AI Governance,['Jessica Cussins Newman'],2020-01-01T00:00:00Z,special_docs,,
208988,https://openreview.net/forum?id=S1xKd24twB,SQIL: Imitation Learning via Reinforcement Learning with Sparse Rewards.,"['Siddharth Reddy', 'Anca D. Dragan', 'Sergey Levine']",2020-08-10T00:00:00Z,special_docs,,
209008,https://link.springer.com/article/10.1007/s10203-020-00304-9,Inconsistency evaluation in pairwise comparison using norm-based distances.,"['Michele Fedrizzi', 'Nino Civolani', 'Andrew Critch']",2020-08-10T00:00:00Z,special_docs,,
209019,https://papers.ssrn.com/abstract=3871635,Aligning AI Regulation to Sociotechnical Change,['Matthijs M. Maas'],2021-06-16T00:00:00Z,special_docs,,xml
209036,https://drive.google.com/file/d/15notk1PoUa8YWFONRbZhAE_ljUD6_oZJ/view?usp=sharing,CHAI Newsletter #3 2021,['CHAI'],2021-12-01T00:00:00Z,special_docs,,
209057,https://www.fhi.ox.ac.uk/wp-content/uploads/Deciphering_Chinas_AI-Dream.pdf,Deciphering China’s AI dream,['Jeffrey Ding'],2018-01-01T00:00:00Z,special_docs,,pdf
209080,https://www.ijimai.org/journal/sites/default/files/2021-02/ijimai_6_5_10.pdf,Artificial Canaries: Early Warning Signs for Anticipatory and Democratic Governance of AI,"['Carla Zoe Cremer', 'Jess Whittlestone']",2021-01-01T00:00:00Z,special_docs,,
209105,https://apps.dtic.mil/sti/pdfs/ADA465311.pdf,A Framework for the Safety of Agent-Environment Systems,['Ramesh Bharadwaj'],2003-04-02T00:00:00Z,special_docs,,pdf
209129,https://slatestarcodex.com/2017/04/01/g-k-chesterton-on-ai-risk/,G.K. Chesterton On AI Risk,['Scott Alexander'],2017-04-01T00:00:00Z,special_docs,,html
209145,https://cocosci.princeton.edu/papers/jaincomputational.pdf,A computational process-tracing method for measuring people’s planning strategies and how they change over time.,"['Yash Raj Jain', 'Frederick Callaway', 'Thomas L. Griffiths', 'Peter Dayan', 'Ruiqi He', 'Paul M. Krueger', 'Falk Lieder']",2022-08-10T00:00:00Z,special_docs,,
209164,https://www.sciencedirect.com/science/article/abs/pii/S0004370220300515,Special issue on autonomous agents modelling other agents: Guest editorial.,"['Stefano V. Albrecht', 'Peter Stone', 'Michael P. Wellman']",2020-08-10T00:00:00Z,special_docs,,
209173,https://ai-alignment.com/informed-oversight-18fcb5d3d1e1,Informed oversight,['Paul Christiano'],2019-01-24T00:00:00Z,special_docs,,html
209187,https://cset.georgetown.edu/publication/poison-in-the-well/,Poison in the Well,['Andrew Lohn'],2021-06-01T00:00:00Z,special_docs,,pdf
209213,https://aair-lab.github.io/Publications/nvs_aaai22.pdf,Differential Assessment of Black-Box AI Agents.,"['Rashmeet Kaur Nayyar', 'Pulkit Verma', 'Siddharth Srivastava']",2022-08-10T00:00:00Z,special_docs,,
209230,https://ai-alignment.com/an-unaligned-benchmark-b49ad992940b,An unaligned benchmark,['Paul Christiano'],2018-09-26T00:00:00Z,special_docs,,html
209257,https://gcrinstitute.org/collective-action-on-artificial-intelligence-a-primer-and-review/,Collective Action on Artificial Intelligence: A Primer and Review,['Robert de Neufville'],2021-07-15T00:00:00Z,special_docs,,html
209270,https://people.eecs.berkeley.edu/~russell/papers/iclr21-epic.pdf,Quantifying Differences in Reward Functions.,"['Adam Gleave', 'Michael Dennis', 'Shane Legg', 'Stuart Russell', 'Jan Leike']",2021-08-10T00:00:00Z,special_docs,,
209290,https://selfawaresystems.files.wordpress.com/2008/01/ai_drives_final.pdf,The Basic AI Drives,['Stephen Omohundro'],2008-02-18T00:00:00Z,special_docs,,
209324,https://longtermrisk.org/gains-from-trade-through-compromise/,Gains from Trade through Compromise,['Brian Tomasik'],2015-04-10T00:00:00Z,special_docs,,html
209346,http://link.springer.com/10.1007/978-3-540-68677-4,Artificial General Intelligence,"['Ben Goertzel', 'Cassio Pennachin']",2007-01-01T00:00:00Z,special_docs,,
209373,https://cocosci.princeton.edu/papers/petersondeepmodels22.pdf,Deep models of superficial face judgments.,"['Joshua C. Peterson', 'Stefan Uddenberg', 'Thomas L. Griffiths', 'Alexander Todorov', 'Jordan W. Suchow']",2022-08-10T00:00:00Z,special_docs,,
209398,https://static1.squarespace.com/static/6266e3c48fb0751e74f60eb6/t/626ed6898ec65809d49641cb/1651431052820/BSNZicaps22.pdf,"Tuning the Hyperparameters of Anytime Planning: A Metareasoning Approach with Deep Reinforcement Learning.","['Abhinav Bhatia', 'Justin Svegliato', 'Samer B. Nashed', 'Shlomo Zilberstein']",2022-08-10T00:00:00Z,special_docs,, 209417,https://iliad.stanford.edu/pdfs/publications/karamcheti2022lilac.pdf,Shared Autonomy for Robotic Manipulation with Language Corrections.,"['Siddharth Karamcheti*', 'Raj Palleti*', 'Yuchen Cui', 'Percy Liang', 'Dorsa Sadigh']",2022-08-10T00:00:00Z,special_docs,, 209441,https://drive.google.com/file/d/1khOhU_4oVuHRblLnDQxtzntCBBmVlIsV/view,Interview with 84py7,"['84py7', 'Vael Gates']",2022-03-18T00:00:00Z,special_docs,,docx 209467,https://cset.georgetown.edu/publication/the-dods-hidden-artificial-intelligence-workforce/,The DOD’s Hidden Artificial Intelligence Workforce,"['Diana Gehlhaus', 'Ron Hodge', 'Luke Koslosky', 'Kayla Goode', 'Jonathan Rotner']",2021-09-01T00:00:00Z,special_docs,,xml 209505,https://www.sciencedirect.com/science/article/abs/pii/S0165176520301919,Testing the Automation Revolution Hypothesis,"['Keller Scholl', 'Robin Hanson']",2020-08-01T00:00:00Z,special_docs,, 209527,https://drive.google.com/file/d/1GN_ZHi5Jx7NEvPrL8gT0I9em_av1ZyVk/view,Interview with w5cb5,"['w5cb5', 'Vael Gates']",2022-03-18T00:00:00Z,special_docs,,docx 209550,https://www.risksciences.ucla.edu/news-events/2018/1/2/proceedings-of-the-first-international-colloquium-on-catastrophic-and-existential-risk,The State of Research in Existential Risk,['Seán Ó hÉigeartaigh'],2018-01-01T00:00:00Z,special_docs,,pdf 209580,https://ai-alignment.com/universality-and-consequentialism-within-hch-c0bee00365bd,Universality and consequentialism within HCH,['Paul Christiano'],2019-01-10T00:00:00Z,special_docs,,html 209592,https://strategicreasoning.org/wp-content/uploads/2021/10/ICAIF_paper_108.pdf,An Agent-Based Model of Strategic Adoption of Real-Time Payments.,"['Katherine Mayo', 'Shaily Fozdar', 'Michael P. Wellman']",2021-08-10T00:00:00Z,special_docs,, 209612,https://cocosci.princeton.edu/papers/marjieh2022predicting.pdf,Predicting Human Similarity Judgments Using Large Language Models.,"['Marjieh, R.', 'Sucholutsky, I.', 'Sumers, T. R.', 'Jacoby, N.', 'Griffiths, T. L.']",2022-08-10T00:00:00Z,special_docs,, 209632,http://stanford.edu/~boyd/lmibook/lmibook.pdf,Linear Matrix Inequalities in System and Control Theory,"['Stephen Boyd', 'Laurent El Ghaoui', 'Eric Feron', 'Venkataramanan Balakrishnan']",1994-01-01T00:00:00Z,special_docs,, 209647,https://ai-alignment.com/alphago-zero-and-capability-amplification-ede767bb8446,AlphaGo Zero and capability amplification,['Paul Christiano'],2017-10-20T00:00:00Z,special_docs,,html 209659,https://doi.org/10.1080/13669870903126267,Probing the improbable: methodological challenges for risks with low probabilities and high stakes,"['Toby Ord', 'Rafaela Hillerbrand', 'Anders Sandberg']",2010-03-01T00:00:00Z,special_docs,,pdf 209676,https://cset.georgetown.edu/publication/contending-frames/,Contending Frames: Evaluating Rhetorical Dynamics in AI,"['Andrew Imbrie', 'Rebecca Gelles', 'James Dunham', 'Catherine Aiken']",2021-05-01T00:00:00Z,special_docs,,pdf 209687,http://globalprioritiesproject.org/wp-content/uploads/2015/05/MovementGrowth.pdf,How valuable is movement growth?,['Owen Cotton-Barratt'],2015-01-01T00:00:00Z,special_docs,,pdf 
209723,https://drive.google.com/file/d/1Sa_PTmksYLvEAwoPsspbGNn6ZFVMeXnC/view?usp=share_link,AGI Safety and Alignment with Robert Miles-by Machine Ethics-date 20210113,['Robert Miles'],2021-01-12T23:00:00Z,special_docs,,md 209772,http://rgdoi.net/10.13140/RG.2.2.14956.46722/1,A Rational Reinterpretation of Dual-Process Theories,"['Smitha Milli', 'Falk Lieder', 'Thomas L Griffiths']",2018-01-01T00:00:00Z,special_docs,, 209790,https://drive.google.com/file/d/1v72aCvHRgQpAzFvt468iDM2GA6qOvyuD/view,Interview with bj9ne,"['bj9ne', 'Vael Gates']",2022-03-24T00:00:00Z,special_docs,,docx 209819,http://journals.sagepub.com/doi/10.1177/0278364919859436,Confidence-aware motion prediction for real-time collision avoidance,"['David Fridovich-Keil', 'Andrea Bajcsy', 'Jaime F Fisac', 'Sylvia L Herbert', 'Steven Wang', 'Anca D Dragan', 'Claire J Tomlin']",2019-06-24T00:00:00Z,special_docs,, 209837,https://casparoesterheld.com/2016/11/21/thoughts-on-updatelessness/,Thoughts on Updatelessness,['Caspar Oesterheld'],2016-11-21T00:00:00Z,special_docs,,html 209849,https://cocosci.princeton.edu/papers/dubeyimportantcurious.pdf,"If it’s important, then I’m curious: Increasing perceived usefulness stimulates curiosity.","['Dubey, R.', 'Griffiths, T. L.', 'Lombrozo, T.']",2022-08-10T00:00:00Z,special_docs,, 209866,https://cocosci.princeton.edu/papers/hawkinspartners.pdf,From partners to populations: A hierarchical Bayesian account of coordination and convention.,"['Hawkins, R. D.', 'Franke, M.', 'Frank, M. C.', 'Goldberg, A. E.', 'Smith, K.', 'Griffiths, T. L.', 'Goodman, N. D.']",2022-08-10T00:00:00Z,special_docs,, 209885,https://intelligence.org/files/FormalizingConvergentGoals.pdf,Formalizing convergent instrumental goals,"['Tsvi Benson-Tilsen', 'Nate Soares']",2016-01-01T00:00:00Z,special_docs,,pdf 209912,https://globalprioritiesinstitute.org/hilary-greaves-william-macaskill-the-case-for-strong-longtermism-2/,The case for strong longtermism,"['Hilary Greaves', 'William MacAskill']",2021-06-14T00:00:00Z,special_docs,,pdf 209940,https://cocosci.princeton.edu/papers/Accelerating_Metacognitive_RL-CameraReady.pdf,Enhancing metacognitive reinforcement learning using reward structures and feedback.,"['Paul Krueger', 'Falk Lieder', 'Tom Griffiths']",2017-08-10T00:00:00Z,special_docs,, 209955,https://www.goodreads.com/book/show/20830144-smarter-than-us,Smarter than us: The rise of machine intelligence,['Stuart Armstrong'],2014-02-01T00:00:00Z,special_docs,,epub 209983,https://docs.google.com/document/d/1SYgvWBe1ruDl9dQnxmjll-8COUHPycGOlLvTI68xtLA/edit?pli=1&usp=embed_facebook,AI Services: Introduction v1.3,['Vojta Kovarik'],2020-03-31T00:00:00Z,special_docs,,html 210020,https://people.eecs.berkeley.edu/~russell/papers/times22-russell-intvw.pdf,Is the rise of killer machines closer than we think?,['Stuart Russell'],2022-08-10T00:00:00Z,special_docs,, 210047,https://drive.google.com/file/d/1pkMNjaJsgogoqfGcaUHzmILhn5PsHSQt/view?usp=share_link,EleutherAI Alignment 101,['Richard Ngo'],2022-05-16T00:00:00Z,special_docs,,md 210062,https://www.cs.cornell.edu/home/halpern/papers/transitivity.pdf,Sufficient Conditions for Causality to be Transitive.,"['Joseph Y. Halpern']",2016-08-10T00:00:00Z,special_docs,, 210074,https://link.springer.com/epdf/10.1007/s00146-018-0845-5,Classification of global catastrophic risks connected with artificial intelligence,"['Alexey Turchin', 'David Denkenberger']",2020-01-01T00:00:00Z,special_docs,,xml 
210111,http://bair.berkeley.edu/blog/2019/10/21/coordination/,Collaborating with Humans Requires Understanding Them,"['Rohin Shah', 'Micah Carroll']",2019-10-21T00:00:00Z,special_docs,,html 210132,https://papers.ssrn.com/abstract=3046668,Global Catastrophes: The Most Extreme Risks,"['Seth Baum', 'Anthony Barrett']",2017-10-02T00:00:00Z,special_docs,,xml 210166,https://www.fhi.ox.ac.uk/wp-content/uploads/Orthogonality_Analysis_and_Metaethics-1.pdf,General Purpose Intelligence: Arguing The Orthogonality Thesis,['Stuart Armstrong'],2013-01-01T00:00:00Z,special_docs,,pdf 210182,https://www.researchgate.net/publication/250212613_Beyond_Normal_Accidents_and_High_Reliability_Organizations_The_Need_for_an_Alternative_Approach_to_Safety_in_Complex_Systems,Beyond Normal Accidents and High Reliability Organizations: The Need for an Alternative Approach to Safety in Complex Systems,"['Karen Marais', 'Nicolas Dulac', 'Nancy Leveson']",2004-01-01T00:00:00Z,special_docs,, 210211,https://strategicreasoning.org/wp-content/uploads/2021/11/Megan_ICAIF_2021.pdf,Stability Effects of Arbitrage in Exchange Traded Funds: An Agent-Based Model.,"['Megan Shearer', 'David Byrd', 'Tucker Hybinette Balch', 'Michael P Wellman']",2021-08-10T00:00:00Z,special_docs,, 210231,https://longtermrisk.org/how-would-catastrophic-risks-affect-prospects-for-compromise/,How Would Catastrophic Risks Affect Prospects for Compromise?,['Brian Tomasik'],2015-08-29T00:00:00Z,special_docs,,html 210252,https://drive.google.com/file/d/1fXWGvs_Vsp8CihQk-73wfv-xzm6xsDEW/view,Interview with q243b,"['q243b', 'Vael Gates']",2022-03-18T00:00:00Z,special_docs,,docx 210288,https://linkinghub.elsevier.com/retrieve/pii/S0004370207000495,"If multi-agent learning is the answer, what is the question?","['Yoav Shoham', 'Rob Powers', 'Trond Grenager']",2007-05-01T00:00:00Z,special_docs,,xml 210312,http://www.aaai.org/ojs/index.php/AAAI/article/view/4327,Robust Multi-Agent Reinforcement Learning via Minimax Deep Deterministic Policy Gradient,"['Shihui Li', 'Yi Wu', 'Xinyue Cui', 'Honghua Dong', 'Fei Fang', 'Stuart Russell']",2019-07-17T00:00:00Z,special_docs,,xml 210326,http://ai.googleblog.com/2020/08/understanding-view-selection-for.html,Understanding View Selection for Contrastive Learning,['Yonglong Tian'],2020-08-21T00:00:00Z,special_docs,,html 210368,https://longtermrisk.org/differential-intellectual-progress-as-a-positive-sum-project/,Differential Intellectual Progress as a Positive-Sum Project,['Brian Tomasik'],2015-08-29T00:00:00Z,special_docs,,html 210413,https://drive.google.com/file/d/19nqPNvLCecVyEk-FFevKhymXYaNH-fpb/view?usp=share_link,Ensuring safety and consistency in the age of machine learning _ Chongli Qin _ EAGxVirtual 2020-by Centre for Effective Altruism-video_id SS9DMr4VkbY-date 20200615,['Chongli Qin'],2020-06-14T22:00:00Z,special_docs,,md 210440,https://www.fhi.ox.ac.uk/reports/2010-1.pdf,Utility Indifference,['Stuart Armstrong'],2010-01-01T00:00:00Z,special_docs,,pdf 210455,https://people.eecs.berkeley.edu/~russell/papers/uai15-multi.pdf,Multitasking: Efficient Optimal Planning for Bandit Superprocesses,"['Dylan Hadfield-Menell', 'Stuart Russell']",2015-07-01T00:00:00Z,special_docs,,pdf 210471,https://longtermrisk.org/artificial-intelligence-and-its-implications-for-future-suffering/,Artificial Intelligence and Its Implications for Future Suffering,['Brian Tomasik'],2015-04-10T00:00:00Z,special_docs,,html 210501,https://medium.com/partnership-on-ai/aligning-ai-to-human-values-means-picking-the-right-metrics-855859e6f047,Aligning AI 
to Human Values means Picking the Right Metrics,['Jonathan Stray'],2020-04-15T00:00:00Z,special_docs,,html 210521,http://link.springer.com/10.1007/978-3-540-77002-2_9,Modelling Morality with Prospective Logic,"['Luís Moniz Pereira', 'Ari Saptawijaya']",2007-01-01T00:00:00Z,special_docs,,pdf 210535,https://cset.georgetown.edu/publication/ethics-and-artificial-intelligence/,Ethics and Artificial Intelligence,['Jamie Baker'],2021-04-01T00:00:00Z,special_docs,,pdf 210570,https://casparoesterheld.com/2018/08/06/moral-realism-and-ai-alignment/,Moral realism and AI alignment,['Caspar Oesterheld'],2018-08-06T00:00:00Z,special_docs,,html 210593,https://dl.acm.org/doi/pdf/10.1145/3306618.3314289,The Role and Limits of Principles in AI Ethics: Towards a Focus on Tensions,"['Jess Whittlestone', 'Rune Nyrup', 'Anna Alexandrova', 'Stephen Cave']",2019-01-01T00:00:00Z,special_docs,,pdf 210616,https://cocosci.princeton.edu/papers/cogsciReichman.pdf,The Structure of Goal Systems Predicts Human Performance.,"['David Bourgin', 'Falk Lieder', 'Daniel Reichman', 'Nimrod Talmon', 'Tom Griffiths']",2017-08-10T00:00:00Z,special_docs,, 210626,https://doi.org/10.1007/978-3-662-54033-6_2,Risks of the Journey to the Singularity,"['Kaj Sotala', 'Roman Yampolskiy']",2017-01-01T00:00:00Z,special_docs,,pdf 210649,https://papers.nips.cc/paper_files/paper/2019/file/97af07a14cacba681feacf3012730892-Paper.pdf,ObjectNet: A large-scale bias-controlled dataset for pushing the limits of object recognition models,"['Andrei Barbu', 'David Mayo', 'Julian Alverio', 'William Luo', 'Christopher Wang', 'Dan Gutfreund', 'Josh Tenenbaum', 'Boris Katz']",2019-01-01T00:00:00Z,special_docs,, 210667,https://www.mdpi.com/2078-2489/12/7/275,Corporate Governance of Artificial Intelligence in the Public Interest,"['Peter Cihon', 'Jonas Schuett', 'Seth D. Baum']",2021-07-05T00:00:00Z,special_docs,,html 210691,https://theconversation.com/the-five-biggest-threats-to-human-existence-27053,The five biggest threats to human existence,['Anders Sandberg'],2014-01-01T00:00:00Z,special_docs,,html 210712,http://ai.googleblog.com/2020/03/fast-and-easy-infinitely-wide-networks.html,Fast and Easy Infinitely Wide Networks with Neural Tangents,"['Samuel S Schoenholz', 'Roman Novak']",2020-03-13T00:00:00Z,special_docs,,html 210729,https://www.aleph.se/papers/Meaning%20of%20life.pdf,Transhumanism and the Meaning of Life,['Anders Sandberg'],2014-01-01T00:00:00Z,special_docs,,pdf 210759,https://doi.org/10.1007/s11229-019-02098-9,Fixed-point solutions to the regress problem in normative uncertainty,['Philip Trammell'],2019-02-14T00:00:00Z,special_docs,,xml 210776,https://ai-alignment.com/mundane-solutions-to-exotic-problems-395bad49fbe7,Mundane solutions to exotic problems,['Paul Christiano'],2021-05-04T00:00:00Z,special_docs,,html 210791,"https://jan.leike.name/publications/Towards%20Interactive%20Inverse%20Reinforcement%20Learning%20-%20Armstrong,%20Leike%202016.pdf",Towards interactive inverse reinforcement learning,"['Stuart Armstrong', 'Jan Leike']",2016-01-01T00:00:00Z,special_docs,,pdf 210807,https://drive.google.com/file/d/1nacDSRDmZxaLP4wfRk0o-YPz2ecDO3mD/view?usp=share_link,Reframing superintelligence _ Eric Drexler _ EA Global - London 2018-by Centre for Effective Altruism-video_id MircoV5LKvg-date 20190314,['Eric Drexler'],2019-03-13T23:00:00Z,special_docs,,md 210830,https://stuhlmueller.org/papers/preferences-aaai2016.pdf,"Learning the Preferences of Ignorant, Inconsistent Agents","['Owain Evans', 'Andreas Stuhlmuller', 'Noah D. 
Goodman']",2016-01-01T00:00:00Z,special_docs,, 210842,https://cocosci.princeton.edu/papers/hardy2022overcoming.pdf,Overcoming Individual Limitations Through Distributed Computation: Rational Information Accumulation in Multigenerational Populations.,"['Hardy, M. D.', 'Krafft, P. M.', 'Thompson, B.', 'Griffiths, T. L.']",2022-08-10T00:00:00Z,special_docs,, 210857,https://casparoesterheld.com/2018/03/31/three-wagers-for-multiverse-wide-superrationality/,Three wagers for multiverse-wide superrationality,['Johannes Treutlein'],2018-03-31T00:00:00Z,special_docs,,html 210877,https://cset.georgetown.edu/publication/harnessed-lightning/,Harnessed Lightning,"['Ryan Fedasiuk', 'Jennifer Melot', 'Ben Murphy']",2021-10-01T00:00:00Z,special_docs,,pdf 210912,https://drive.google.com/file/d/1RDr8WcSNLKZOa78W6IfI3CJZ6nVs5jEJ/view?usp=share_link,Ethan Caballero-by The Inside View-date 20220505,['Ethan'],2022-05-04T22:00:00Z,special_docs,,md 210936,https://linkinghub.elsevier.com/retrieve/pii/S0016328716302518,"The Social Science of Computerized Brains – Review of The Age of Em: Work, Love, and Life When Robots Rule the Earth by Robin Hanson (Oxford University Press, 2016)",['Seth D. Baum'],2017-06-01T00:00:00Z,special_docs,, 210957,https://www.stat.berkeley.edu/~aldous/Networks/1089229510.pdf,A Brief History of Generative Models for Power Law and Lognormal Distributions,['Michael Mitzenmacher'],2003-01-06T00:00:00Z,special_docs,, 210980,http://ceur-ws.org/Vol-2640/paper_14.pdf,Choice Set Misspecification in Reward Inference,"['Rachel Freedman', 'Rohin Shah', 'Anca Dragan']",2020-01-01T00:00:00Z,special_docs,,pdf 210994,https://participatoryml.github.io/papers/2020/42.pdf,What are you optimizing for? Aligning Recommender Systems with Human Values.,"['Jonathan Stray', 'Steven Adler', 'Dylan Hadfield-Menell']",2020-08-15T00:00:00Z,special_docs,, 211028,https://doi.org/10.1007/978-3-662-54033-6_4,How Change Agencies Can Affect Our Path Towards a Singularity,"['Ping Zheng', 'Mohammed-Asif Akhmad']",2017-01-01T00:00:00Z,special_docs,,pdf 211061,https://link.springer.com/article/10.1007/s11229-019-02148-2,Approval-directed agency and the decision theory of Newcomb-like problems,['Caspar Oesterheld'],2019-01-01T00:00:00Z,special_docs,,pdf 211074,http://link.springer.com/10.1007/s00146-016-0677-0,On the promotion of safe and socially beneficial artificial intelligence,['Seth D. 
Baum'],2017-11-01T00:00:00Z,special_docs,,pdf 211106,https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4959137/,The Unilateralist’s Curse and the Case for a Principle of Conformity,"['Nick Bostrom', 'Thomas Douglas', 'Anders Sandberg']",2016-07-03T00:00:00Z,special_docs,,html 211127,https://drive.google.com/file/d/1yZ9Kn2-yBMhi9r4a4XoAo2zBRdJuYs29/view?usp=share_link,individuallyselected_92iem-by Vael Gates-date 20220321,['Vael Gates'],2022-03-20T23:00:00Z,special_docs,,md 211154,http://www.amirrorclear.net/files/moral-trade.pdf,Moral Trade,['Toby Ord'],2015-01-01T00:00:00Z,special_docs,, 211184,https://s3.us-east-1.amazonaws.com/files.cnas.org/documents/CNASReport-Technology-Roulette-Final.pdf?mtime=20230609105008&focal=none,Managing Loss of Control as Many Militaries Pursue Technological Superiority,['Richard Danzig'],2018-05-30T00:00:00Z,special_docs,,pdf 211215,https://openreview.net/forum?id=SylL0krYPS,Toward Evaluating Robustness of Deep Reinforcement Learning with Continuous Control,"['Tsui-Wei Weng', 'Krishnamurthy (Dj) Dvijotham*', 'Jonathan Uesato*', 'Kai Xiao*', 'Sven Gowal*', 'Robert Stanforth*', 'Pushmeet Kohli']",2019-09-25T00:00:00Z,special_docs,,pdf 211234,https://www.fhi.ox.ac.uk/wp-content/uploads/Standards_-FHI-Technical-Report.pdf,Standards for AI Governance: International Standards to Enable Global Coordination in AI Research & Development,['Peter Cihon'],2019-01-01T00:00:00Z,special_docs,, 211259,http://agi-conference.org/2012/wp-content/uploads/2012/12/paper_76.pdf,Space-Time Embedded Intelligence,"['Laurent Orseau', 'Mark Ring']",2012-01-01T00:00:00Z,special_docs,, 211280,https://papers.ssrn.com/abstract=3070741,"A Survey of Artificial General Intelligence Projects for Ethics, Risk, and Policy",['Seth Baum'],2017-11-12T00:00:00Z,special_docs,,xml 211316,https://openreview.net/forum?id=DFIoGDZejIB,Benefits of Assistance over Reward Learning,"['Rohin Shah', 'Pedro Freire', 'Neel Alex', 'Rachel Freedman', 'Dmitrii Krasheninnikov', 'Lawrence Chan', 'Michael Dennis', 'Pieter Abbeel', 'Anca Dragan', 'Stuart Russell']",2020-09-28T00:00:00Z,special_docs,,pdf 211336,https://gcrinstitute.org/papers/061_ai-world-universe.pdf,From AI for People to AI for the World and the Universe,"['Seth Baum', 'Andrea Owe']",2021-11-30T00:00:00Z,special_docs,,pdf 211357,https://gcrinstitute.org/moral-consideration-of-nonhumans-in-the-ethics-of-artificial-intelligence/,Moral Consideration of Nonhumans in the Ethics of Artificial Intelligence | Global Catastrophic Risk Institute,['Seth Baum'],2021-06-07T00:00:00Z,special_docs,,html 211378,https://www.tandfonline.com/doi/full/10.1080/0952813X.2016.1186228,A model of pathways to artificial superintelligence catastrophe for risk and decision analysis,"['Anthony M. Barrett', 'Seth D. 
Baum']",2017-03-04T00:00:00Z,special_docs,,xml 211408,http://www.cs.columbia.edu/~orb/papers/justification_automl_2014.pdf,Justification Narratives for Individual Classifications,"['Or Biran', 'Kathleen McKeown']",2014-01-01T00:00:00Z,special_docs,, 211421,https://doi.org/10.1007/s00146-020-01110-y,AI transparency: a matter of reconciling design with critique,['Tomasz Hollanek'],2020-11-17T00:00:00Z,special_docs,,pdf 211447,https://cset.georgetown.edu/publication/classifying-ai-systems/,Classifying AI Systems,['Catherine Aiken'],2021-11-01T00:00:00Z,special_docs,,pdf 211471,https://www.humanityplus.org/transhumanist-faq?rq=Transhumanist%20FAQ,Transhumanist FAQ 3.0,['Nick Bostrom'],2017-01-01T00:00:00Z,special_docs,,html 211501,https://globalprioritiesinstitute.org/wp-content/uploads/William-MacAskill_Are-we-living-at-the-hinge-of-history.pdf,Are we living at the hinge of history,['William MacAskill'],2020-09-01T00:00:00Z,special_docs,,pdf 211521,http://bair.berkeley.edu/blog/2019/02/11/learning_preferences/,Learning Preferences by Looking at the World,['Daniel Seita'],2019-02-11T00:00:00Z,special_docs,,html 211538,https://casparoesterheld.com/2017/01/17/decision-theory-and-the-irrelevance-of-impossible-outcomes/,Decision Theory and the Irrelevance of Impossible Outcomes,['Caspar Oesterheld'],2017-01-17T00:00:00Z,special_docs,,html 211550,https://onlinelibrary.wiley.com/doi/abs/10.1111/1758-5899.12890,Fragmentation and the Future: Investigating Architectures for International AI Governance,"['Peter Cihon', 'Matthijs M. Maas', 'Luke Kemp']",2020-01-01T00:00:00Z,special_docs,,xml 211573,https://cset.georgetown.edu/publication/no-permits-no-fabs/,"No Permits, No Fabs: The Importance of Regulatory Reform for Semiconductor Manufacturing",['John VerWey'],2021-10-01T00:00:00Z,special_docs,,pdf 211600,https://www.governance.ai/research-paper/futureproof-artificial-intelligence-chapter,Futureproof: Artificial Intelligence Chapter | GovAI,"['Toby Ord', 'Angus Mercer', 'Sophie Dannreuther', 'Jess Whittlestone', 'Jade Leung', 'Markus Anderljung']",2021-06-15T00:00:00Z,special_docs,,pdf 211636,https://cset.georgetown.edu/publication/ai-verification/,AI Verification: Mechanisms to Ensure AI Arms Control Compliance,['Matthew Mittelsteadt'],2021-02-01T00:00:00Z,special_docs,,pdf 211665,https://cset.georgetown.edu/publication/machine-learning-and-cybersecurity/,Machine Learning and Cybersecurity,"['Micah Musser', 'Ashton Garriott']",2021-06-01T00:00:00Z,special_docs,,pdf 211710,https://longtermrisk.org/files/Cooperation-Conflict-and-Transformative-Artificial-Intelligence-A-Research-Agenda.pdf,"Cooperation, Conflict, and Transformative Artificial Intelligence - A Research Agenda",['Jesse Clifton'],2020-03-01T00:00:00Z,special_docs,,pdf 211741,https://cocosci.princeton.edu/papers/ho2022cognitive.pdf,Cognitive science as a source of forward and inverse models of human decisions for robotics and control.,"['Ho, M. K.', 'Griffiths, T. L.']",2022-08-10T00:00:00Z,special_docs,, 211771,https://www.nature.com/articles/s41562-019-0672-9,Cognitive prostheses for goal achievement.,"['Falk Lieder', 'Owen X. Chen', 'Paul M. Krueger', 'Thomas L. Griffiths']",2020-08-10T00:00:00Z,special_docs,, 211782,http://symbolaris.com/pub/discworld.pdf,Formal Verification of Distributed Aircraft Controllers,"['Sarah M. Loos', 'David Renshaw', 'Andre Platzer']",2013-04-11T00:00:00Z,special_docs,, 
211799,https://longtermrisk.org/files/suffering-focused-ai-safety.pdf,"Suffering-focused AI safety: Why ""fail-safe"" measures might be our top intervention",['Lukas Gloor'],2016-01-01T00:00:00Z,special_docs,,pdf 211826,https://drive.google.com/file/d/1vM40bOBHmMkXaJwVvIJCJMRg76EKxlhF/view,Interview with zlzai,"['zlzai', 'Vael Gates']",2022-03-18T00:00:00Z,special_docs,,docx 211850,http://www.nature.com/articles/s41598-018-19194-4,Symmetric Decomposition of Asymmetric Games,"['Karl Tuyls', 'Julien Pérolat', 'Marc Lanctot', 'Georg Ostrovski', 'Rahul Savani', 'Joel Z Leibo', 'Toby Ord', 'Thore Graepel', 'Shane Legg']",2018-01-17T00:00:00Z,special_docs,,html 211867,http://moralai.cs.duke.edu/documents/mai_docs/moralAAAI17.pdf,Moral Decision Making Frameworks for Artificial Intelligence,"['Vincent Conitzer', 'Walter Sinnott-Armstrong', 'Jana Schaich Borg', 'Yuan Deng', 'Max Kramer']",2017-01-01T00:00:00Z,special_docs,,pdf 211890,https://ai-alignment.com/two-guarantees-c4c03a6b434f,Two guarantees,['Paul Christiano'],2018-04-09T00:00:00Z,special_docs,,html 211923,http://proceedings.mlr.press/v115/beckers20a/beckers20a.pdf,Approximate Causal Abstractions.,"['Sander Beckers', 'Frederick Eberhardt', 'Joseph Y Halpern']",2020-08-10T00:00:00Z,special_docs,, 211942,https://jetpress.org/v26.2/torres.pdf,Agential Risks: A Comprehensive Introduction,['Phil Torres'],2016-01-01T00:00:00Z,special_docs,,pdf 211970,https://dl.acm.org/doi/10.1145/3375627.3375817,Exploring AI Futures Through Role Play,"['Shahar Avin', 'Ross Gruetzemacher', 'James Fox']",2020-02-07T00:00:00Z,special_docs,,xml 211985,https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7228299/,Global challenges: 12 risks that threaten human civilization,"['Dennis Pamlin', 'Stuart Armstrong']",2015-01-01T00:00:00Z,special_docs,,html 212012,https://www.fhi.ox.ac.uk/wp-content/uploads/Reframing_Superintelligence_FHI-TR-2019-1.1-1.pdf,Reframing Superintelligence: Comprehensive AI Services as General Intelligence,['K Eric Drexler'],2019-01-01T00:00:00Z,special_docs,, 212033,https://drive.google.com/file/d/1A8XFCUHKechIzAhsdgDdOX-BLNCUIMWR/view?usp=sharing,CHAI Newsletter #2 2019,['CHAI'],2019-07-01T00:00:00Z,special_docs,, 212060,https://doi.org/10.1007/978-1-4020-8852-0_8,Why I Want to be a Posthuman when I Grow Up,['Nick Bostrom'],2009-01-01T00:00:00Z,special_docs,,pdf 212089,https://cset.georgetown.edu/research/ai-definitions-affect-policymaking/,AI Definitions Affect Policymaking,"['Dewey Murdick', 'James Dunham', 'Jennifer Melot']",2020-06-02T00:00:00Z,special_docs,,pdf 212114,http://gcrinstitute.org/papers/lessons.pdf,Lessons for Artificial Intelligence from Other Global Risks,['Seth Baum'],2019-01-01T00:00:00Z,special_docs,, 212147,http://proceedings.mlr.press/v119/stooke20a/stooke20a.pdf,Responsive safety in reinforcement learning by PID Lagrangian methods,"['Adam Stooke', 'Joshua Achiam', 'Pieter Abbeel']",2020-01-01T00:00:00Z,special_docs,,pdf 212162,http://ieeexplore.ieee.org/document/7759036/,Information gathering actions over human internal state,"['Dorsa Sadigh', 'S. Shankar Sastry', 'Sanjit A. 
Seshia', 'Anca Dragan']",2016-10-01T00:00:00Z,special_docs,,xml 212178,https://drive.google.com/file/d/1qM_XvyjdaXUQl2OXUW6CuEX3i5bBuP75/view?usp=share_link,Iason Gabriel on Foundational Philosophical Questions in AI Alignment-by Future of Life Institute-video_id MzFl0SdjSso-date 20210630,['Iason Gabriel'],2021-06-29T22:00:00Z,special_docs,,md 212200,https://people.eecs.berkeley.edu/~tygar/papers/Machine_Learning_Security/asiaccs06.pdf,Can Machine Learning Be Secure?,"['Marco Barreno', 'Blaine Nelson', 'Russell Sears', 'Anthony D. Joseph', 'J. D. Tygar']",2006-01-01T00:00:00Z,special_docs,, 212228,http://www.cs.huji.ac.il/~shais/papers/OLsurvey.pdf,Online Learning Survey,['Shai Shalev-Shwartz'],2011-01-01T00:00:00Z,special_docs,, 212251,https://web.eecs.umich.edu/~baveja/Papers/ijcai-2018.pdf,Minimax-regret querying on side effects for safe optimality in factored Markov decision processes,"['Shun Zhang', 'Edmund H. Durfee', 'Satinder Singh']",2018-07-13T00:00:00Z,special_docs,,pdf 212265,https://openreview.net/forum?id=LiX3ECzDPHZ,X2T: Training an X-to-Text Typing Interface with Online Learning from User Feedback.,"['Jensen Gao', 'Siddharth Reddy', 'Glen Berseth', 'Nicholas Hardy', 'Nikhilesh Natraj', 'Karunesh Ganguly', 'Anca Dragan', 'Sergey Levine']",2021-08-10T00:00:00Z,special_docs,, 212289,https://doi.org/10.1002/9781118922590.ch23,"The Control Problem. Excerpts from Superintelligence: Paths, Dangers, Strategies",['Nick Bostrom'],2016-01-01T00:00:00Z,special_docs,,pdf 212322,https://projects.iq.harvard.edu/files/mcl/files/greene-et-al-ethical-principles-machines-aaai16.pdf,Embedding Ethical Principles in Collective Decision Support Systems,"['Joshua Greene', 'Francesca Rossi', 'John Tasioulas', 'Kristen Brent Venable', 'Brian Williams']",2016-01-01T00:00:00Z,special_docs,,pdf 212343,https://drive.google.com/file/d/1BxMJaWmF39r0b3DH40PiPzk5oEzCD3GH/view?usp=sharing,CHAI Newsletter #3 2019,['CHAI'],2019-12-01T00:00:00Z,special_docs,, 212376,https://www.law.berkeley.edu/files/LET_2017_7.pdf,Pervasive Spurious Normativity,"['Gillian K Hadfield', 'Dylan Hadfield-Menell']",2017-01-01T00:00:00Z,special_docs,, 212392,http://www.nature.com/nature/journal/v521/n7553/full/nature14541.html,Probabilistic machine learning and artificial intelligence,['Zoubin Ghahramani'],2015-05-27T00:00:00Z,special_docs,, 212422,https://cullenokeefe.com/blog/debate-evidence,Parallels Between AI Safety by Debate and Evidence Law,"[""Cullen O'Keefe""]",2020-07-20T00:00:00Z,special_docs,,html 212435,http://link.springer.com/10.1007/s11023-012-9282-2,Thinking Inside the Box: Controlling and Using an Oracle AI,"['Stuart Armstrong', 'Anders Sandberg', 'Nick Bostrom']",2012-11-01T00:00:00Z,special_docs,,pdf 212468,https://link.springer.com/article/10.3758/s13423-017-1286-8,The anchoring bias reflects rational use of cognitive resources.,"['Falk Lieder', 'Thomas L. Griffiths', 'Quentin J. M. Huys', 'Noah D. Goodman']",2018-08-10T00:00:00Z,special_docs,, 212490,https://drive.google.com/file/d/17rhNp735EyyI7R0bdMXVIEw1QkpWM1Yi/view,Interview with 7ujun,"['7ujun', 'Vael Gates']",2022-03-18T00:00:00Z,special_docs,,docx 212521,https://ai-alignment.com/iterated-distillation-and-amplification-157debfd1616,Iterated Distillation and Amplification,['Ajeya Cotra'],2018-03-05T00:00:00Z,special_docs,, 212537,https://intelligence.org/files/Interruptibility.pdf,Safely Interruptible Agents,"['Laurent Orseau', 'Stuart Armstrong']",2016-01-01T00:00:00Z,special_docs,,pdf 
212556,http://bair.berkeley.edu/blog/2020/04/03/laikago/,Robots Learning to Move like Animals,['Daniel Seita'],2020-04-03T00:00:00Z,special_docs,,html 212575,https://openreview.net/pdf?id=c7hpFp_eRCo,Tuning the hyperparameters of anytime planning: A deep reinforcement learning approach.,"['Abhinav Bhatia', 'Justin Svegliato', 'Shlomo Zilberstein']",2021-08-10T00:00:00Z,special_docs,, 212588,https://par.nsf.gov/biblio/10082788-showing-versus-doing-teaching-demonstration,Showing versus doing: Teaching by demonstration,"['M. K. Ho', 'M. L. Littman', 'J. MacGlashan', 'F. Cushman', 'J. L. Austerweil']",2023-01-16T00:00:00Z,special_docs,,xml 212608,https://ai-alignment.com/advisor-games-b33382fef68c,Advisor games,['Paul Christiano'],2015-09-26T00:00:00Z,special_docs,,html 212630,https://psyarxiv.com/jgxra,Demonstrating the Impact of Prior Knowledge in Risky Choice.,"['Mathew Hardy', 'Tom Griffiths']",2019-08-10T00:00:00Z,special_docs,, 212647,http://www.sciencedirect.com/science/article/pii/S0893608014002135,Deep learning in neural networks: An overview,['Jürgen Schmidhuber'],2015-01-01T00:00:00Z,special_docs,, 212679,https://www.ijcai.org/proceedings/2018/718,The Facets of Artificial Intelligence: A Framework to Track the Evolution of AI,"['Fernando Martínez-Plumed', 'Bao Sheng Loe', 'Peter Flach', 'Seán Ó hÉigeartaigh', 'Karina Vold', 'José Hernández-Orallo']",2018-07-01T00:00:00Z,special_docs,,pdf 212701,https://www.safe.ai/statement-on-ai-risk,Statement on AI Risk,['Center for AI Safety'],2023-05-30T00:00:00Z,special_docs,, 212710,https://cocosci.princeton.edu/papers/battledayfrom.pdf,From convolutional neural networks to models of higher level cognition (and back again).,"['Ruairidh M Battleday', 'Joshua C Peterson', 'Thomas L Griffiths']",2021-08-10T00:00:00Z,special_docs,, 212735,https://longtermrisk.org/how-the-simulation-argument-dampens-future-fanaticism,How the Simulation Argument Dampens Future Fanaticism,['Brian Tomasik'],2016-01-01T00:00:00Z,special_docs,,html 212752,https://drive.google.com/file/d/1uK7NhdSKprQKZnRjU58X7NLA1auXlWHt/view?pli=1,AGI Safety From First Principles,['Richard Ngo'],2020-01-01T00:00:00Z,special_docs,, 212787,https://drive.google.com/file/d/1n6_WYIQytoyNAIXZrE0b0cQBlQ2CiZeg/view,README.docx,['Vael Gates'],2023-07-30T13:41:12Z,special_docs,,docx 212815,https://doi.org/10.1080/01402390.2019.1631810,How does the offense-defense balance scale?,"['Ben Garfinkel', 'Allan Dafoe']",2019-09-19T00:00:00Z,special_docs,, 212835,http://www.danieldewey.net/learning-what-to-value.pdf,Learning What to Value,['Daniel Dewey'],2011-01-01T00:00:00Z,special_docs,, 212855,https://justinsvegliato.com/s/BSWWBZaij22.pdf,Competence-Aware Systems.,"['Connor Basich', 'Justin Svegliato', 'Kyle H. Wray', 'Stefan Witwicki', 'Joydeep Biswas', 'Shlomo Zilberstein']",2022-08-10T00:00:00Z,special_docs,, 212880,https://ai-alignment.com/inaccessible-information-c749c6a88ce,Inaccessible information,['Paul Christiano'],2020-06-03T00:00:00Z,special_docs,,html 212901,https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5576218/pdf/hs.2016.0118.pdf,Pricing Externalities to Balance Public Risks and Benefits of Research,"['Sebastian Farquhar', 'Owen Cotton-Barratt', 'Andrew Snyder-Beattie']",2017-01-01T00:00:00Z,special_docs,, 212920,https://longtermrisk.org/coordination-challenges-for-preventing-ai-conflict/,Coordination challenges for preventing AI conflict,['Stefan Torges'],2021-03-09T00:00:00Z,special_docs,,html 
212943,http://aiweb.cs.washington.edu/research/projects/aiweb/media/papers/cogsci2011.pdf,Bayesian Theory of Mind: Modeling Joint Belief-Desire Attribution,"['Chris L. Baker', 'Rebecca R. Saxe', 'Joshua B. Tenenbaum']",2011-01-01T00:00:00Z,special_docs,, 212953,http://auai.org/uai2016/proceedings/papers/68.pdf,Safely Interruptible Agents,"['Laurent Orseau', 'Stuart Armstrong']",2016-01-01T00:00:00Z,special_docs,, 212976,https://cset.georgetown.edu/publication/national-power-after-ai/,National Power After AI,"['Matthew Daniels', 'Ben Chang']",2021-07-01T00:00:00Z,special_docs,,pdf 213004,https://justinsvegliato.com/s/BFSRsafeai23.pdf,Active reward learning from multiple teachers.,"['Peter Barnett', 'Rachel Freedman', 'Justin Svegliato', 'Stuart Russell']",2022-08-10T00:00:00Z,special_docs,, 213021,https://intelligence.org/files/PredictingAI.pdf,How we’re predicting AI–or failing to,"['Stuart Armstrong', 'Kaj Sotala']",2015-01-01T00:00:00Z,special_docs,,pdf 213045,https://cocosci.princeton.edu/papers/kumarusing.pdf,Using Natural Language to Guide Meta-Learning Agents towards Human-like Inductive Biases.,"['Kumar, S.', 'Dasgupta, I.', 'Hu, M. Y.', 'Marjieh, R.', 'Hawkins, R. D.', 'Daw, N.', 'Cohen, J.', 'Narasimhan, K. R.', 'Griffiths, T. L.']",2022-08-10T00:00:00Z,special_docs,, 213057,https://doi.org/10.1007/s11023-017-9448-z,Implementation of Moral Uncertainty in Intelligent Machines,['Kyle Bogosian'],2017-12-01T00:00:00Z,special_docs,,pdf 213083,http://mediangroup.org/brain1.html,The Brain and Computation,['Baeo Maltinsky'],2018-01-01T00:00:00Z,special_docs,,html 213100,https://www.fhi.ox.ac.uk/wp-content/uploads/How-Will-National-Security-Considerations-Affect-Antitrust-Decisions-in-AI-Cullen-OKeefe.pdf,How Will National Security Considerations Affect Antitrust Decisions in AI? An Examination of Historical Precedents,['Cullen O’Keefe'],2020-07-07T00:00:00Z,special_docs,, 213118,https://cocosci.princeton.edu/papers/lewry_intuitions.pdf,Intuitions about magic track the development of intuitive physics.,"['Casey Lewry', 'Kaley Curtis', 'Nadya Vasilyeva', 'Fei Xu', 'Thomas L. Griffiths']",2021-08-10T00:00:00Z,special_docs,, 213136,https://openreview.net/forum?id=rkgqCiRqKQ,Inferring Reward Functions from Demonstrators with Unknown Biases,"['Rohin Shah', 'Noah Gundotra', 'Pieter Abbeel', 'Anca Dragan']",2018-09-27T00:00:00Z,special_docs,,pdf 213151,http://proceedings.mlr.press/v124/liu20c/liu20c.pdf,Bounded Rationality in Las Vegas: Probabilistic Finite Automata Play Multi-Armed Bandits.,"['Xinming Liu', 'Joseph Halpern']",2020-08-10T00:00:00Z,special_docs,, 213171,https://docs.microsoft.com/en-us/security/failure-modes-in-machine-learning,Failure Modes in Machine Learning - Security documentation,"['Ram Shankar Siva Kumar', 'David O’Brien', 'Kendra Albert', 'Salomé Viljoen', 'Jeffrey Snover']",2019-01-01T00:00:00Z,special_docs,,html 213198,https://cset.georgetown.edu/publication/key-concepts-in-ai-safety-an-overview/,Key Concepts in AI Safety: An Overview,"['Tim G. J. Rudner', 'Helen Toner']",2021-03-01T00:00:00Z,special_docs,,pdf 213219,https://drive.google.com/file/d/18vny49YcNuDePGyXY27mz0j0cwI0lw9G/view,Interview with a0nfw,"['a0nfw', 'Vael Gates']",2022-03-18T00:00:00Z,special_docs,,docx 213248,https://cset.georgetown.edu/publication/key-concepts-in-ai-safety-interpretability-in-machine-learning/,Key Concepts in AI Safety: Interpretability in Machine Learning,"['Tim G. J. Rudner', 'Helen Toner']",2021-03-01T00:00:00Z,special_docs,,pdf 
213273,https://en.wikipedia.org/w/index.php?title=Von_Neumann%E2%80%93Morgenstern_utility_theorem&oldid=1044421624,Von Neumann–Morgenstern utility theorem,['Wikipedia'],2021-09-15T00:00:00Z,special_docs,,html 213289,https://doi.org/10.1007/978-3-662-54033-6_12,A Psychoanalytic Approach to the Singularity: Why We Cannot Do Without Auxiliary Constructions,['Graham Clarke'],2017-01-01T00:00:00Z,special_docs,,pdf 213319,https://docs.google.com/document/d/1WwsnJQstPq91_Yh-Ch2XRL8H_EpsnjrC1dwZXR37PC8/edit,Eliciting latent knowledge: How to tell if your eyes deceive you,"['Paul Christiano', 'Ajeya Cotra', 'Mark Xu']",2021-12-01T00:00:00Z,special_docs,,md 213338,https://cset.georgetown.edu/publication/responsible-and-ethical-military-ai/,Responsible and Ethical Military AI,['Zoe Stanley-Lockman'],2021-08-01T00:00:00Z,special_docs,,pdf 213363,https://drive.google.com/file/d/11aBW7_2Y6WyY9MDmaLwC1Ark-VDUu0EV/view?usp=share_link,individuallyselected_zlzai-by Vael Gates-date 20220318,['Vael Gates'],2022-03-17T23:00:00Z,special_docs,,md 213389,https://www.fhi.ox.ac.uk/reports/2014-1.pdf,Monte Carlo model of brain emulation development,['Anders Sandberg'],2014-01-01T00:00:00Z,special_docs,,pdf 213402,https://ai-alignment.com/benign-model-free-rl-4aae8c97e385,Benign model-free RL,['Paul Christiano'],2017-06-02T00:00:00Z,special_docs,,html 213422,https://medium.com/@lucarade/issues-with-iterated-distillation-and-amplification-5aa01ab37173,Issues with Iterated Distillation and Amplification,['Luca Rade'],2018-04-29T00:00:00Z,special_docs,,html 213448,https://www.nature.com/articles/s41598-022-07692-5,Multiscale Heterogeneous Optimal Lockdown Control for COVID-19 Using Geographic Information.,"['C. Neary', 'M. Cubuktepe', 'N. Lauffer', 'X. Jin', 'A. Phillips', 'Z. Xu', 'D. Tong', 'U. Topcu']",2022-08-10T00:00:00Z,special_docs,, 213468,https://papers.nips.cc/paper/4824-imagenet-classification-with-deep-convolutional-neural-networks.pdf,ImageNet Classification with Deep Convolutional Neural Networks,"['Alex Krizhevsky', 'Ilya Sutskever', 'Geoff Hinton']",2012-01-01T00:00:00Z,special_docs,, 213501,https://drive.google.com/file/d/16bG2w-k3wPP7ZOPrqQJIyW094BkmBwXR/view?usp=sharing,CHAI Newsletter #1 2020,['CHAI'],2020-04-01T00:00:00Z,special_docs,, 213525,https://drive.google.com/file/d/1UqRKpGkRBGtqqlUAYf1OUbkpTcfRK7se/view?usp=share_link,The AI revolution and international politics _ Allan Dafoe _ EAG 2017 Boston-by Centre for Effective Altruism-video_id Zef-mIKjHAk-date 20170618,['Allan Dafoe'],2017-06-17T22:00:00Z,special_docs,,md 213548,http://www.cs.cornell.edu/home/halpern/papers/newcomb.pdf,A Note on the Existence of Ratifiable Acts.,"['Joseph Y. Halpern']",2018-08-10T00:00:00Z,special_docs,, 213565,https://www.stat.berkeley.edu/~aldous/157/Papers/taleb_tetlock.pdf,On the Difference between Binary Prediction and True Exposure With Implications For Forecasting Tournaments and Decision Making Research,"['Nassim N. Taleb', 'Philip E. 
Tetlock']",2013-01-01T00:00:00Z,special_docs,, 213577,https://www.informatica.si/index.php/informatica/article/download/1877/1098,Superintelligence As a Cause or Cure For Risks of Astronomical Suffering,"['Kaj Sotala', 'Lukas Gloor']",2017-12-27T00:00:00Z,special_docs,, 213605,https://proceedings.neurips.cc/paper/2020/hash/30de9ece7cf3790c8c39ccff1a044209-Abstract.html,AvE: Assistance via Empowerment.,"['Yuqing Du', 'Stas Tiomkin', 'Emre Kiciman', 'Daniel Polani', 'Pieter Abbeel', 'Anca D. Dragan']",2020-08-10T00:00:00Z,special_docs,, 213619,https://medium.com/ai-control/learning-with-catastrophes-59387b55cc30#.ek0ew1n77,Learning with catastrophes,['Paul Christiano'],2016-05-29T00:00:00Z,special_docs,, 213635,https://openai.com/blog/faulty-reward-functions/,Faulty Reward Functions in the Wild,"['Jack Clark', 'Dario Amodei']",2016-12-22T00:00:00Z,special_docs,,html 213656,https://openreview.net/forum?id=Hygv3xrtDr,Sparse Skill Coding: Learning Behavioral Hierarchies with Sparse Codes.,"['Sophia Sanborn', 'Michael Chang', 'Sergey Levine', 'Thomas Griffiths']",2020-08-10T00:00:00Z,special_docs,, 213667,https://gcrinstitute.org/artificial-intelligence-needs-environmental-ethics/,Artificial Intelligence Needs Environmental Ethics | Global Catastrophic Risk Institute,['Seth Baum'],2021-11-16T00:00:00Z,special_docs,,html 213688,http://proceedings.mlr.press/v97/xu19d/xu19d.pdf,Few-Shot Intent Inference via Meta-Inverse Reinforcement Learning,"['Kelvin Xu', 'Ellis Ratner', 'Anca Dragan', 'Sergey Levine', 'Chelsea Finn']",2018-09-27T00:00:00Z,special_docs,,pdf 213701,https://www.jair.org/index.php/jair/article/view/11214,Incentive-Compatible Mechanisms for Norm Monitoring in Open Multi-Agent Systems.,"['Natasha Alechina', 'Joseph Y. Halpern', 'Ian A. Kash', 'Brian Logan']",2018-08-10T00:00:00Z,special_docs,, 213717,https://people.eecs.berkeley.edu/~pabbeel/papers/MoldovanAbbeel_ICML2012full.pdf,Safe exploration in Markov decision processes,"['Teodor Mihai Moldovan', 'Pieter Abbeel']",2012-07-06T00:00:00Z,special_docs,, 213727,https://www.youtube.com/watch?v=a2qTNuD1Sn8,256 Where I agree and Disagree with Eliezer 2,['AI Safety Reading Group'],2022-09-09T05:17:47Z,youtube,, 213749,https://www.youtube.com/watch?v=cWrczvf2TSg,"Artificial Intelligence Safety and Security - Roman V. Yampolskiy, PhD",['Vulnerable Growth'],2022-05-06T05:22:25Z,youtube,, 213791,https://www.youtube.com/watch?v=93JuWY_TpWg,258. How might we align transformative AI if it's developed very soon? 
1/3,['AI Safety Reading Group'],2022-10-06T20:59:00Z,youtube,, 213810,https://www.youtube.com/watch?v=dFfAXOVej8g,[Audio problems]253 Propositions Concerning Digital Minds and Society 2,['AI Safety Reading Group'],2022-07-15T09:11:32Z,youtube,, 213867,https://www.youtube.com/watch?v=PYylPRX6z4Q,"Training AI Without Writing A Reward Function, with Reward Modelling",['Rob Miles'],2019-12-13T16:39:11Z,youtube,, 213885,https://www.youtube.com/watch?v=HAFoIRNiKYE,The Inside View #2–Connor Leahy,['plex / Eric'],2022-06-10T10:31:56Z,youtube,, 213910,https://www.youtube.com/watch?v=YkEF3FovtbY,Oxford professor on Transcendence: how could you get a machine intelligence?,['plex / Eric'],2022-06-10T10:31:56Z,youtube,, 213924,https://www.youtube.com/watch?v=2cqpncnLUJM,"AI for science - DeepMind: The Podcast (S2, Ep6)",['Google DeepMind'],2022-02-16T14:42:11Z,youtube,, 213951,https://www.youtube.com/watch?v=DC0tRx71bbY,Artificial Super Intelligence - Will we survive?,['plex / Eric'],2022-06-10T10:31:56Z,youtube,, 213969,https://www.youtube.com/watch?v=nvy_ziFvLDw,Robots Learning Through Interactions (Jens Kober),['AiTech - TU Delft'],2020-11-21T17:54:41Z,youtube,, 213996,https://www.youtube.com/watch?v=QCd8yXqgR_s,164. A Tutorial on Machine Learning,['AI Safety Reading Group'],2019-10-09T20:38:31Z,youtube,, 214019,https://www.youtube.com/watch?v=O--QL4SRGgI,Responsibility for outcomes when systems are intelligent (Nir Douer and Joachim Meyer),['AiTech - TU Delft'],2021-03-12T20:41:52Z,youtube,, 214040,https://www.youtube.com/watch?v=4MGCQOAxgv4,Theory of Mind Breakthrough: AI Consciousness & Disagreements at OpenAI [GPT 4 Tested],['AI Explained'],2023-03-19T15:34:03Z,youtube,, 214068,https://www.youtube.com/watch?v=kpZeUPsq_bY,"239. Soares, Tallinn, and Yudkowsky discuss AGI cognition",['AI Safety Reading Group'],2021-12-16T22:12:17Z,youtube,, 214092,https://www.youtube.com/watch?v=FIYOtZW8yEM,Timelines for Transformative AI and Language Model Alignment | Ajeya Cotra,['plex / Eric'],2022-06-10T10:31:56Z,youtube,, 214121,https://www.youtube.com/watch?v=UyOk2SxkKYc,AI Alignment in AI Trustworthy,['Vulnerable Growth'],2022-05-06T05:17:34Z,youtube,, 214142,https://www.youtube.com/watch?v=bsElV5LLMqw,AI safety | Panel Discussion,['plex / Eric'],2022-06-10T10:31:56Z,youtube,, 214189,https://www.youtube.com/watch?v=j9IOXa-iXKY,"Better together - DeepMind: The Podcast (S2, Ep3)",['Google DeepMind'],2022-02-01T10:35:33Z,youtube,, 214224,https://www.youtube.com/watch?v=Q57rzaHHO0k,Deep Learning 7. Attention and Memory in Deep Learning,['Google DeepMind'],2022-03-29T12:04:12Z,youtube,, 214261,https://www.youtube.com/watch?v=f7jBigoHaUg,The AI News You Might Have Missed This Week,['AI Explained'],2023-06-04T16:00:43Z,youtube,, 214291,https://www.youtube.com/watch?v=mEzpWaHzKrU,AiTech Agora: Karim Jebari - Artificial intelligence and democratic legitimacy,['AiTech - TU Delft'],2021-11-01T12:48:24Z,youtube,, 214312,https://www.youtube.com/watch?v=HZjqLY_AVgM,"AI, Ethics, and the Value Alignment Problem with Meia Chita-Tegmark and Lucas Perry",['plex / Eric'],2022-06-10T10:31:56Z,youtube,, 214341,https://www.youtube.com/watch?v=2NXagVA3yzg,Jaime Sevilla - Projecting AI progress from compute trends,['plex / Eric'],2022-06-10T10:31:56Z,youtube,, 214367,https://www.youtube.com/watch?v=hZTZYffRsKI,'Show Your Working': ChatGPT Performance Doubled w/ Process Rewards (+Synthetic Data Event Horizon),['AI Explained'],2023-06-01T15:23:48Z,youtube,, 214389,https://www.youtube.com/watch?v=WM2THPzFSNk,Friend or Foe? 
AI Safety Gridworlds extra bit,['Rob Miles'],2018-06-24T23:31:07Z,youtube,, 214404,https://www.youtube.com/watch?v=wm16iNht7PA,3:How Likely is Deceptive Alignment?: Evan Hubinger 2023,['Evan Hubinger'],2023-05-13T15:56:50Z,youtube,, 214428,https://www.youtube.com/watch?v=b05TJ2ZLfws,AI Alignment Overview: Ensuring Artificial Intelligence Behaves,['plex / Eric'],2022-06-10T10:31:56Z,youtube,, 214454,https://www.youtube.com/watch?v=-VKF1lJhspg,Ben Goertzel - The unorthodox path to AGI,['plex / Eric'],2022-06-10T10:31:56Z,youtube,, 214479,https://www.youtube.com/watch?v=gQddtTdmG_8,Vectoring Words (Word Embeddings) - Computerphile,['plex / Eric'],2022-06-10T10:31:56Z,youtube,, 214495,https://www.youtube.com/watch?v=8oJM92XctGo,Hybrid Intelligence for high quality large scale deliberation,['AiTech - TU Delft'],2021-02-03T13:18:47Z,youtube,, 214524,https://www.youtube.com/watch?v=WnVvUByk26c,AiTech Agora: Alessandro Bozzon - Designing and Engineering for Meaningful Human Control,['AiTech - TU Delft'],2022-07-25T10:30:13Z,youtube,, 214554,https://www.youtube.com/watch?v=l10_IUwB1-I,AI Safety & Definitions of Intelligence - Allison Duettmann,['plex / Eric'],2022-06-10T10:31:56Z,youtube,, 214569,https://www.youtube.com/watch?v=BztgYBqXi0Q,175. Q and A with Stuart Russell,['AI Safety Reading Group'],2020-01-08T21:42:55Z,youtube,, 214597,https://www.youtube.com/watch?v=4TdBDstKSWg,AI Safety Reading Group (Session 37),['AI Safety Reading Group'],2017-03-03T11:06:34Z,youtube,, 214620,https://www.youtube.com/watch?v=9GxVIf3FNJk,Defining and unpacking transformative AI | Ross Gruetzemacher | EA Global: London 2019,['plex / Eric'],2022-06-10T10:31:56Z,youtube,, 214642,https://www.youtube.com/watch?v=gZj78sQbZkA,Introduction to Reinforcement Learning and Concrete Problems in AI Safety,['Vulnerable Growth'],2022-05-06T05:22:25Z,youtube,, 214671,https://www.youtube.com/watch?v=ur4ItFzWKNk,Rob Miles - Why should I care about AI safety?,['plex / Eric'],2022-06-10T10:31:56Z,youtube,, 214700,https://www.youtube.com/watch?v=AJejcug2brU,DeepMind x UCL RL Lecture Series - Approximate Dynamic Programming [10/13],['Google DeepMind'],2022-03-29T12:01:55Z,youtube,, 214719,https://www.youtube.com/watch?v=Vea4cfn6TOA,Attention - General - Copying & Induction heads [rough early thoughts],['Vulnerable Growth'],2022-05-10T00:48:13Z,youtube,, 214733,https://www.youtube.com/watch?v=cdyeyROsaPI,Human-Robot Coproduction: non verbal sharing of mental models with AR/VR (Doris Aschenbrenner),['AiTech - TU Delft'],2021-02-24T16:02:18Z,youtube,, 214753,https://www.youtube.com/watch?v=V8R0s8tesM0,255 Where I agree and disagree with Eliezer,['AI Safety Reading Group'],2022-08-26T06:35:15Z,youtube,, 214780,https://www.youtube.com/watch?v=SDmkTlqNmes,"Interview with Jaan Tallinn on Longevity, Existential Risk & AI | Vision Weekend US 2021",['plex / Eric'],2022-06-10T10:31:56Z,youtube,, 214821,https://www.youtube.com/watch?v=FnNTbqSG8w4,Concrete Open Problems in Mechanistic Interpretability: Neel Nanda at SERI MATS,['Evan Hubinger'],2023-05-05T15:41:40Z,youtube,, 214844,https://www.youtube.com/watch?v=dVsIkwyZ9rQ,Designing for Human Rights in AI (Evgeni Aizenberg) - 1st AiTech Symposium,['AiTech - TU Delft'],2019-10-29T16:17:08Z,youtube,, 214860,https://www.youtube.com/watch?v=_UzX3L7lXhw,Superintelligence Mod for Civilization V,['Rob Miles'],2018-02-13T17:17:58Z,youtube,, 214879,https://www.youtube.com/watch?v=5qfIgCiYlfY,AI Self Improvement - Computerphile,['plex / Eric'],2022-06-10T10:31:56Z,youtube,, 
214897,https://www.youtube.com/watch?v=NnDermRo8O0,What is AI alignment?,['plex / Eric'],2022-06-10T10:31:56Z,youtube,, 214925,https://www.youtube.com/watch?v=NnSgSZXqGmQ,Attention - General - Summarizing with NMF [rough early thoughts],['Vulnerable Growth'],2022-05-10T00:47:57Z,youtube,, 214937,https://www.youtube.com/watch?v=35BWlvPcYvg,Bart Selman – Non-Human Intelligence – CSRBAI 2016,['Machine Intelligence Research Institute'],2016-10-21T20:26:46Z,youtube,, 214952,https://www.youtube.com/watch?v=2d4dPclY1y8,202. Gwern on GPT-3,['AI Safety Reading Group'],2020-10-08T21:24:56Z,youtube,, 214974,https://www.youtube.com/watch?v=u2cK0_jUX_g,Mo Gawdat - Scary Smart: A former Google exec's perspective on AI risk,['plex / Eric'],2022-06-10T10:31:56Z,youtube,, 214992,https://www.youtube.com/watch?v=njhsfk3uT5M,New extractivism (Vladan Joler),['AiTech - TU Delft'],2020-09-02T14:16:20Z,youtube,, 215020,https://www.youtube.com/watch?v=kLcJuyJmgJU,Twelve Tenets for AI Safety,['plex / Eric'],2022-06-10T10:31:56Z,youtube,, 215042,https://www.youtube.com/watch?v=cMAYhiMJ4k0,"Sven Nyholm: Responsibility Gaps, Value Alignment, and Meaningful Human Control over AI",['AiTech - TU Delft'],2021-04-28T19:20:35Z,youtube,, 215066,https://www.youtube.com/watch?v=Z0PoEeHvewk,251 A Generalist Agent 2,['AI Safety Reading Group'],2022-06-17T05:09:00Z,youtube,, 215089,https://www.youtube.com/watch?v=rURRYI66E54,AI Language Models & Transformers - Computerphile,['plex / Eric'],2022-06-10T10:31:56Z,youtube,, 215112,https://www.youtube.com/watch?v=8afHG61YmKM,Alex Turner - Will powerful AIs tend to seek power?,['plex / Eric'],2022-06-10T10:31:56Z,youtube,, 215137,https://www.youtube.com/watch?v=BR3H1BAC2So,The AI Control Problem,['plex / Eric'],2022-06-10T10:31:56Z,youtube,, 215167,https://www.youtube.com/watch?v=dbMp4pFVwnU,"AI alignment, philosophical pluralism, and the relevance of non-Western philosophy | Tan Zhi Xuan",['Vulnerable Growth'],2022-05-06T05:15:52Z,youtube,, 215191,https://www.youtube.com/watch?v=wjq-PHQGIug,199. 
How close are we to creating Artificial General Intelligence,['AI Safety Reading Group'],2020-09-17T21:34:15Z,youtube,, 215212,https://www.youtube.com/watch?v=lqJUIqZNzP8,Avoiding Negative Side Effects: Concrete Problems in AI Safety part 1,['Rob Miles'],2017-06-18T11:02:16Z,youtube,, 215231,https://www.youtube.com/watch?v=EBK-a94IFHY,3 principles for creating safer AI | Stuart Russell,['plex / Eric'],2022-06-10T10:31:56Z,youtube,, 215253,https://www.youtube.com/watch?v=lXm-MgPLkxA,Logical Induction: Progress in AI Alignment | Andrew Critch | EA Global 2016,['Vulnerable Growth'],2022-05-06T05:22:25Z,youtube,, 215269,https://www.youtube.com/watch?v=Pw2km5fPuLM,Governing Transformative Artificial Intelligence by Markus Anderljung,['plex / Eric'],2022-06-10T10:31:56Z,youtube,, 215313,https://www.youtube.com/watch?v=yl2nlejBcg0,Vael Gates: Researcher Perceptions of Current and Future AI,['plex / Eric'],2022-06-14T11:56:48Z,youtube,, 215336,https://www.youtube.com/watch?v=XDwvKiWYoUg,Explainable AI with a Purpose (Emily Sullivan),['AiTech - TU Delft'],2021-02-24T16:01:27Z,youtube,, 215357,https://www.youtube.com/watch?v=kHvJgNtwvoU,"Existential Risk, AI Safety & Interdisciplinary Research - Anders Sandberg",['plex / Eric'],2022-06-10T10:31:56Z,youtube,, 215383,https://www.youtube.com/watch?v=wAh3atv9haM,The Value Alignment Problem in Artificial Intelligence,['Vulnerable Growth'],2022-05-06T05:18:21Z,youtube,, 215401,https://www.youtube.com/watch?v=IjG_Fx3D0o0,"Demis Hassabis ""Systems neuroscience and AGI""",['Vulnerable Growth'],2022-05-06T05:22:25Z,youtube,, 215424,https://www.youtube.com/watch?v=tVohh8Za2fk,Embodied manifestos of human-AI partnerships (Maria Luce Lupetti) - 1st AiTech Symposium,['AiTech - TU Delft'],2019-10-29T16:23:35Z,youtube,, 215439,https://www.youtube.com/watch?v=Kw_1N9Nfir0,Provably Beneficial AI | Stuart Russell,['plex / Eric'],2022-06-10T10:31:56Z,youtube,, 215457,https://www.youtube.com/watch?v=-QhRGaOV844,Stuart Russel on AI and the Midas Touch problem #shorts,['plex / Eric'],2022-06-10T10:31:56Z,youtube,, 215467,https://www.youtube.com/watch?v=oY7c75ggrRI,2:Risks from Learned Optimization: Evan Hubinger 2023,['Evan Hubinger'],2023-05-13T15:56:39Z,youtube,, 215488,https://www.youtube.com/watch?v=4XAwMkqosfk,Friday Seminar Series- Gillian Hadfield: AI Alignment and Human Normativity,['plex / Eric'],2022-06-10T10:31:56Z,youtube,, 215520,https://www.youtube.com/watch?v=tcdVC4e6EV4,Deadly Truth of General AI? - Computerphile,['plex / Eric'],2022-06-10T10:31:56Z,youtube,, 215537,https://www.youtube.com/watch?v=mkvgPrOCbpE,The Inside View #1—Does the world really need another podcast?,['plex / Eric'],2022-06-10T10:31:56Z,youtube,, 215558,https://www.youtube.com/watch?v=tu1zGwGddzw,210. 
Locating Ethics,['AI Safety Reading Group'],2020-12-03T21:49:58Z,youtube,, 215576,https://www.youtube.com/watch?v=MAd6D1mhNao,Michael Wellman – Autonomous Agents in Financial Markets: Implications and Risks – CSRBAI 2016,['Machine Intelligence Research Institute'],2016-10-21T20:26:46Z,youtube,, 215612,https://www.youtube.com/watch?v=31rU-VzF5ww,AI Safety Gym - Computerphile,['plex / Eric'],2022-06-10T10:31:56Z,youtube,, 215630,https://www.youtube.com/watch?v=cgihbdO6bvw,8 Ways ChatGPT 4 [Is] Better Than ChatGPT,['AI Explained'],2023-02-06T17:31:36Z,youtube,, 215659,https://www.youtube.com/watch?v=mvSRsZduc3c,Metalearning & Induction Heads [rough early thoughts],['Vulnerable Growth'],2022-05-10T00:48:26Z,youtube,, 215672,https://www.youtube.com/watch?v=1jLjtrM1Rn0,"""Induction Bump"" Phase Change [rough early thoughts]",['Vulnerable Growth'],2022-05-10T00:48:19Z,youtube,, 215691,https://www.youtube.com/watch?v=Kukz6bt8IF0,Careers in technical AI safety | Owain Evans and Victoria Krakovna | EA Global: London 2017,['plex / Eric'],2022-06-10T10:31:56Z,youtube,, 215724,https://www.youtube.com/watch?v=zBCOMm_ytwM,Stuart Russell – AI: The Story So Far – CSRBAI 2016,['Machine Intelligence Research Institute'],2016-10-21T20:26:46Z,youtube,, 215747,https://www.youtube.com/watch?v=CFLWDaJ5Usc,Synergies vs. Tradeoffs Between Near-term and Long-term AI Safety Efforts,['plex / Eric'],2022-06-10T10:31:56Z,youtube,, 215769,https://www.youtube.com/watch?v=V3NQaDR3xI4,0L - Theory [rough early thoughts],['Vulnerable Growth'],2022-05-10T00:46:58Z,youtube,, 215778,https://www.youtube.com/watch?v=CF1BCAd5KPc,Autonomous technology and the paradox of human-centered design,['AiTech - TU Delft'],2021-02-03T09:45:07Z,youtube,, 215799,https://www.youtube.com/watch?v=XtJVLOe4cfs,Using AI to accelerate scientific discovery - Demis Hassabis (Crick Insight Lecture Series),['Vulnerable Growth'],2022-05-06T05:22:25Z,youtube,, 215835,https://www.youtube.com/watch?v=kVU8zTI-Od0,DeepMind x UCL | Deep Learning Lectures | 5/12 | Optimization for Machine Learning,['Google DeepMind'],2022-03-29T12:02:17Z,youtube,, 215867,https://www.youtube.com/watch?v=0Gp5zUP18x0,"AI Safety Investing | Luke Nosek, Gigafund",['plex / Eric'],2022-06-10T10:31:56Z,youtube,, 215891,https://www.youtube.com/watch?v=g4xV2mPi8aY,Attention - General - Indirect & n-gram Attention Heads [rough early thoughts],['Vulnerable Growth'],2022-05-10T00:48:07Z,youtube,, 215907,https://www.youtube.com/watch?v=slYsaoWGNEo,Untangling Artificial Intelligence Ethics (Andreia Martinho),['AiTech - TU Delft'],2021-01-20T14:43:16Z,youtube,, 215938,https://www.youtube.com/watch?v=zkbPdEHEyEI,We Were Right! 
Real Inner Misalignment,['Rob Miles'],2021-10-10T20:50:54Z,youtube,, 215957,https://www.youtube.com/watch?v=JzG4CNex5XA,"A Formal Methods Perspective to AI Safety: Promises and Challenges (Wenchao Li, FSL Workshop)",['Vulnerable Growth'],2022-05-06T05:19:01Z,youtube,, 215979,https://www.youtube.com/watch?v=wFsI2WqUfdA,DeepMind x UCL | Deep Learning Lectures | 9/12 | Generative Adversarial Networks,['Google DeepMind'],2022-03-29T12:02:17Z,youtube,, 216015,https://www.youtube.com/watch?v=m0Vx2Cg0_cE,The Open Philanthropy Project's work on AI risk | Helen Toner | EA Global: London 2017,['plex / Eric'],2022-06-10T10:31:56Z,youtube,, 216039,https://www.youtube.com/watch?v=CUR1FN3Ok_w,Bing (GPT 4) Just Made Smartphones MUCH Smarter (next-level Android and iOS app),['AI Explained'],2023-02-25T16:29:56Z,youtube,, 216062,https://www.youtube.com/watch?v=yQE9KAbFhNY,A Response to Steven Pinker on AI,['Rob Miles'],2019-03-31T13:39:12Z,youtube,, 216080,https://www.youtube.com/watch?v=K332ragiUD8,265. Discovering Language Model Behaviors With Model-Written Evaluations,['AI Safety Reading Group'],2023-01-19T21:46:17Z,youtube,, 216106,https://www.youtube.com/watch?v=hEUO6pjwFOo,Intelligence and Stupidity: The Orthogonality Thesis,['plex / Eric'],2022-06-10T10:31:56Z,youtube,, 216119,https://www.youtube.com/watch?v=ISk80iLhdfU,Reinforcement Learning 1: Introduction to Reinforcement Learning,['Vulnerable Growth'],2022-05-06T05:22:25Z,youtube,, 216151,https://www.youtube.com/watch?v=cVzvNZOBaJ4,DeepMind x UCL RL Lecture Series - Deep Reinforcement Learning #1 [12/13],['Google DeepMind'],2022-03-29T12:01:55Z,youtube,, 216187,https://www.youtube.com/watch?v=MircoV5LKvg,Reframing superintelligence | Eric Drexler | EA Global: London 2018,['Vulnerable Growth'],2022-05-06T05:22:25Z,youtube,, 216216,https://www.youtube.com/watch?v=Nzx6pMDd6xM,AGI & Corporations Seminar - Conclusions,['plex / Eric'],2022-06-10T10:31:56Z,youtube,, 216252,https://www.youtube.com/watch?v=EzEuylNSn-Q,"What's Left Before AGI? PaLM-E, 'GPT 4' and Multi-Modality",['AI Explained'],2023-03-12T16:20:25Z,youtube,, 216283,https://www.youtube.com/watch?v=OUifSs28G30,Risks from Learned Optimization: Evan Hubinger at MLAB2,['Evan Hubinger'],2022-12-01T16:06:11Z,youtube,, 216307,https://www.youtube.com/watch?v=Fz-r4qwkrTk,4:How Do We Become Confident in the Safety of an ML System?: Evan Hubinger 2023,['Evan Hubinger'],2023-05-13T15:57:02Z,youtube,, 216328,https://www.youtube.com/watch?v=tlS5Y2vm02c,Holy Grail of AI (Artificial Intelligence) - Computerphile,['plex / Eric'],2022-06-10T10:31:56Z,youtube,, 216343,https://www.youtube.com/watch?v=_8yVOC4ciXc,GPT3: An Even Bigger Language Model - Computerphile,['plex / Eric'],2022-06-10T10:31:56Z,youtube,, 216362,https://www.youtube.com/watch?v=gPtsgTjyEj4,Empowerment: Concrete Problems in AI Safety part 2,['Rob Miles'],2017-07-09T09:24:11Z,youtube,, 216377,https://www.youtube.com/watch?v=9i1WlcCudpU,10 Reasons to Ignore AI Safety,['plex / Eric'],2022-06-10T10:31:56Z,youtube,, 216400,https://www.youtube.com/watch?v=Sw9r8CL98N0,Generative Adversarial Networks (GANs) - Computerphile,['plex / Eric'],2022-06-10T10:31:56Z,youtube,, 216415,https://www.youtube.com/watch?v=QsRro57_SxU,Artificial General Intelligence: Racing and cooperating | Seán Ó hÉigeartaigh,['plex / Eric'],2022-06-10T10:31:56Z,youtube,, 216435,https://www.youtube.com/watch?v=Faoevs3XLoE,Meaningful human control over automated driving systems (F. 
Santoni de Sio) - 1st AiTech Symposium,['AiTech - TU Delft'],2019-10-30T09:52:35Z,youtube,, 216453,https://www.youtube.com/watch?v=Qhmt9CuV6tU,Economic Growth in the Long Run: Artificial Intelligence Explosion or an Empty Planet?,['Vulnerable Growth'],2022-05-06T05:22:25Z,youtube,, 216480,https://www.youtube.com/watch?v=onvAl4SQ5-Q,Science Saturday: The Great Singularity Debate | Eliezer Yudkowsky & Massimo Pigliucci,['plex / Eric'],2022-06-10T10:31:56Z,youtube,, 216503,https://www.youtube.com/watch?v=ISxu8lvR8Yw,229. The Case For Aligning Narrow Superhuman Models,['AI Safety Reading Group'],2021-07-22T21:24:56Z,youtube,, 216532,https://www.youtube.com/watch?v=5oXyibEgJr0,AI's Game Playing Challenge - Computerphile,['plex / Eric'],2022-06-10T10:31:56Z,youtube,, 216544,https://www.youtube.com/watch?v=AMSKIDEbjLY,Overview of Artificial General Intelligence Safety Research Agendas | Rohin Shah,['plex / Eric'],2022-06-10T10:31:56Z,youtube,, 216596,https://www.youtube.com/watch?v=vYhErnZdnso,Where do we go now?,['Rob Miles'],2017-03-31T20:16:27Z,youtube,, 216614,https://www.youtube.com/watch?v=c2wXNwdspzo,Of data scientists and AI from critique to reflective practice (Mario Sosa Hidalgo),['AiTech - TU Delft'],2020-11-21T17:54:51Z,youtube,, 216639,https://www.youtube.com/watch?v=LHEE_iqzv-8,"Key Issues In Near Term AI Safety Research | Aryeh L Englander, Daniel Elton, Ph D",['plex / Eric'],2022-06-10T10:31:56Z,youtube,, 216679,https://www.youtube.com/watch?v=GpuQlJ3IHBM,AI Safety Reading Group (Session 43),['AI Safety Reading Group'],2017-04-12T19:36:02Z,youtube,, 216693,https://www.youtube.com/watch?v=tUAXCFvvODI,Science Saturday: Singularity Edition | John Horgan & Eliezer Yudkowsky [Science Saturday],['plex / Eric'],2022-06-10T10:31:56Z,youtube,, 216717,https://www.youtube.com/watch?v=sGuiWX07sKw,RL Course by David Silver - Lecture 9: Exploration and Exploitation,['Google DeepMind'],2022-03-29T12:03:07Z,youtube,, 216740,https://www.youtube.com/watch?v=xslW5sQOkC8,"The Model That Changes Everything: Alpaca Breakthrough (ft. Apple's LLM, BritGPT, Ernie and AlexaTM)",['AI Explained'],2023-03-16T19:40:22Z,youtube,, 216756,https://www.youtube.com/watch?v=wAk1lxmiW4c,Reinforcement Learning 5: Function Approximation and Deep Reinforcement Learning,['Vulnerable Growth'],2022-05-06T05:22:25Z,youtube,, 216784,https://www.youtube.com/watch?v=QbHzxHsnAtk,123 - Robin Hanson on AI Skepticism,['AI Safety Reading Group'],2018-12-03T15:37:55Z,youtube,, 216807,https://www.youtube.com/watch?v=SOSULGb1ff0,242. Digital People Would Be an Even Bigger Deal,['AI Safety Reading Group'],2022-02-05T10:24:31Z,youtube,, 216834,https://www.youtube.com/watch?v=0jXpH8sfsyg,Are AI Safety Concerns Philosophically Sound?,['plex / Eric'],2022-06-10T10:31:56Z,youtube,, 216865,https://www.youtube.com/watch?v=_HzOs8V9AEs,Responsibility in the age of AI (Jeroen van den Hoven) - 1st AiTech Symposium,['AiTech - TU Delft'],2019-10-30T09:49:14Z,youtube,, 216894,https://www.youtube.com/watch?v=3HcVqQdmpu8,241 Finetuned Language Models are Zero shot Learners,['AI Safety Reading Group'],2022-01-14T06:25:23Z,youtube,, 216911,https://www.youtube.com/watch?v=RBRb_-CzNow,261. 
Is Power Seeking AI an Existential Threat,['AI Safety Reading Group'],2022-11-17T22:02:32Z,youtube,, 216930,https://www.youtube.com/watch?v=3fZvahTlPaQ,Aligning ML objectives with human values,['plex / Eric'],2022-06-10T10:31:56Z,youtube,, 216950,https://www.youtube.com/watch?v=p-6F4rhRYLQ,"More GPT-2, the 'writer' of Unicorn AI - Computerphile",['plex / Eric'],2022-06-10T10:31:56Z,youtube,, 216970,https://www.youtube.com/watch?v=Mqg3aTGNxZ0,'Sparks of AGI' - Bombshell GPT-4 Paper: Fully Read w/ 15 Revelations,['AI Explained'],2023-03-23T17:52:59Z,youtube,, 217007,https://www.youtube.com/watch?v=7Pcvdo4EJeo,DeepMind x UCL | Deep Learning Lectures | 11/12 | Modern Latent Variable Models,['Google DeepMind'],2022-03-29T12:02:17Z,youtube,, 217037,https://www.youtube.com/watch?v=7WaiYZLS94M,Training machine learning (ML) systems to answer open-ended questions | Andreas Stuhlmuller,['plex / Eric'],2022-06-10T10:31:56Z,youtube,, 217052,https://www.youtube.com/watch?v=WG_Krd-wGM4,Andrew Critch – Robust Cooperation of Bounded Agents – CSRBAI 2016,['Machine Intelligence Research Institute'],2016-10-21T20:26:46Z,youtube,, 217073,https://www.youtube.com/watch?v=FKl8kM4finE,DeepMind x UCL RL Lecture Series - Planning & models [8/13],['Google DeepMind'],2022-03-29T12:01:55Z,youtube,, 217103,https://www.youtube.com/watch?v=GuOL2pNgXIE,Current AI Misalignment and Potential Long Term Implications,['plex / Eric'],2022-06-10T10:31:56Z,youtube,, 217124,https://www.youtube.com/watch?v=L6xaQ501jEs,Reinforcement Learning 8: Advanced Topics in Deep RL,['Vulnerable Growth'],2022-05-06T05:22:25Z,youtube,, 217161,https://www.youtube.com/watch?v=JA4vW4oQavk,Musings on AI | Panel | EA Global: San Francisco 2017,['plex / Eric'],2022-06-10T10:31:56Z,youtube,, 217188,https://www.youtube.com/watch?v=3-GiNFRILJU,What Does (and Doesn't) AI Mean for Effective Altruism? | Owen Cotton-Barratt | EAGxBerlin 2017,['Vulnerable Growth'],2022-05-06T05:22:25Z,youtube,, 217211,https://www.youtube.com/watch?v=IB1OvoCNnWY,AI Safety - Computerphile,['plex / Eric'],2022-06-10T10:31:56Z,youtube,, 217220,https://www.youtube.com/watch?v=mij7nYPKIHo,retreat jsw talk 1,['Vulnerable Growth'],2022-05-06T05:22:25Z,youtube,, 217241,https://www.youtube.com/watch?v=KQeijCRJSog,Nick Bostrom's Q&A on Existential risk and AI,['plex / Eric'],2022-06-10T10:31:56Z,youtube,, 217271,https://www.youtube.com/watch?v=4l7Is6vOAOA,General AI Won't Want You To Fix its Code - Computerphile,['plex / Eric'],2022-06-10T10:31:56Z,youtube,, 217283,https://www.youtube.com/watch?v=4x3q1RbRphk,Free ML Bootcamp for Alignment #shorts,['Rob Miles'],2022-05-24T17:30:22Z,youtube,, 217292,https://www.youtube.com/watch?v=KHZVXao4qXs,RL Course by David Silver - Lecture 7: Policy Gradient Methods,['Google DeepMind'],2022-03-29T12:03:07Z,youtube,, 217321,https://www.youtube.com/watch?v=yf31XT1G1RQ,"Towards the future - DeepMind: The Podcast (S1, Ep7)",['Google DeepMind'],2020-02-27T14:12:12Z,youtube,, 217344,https://www.youtube.com/watch?v=VvJpkqKlv9Q,"AiTech Agora: Prof. 
Paul Pangaro: Cybernetics, AI, and Ethical Conversations",['AiTech - TU Delft'],2020-12-09T16:37:22Z,youtube,, 217364,https://www.youtube.com/watch?v=XaPVlGdj4Yk,Designing AI for Wellbeing (Derek Lomas),['AiTech - TU Delft'],2020-06-19T12:14:29Z,youtube,, 217385,https://www.youtube.com/watch?v=IQ4rQXfqM8M,"Organizing for Beneficial AGI: Lessons From The Industry | Jingying Yang, Bobi Rakova",['plex / Eric'],2022-06-10T10:31:56Z,youtube,, 217414,https://www.youtube.com/watch?v=u_dSUtp4eM8,Enter PaLM 2 (New Bard): Full Breakdown - 92 Pages Read and Gemini Before GPT 5? Google I/O,['AI Explained'],2023-05-11T17:30:37Z,youtube,, 217451,https://www.youtube.com/watch?v=vFDL-NxY610,The Windfall Clause: Sharing the benefits of advanced AI | Cullen O’Keefe,['plex / Eric'],2022-06-10T10:31:56Z,youtube,, 217473,https://www.youtube.com/watch?v=udZgpTZ20DI,Kostas Tsiakas - Designing Human-AI interactions using Explainable and Human-in-the-Loop AI,['AiTech - TU Delft'],2022-07-25T10:42:43Z,youtube,, 217511,https://www.youtube.com/watch?v=mYOg8_iPpFg,Well Founded and Human Compatible AI | Stuart Russell,['Vulnerable Growth'],2022-05-06T05:22:25Z,youtube,, 217538,https://www.youtube.com/watch?v=pY3GG0tsx5A,5:Predictive Models: Evan Hubinger 2023,['Evan Hubinger'],2023-05-13T15:57:17Z,youtube,, 217582,https://www.youtube.com/watch?v=87kLfzmYBy8,DeepMind x UCL | Deep Learning Lectures | 6/12 | Sequences and Recurrent Networks,['Google DeepMind'],2022-03-29T12:02:17Z,youtube,, 217612,https://www.youtube.com/watch?v=KDLYS2hPKBA,195 Indifference Methods for Managing Agent Rewards,['AI Safety Reading Group'],2020-08-07T05:21:37Z,youtube,, 217635,https://www.youtube.com/watch?v=sG0vggp6qsI,"AI, Robot - DeepMind: The Podcast (S1, Ep4)",['Google DeepMind'],2020-02-27T14:02:00Z,youtube,, 217661,https://www.youtube.com/watch?v=wVzuvf9D9BU,GPT 4 is Smarter than You Think: Introducing SmartGPT,['AI Explained'],2023-05-07T17:36:49Z,youtube,, 217689,https://www.youtube.com/watch?v=L5pUA3LsEaw,Why Not Just: Think of AGI Like a Corporation?,['Rob Miles'],2018-12-23T20:01:39Z,youtube,, 217704,https://www.youtube.com/watch?v=bScJdHX0Hac,Tech talk: Privacy in AI safety,['Vulnerable Growth'],2022-05-06T05:22:25Z,youtube,, 217722,https://www.youtube.com/watch?v=sw8IE3MX1SY,#58 Dr. Ben Goertzel - Artificial General Intelligence,['plex / Eric'],2022-06-10T10:31:56Z,youtube,, 217752,https://www.youtube.com/watch?v=jhJ0_nLGyiw,247. Eliciting Latent Knowledge 1,['AI Safety Reading Group'],2022-04-21T20:46:11Z,youtube,, 217768,https://www.youtube.com/watch?v=9jQedzUJRfA,What Are You Optimizing For? Aligning Recommender Systems to Human Values,['Vulnerable Growth'],2022-05-06T05:18:06Z,youtube,, 217790,https://www.youtube.com/watch?v=NmDRFwRczVQ,1:AGI Safety: Evan Hubinger 2023,['Evan Hubinger'],2023-05-13T15:56:21Z,youtube,, 217819,https://www.youtube.com/watch?v=t9uf9cuogBo,DeepMind x UCL RL Lecture Series - Model-free Control [6/13],['Google DeepMind'],2022-03-29T12:01:55Z,youtube,, 217849,https://www.youtube.com/watch?v=iOh7QUZGyiU,Deep Learning 1: Introduction to Machine Learning Based AI,['Google DeepMind'],2022-03-29T12:04:12Z,youtube,, 217887,https://www.youtube.com/watch?v=x_svqoZLA8o,269. 
Hard Problem of Corrigibility,['AI Safety Reading Group'],2023-03-30T21:07:36Z,youtube,, 217907,https://www.youtube.com/watch?v=Kzp2IT_mJPI,"Zhu Xiaohu, Center for Safe AGI | Ontological anti-crisis and AI safety",['Vulnerable Growth'],2022-05-06T05:22:25Z,youtube,, 217921,https://www.youtube.com/watch?v=f0s-uvvXvWg,DeepMind x UCL | Deep Learning Lectures | 10/12 | Unsupervised Representation Learning,['Google DeepMind'],2022-03-29T12:02:17Z,youtube,, 217946,https://www.youtube.com/watch?v=xvgbYhNQHSg,What Machines Shouldn’t Do (Scott Robbins),['AiTech - TU Delft'],2020-09-09T12:51:39Z,youtube,, 217965,https://www.youtube.com/watch?v=aheywElK_U8,How soon until machine intelligence? Oxford professor on Transcendence,['plex / Eric'],2022-06-10T10:31:56Z,youtube,, 217974,https://www.youtube.com/watch?v=ufQmq6X22rM,GPT 4: 9 Revelations (not covered elsewhere),['AI Explained'],2023-03-15T20:00:17Z,youtube,, 218004,https://www.youtube.com/watch?v=ojyYX4sX_w8,219. Misconceptions on discontinuous takeoff,['AI Safety Reading Group'],2021-03-25T21:38:44Z,youtube,, 218025,https://www.youtube.com/watch?v=uN3hQlxN_zo,The future of digitalization (Gerhard Fischer) - 1st AiTech Symposium,['AiTech - TU Delft'],2019-10-30T09:57:44Z,youtube,, 218049,https://www.youtube.com/watch?v=8M-6xuLjb94,260. How Might We Align Transformative AI If Its Developed Very Soon 3,['AI Safety Reading Group'],2022-11-03T21:44:42Z,youtube,, 218085,https://www.youtube.com/watch?v=Xvql4fGBoBA,144. Value Learning with Rohin Shah,['AI Safety Reading Group'],2019-05-15T23:05:36Z,youtube,, 218119,https://www.youtube.com/watch?v=QtlX2zusq_M,267. Lets think about slowing down AI 2,['AI Safety Reading Group'],2023-03-02T21:36:55Z,youtube,, 218143,https://www.youtube.com/watch?v=3kxTfBXZTds,The AI News You Might Have Missed This Week - Zuckerberg to Falcon w/ SPQR,['AI Explained'],2023-06-11T16:13:53Z,youtube,, 218166,https://www.youtube.com/watch?v=AiqgacILGcQ,121 - Artificial Stupidity,['AI Safety Reading Group'],2018-11-15T21:51:06Z,youtube,, 218178,https://www.youtube.com/watch?v=7FCEiCnHcbo,"The other ""Killer Robot Arms Race"" Elon Musk should worry about",['plex / Eric'],2022-06-10T10:31:56Z,youtube,, 218200,https://www.youtube.com/watch?v=_K02aeKNx3Q,245. Democratising Risk 2,['AI Safety Reading Group'],2022-03-18T06:25:29Z,youtube,, 218217,https://www.youtube.com/watch?v=iga_0WNQcTY,Bing is a LOT Smarter than ChatGPT (but still makes dangerous mistakes),['AI Explained'],2023-02-14T11:12:14Z,youtube,, 218239,https://www.youtube.com/watch?v=SS9DMr4VkbY,Ensuring safety and consistency in the age of machine learning | Chongli Qin | EAGxVirtual 2020,['plex / Eric'],2022-06-10T10:31:56Z,youtube,, 218273,https://www.youtube.com/watch?v=VqXulKAcjDk,244. Democratising Risk 1,['AI Safety Reading Group'],2022-03-03T22:01:44Z,youtube,, 218295,https://www.youtube.com/watch?v=jYdbVuxKBqM,Decision theory research at FRI | Johannes Treutlein & Caspar Oesterheld | EAGxBerlin 2017,['Vulnerable Growth'],2022-05-06T05:22:25Z,youtube,, 218319,https://www.youtube.com/watch?v=pARXQnX6QS8,Provably Beneficial AI | Stuart Russell,['Vulnerable Growth'],2022-05-06T06:44:31Z,youtube,, 218339,https://www.youtube.com/watch?v=q5I9lP2mf6M,Bing Just Upgraded YouTube (and changed the internet forever),['AI Explained'],2023-02-21T16:51:30Z,youtube,, 218362,https://www.youtube.com/watch?v=ad4bHtSXiFE,Stuart Armstrong Predicting AI... 
or failing to - Winter Intelligence - FHI Oxford,['Vulnerable Growth'],2022-05-06T05:22:25Z,youtube,, 218378,https://www.youtube.com/watch?v=qom0nxou4f4,2L Attention - Term Importance [rough early thoughts],['Vulnerable Growth'],2022-05-10T00:47:36Z,youtube,, 218395,https://www.youtube.com/watch?v=JQCIZW-hOX0,Existential Risk and AI Safety Talk by Pepe Bawagan,['Vulnerable Growth'],2022-05-06T05:22:25Z,youtube,, 218428,https://www.youtube.com/watch?v=tkqD9W5U9F4,Google Gemini: AlphaGo-GPT?,['AI Explained'],2023-06-28T17:08:43Z,youtube,, 218444,https://www.youtube.com/watch?v=VuxANJDXnIY,2L Attention - Results [rough early thoughts],['Vulnerable Growth'],2022-05-10T00:47:41Z,youtube,, 218453,https://www.youtube.com/watch?v=IU9cQ1JdC7Y,"Building AGI: Promising Approaches, Remaining Milestones, and Likely Obstacles | Yoshua Bengio",['Vulnerable Growth'],2022-05-06T06:17:07Z,youtube,, 218486,https://www.youtube.com/watch?v=gP4ZNUHdwp8,What can AGI do? I/O and Speed,['plex / Eric'],2022-06-10T10:31:56Z,youtube,, 218503,https://www.youtube.com/watch?v=j-_FvJ-XbWA,197. Ben Garfinkel on Scrutinizing Classic AI Risk Arguments 2,['AI Safety Reading Group'],2020-08-27T21:02:31Z,youtube,, 218526,https://www.youtube.com/watch?v=vzDm9IMyTp8,Stuart Russell (Full Interview),['plex / Eric'],2022-06-10T10:31:56Z,youtube,, 218570,https://www.youtube.com/watch?v=q6iqI2GIllI,"Rabbits, Faces & Hyperspaces - Computerphile",['plex / Eric'],2022-06-10T10:31:56Z,youtube,, 218586,https://www.youtube.com/watch?v=CzoVn8LUaDs,The Alignment Problem: Machine Learning and Human Values,['plex / Eric'],2022-06-10T10:31:56Z,youtube,, 218625,https://www.youtube.com/watch?v=Q-LrdgEuvFA,Sam Harris and Eliezer Yudkowsky - The A.I. in a Box thought experiment,['plex / Eric'],2022-06-10T10:31:56Z,youtube,, 218637,https://www.youtube.com/watch?v=whL0OXPkvWo,246. Democratising Risk 3,['AI Safety Reading Group'],2022-03-31T21:08:46Z,youtube,, 218660,https://www.youtube.com/watch?v=Dz-A92c_e4w,AI Safety Coordination : What Capacity & Projects are we Missing? | Jaan Tallinn,['plex / Eric'],2022-06-10T10:31:56Z,youtube,, 218700,https://www.youtube.com/watch?v=hSiuJuvTBoE,Jan Leike – General Reinforcement Learning – CSRBAI 2016,['Machine Intelligence Research Institute'],2016-10-21T20:26:46Z,youtube,, 218732,https://www.youtube.com/watch?v=R8HxF8Yi6nU,Apply Now for a Paid Residency on Interpretability #short,['Rob Miles'],2022-11-11T18:07:58Z,youtube,, 218741,https://www.youtube.com/watch?v=-oKuDRFHW_Y,MLP Neurons - Privileged vs Non-Privileged Basis [rough early thoughts],['Vulnerable Growth'],2022-05-10T00:48:31Z,youtube,, 218759,https://www.youtube.com/watch?v=3Sn0stHiNh4,Why Ain't You Rich? - Nate Soares,['Vulnerable Growth'],2022-05-06T05:22:25Z,youtube,, 218775,https://www.youtube.com/watch?v=IStsoGiCp5g,Stuart Armstrong Manchester AI debate,['Vulnerable Growth'],2022-05-06T05:22:25Z,youtube,, 218797,https://www.youtube.com/watch?v=nxYwNElXA1k,268. 
AI Practical Advice For The Worried,['AI Safety Reading Group'],2023-03-16T22:38:37Z,youtube,, 218833,https://www.youtube.com/watch?v=MhNcWxUs-PQ,DeepMind x UCL | Deep Learning Lectures | 12/12 | Responsible Innovation,['Google DeepMind'],2022-03-29T12:02:17Z,youtube,, 218866,https://www.youtube.com/watch?v=4al_hS_CCm8,AI governance landscape | Carrick Flynn | EA Global: San Francisco 2018,['plex / Eric'],2022-06-10T10:31:56Z,youtube,, 218892,https://www.youtube.com/watch?v=w65p_IIp6JY,"Why Does AI Lie, and What Can We Do About It?",['Rob Miles'],2022-12-09T20:10:37Z,youtube,, 218913,https://www.youtube.com/watch?v=VnRUvtcc53o,"Model Compression vs. Robustness of DNNs -- Can We Have Both? (Yanzhi Wang, FSL Workshop)",['Vulnerable Growth'],2022-05-06T05:19:42Z,youtube,, 218932,https://www.youtube.com/watch?v=HxN-WjqS2tA,EleutherAI IRG 220507: Interpretability Research for the Most Important Century,['plex / Eric'],2022-06-10T10:31:56Z,youtube,, 218975,https://www.youtube.com/watch?v=c8-OeGYRFpI,Possible Paths to Artificial General Intelligence,['plex / Eric'],2022-06-10T10:31:56Z,youtube,, 219003,https://www.youtube.com/watch?v=QKRDp8nUMtk,Open-source learning: A bargaining approach | Jesse Clifton | EA Global: London 2019,['plex / Eric'],2022-06-10T10:31:56Z,youtube,, 219022,https://www.youtube.com/watch?v=qhcBQrMfB8o,AI Safety Reading Group (Session 40),['AI Safety Reading Group'],2017-03-22T21:44:13Z,youtube,, 219046,https://www.youtube.com/watch?v=rUOowiSx6UU,1L Attention - Eigenvalue Analysis [rough early thoughts],['Vulnerable Growth'],2022-05-10T00:47:14Z,youtube,, 219061,https://www.youtube.com/watch?v=WLXuZtWoRcE,"Stuart Armstrong: Is AI an existential threat? We don't know, and we should work on it",['Vulnerable Growth'],2022-05-06T05:22:25Z,youtube,, 219073,https://www.youtube.com/watch?v=_5xkh-Rh6Ec,Rohin Shah on the State of AGI Safety Research in 2021,['plex / Eric'],2022-06-10T10:31:56Z,youtube,, 219098,https://www.youtube.com/watch?v=rT7sP5CMAU4,Ethan Perez - Making AI safe through debate,['plex / Eric'],2022-06-10T10:31:56Z,youtube,, 219131,https://www.youtube.com/watch?v=f3o1MW2G5Rs,"Do We Get the $100 Trillion AI Windfall? Sam Altman's Plans, Jobs & the Falling Cost of Intelligence",['AI Explained'],2023-04-06T16:15:27Z,youtube,, 219161,https://www.youtube.com/watch?v=1mZASvPwC5w,AI Safety with Buck Shlegeris,['plex / Eric'],2022-06-10T10:31:56Z,youtube,, 219186,https://www.youtube.com/watch?v=WjqHZEnXl9o,David Roodman - Economic history and the road to the singularity,['plex / Eric'],2022-06-10T10:31:56Z,youtube,, 219212,https://www.youtube.com/watch?v=fcLQ0v3JFVg,Andy Jones - AI Safety and the Scaling Hypothesis,['plex / Eric'],2022-06-10T10:31:56Z,youtube,, 219226,https://www.youtube.com/watch?v=weH9LKYNGWg,"8 New Ways to Use Bing's Upgraded 8 [now 20] Message Limit (ft. pdfs, quizzes, tables, scenarios...)",['AI Explained'],2023-03-04T15:27:17Z,youtube,, 219246,https://www.youtube.com/watch?v=QHEEQqh7wd4,Ben Garfinkel - Superhuman AI and the future of democracy and government,['plex / Eric'],2022-06-10T10:31:56Z,youtube,, 219276,https://www.youtube.com/watch?v=oAINVrP31jE,Demo Day 14: AI safety at Faculty talk,['Vulnerable Growth'],2022-05-05T21:22:51Z,youtube,, 219297,https://www.youtube.com/watch?v=p6_X5Ei9C9s,"Stanford Seminar - Emerging risks and opportunities from large language models, Tatsu Hashimoto",['Vulnerable Growth'],2022-05-10T22:53:59Z,youtube,, 219317,https://www.youtube.com/watch?v=wEoAZWmsCJk,189. 
The Off-Switch Game,['AI Safety Reading Group'],2020-06-29T10:59:04Z,youtube,, 219342,https://www.youtube.com/watch?v=btc-4vYyOSs,130 - Embedded Agency QA,['AI Safety Reading Group'],2019-01-30T22:12:21Z,youtube,, 219379,https://www.youtube.com/watch?v=r91Co5CeOCY,AI safety | Katja Grace | EA Global: San Francisco 2017,['plex / Eric'],2022-06-10T10:31:56Z,youtube,, 219403,https://www.youtube.com/watch?v=bNJRlx_BeDk,"Lectures by Olle Häggström on AI risk and long-term AI safety, part 1",['Vulnerable Growth'],2022-05-06T05:22:25Z,youtube,, 219440,https://www.youtube.com/watch?v=hgryE69oESg,Would We Prefer AGI To Be Conscious?,['plex / Eric'],2022-06-10T10:31:56Z,youtube,, 219462,https://www.youtube.com/watch?v=uQN0wqzy164,"The Inside View #3–Evan Hubinger—Takeoff speeds, Risks from learned optimization & Interpretability",['plex / Eric'],2022-06-10T10:31:56Z,youtube,, 219491,https://www.youtube.com/watch?v=2Avgqeelbjk,115. Discussion with Alexander Turner,['AI Safety Reading Group'],2018-10-03T21:12:33Z,youtube,, 219514,https://www.youtube.com/watch?v=hAxGLNUYaG8,243. A General Language Assistant as a Laboratory for Alignment,['AI Safety Reading Group'],2022-02-17T21:41:34Z,youtube,, 219541,https://www.youtube.com/watch?v=3Rs_7E8pReU,Garrett Jones - Group vs Individual intelligence and AGI,['Vulnerable Growth'],2022-05-06T05:22:25Z,youtube,, 219562,https://www.youtube.com/watch?v=i72mHaWKKUE,Understanding values – from the perspective of personal support technology (Myrthe Tielman),['AiTech - TU Delft'],2020-07-08T17:08:29Z,youtube,, 219585,https://www.youtube.com/watch?v=zc4gKH6xTBw,Could you Stop a Super Intelligent AI?,['plex / Eric'],2022-06-10T10:31:56Z,youtube,, 219597,https://www.youtube.com/watch?v=9739Tg8no24,Cooperative Multi-Agent Reinforcement Learning,['Vulnerable Growth'],2022-05-06T05:22:25Z,youtube,, 219619,https://www.youtube.com/watch?v=R-VNlXJpAIQ,Risks and Benefits of Advanced Artificial Intelligence | Max Tegmark | EA Global: SF 2016,['plex / Eric'],2022-06-10T10:31:56Z,youtube,, 219644,https://www.youtube.com/watch?v=funu_qgpVWk,Towards A Global Community Of Shared Future in AGI | Brian Tse,['plex / Eric'],2022-06-10T10:31:56Z,youtube,, 219672,https://www.youtube.com/watch?v=6NoTuqDAkfg,"Can GPT 4 Prompt Itself? MemoryGPT, AutoGPT, Jarvis, Claude-Next [10x GPT 4!] 
and more...",['AI Explained'],2023-04-09T16:49:10Z,youtube,, 219699,https://www.youtube.com/watch?v=TuXl-iidnFY,Yudkowsky vs Hanson — Singularity Debate,['plex / Eric'],2022-06-10T10:31:56Z,youtube,, 219723,https://www.youtube.com/watch?v=eHnS6WsGkEE,Opportunities for Cooperation on AI at Academic and Corporate Levels,['plex / Eric'],2022-06-10T10:31:56Z,youtube,, 219765,https://www.youtube.com/watch?v=-vsYtevJ2bc,Current work in AI alignment | Paul Christiano | EA Global: San Francisco 2019,['Vulnerable Growth'],2022-05-06T06:39:20Z,youtube,, 219803,https://www.youtube.com/watch?v=iY_6cnvBxN8,OUT OF CONTROL - The design of AI in everyday life (Elisa Giaccardi) - 1st AiTech Symposium,['AiTech - TU Delft'],2019-10-29T16:32:40Z,youtube,, 219813,https://www.youtube.com/watch?v=-mhBD8Frkc4,Reinforcement Learning 9: A Brief Tour of Deep RL Agents,['Vulnerable Growth'],2022-05-06T05:22:25Z,youtube,, 219856,https://www.youtube.com/watch?v=_DbyjSbczQw,250 A Generalist Agent 1,['AI Safety Reading Group'],2022-06-03T05:05:12Z,youtube,, 219878,https://www.youtube.com/watch?v=ALdsqfrLieg,Deep Learning 5: Optimization for Machine Learning,['Google DeepMind'],2022-03-29T12:04:12Z,youtube,, 219906,https://www.youtube.com/watch?v=sPpFiwYqvq4,264. Our Approach to Alignment Research,['AI Safety Reading Group'],2023-01-05T22:20:57Z,youtube,, 219940,https://www.youtube.com/watch?v=FceQxb96GO8,How Well Can GPT-4 See? And the 5 Upgrades That Are Next,['AI Explained'],2023-03-26T16:10:10Z,youtube,, 219984,https://www.youtube.com/watch?v=m7sdlXvT2Lk,"Towards AI You Can Rely On (Aleksander Madry, Foundations of Safe Learning Workshop)",['Vulnerable Growth'],2022-05-06T05:19:52Z,youtube,, 220005,https://www.youtube.com/watch?v=OkH4QF7TEPU,AI Safety Landscape - Panel 1: The Challenge of Achieving Consensus,['plex / Eric'],2022-06-10T10:31:56Z,youtube,, 220024,https://www.youtube.com/watch?v=l_pqGj-0g6A,222. Robot Responsibilities,['AI Safety Reading Group'],2021-04-29T19:46:37Z,youtube,, 220054,https://www.youtube.com/watch?v=1mExA_xdgnA,Re-deciphering China’s AI dream | Jeffrey Ding | EA Global: London 2019,['plex / Eric'],2022-06-10T10:31:56Z,youtube,, 220077,https://www.youtube.com/watch?v=XAakObnisLA,Seminar Series with Stuart Russell: If We Succeed,['Vulnerable Growth'],2022-05-06T05:22:25Z,youtube,, 220113,https://www.youtube.com/watch?v=9ZTKEDrDDi4,Superintelligence -- Andrew Critch,['Vulnerable Growth'],2022-05-06T05:22:25Z,youtube,, 220136,https://www.youtube.com/watch?v=JRuNA2eK7w0,Is AI Safety a Pascal's Mugging?,['Rob Miles'],2019-05-16T14:11:07Z,youtube,, 220155,https://www.youtube.com/watch?v=YicCAgjsky8,Eliezer Yudkowsky - Difficulties of Artificial General Intelligence Alignment,['Vulnerable Growth'],2022-05-06T05:22:25Z,youtube,, 220187,https://www.youtube.com/watch?v=AVlY8Gzk_GA,"Near-term AI security risks, and what to do about them | Shahar Avin | EA Global: London 2017",['plex / Eric'],2022-06-10T10:31:56Z,youtube,, 220221,https://www.youtube.com/watch?v=py5VRagG6t8,EXTRA BITS: AI Gridworlds - Computerphile,['plex / Eric'],2022-06-10T10:31:56Z,youtube,, 220246,https://www.youtube.com/watch?v=twMqHDXO29U,Apply to Study AI Safety Now! 
#shorts,['Rob Miles'],2023-04-28T16:37:28Z,youtube,, 220255,https://www.youtube.com/watch?v=92qDfT8pENs,Reward Hacking: Concrete Problems in AI Safety Part 3,['Rob Miles'],2017-08-12T19:24:08Z,youtube,, 220271,https://www.youtube.com/watch?v=z_WhxqCWJ4s,AI Safety Reading Group (Session 42),['Vulnerable Growth'],2022-05-06T04:20:36Z,youtube,, 220308,https://www.youtube.com/watch?v=1zroYiCkHiY,204. Universal Intelligence,['AI Safety Reading Group'],2020-10-22T20:46:33Z,youtube,, 220329,https://www.youtube.com/watch?v=1BopR9PPXsQ,Regulating AI for the safety of humanity | Ayush Patel | TEDxQESchool,['plex / Eric'],2022-06-10T10:31:56Z,youtube,, 220355,https://www.youtube.com/watch?v=vW89UcvMfjQ,Jan Leike - AI alignment at OpenAI,['plex / Eric'],2022-06-10T10:31:56Z,youtube,, 220386,https://www.youtube.com/watch?v=cCUOVSE71fw,"Let's get physical - DeepMind: The Podcast (S2, Ep4)",['Google DeepMind'],2022-02-08T21:38:36Z,youtube,, 220416,https://www.youtube.com/watch?v=lEUW67_ulgc,6:How to Build a Safe Advanced AGI?: Evan Hubinger 2023,['Evan Hubinger'],2023-05-13T15:57:29Z,youtube,, 220445,https://www.youtube.com/watch?v=J1pwHcZJlWA,Governing Transformative AI,['Vulnerable Growth'],2022-05-06T05:22:25Z,youtube,, 220489,https://www.youtube.com/watch?v=VvLncg14Jc0,AiTech Agora - Jie Yang: ARCH - Know What Your Machine Doesn't Know,['AiTech - TU Delft'],2021-05-19T20:02:18Z,youtube,, 220513,https://www.youtube.com/watch?v=_aUq7lmMfxo,DeepMind x UCL | Deep Learning Lectures | 4/12 | Advanced Models for Computer Vision,['Google DeepMind'],2022-03-29T12:02:17Z,youtube,, 220544,https://www.youtube.com/watch?v=7crsHGsh3p8,1L Attention - Theory [rough early thoughts],['Vulnerable Growth'],2022-05-10T00:47:04Z,youtube,, 220555,https://www.youtube.com/watch?v=w4g5mCy5yr8,The future of surveillance | Ben Garfinkel | EA Global: San Francisco 2018,['plex / Eric'],2022-06-10T10:31:56Z,youtube,, 220577,https://www.youtube.com/watch?v=MALGrKvXql4,9 of the Best Bing (GPT 4) Prompts,['AI Explained'],2023-02-23T15:20:14Z,youtube,, 220598,https://www.youtube.com/watch?v=ecUodmQMlBs,"$100,000 for Tasks Where Bigger AIs Do Worse Than Smaller Ones #short",['Rob Miles'],2022-10-14T11:05:51Z,youtube,, 220610,https://www.youtube.com/watch?v=Ff15lbI1V9M,Science Saturday: Dreaming of an Artificial Intelligence | Eliezer Yudkowsky & Jaron Lanier,['plex / Eric'],2022-06-10T10:31:56Z,youtube,, 220631,https://www.youtube.com/watch?v=N7tZ7iRmmQ8,AISafety.com Reading Group Session 79,['AI Safety Reading Group'],2018-01-17T21:11:10Z,youtube,, 220645,https://www.youtube.com/watch?v=nr1lHuFeq5w,Scalable Supervision: Concrete Problems in AI Safety Part 5,['Rob Miles'],2017-11-29T21:47:29Z,youtube,, 220662,https://www.youtube.com/watch?v=W3jbp0QiG7Y,AiTech Agora: Emma van Zoelen - Human AI Co-Learning Mutual Understanding and Smooth Collaboration,['AiTech - TU Delft'],2021-12-10T11:59:29Z,youtube,, 220688,https://www.youtube.com/watch?v=lGux74w6H9g,"Morality, uncertainty, and autonomous systems (Luciano Siebert) - 1st AiTech Symposium",['AiTech - TU Delft'],2019-10-29T15:46:12Z,youtube,, 220707,https://www.youtube.com/watch?v=eaWfWoVUTEw,DeepMind x UCL RL Lecture Series - Model-free Prediction [5/13],['Google DeepMind'],2022-03-29T12:01:55Z,youtube,, 220724,https://www.youtube.com/watch?v=eVRSfo7KIrc,EleutherAI Interpretability Reading Group 220212: Interpreting across time,['Vulnerable Growth'],2022-05-15T19:41:48Z,youtube,, 220752,https://www.youtube.com/watch?v=qk3bQrSfUzs,Robin Hanson on AI Takeoff Scenarios - AI Go Foom?,['plex 
/ Eric'],2022-06-10T10:31:56Z,youtube,, 220774,https://www.youtube.com/watch?v=xLAyTZzl65Y,The Control is in the Interaction (Jered Vroon),['AiTech - TU Delft'],2020-09-16T13:34:08Z,youtube,, 220799,https://www.youtube.com/watch?v=Qgd3OK5DZWI,"A systems neuroscience approach to building AGI - Demis Hassabis, Singularity Summit 2010",['Vulnerable Growth'],2022-05-06T05:22:25Z,youtube,, 220824,https://www.youtube.com/watch?v=3qYmLRqemg4,Sino-Western cooperation in AI safety | Brian Tse | EA Global: San Francisco 2019,['plex / Eric'],2022-06-10T10:31:56Z,youtube,, 220848,https://www.youtube.com/watch?v=gdKMG6kTl6Y,Quantilizers: AI That Doesn't Try Too Hard,['plex / Eric'],2022-06-10T10:31:56Z,youtube,, 220869,https://www.youtube.com/watch?v=mEt1Wfl1jvo,"Eliezer Yudkowsky on ""Three Major Singularity Schools""",['Vulnerable Growth'],2022-05-06T05:22:25Z,youtube,, 220883,https://www.youtube.com/watch?v=hMbxmRyDw5M,Reinforcement Learning 3: Markov Decision Processes and Dynamic Programming,['Vulnerable Growth'],2022-05-06T05:22:25Z,youtube,, 220904,https://www.youtube.com/watch?v=8OpW5qboDDs,"'Pause Giant AI Experiments' - Letter Breakdown w/ Research Papers, Altman, Sutskever and more",['AI Explained'],2023-03-29T17:46:41Z,youtube,, 220932,https://www.youtube.com/watch?v=8AvIErXFoH8,What's the Use of Utility Functions?,['plex / Eric'],2022-06-10T10:31:56Z,youtube,, 220948,https://www.youtube.com/watch?v=eaYIU6YXr3w,Why Not Just: Raise AI Like Kids?,['Rob Miles'],2017-07-22T13:58:34Z,youtube,, 220962,https://www.youtube.com/watch?v=d6pIk-JxfGw,AI and Value Alignment | Jaan Tallinn,['plex / Eric'],2022-06-10T10:31:56Z,youtube,, 220981,https://www.youtube.com/watch?v=qPKrTap4gPE,"185. If I were a Well-intentioned AI 3,4",['AI Safety Reading Group'],2020-05-21T19:19:32Z,youtube,, 221011,https://www.youtube.com/watch?v=wnqs-B4ZmVo,What is AI safety? | ZDNet,['plex / Eric'],2022-06-10T10:31:56Z,youtube,, 221034,https://www.youtube.com/watch?v=pZsHZDA9TJU,"#034 Eray Özkural- AGI, Simulations & Safety",['plex / Eric'],2022-06-10T10:31:56Z,youtube,, 221076,https://www.youtube.com/watch?v=NX6eYMZhbZY,"Out of the lab (incl. AlphaFold) - DeepMind: The Podcast (S1, Ep5)",['Google DeepMind'],2020-03-04T14:06:03Z,youtube,, 221101,https://www.youtube.com/watch?v=nKJlF-olKmg,9 Examples of Specification Gaming,['Rob Miles'],2020-04-29T16:41:20Z,youtube,, 221117,https://www.youtube.com/watch?v=E2aZiejw-8A,"8 Signs It's The Future: Thought-to-Text, Nvidia Text-to-Video, Character AI, and P(Doom) @Ted",['AI Explained'],2023-04-20T16:39:35Z,youtube,, 221156,https://www.youtube.com/watch?v=8zAP2qWAsKg,DeepMind x UCL | Deep Learning Lectures | 7/12 | Deep Learning for Natural Language Processing,['Google DeepMind'],2022-03-29T12:02:17Z,youtube,, 221183,https://www.youtube.com/watch?v=7zMLbif0dvs,182. The Offence-Defence Balance of Scientific Knowledge,['AI Safety Reading Group'],2020-04-30T21:03:40Z,youtube,, 221203,https://www.youtube.com/watch?v=cfUglOE_N8I,Extending Value Sensitive Design To Artificial Intelligence (Steven Umbrello),['AiTech - TU Delft'],2020-10-21T13:23:44Z,youtube,, 221222,https://www.youtube.com/watch?v=H7b_2NCJk1E,AISafety.com Reading Group Session 79 (fixed),['AI Safety Reading Group'],2018-01-18T20:41:24Z,youtube,, 221240,https://www.youtube.com/watch?v=9nktr1MgS-A,Stop Button Solution? 
- Computerphile,['plex / Eric'],2022-06-10T10:31:56Z,youtube,, 221261,https://www.youtube.com/watch?v=o5R5U80IfJs,Roger Grosse | How can deep learning research inform long-term AI safety?,['plex / Eric'],2022-06-10T10:31:56Z,youtube,, 221281,https://www.youtube.com/watch?v=jcVkYMNdQIA,AI Safety via Debates #RB11,['plex / Eric'],2022-06-10T10:31:56Z,youtube,, 221300,https://www.youtube.com/watch?v=eM6IBYVqXEA,Reinforcement Learning 2: Exploration and Exploitation,['Vulnerable Growth'],2022-05-06T05:22:25Z,youtube,, 221321,https://www.youtube.com/watch?v=01AjTeOXGZU,EleutherAI Interpretability Reading Group 220423: In-context learning and induction heads,['Vulnerable Growth'],2022-05-15T19:42:19Z,youtube,, 221342,https://www.youtube.com/watch?v=_da0i5S-SSU,DeepMind: The Podcast (S2 trailer),['Google DeepMind'],2022-01-06T17:01:07Z,youtube,, 221365,https://www.youtube.com/watch?v=Aq9u7slW5z8,Bing Chat (GPT 4) Tutorial: 12 Steps for Beginners to Advanced - Crash Course!,['AI Explained'],2023-02-20T16:07:24Z,youtube,, 221381,https://www.youtube.com/watch?v=D6peN9LiTWA,Eliezer Yudkowsky on Intelligence Explosion,['plex / Eric'],2022-06-10T10:31:56Z,youtube,, 221404,https://www.youtube.com/watch?v=h0962biiZa4,Superintelligence: Science or Fiction? | Elon Musk & Other Great Minds,['plex / Eric'],2022-06-10T10:31:56Z,youtube,, 221442,https://www.youtube.com/watch?v=1w9gYhhXzwI,203. Roadmap to a Roadmap,['AI Safety Reading Group'],2020-10-15T21:18:00Z,youtube,, 221466,https://www.youtube.com/watch?v=QhimXPvM8DE,The AI arms race and dangerous technology narratives | Tom Westgarth | TEDxWarwickSalon,['plex / Eric'],2022-06-10T10:31:56Z,youtube,, 221488,https://www.youtube.com/watch?v=HrV19SjKUss,AI Alignment & AGI Fire Alarm - Connor Leahy,['plex / Eric'],2022-06-10T10:31:56Z,youtube,, 221515,https://www.youtube.com/watch?v=0g4j2k_Ggc4,RL Course by David Silver - Lecture 5: Model Free Control,['Google DeepMind'],2022-03-29T12:03:07Z,youtube,, 221542,https://www.youtube.com/watch?v=zfx-9sq4jlE,275. Why I am not as much of a doomer as some people,['AI Safety Reading Group'],2023-06-30T09:52:09Z,youtube,, 221571,https://www.youtube.com/watch?v=Bgz9eMcE5Do,Danijar Hafner - Gaming our way to AGI,['plex / Eric'],2022-06-10T10:31:56Z,youtube,, 221593,https://www.youtube.com/watch?v=Ao4jwLwT36M,AI That Doesn't Try Too Hard - Maximizers and Satisficers,['plex / Eric'],2022-06-10T10:31:56Z,youtube,, 221614,https://www.youtube.com/watch?v=3jSMe0owGMs,Existential Risks: AI (talk at Oxford Geek Night 25),['Vulnerable Growth'],2022-05-06T05:22:25Z,youtube,, 221625,https://www.youtube.com/watch?v=n_eCir24F1k,"Gillian Hadfield ""Incomplete Contracts and AI Alignment"" (Disc: Paul Milgrom)",['Vulnerable Growth'],2022-05-06T05:22:25Z,youtube,, 221649,https://www.youtube.com/watch?v=AkLkZgsaKp4,Owain Evans | Truthful language models and AI alignment,['Vulnerable Growth'],2023-03-11T18:18:24Z,youtube,, 221673,https://www.youtube.com/watch?v=H4VGSYGvJiA,Deep Learning 8: Unsupervised learning and generative models,['Google DeepMind'],2022-03-29T12:04:12Z,youtube,, 221709,https://www.youtube.com/watch?v=bSTYiIgjgrk,Fireside chat: AI governance | Markus Anderljung & Ben Garfinkel | EA Global: Virtual 2020,['plex / Eric'],2022-06-10T10:31:56Z,youtube,, 221740,https://www.youtube.com/watch?v=2afdrE81yvg,220. 
June Ku on MetaEthical.AI,['AI Safety Reading Group'],2021-04-13T06:00:36Z,youtube,, 221759,https://www.youtube.com/watch?v=Zef-mIKjHAk,The AI revolution and international politics | Allan Dafoe | EAG 2017 Boston,['plex / Eric'],2022-06-10T10:31:56Z,youtube,, 221781,https://www.youtube.com/watch?v=KXWlhbEDs6I,AiTech Agora: Lotje Siffels & Iris Muis - Zeitgeist and data: the danger of innovation,['AiTech - TU Delft'],2022-02-17T12:36:08Z,youtube,, 221806,https://www.youtube.com/watch?v=sx8JkdbNgdU,AI Toy Control Problem,['Vulnerable Growth'],2022-05-06T05:22:25Z,youtube,, 221827,https://www.youtube.com/watch?v=KnBGR6UWKEc,(Duplicate?) 187. Stuart Armstrong and Scott Garrabrant on If I were a Well-intentioned AI,['AI Safety Reading Group'],2020-06-11T11:33:30Z,youtube,, 221857,https://www.youtube.com/watch?v=_SmSRLtZqEw,Brian Christian - The alignment problem,['plex / Eric'],2022-06-10T10:31:56Z,youtube,, 221892,https://www.youtube.com/watch?v=R6Mzt4GwQnQ,How I think students should orient to AI safety | Buck Shlegeris | EA Student Summit 2020,['Vulnerable Growth'],2022-05-06T05:16:04Z,youtube,, 221923,https://www.youtube.com/watch?v=AjyM-f8rDpg,Concrete Problems in AI Safety (Paper) - Computerphile,['plex / Eric'],2022-06-10T10:31:56Z,youtube,, 221960,https://www.youtube.com/watch?v=xLYE11yW-hQ,Should We Build Superintelligence?,['plex / Eric'],2022-06-10T10:31:56Z,youtube,, 221988,https://www.youtube.com/watch?v=hAKMMdapqWc,248 Eliciting Latent Knowledge 2,['AI Safety Reading Group'],2022-05-06T04:56:54Z,youtube,, 222016,https://www.youtube.com/watch?v=JVVj9Dui9es,187. Stuart Armstrong and Scott Garrabrant: If I were a Well-intentioned AI,['AI Safety Reading Group'],2020-06-11T13:41:58Z,youtube,, 222042,https://www.youtube.com/watch?v=QhVMCKeQ2oQ,A.I. alarms - Sam Harris and Eliezer Yudkowsky,['plex / Eric'],2022-06-10T10:31:56Z,youtube,, 222065,https://www.youtube.com/watch?v=AIiwuClvH6k,DeepMind x UCL | Deep Learning Lectures | 8/12 | Attention and Memory in Deep Learning,['Google DeepMind'],2022-03-29T12:02:17Z,youtube,, 222108,https://www.youtube.com/watch?v=HYtJdflujjc,Win $50k for Solving a Single AI Problem? #Shorts,['Rob Miles'],2022-02-08T19:17:38Z,youtube,, 222118,https://www.youtube.com/watch?v=O8GUH0_htRM,"GPT 4 Got Upgraded - Code Interpreter (ft. Image Editing, MP4s, 3D Plots, Data Analytics and more!)",['AI Explained'],2023-05-20T17:25:52Z,youtube,, 222135,https://www.youtube.com/watch?v=vv-jKO-vlcU,Provably Beneficial AI and the Problem of Control,['Vulnerable Growth'],2022-05-06T05:22:25Z,youtube,, 222152,https://www.youtube.com/watch?v=wc0gGRNoenI,Trajectory optimization for urban driving among decision-making vehicles (Javier Alonso-Mora),['AiTech - TU Delft'],2020-10-01T09:42:29Z,youtube,, 222183,https://www.youtube.com/watch?v=JO0LwmIlWw0,Deep Learning 2: Introduction to TensorFlow,['Google DeepMind'],2022-03-29T12:04:12Z,youtube,, 222221,https://www.youtube.com/watch?v=sq6UKF8CwJ0,"Gillian Hadfield, University of Toronto | Incomplete Contracts & AI Alignment",['plex / Eric'],2022-06-10T10:31:56Z,youtube,, 222254,https://www.youtube.com/watch?v=55AMF2z5dJU,191. 
Pessimism about Unknown Unknowns inspires Conservatism,['AI Safety Reading Group'],2020-07-09T20:07:25Z,youtube,, 222272,https://www.youtube.com/watch?v=ljHFmznqkYM,Carl Shulman Could we use untrustworthy human brain emulations to make trustworthy ones,['Vulnerable Growth'],2022-05-06T05:22:25Z,youtube,, 222302,https://www.youtube.com/watch?v=u6ppY0OF6HE,"249, MIRI Announces New "Death With Dignity" Strategy",['AI Safety Reading Group'],2022-05-20T04:43:58Z,youtube,, 222332,https://www.youtube.com/watch?v=o7vHVWsdys4,Agora: M. Brandão: Fairness and explainability in robot motion,['AiTech - TU Delft'],2021-03-31T12:38:33Z,youtube,, 222361,https://www.youtube.com/watch?v=lLRWZZF3ctw,"Bad AI Predictions: Bard Upgrade, 2 Years to AI Auto-Money, OpenAI Investigation and more",['AI Explained'],2023-07-17T15:30:19Z,youtube,, 222384,https://www.youtube.com/watch?v=QVuj6ZFxw14,Stefano Albrecht – Learning to Distinguish Between Belief and Truth – CSRBAI 2016,['Machine Intelligence Research Institute'],2016-10-21T20:26:46Z,youtube,, 222405,https://www.youtube.com/watch?v=9eWvZLYcous,Eliezer Yudkowsky - Less Wrong Q&A (4/30),['plex / Eric'],2022-06-10T10:31:56Z,youtube,, 222427,https://www.youtube.com/watch?v=1KX_DFM-DRY,Miles Brundage Limitations and Risks of Machine Ethics FHI Winter Intelligence,['Vulnerable Growth'],2022-05-06T05:22:25Z,youtube,, 222464,https://www.youtube.com/watch?v=CPmLjqmeNJo,Virtual Attention heads [rough early thoughts],['Vulnerable Growth'],2022-05-10T00:47:46Z,youtube,, 222480,https://www.youtube.com/watch?v=oJMRnOAB9dk,Evan Hubinger | Risks from Learned Optimization | UCL AI Society,['Vulnerable Growth'],2022-05-15T04:30:44Z,youtube,, 222510,https://www.youtube.com/watch?v=ivexBzomPv4,What's Behind the ChatGPT History Change? How You Can Benefit + The 6 New Developments This Week,['AI Explained'],2023-04-26T16:26:34Z,youtube,, 222537,https://www.youtube.com/watch?v=GN1wxEUgA_4,273. Strategizing in Large Language Models,['AI Safety Reading Group'],2023-06-11T09:30:58Z,youtube,, 222554,https://www.youtube.com/watch?v=_H-uxRq2w-c,Interactions between the AI Control Problem and the Governance Problem | Nick Bostrom,['Vulnerable Growth'],2022-05-06T06:45:31Z,youtube,, 222587,https://www.youtube.com/watch?v=xMFQErzPvYA,"Bas Steunebrink – About Understanding, Meaning, and Values – CSRBAI 2016",['Machine Intelligence Research Institute'],2016-10-21T20:26:46Z,youtube,, 222618,https://www.youtube.com/watch?v=ExrXs7PCQpU,"AI and neuroscience: The virtuous circle - DeepMind: The Podcast (S1, Ep1)",['Google DeepMind'],2020-02-27T10:39:10Z,youtube,, 222653,https://www.youtube.com/watch?v=z6atNBhItBs,The Alignment Problem: Machine Learning and Human Values with Brian Christian,['plex / Eric'],2022-06-10T10:31:56Z,youtube,, 222692,https://www.youtube.com/watch?v=hWb09uq6Zlk,180. 
If I Were a Well-Intentioned AI 1,['AI Safety Reading Group'],2020-04-15T20:22:51Z,youtube,, 222715,https://www.youtube.com/watch?v=1wAgBaJgEsg,Are AI Risks like Nuclear Risks?,['plex / Eric'],2022-06-10T10:31:56Z,youtube,, 222752,https://www.youtube.com/watch?v=ZV9YWDDwC3A,R Dobbe: Towards a Systematic and Realistic Practice for Developing Safe and Democratically Sound AI,['AiTech - TU Delft'],2021-04-07T11:13:50Z,youtube,, 222772,https://www.youtube.com/watch?v=Xrxrd8nl4YI,Reinforcement Learning 7: Planning and Models,['Vulnerable Growth'],2022-05-06T05:22:25Z,youtube,, 222804,https://www.youtube.com/watch?v=OkAwsrHMTgM,"Go to Zero - DeepMind: The Podcast (S1, Ep2)",['Google DeepMind'],2020-02-27T10:50:09Z,youtube,, 222827,https://www.youtube.com/watch?v=nnxHlg-2WgA,Reinforcement Learning 4: Model-Free Prediction and Control,['Vulnerable Growth'],2022-05-06T05:22:25Z,youtube,, 222857,https://www.youtube.com/watch?v=siDtNqlPoLk,DeepMind x UCL RL Lecture Series - Deep Reinforcement Learning #2 [13/13],['Google DeepMind'],2022-03-29T12:01:55Z,youtube,, 222879,https://www.youtube.com/watch?v=h9LaSfq64E8,Would you have warning before artificial superintelligence? Oxford professor on Transcendence,['plex / Eric'],2022-06-10T10:31:56Z,youtube,, 222889,https://www.youtube.com/watch?v=A7dlTO33qd8,254 Lets see you write that Corrigibility Tag,['AI Safety Reading Group'],2022-08-12T08:11:37Z,youtube,, 222909,https://www.youtube.com/watch?v=7ZFergpawWM,"Lectures by Olle Häggström on AI risk and long-term AI safety, part 3",['Vulnerable Growth'],2022-05-06T05:22:25Z,youtube,, 222940,https://www.youtube.com/watch?v=ld28AU7DDB4,Reinforcement Learning 10: Classic Games Case Study,['Vulnerable Growth'],2022-05-06T05:22:25Z,youtube,, 222964,https://www.youtube.com/watch?v=J5_-vmhsrv0,AiTech Agora - Stefan Buijsman: Defining explanation and explanatory depth in XAI,['AiTech - TU Delft'],2021-07-16T14:25:27Z,youtube,, 222984,https://www.youtube.com/watch?v=08rt1-DdlNM,AI Safety Reading Group (Session 39),['AI Safety Reading Group'],2017-03-15T21:00:32Z,youtube,, 223010,https://www.youtube.com/watch?v=jn8eqpMLFlQ,How to build a safe advanced AI (Evan Hubinger) | What's up in AI safety? (Asya Bergal),['Vulnerable Growth'],2022-05-15T04:31:25Z,youtube,, 223044,https://www.youtube.com/watch?v=_kNvExbheNA,196. Ben Garfinkel on Scrutinizing Classic AI Risk Arguments,['AI Safety Reading Group'],2020-08-13T21:20:14Z,youtube,, 223065,https://www.youtube.com/watch?v=v9M2Ho9I9Qo,How to Keep Improving When You're Better Than Any Teacher - Iterated Distillation and Amplification,['Rob Miles'],2019-03-11T12:14:21Z,youtube,, 223078,https://www.youtube.com/watch?v=EUjc1WuyPT8,"Eliezer Yudkowsky – AI Alignment: Why It's Hard, and Where to Start",['plex / Eric'],2022-07-06T12:32:54Z,youtube,, 223106,https://www.youtube.com/watch?v=7i_f4Kbpgn4,Sharing the Benefits of AI: The Windfall Clause,['plex / Eric'],2022-06-10T10:31:56Z,youtube,, 223121,https://www.youtube.com/watch?v=TWHcK-BNo1w,AI safety needs social scientists | Amanda Askell | EA Global: London 2018,['plex / Eric'],2022-06-10T10:31:56Z,youtube,, 223139,https://www.youtube.com/watch?v=LqvCPmbg5KI,Infrastructure - Garcon [rough early thoughts],['Vulnerable Growth'],2022-05-10T00:48:42Z,youtube,, 223156,https://www.youtube.com/watch?v=ql4Y0-jEKhw,87. 
An Untrollable Mathematician,['AI Safety Reading Group'],2018-03-14T21:29:05Z,youtube,, 223165,https://www.youtube.com/watch?v=DlVG07G1m2w,"AI Ethics, Bostrom and Yudkowsky",['Vulnerable Growth'],2022-05-06T05:22:25Z,youtube,, 223202,https://www.youtube.com/watch?v=3xSZ2q8OpiU,Who Or What Should Be In Control of Artificial General Intelligence?,['plex / Eric'],2022-06-10T10:31:56Z,youtube,, 223238,https://www.youtube.com/watch?v=PLCaPMBnsLc,225. Michael Cohen on Intelligence and Unambitiousness,['AI Safety Reading Group'],2021-05-27T21:48:06Z,youtube,, 223257,https://www.youtube.com/watch?v=V527HCWfBCU,Safe Exploration: Concrete Problems in AI Safety Part 6,['Rob Miles'],2018-09-21T11:20:53Z,youtube,, 223287,https://www.youtube.com/watch?v=8wYNsoycM1U,MLP Neurons - 40L Preliminary Investigation [rough early thoughts],['Vulnerable Growth'],2022-05-10T00:48:37Z,youtube,, 223306,https://www.youtube.com/watch?v=8XWbPDvKgM0,257. Where I agree and Disagree with Eliezer 3,['AI Safety Reading Group'],2022-09-22T21:45:39Z,youtube,, 223333,https://www.youtube.com/watch?v=FeoyCUf9MkQ,To use the human or not to use the human (Erwin Boer) - 1st AiTech Symposium,['AiTech - TU Delft'],2019-10-30T09:58:50Z,youtube,, 223361,https://www.youtube.com/watch?v=xFvDJnf0GXs,4 Tests Reveal Bing (GPT 4) ≈ 114 IQ,['AI Explained'],2023-02-19T13:56:32Z,youtube,, 223384,https://www.youtube.com/watch?v=gFP5fCLVdtY,274. Conjecture Internal Infohazard Policy,['AI Safety Reading Group'],2023-06-30T08:31:14Z,youtube,, 223404,https://www.youtube.com/watch?v=ZeecOKBus3Q,Why Would AI Want to do Bad Things? Instrumental Convergence,['Rob Miles'],2018-03-24T19:51:39Z,youtube,, 223425,https://www.youtube.com/watch?v=OfKnA91zs9I,"Deep Learning 4: Beyond Image Recognition, End-to-End Learning, Embeddings",['Google DeepMind'],2022-03-29T12:04:12Z,youtube,, 223452,https://www.youtube.com/watch?v=jgSxmA7AiBo,The technological landscape of affective AGI (lightning talk) | Daniel Eth | EA Global: London 2017,['plex / Eric'],2022-06-10T10:31:56Z,youtube,, 223461,https://www.youtube.com/watch?v=6MkmeADXcZg,108. 
The Learning-Theoretic AI Alignment Agenda,['AI Safety Reading Group'],2018-08-09T21:13:12Z,youtube,, 223503,https://www.youtube.com/watch?v=IijqelLKtQ4,"Nicola Croce: ODD, data labeling and the problem of representing knowledge for AVs",['AiTech - TU Delft'],2021-03-17T14:01:24Z,youtube,, 223530,https://www.youtube.com/watch?v=BfcJymyTiu0,AI Safety at EAGlobal2017 Conference,['Rob Miles'],2017-11-16T19:21:00Z,youtube,, 223549,https://www.youtube.com/watch?v=3TYT1QfdfsM,"AI ""Stop Button"" Problem - Computerphile",['plex / Eric'],2022-06-10T10:31:56Z,youtube,, 223577,https://www.youtube.com/watch?v=CGTkoUidQ8I,AI Safety Gridworlds,['Rob Miles'],2018-05-25T16:20:46Z,youtube,, 223597,https://www.youtube.com/watch?v=nNB9svNBGHM,Respectability,['Rob Miles'],2017-05-27T14:06:29Z,youtube,, 223615,https://www.youtube.com/watch?v=uHiL6GNXHvw,"Rohin Shah - Effective altruism, AI safety, and learning human preferences from the world's state",['plex / Eric'],2022-06-10T10:31:56Z,youtube,, 223636,https://www.youtube.com/watch?v=fTvB5xMNfTY,"#029 GPT-3, Prompt Engineering, Trading, AI Alignment, Intelligence",['plex / Eric'],2022-06-10T10:31:56Z,youtube,, 223674,https://www.youtube.com/watch?v=HOJ1NVtlnyQ,Experts' Predictions about the Future of AI,['Rob Miles'],2018-03-31T12:12:37Z,youtube,, 223687,https://www.youtube.com/watch?v=ZfJhOTZi0WE,"A breakthrough unfolds - DeepMind: The Podcast (S2, Ep1)",['Google DeepMind'],2022-01-25T14:36:46Z,youtube,, 223724,https://www.youtube.com/watch?v=13tZ9Yia71c,What Can We Do About Reward Hacking?: Concrete Problems in AI Safety Part 4,['Rob Miles'],2017-09-24T12:09:54Z,youtube,, 223755,https://www.youtube.com/watch?v=i8r_yShOixM,AI? Just Sandbox it... - Computerphile,['plex / Eric'],2022-06-10T10:31:56Z,youtube,, 223777,https://www.youtube.com/watch?v=s0FwXjzQcJk,"Robust Learning via Robust Optimization (Stefanie Jegelka, Foundations of Safe Learning Workshop)",['Vulnerable Growth'],2022-05-06T05:19:10Z,youtube,, 223793,https://www.youtube.com/watch?v=qh666c6j4mk,AI Research Considerations for Existential Safety,['Vulnerable Growth'],2022-05-06T05:22:25Z,youtube,, 223833,https://www.youtube.com/watch?v=46nsTFfsBuc,Reward Hacking Reloaded: Concrete Problems in AI Safety Part 3.5,['Rob Miles'],2017-08-29T10:08:41Z,youtube,, 223852,https://www.youtube.com/watch?v=4x3ag-Zqo6c,How humans co-determine the development of intelligent technology (Serge Thill),['AiTech - TU Delft'],2020-12-02T15:46:27Z,youtube,, 223891,https://www.youtube.com/watch?v=6r_OgPtIae8,"'This Could Go Quite Wrong' - Altman Testimony, GPT 5 Timeline, Self-Awareness, Drones and more",['AI Explained'],2023-05-17T16:22:59Z,youtube,, 223932,https://www.youtube.com/watch?v=WjAkzPhqsxo,"AI for everyone - DeepMind: The Podcast (S1, Ep6)",['Google DeepMind'],2020-02-27T14:10:43Z,youtube,, 223967,https://www.youtube.com/watch?v=p-zdHsjiKXY,276. 
Universal and Transferable adversarial attacks on aligned language models,['AI Safety Reading Group'],2023-08-11T06:11:21Z,youtube,, 223986,https://www.youtube.com/watch?v=w-UN54rMjOQ,AI in Short: The Value Alignment Problem,['Vulnerable Growth'],2022-05-06T05:17:44Z,youtube,, 224009,https://www.youtube.com/watch?v=pYXy-A4siMw,"Intro to AI Safety, Remastered",['plex / Eric'],2022-06-10T10:31:56Z,youtube,, 224029,https://www.youtube.com/watch?v=S_Sd_S8jwP0,Avoiding Positive Side Effects: Concrete Problems in AI Safety part 1.5,['Rob Miles'],2017-06-25T09:29:27Z,youtube,, 224045,https://www.youtube.com/watch?v=OOEooM_GVN0,AiTech Agora: Chao Zhang - Personal Autonomy in Human-AI Interaction,['AiTech - TU Delft'],2022-05-09T15:12:57Z,youtube,, 224059,https://www.youtube.com/watch?v=MUVbqQ3STFA,AI learns to Create ̵K̵Z̵F̵ ̵V̵i̵d̵e̵o̵s̵ Cat Pictures: Papers in Two Minutes #1,['Rob Miles'],2017-10-29T11:49:20Z,youtube,, 224075,https://www.youtube.com/watch?v=PDvAutARum4,AI alignment and Redwood Research | Buck Shlegeris (CTO),['plex / Eric'],2022-06-10T10:31:56Z,youtube,, 224092,https://www.youtube.com/watch?v=NCv4J4wH39w,AiTech Agora: Sebastian Köhler - Responsible AI through Conceptual Engineering,['AiTech - TU Delft'],2021-06-15T11:20:40Z,youtube,, 224108,https://www.youtube.com/watch?v=2AdkSYWB6LY,GPT 4: Full Breakdown (14 Details You May Have Missed),['AI Explained'],2023-03-14T21:15:18Z,youtube,, 224130,https://www.youtube.com/watch?v=etFCaFvt2Ks,"Attention - General - Theory, Info-Weighted Patterns, Attribution Patterns [rough early thoughts]",['Vulnerable Growth'],2022-05-10T00:47:51Z,youtube,, 224154,https://www.youtube.com/watch?v=ZBlHFFE-ng8,1L Attention - Results [rough early thoughts],['Vulnerable Growth'],2022-05-10T00:47:09Z,youtube,, 224170,https://www.youtube.com/watch?v=gDqkCxYYDGk,AI & Logical Induction - Computerphile,['plex / Eric'],2022-06-10T10:31:56Z,youtube,, 224181,https://www.youtube.com/watch?v=UPlv-lFWITI,Ethan Caballero–Scale Is All You Need,['plex / Eric'],2022-06-10T10:31:56Z,youtube,, 224213,https://www.youtube.com/watch?v=Ys-U-4vjRjw,AI Safety Reading Group (Session 45),['AI Safety Reading Group'],2017-04-26T20:37:49Z,youtube,, 224231,https://www.youtube.com/watch?v=3wsiUkmC6dI,Stuart Armstrong – Reduced Impact AI and Other Alternatives to Friendliness – CSRBAI 2016,['Machine Intelligence Research Institute'],2016-10-21T20:26:46Z,youtube,, 224262,https://www.youtube.com/watch?v=HW7kfKrbLSg,183. 
If I were a Well-intentioned AI 2,['AI Safety Reading Group'],2020-05-07T20:30:59Z,youtube,,
224299,https://www.youtube.com/watch?v=1M9CvESSeVc,2019 09 19 Stuart Armstrong Research Agenda Online Talk,['Vulnerable Growth'],2022-05-06T05:22:25Z,youtube,,
224320,https://www.youtube.com/watch?v=ACfMaUv3UbU,Geordie Rose - Will AGI need to be embodied?,['plex / Eric'],2022-06-10T10:31:56Z,youtube,,
224349,https://www.youtube.com/watch?v=2B-AyWA2_ZY,Status Report,['Rob Miles'],2017-03-18T11:40:43Z,youtube,,
224358,https://www.youtube.com/watch?v=gbjV_hTroyU,AiTech Agora : Markus Peschl - Aligning AI with Human Norms,['AiTech - TU Delft'],2021-12-10T11:13:20Z,youtube,,
224371,https://www.youtube.com/watch?v=shVKhOmT0HE,DeepMind x UCL | Deep Learning Lectures | 3/12 | Convolutional Neural Networks for Image Recognition,['Google DeepMind'],2022-03-29T12:02:17Z,youtube,,
224402,https://www.youtube.com/watch?v=dQkZOMdFiaU,Emiliano De Cristofaro: Understanding the Weaponization of the Web via Data Driven Analysis,['AiTech - TU Delft'],2021-04-07T14:49:46Z,youtube,,
224432,https://www.youtube.com/watch?v=IkbYu_poZVE,EleutherAI Interpretability Reading Group 220226: Locating and editing factual knowledge in GPT,['Vulnerable Growth'],2022-05-15T19:39:59Z,youtube,,
224458,https://www.youtube.com/watch?v=u84MFu1nG4g,DeepMind x UCL RL Lecture Series - Multi-step & Off Policy [11/13],['Google DeepMind'],2022-03-29T12:01:55Z,youtube,,
224483,https://www.youtube.com/watch?v=H_fUBF5ZR2U,Human control over fully autonomous systems: a philosophical exploration (Giulio Mecacci),['AiTech - TU Delft'],2020-10-08T15:52:34Z,youtube,,
224501,https://www.youtube.com/watch?v=UuT154hPZOM,Anders Sandberg - Answering the Fermi Question: Is AI our Great Filter?,['plex / Eric'],2022-06-10T10:31:56Z,youtube,,
224538,https://www.youtube.com/watch?v=jOxtiqszL4s,198. Language Models are Few Shot Learners 1,['AI Safety Reading Group'],2020-09-10T21:00:56Z,youtube,,
224558,https://www.youtube.com/watch?v=QCd-Yf7PqeA,retreat jsw talk 2,['Vulnerable Growth'],2022-05-06T05:22:25Z,youtube,,
224575,https://www.youtube.com/watch?v=89A4jGvaaKk,Unicorn AI - Computerphile,['plex / Eric'],2022-06-10T10:31:56Z,youtube,,
224591,https://www.youtube.com/watch?v=E8PGcoLDjVk,How sure are we about this AI stuff? | Ben Garfinkel | EA Global: London 2018,['plex / Eric'],2022-06-10T10:31:56Z,youtube,,
224624,https://www.youtube.com/watch?v=MzFl0SdjSso,Iason Gabriel on Foundational Philosophical Questions in AI Alignment,['plex / Eric'],2022-06-10T10:31:56Z,youtube,,
224645,https://www.youtube.com/watch?v=vcLU0DhDhi0,"Demis Hassabis: The interview - DeepMind: The Podcast (S1, Ep8)",['Google DeepMind'],2020-02-27T14:18:45Z,youtube,,
224675,https://www.youtube.com/watch?v=PBs6BoKVGIo,What Should Happen To Humans In A World Of AGI?,['plex / Eric'],2022-06-10T10:31:56Z,youtube,,
224698,https://www.youtube.com/watch?v=WvmeTaFc_Qw,Value Alignment | Stuart Russell,['plex / Eric'],2022-06-10T10:31:56Z,youtube,,
224717,https://www.youtube.com/watch?v=TZNVBgmXKrA,Evan Hubinger - The Inner Alignment Problem,['plex / Eric'],2022-06-10T10:31:56Z,youtube,,
224751,https://www.youtube.com/watch?v=PAVeYUgknMw,ChatGPT's Achilles' Heel,['AI Explained'],2023-06-25T16:10:11Z,youtube,,
224781,https://www.youtube.com/watch?v=GdeY-MrXD74,"The promise of AI with Demis Hassabis - DeepMind: The Podcast (S2, Ep9)",['Google DeepMind'],2022-03-14T10:46:54Z,youtube,,
224818,https://www.youtube.com/watch?v=_njf22xx8BQ,"12 New Code Interpreter Uses (Image to 3D, Book Scans, Multiple Datasets, Error Analysis ... )",['AI Explained'],2023-05-22T17:57:16Z,youtube,,
224848,https://www.youtube.com/watch?v=y3oqOjHilio,DeepMind x UCL RL Lecture Series - Policy-Gradient and Actor-Critic methods [9/13],['Google DeepMind'],2022-03-29T12:01:55Z,youtube,,
224887,https://www.youtube.com/watch?v=XmnTd92NqFw,"What's Up With Bard? 9 Examples + 6 Reasons Google Fell Behind [ft. Muse, Med-PaLM 2 and more]",['AI Explained'],2023-03-22T18:52:26Z,youtube,,
224921,https://www.youtube.com/watch?v=ook46h2Jfb4,DeepMind x UCL RL Lecture Series - Function Approximation [7/13],['Google DeepMind'],2022-03-29T12:01:55Z,youtube,,
224948,https://www.youtube.com/watch?v=y8gXUn9PoVI,121 - Artificial Stupidity 2,['Vulnerable Growth'],2022-05-06T04:42:55Z,youtube,,
224976,https://www.youtube.com/watch?v=5SgJKZLBrmg,"GPT 4 Can Improve Itself - (ft. Reflexion, HuggingGPT, Bard Upgrade and much more)",['AI Explained'],2023-04-02T15:18:29Z,youtube,,
224997,https://www.youtube.com/watch?v=otgIqIiLSzI,Lightning Talks | Beneficial AGI 2019,['Vulnerable Growth'],2022-05-06T05:22:25Z,youtube,,
225028,https://www.youtube.com/watch?v=oSdPmxRCWws,Hill Climbing Algorithm & Artificial Intelligence - Computerphile,['plex / Eric'],2022-06-10T10:31:56Z,youtube,,
225037,https://www.youtube.com/watch?v=hQr08RjkKv4,262. Counterarguments to the basic AI Xrisk case,['AI Safety Reading Group'],2022-12-01T22:16:31Z,youtube,,
225062,https://www.youtube.com/watch?v=QjXFN4UWZCg,Irina Rish - Out-of-distribution generalization,['plex / Eric'],2022-06-10T10:31:56Z,youtube,,
225098,https://www.youtube.com/watch?v=Jfim9qDtbcs,AI Alignment and Our Momentous Imperative to Get It Right by Olle Häggström,['plex / Eric'],2022-06-10T10:31:56Z,youtube,,
225124,https://www.youtube.com/watch?v=nRCbKFK2b2A,Modeling human-AV interactions for safety and acceptance of automated vehicles (Gustav Markkula),['AiTech - TU Delft'],2020-06-17T15:03:26Z,youtube,,
225155,https://www.youtube.com/watch?v=tY-55ho0W68,266. Lets think about slowing down AI 1,['AI Safety Reading Group'],2023-02-09T22:16:34Z,youtube,,
225191,https://www.youtube.com/watch?v=J_WkMaskv88,Artificial Super intelligence - How close are we?,['plex / Eric'],2022-06-10T10:31:56Z,youtube,,
225208,https://www.youtube.com/watch?v=I5mC4nDDp2I,206. Jared Kaplan on Scaling Laws,['AI Safety Reading Group'],2020-11-05T21:11:50Z,youtube,,
225230,https://www.youtube.com/watch?v=GMVlJtXRT1o,Stuart Russell - What role can psychologists play in AI Alignment?,['plex / Eric'],2022-06-10T10:31:56Z,youtube,,
225240,https://www.youtube.com/watch?v=Zr97Cxso4W4,Stuart Armstrong - AI: Humanity's Endgame?,['plex / Eric'],2022-06-10T10:31:56Z,youtube,,
225268,https://www.youtube.com/watch?v=7S68y6huEpU,Phi-1: A 'Textbook' Model,['AI Explained'],2023-07-03T14:59:12Z,youtube,,
225284,https://www.youtube.com/watch?v=aZRSs9WTSvM,Natural Language Processing and Artificial General Intelligence - Foresight Institute,['plex / Eric'],2022-06-10T10:31:56Z,youtube,,
225299,https://www.youtube.com/watch?v=9ll_pth4Sss,Google Bard - The Full Review. Bard vs Bing [LaMDA vs GPT 4],['AI Explained'],2023-03-21T17:52:01Z,youtube,,
225341,https://www.youtube.com/watch?v=9Z06rY3uvGY,Ray Kurzweil: Future of Intelligence | MIT 6.S099: Artificial General Intelligence (AGI),['plex / Eric'],2022-06-10T10:31:56Z,youtube,,
225378,https://www.youtube.com/watch?v=1Wh_MBdSGPM,Opportunities for Cooperation on AGI at the Governance Level,['plex / Eric'],2022-06-10T10:31:56Z,youtube,,
225426,https://www.youtube.com/watch?v=f20wXjWHh2o,"Hassabis, Altman and AGI Labs Unite - AI Extinction Risk Statement [ft. Sutskever, Hinton + Voyager]",['AI Explained'],2023-05-30T16:13:03Z,youtube,,
225454,https://www.youtube.com/watch?v=kxQ851JjACE,AiTech Agora: Pradeep Murukannaiah - Personal Values and Social Norms as Foundations of AI Ethics,['AiTech - TU Delft'],2021-05-26T20:33:55Z,youtube,,
225481,https://www.youtube.com/watch?v=5D8zELMw_8k,226. John Fox on Is AI Safety a Progressive Research Programme,['AI Safety Reading Group'],2021-06-17T22:16:56Z,youtube,,
225505,https://www.youtube.com/watch?v=7PKx3kS7f4A,Why Asimov's Laws of Robotics Don't Work - Computerphile,['plex / Eric'],2022-06-10T10:31:56Z,youtube,,
225520,https://www.youtube.com/watch?v=uOQ_8Fq3q14,Predicting AI - Shanghai,['Vulnerable Growth'],2022-05-06T05:22:25Z,youtube,,
225548,https://www.youtube.com/watch?v=Dt_UNg7Mchg,Orca: The Model Few Saw Coming,['AI Explained'],2023-06-07T16:14:56Z,youtube,,
225569,https://www.youtube.com/watch?v=vvU3Dn_8sFI,"Time Until Superintelligence: 1-2 Years, or 20? Something Doesn't Add Up",['AI Explained'],2023-07-10T15:30:26Z,youtube,,
225606,https://www.youtube.com/watch?v=UM-eJbx_YDk,2L Attention - Theory [rough early thoughts],['Vulnerable Growth'],2022-05-10T00:47:24Z,youtube,,
225617,https://www.youtube.com/watch?v=Ho1XPZ8JTsI,"Me, myself and AI - DeepMind: The Podcast (S2, Ep7)",['Google DeepMind'],2022-02-20T13:14:34Z,youtube,,
225658,https://www.youtube.com/watch?v=zJBpRn2zTco,Llama 2: Full Breakdown,['AI Explained'],2023-07-19T15:30:09Z,youtube,,
225695,https://www.youtube.com/watch?v=nrCjVhp4wuo,190. Steven Pinker on the Possible Existential Threat of AI,['AI Safety Reading Group'],2020-07-02T21:23:01Z,youtube,,
225723,https://www.youtube.com/watch?v=vuYtSDMBLtQ,Channel Introduction,['Rob Miles'],2017-02-28T20:14:23Z,youtube,,
225732,https://www.youtube.com/watch?v=eTkvtHymI9s,How social science research can inform AI governance | Baobao Zhang | EAGxVirtual 2020,['plex / Eric'],2022-06-10T10:31:56Z,youtube,,
225768,https://www.youtube.com/watch?v=3lD6Sygy6EQ,Alan Fern – Toward Recognizing and Explaining Uncertainty – CSRBAI 2016,['Machine Intelligence Research Institute'],2016-10-21T20:26:46Z,youtube,,
225800,https://www.youtube.com/watch?v=cKclc-KThIE,Nicolas Miailhe - AI risk is a global problem,['plex / Eric'],2022-06-10T10:31:56Z,youtube,,
225835,https://www.youtube.com/watch?v=YeHNWKyySaI,Stuart Russell - Clarifying AI Alignment,['plex / Eric'],2022-06-10T10:31:56Z,youtube,,
225848,https://www.youtube.com/watch?v=vg2ricXGfuI,Daniel Filan - Peering into neural nets for AI safety,['plex / Eric'],2022-06-10T10:31:56Z,youtube,,
225870,https://www.youtube.com/watch?v=Z46LIAcZ-vg,214. Consequences of Misaligned AI,['AI Safety Reading Group'],2021-01-28T21:54:01Z,youtube,,
225895,https://www.youtube.com/watch?v=XpbLq7rIJAA,DeepMind x UCL RL Lecture Series - Theoretical Fund. of Dynamic Programming Algorithms [4/13],['Google DeepMind'],2022-03-29T12:01:55Z,youtube,,