| url | post_id | title | author | content | date |
|---|---|---|---|---|---|
https://www.lesswrong.com/posts/Z8AZLFLN32MaRFGzJ/a-dilemma-in-ai-suffering-happiness | Z8AZLFLN32MaRFGzJ | A Dilemma in AI Suffering/Happiness | iva | The following is an example of how if one assumes that an AI (in this case autoregressive LLM) has "feelings", "qualia", "emotions", whatever, it can be unclear whether it is experiencing something more like pain or something more like pleasure in some settings, even quite simple settings which already happen a lot wit... | 2024-03-28 |
https://www.lesswrong.com/posts/xqHdZP6ehLzs83ju9/narratives-of-business-models | xqHdZP6ehLzs83ju9 | Narratives of business models | itay-dreyfus | There’s a common theme when discussing business models over the internet, which usually revolves around its optimal form.<br>What’s the most effective model? Monthly vs. yearly subscriptions, the relevance of ads, and the appeal of lifetime plans are debates I often come across on my Twitter feed. Builders of all kinds sh... | 2024-03-28 |
https://www.lesswrong.com/posts/zbKycwbnzcFvqHv2F/linkpost-practically-a-book-review-rootclaim-usd100-000-lab | zbKycwbnzcFvqHv2F | [Linkpost] Practically-A-Book Review: Rootclaim $100,000 Lab Leak Debate | TrevorWiesinger | Saar Wilf is an ex-Israeli entrepreneur. Since 2016, he’s been developing a new form of reasoning, meant to transcend normal human bias.<br>His method - called Rootclaim - uses Bayesian reasoning, a branch of math that explains the right way to weigh evidence. This isn’t exactly new. Everyone supports Bayesian reasoning. ... | 2024-03-28 |
https://www.lesswrong.com/posts/sixhTPawvRBXjL3Mw/idea-black-holes-1 | sixhTPawvRBXjL3Mw | Idea black holes | logan-kieller | You are a rational thinker.<br>Ever since you were born, you’ve been racing through a universe of ideas: creating, evaluating, disputing, engaging with, and being bombarded by…<br>Ideas.<br>Like a particle from the Big Bang, you have bounced around the universe until you found yourself here.<br>Reading, pondering, considering.<br>Thi... | 2024-03-28 |
https://www.lesswrong.com/posts/hCvtCHACLXhmtZQ4i/inexistence-of-rational-disagreement-when-information-can-be | hCvtCHACLXhmtZQ4i | Inexistence of Rational Disagreement when Information can be Freely Exchanged | cheops-steller | Suppose rationality is a set of principles that people agreed on to process information then arrive at conclusions. Then, on the basis of cost-free information exchange, should rational disagreements still exist? In that case, both parties would have the same information which will then be processed the same way. Just ... | 2024-03-28 |
https://www.lesswrong.com/posts/5Dz3ZrwBzzMfaucrH/ai-57-all-the-ai-news-that-s-fit-to-print | 5Dz3ZrwBzzMfaucrH | AI #57: All the AI News That’s Fit to Print | Zvi | Welcome, new readers!<br>This is my weekly AI post, where I cover everything that is happening in the world of AI, from what it can do for you today (‘mundane utility’) to what it can promise to do for us tomorrow, and the potentially existential dangers future AI might pose for humanity, along with covering the discourse... | 2024-03-28 |
https://www.lesswrong.com/posts/6BerZtxLQLgMSzA8n/aspiration-based-designs-1-informal-introduction | 6BerZtxLQLgMSzA8n | [Aspiration-based designs] 1. Informal introduction | Bob Jacobs | Sequence Summary. This sequence documents research by SatisfIA, an ongoing project on non-maximizing, aspiration-based designs for AI agents that fulfill goals specified by constraints ("aspirations") rather than maximizing an objective function. We aim to contribute to AI safety by exploring design approaches and th... | 2024-04-28 |
https://www.lesswrong.com/posts/yCDsGDyDguXgNwpkb/please-understand | yCDsGDyDguXgNwpkb | Please Understand | samhealy | In which a case is made for worrying about the AI Prompt Box.<br>Preamble<br>Technology serves to abstract away nonessential aspects of creative activities, giving us more direct access to their conceptual cores. Few audio engineers pine for the days of flaky reel-to-reel tape machines that unspool at the worst moments; few ... | 2024-04-01 |
https://www.lesswrong.com/posts/KCPYzWvo8z5nJWajv/measuring-predictability-of-persona-evaluations | KCPYzWvo8z5nJWajv | Measuring Predictability of Persona Evaluations | thee-ho | This work was done by Thee Ho as part of the Athena 1.0 mentorship program under Evan Hubinger. Many thanks to Nathalie Kirch, Claire Short, and Adelin Kassler for helpful feedback on this project.<br>Overview<br>We are interested in understanding the difficulty of predicting anomalous model behaviors in advance. We are inte... | 2024-04-06 |
https://www.lesswrong.com/posts/ZKksgfTxuxKhDfk4m/how-do-llms-give-truthful-answers-a-discussion-of-llm-vs | ZKksgfTxuxKhDfk4m | How do LLMs give truthful answers? A discussion of LLM vs. human reasoning, ensembles & parrots | Owain_Evans | Summary<br>Large language models (LLMs) like ChatGPT and Claude 3 become increasingly truthful as they scale up in size and are finetuned for factual accuracy and calibration.However, the way LLMs arrive at truthful answers is nuanced. When an LLM answers a question immediately without chain-of-thought reasoning, the answ... | 2024-03-28 |
https://www.lesswrong.com/posts/9bYbuAzhzuGiQjZRe/some-things-that-increase-blood-flow-to-the-brain | 9bYbuAzhzuGiQjZRe | Some Things That Increase Blood Flow to the Brain | romeostevensit | Epistemic status: very shallow google scholar dive. Intended mostly as trailheads for people to follow up on on their own.<br>previously: https://www.lesswrong.com/posts/h6kChrecznGD4ikqv/increasing-iq-is-trivial<br>I don't know to what degree this will wind up being a constraint. But given that many of the things that help ... | 2024-03-27 |
https://www.lesswrong.com/posts/czemBmv9WzPsTCtEQ/sp-the-edge-of-morality | czemBmv9WzPsTCtEQ | [SP] The Edge of Morality | Zane | Scott Alexander writes that philosophy is the art of exploring the edge cases of our ethics. It is clear to most that one should not kill innocent people for pleasure, but that will not help us uncover new insights. Instead, we look at the edge cases where it is not clear what morality says. We think of killing one per... | 2024-03-27 |
https://www.lesswrong.com/posts/qhdzkzC7sBq3MHhKs/come-to-manifest-2024-june-7-9-in-berkeley | qhdzkzC7sBq3MHhKs | Come to Manifest 2024 (June 7-9 in Berkeley) | saul-munn | null | 2024-03-27 |
https://www.lesswrong.com/posts/pzmRDnoi4mNtqu6Ji/the-cognitive-theoretic-model-of-the-universe-a-partial | pzmRDnoi4mNtqu6Ji | The Cognitive-Theoretic Model of the Universe: A Partial Summary and Review | jessica.liu.taylor | About 15 years ago, I read Malcolm Gladwell's Outliers. He profiled Chris Langan, an extremely high-IQ person, claiming that he had only mediocre accomplishments despite his high IQ. Chris Langan's theory of everything, the Cognitive Theoretic Model of the Universe, was mentioned. I considered that it might be worth ch... | 2024-03-27 |
https://www.lesswrong.com/posts/BBZjyBb2dos5iHuoQ/linkpost-leif-wenar-s-the-deaths-of-effective-altruism | BBZjyBb2dos5iHuoQ | [Linkpost] Leif Wenar's The Deaths of Effective Altruism | Caroline Wiese Young | cross-posted on EA Forum<br>Leif Wenar thoughtfully critiqued EA in "Poverty is No Pond" (2011) & just wrote a critique in WIRED. He is a philosophy professor at Stanford & author of Blood Oil.<br>Edit: My initial thoughts (which are very raw & will likely change & I will accordingly regret having indelibly inscribed on the... | 2024-03-27 |
https://www.lesswrong.com/posts/XcDqvYxwyX7jJYrwS/plausibility-of-cyborgism-for-protecting-boundaries | XcDqvYxwyX7jJYrwS | Plausibility of cyborgism for protecting boundaries? | Chipmonk | Most of my boundaries work so far has been focused on protecting boundaries "from the outside". For example, maybe davidad's OAA could produce some kind of boundary-defending global police AI.<br>But, imagine parenting a child and protecting them by keeping them inside all day. Seems kind of lame. Something else you could... | 2024-03-27 |
https://www.lesswrong.com/posts/Yo84SvKDCBwY5auGw/was-releasing-claude-3-net-negative | Yo84SvKDCBwY5auGw | Was Releasing Claude-3 Net-Negative? | elriggs | Cross-posted to EA forum<br>There’s been a lot of discussion among safety-concerned people about whether it was bad for Anthropic to release Claude-3. I felt like I didn’t have a great picture of all the considerations here, and I felt that people were conflating many different types of arguments for why it might be bad. ... | 2024-03-27 |
https://www.lesswrong.com/posts/S4aGGF2cWi5dHtJab/your-llm-judge-may-be-biased | S4aGGF2cWi5dHtJab | Your LLM Judge may be biased | henry | Abstract<br>AI safety researchers often rely on LLM “judges” to qualitatively evaluate the output of separate LLMs. We try this for our own interpretability research, but find that our LLM judges are often deeply biased. For example, we use Llama2 to judge whether movie reviews are more “(A) positive” or “(B) negative”, a... | 2024-03-29 |
https://www.lesswrong.com/posts/xWoaT3wLRQx8Rf4AX/daniel-kahneman-has-died | xWoaT3wLRQx8Rf4AX | Daniel Kahneman has died | DanielFilan | He was 90 years old.<br>His death was confirmed by his stepdaughter Deborah Treisman, the fiction editor for the New Yorker. She did not say where or how he died.<br>The obituary also describes an episode from his life that I had not previously heard (but others may have):<br>Daniel Kahneman was born in Tel Aviv on March 5, 193... | 2024-03-27 |
https://www.lesswrong.com/posts/yEsuwCugokgpAQyYD/math-to-english-cheat-sheet | yEsuwCugokgpAQyYD | Math-to-English Cheat Sheet | nahoj | Say you've learnt math in your native language which is not English. Since then you've also read math in English and you appreciate the near universality of mathematical notation. Then one day you want to discuss a formula in real life and you realize you don't know how to pronunce "an".<br>Status: I had little prior know... | 2024-04-08 |
https://www.lesswrong.com/posts/37uuuPQKiGisi8cGG/language-and-capabilities-testing-llm-mathematical-abilities | 37uuuPQKiGisi8cGG | Language and Capabilities: Testing LLM Mathematical Abilities Across Languages | Ethan Edwards | Produced as part of the SERI ML Alignment Theory Scholars Program - Summer 2023 Cohort<br>Thanks to @NicholasKees for guiding this and reading drafts, to @Egg Syntax @Oskar Hollinsworth, @Quentin FEUILLADE--MONTIXI and others for comments and helpful guidance, to @Viktor Rehnberg, @Daniel Paleka, @Sami Petersen, @Leon Lan... | 2024-04-04 |
https://www.lesswrong.com/posts/j7Dx2ASvvouuEAKFj/nick-bostrom-s-new-book-deep-utopia-is-out-today | j7Dx2ASvvouuEAKFj | Nick Bostrom’s new book, “Deep Utopia”, is out today | PeterH | null | 2024-03-27 |
https://www.lesswrong.com/posts/ktCrb6utgsGLuBtNy/decompiling-tracr-transformers-an-interpretability | ktCrb6utgsGLuBtNy | Decompiling Tracr Transformers - An interpretability experiment | hannes-thurnherr | Note: This blog post is cross-posted from my personal website, where I expect a broader audience than here. If you are familiar with the difficulty and significance of neural network interpretability, skip to the third subsection titled "In defence of fighting fire with fire"<br>Summary: This is a post about a research pr... | 2024-03-27 |
https://www.lesswrong.com/posts/7kfTd475erCm6yvBM/intergenerational-knowledge-transfer-ikt | 7kfTd475erCm6yvBM | Intergenerational Knowledge Transfer (IKT) | whitehatStoic | (This post is intended for my personal blog. Thank you.)<br>One of the dominant thoughts in my head when I build datasets for my training runs: what our ancestors 'did' over their lifespan likely played a key role in the creation of language and human values.[1]<br>"Mother" in European Languages<br>I imagine a tribe whose membe... | 2024-03-28 |
https://www.lesswrong.com/posts/xuokjPCDrZhNh2HLB/have-we-really-forsaken-natural-selection-1 | xuokjPCDrZhNh2HLB | Have we really forsaken natural selection? | KatjaGrace | Natural selection is often charged with having goals for humanity, and humanity is often charged with falling down on them. The big accusation, I think, is of sub-maximal procreation. If we cared at all about the genetic proliferation that natural selection wanted for us, then this time of riches would be a time of fif... | 2024-03-27 |
https://www.lesswrong.com/posts/u9z2XKHMCrrQBLwEh/robin-hanson-and-i-talk-about-ai-risk-1 | u9z2XKHMCrrQBLwEh | Robin Hanson and I talk about AI risk | KatjaGrace | From this afternoon: here<br>Our previous recorded discussions are here. | 2024-03-27 |
https://www.lesswrong.com/posts/uRQAiu4jQ47DqYadY/more-podcasts-on-2023-ai-survey-cognitive-revolution-and-fli | uRQAiu4jQ47DqYadY | More podcasts on 2023 AI survey: Cognitive Revolution and FLI | KatjaGrace | Two new discussions of the 2023 ESPAI:<br>Possibly I have a podcasting facial expression.<br>(If you want to listen in on more chatting about this survey, see also: Eye4AI podcast. Honestly I can’t remember how much overlap there is between the different ones.) | 2024-03-27 |
https://www.lesswrong.com/posts/ziAhnFrWqePEq8qKj/20-minutes-of-work-as-an-artist-in-one-future | ziAhnFrWqePEq8qKj | 20 minutes of work as an artist in one future | Phib | I am an artist.<br>”Eleven evil wizard schoolgirls in an archduke's library, dressed in red and black Asmodean schoolgirl uniforms, perched on armchairs and sofas”[1]<br>Sigh, at least it’s not more catgirls. I don’t even draw them well.<br>I stretched briefly before starting this one, my arms reaching as far as they could go b... | 2024-03-27 |
https://www.lesswrong.com/posts/pnMHEfEtuufJHqQTM/towards-white-box-deep-learning | pnMHEfEtuufJHqQTM | Towards White Box Deep Learning | maciej-satkiewicz | Hi, I’d like to share my paper that proposes a novel approach for building white box neural networks.<br>The paper introduces semantic features as a general technique for controlled dimensionality reduction, somewhat reminiscent of Hinton’s capsules and the idea of “inverse rendering”. In short, semantic features aim to c... | 2024-03-27 |
https://www.lesswrong.com/posts/eJXH6p3EdrEWrBGqv/summer-program-for-high-schoolers-to-start-working-on | eJXH6p3EdrEWrBGqv | Summer Program for High-Schoolers to start working on impactful projects | nonplus | Please share this opportunity with high schoolers you know – we’d be grateful for your help spreading the word!<br>About Non-Trivial<br>The Non-Trivial Fellowship is now accepting applications.<br>It’s an online summer program for high school students aged 14-20 to start an impactful research or policy project.<br>Accepted fellows... | 2024-03-26 |
https://www.lesswrong.com/posts/gP8tvspKG79RqACTn/modern-transformers-are-agi-and-human-level | gP8tvspKG79RqACTn | Modern Transformers are AGI, and Human-Level | abramdemski | This is my personal opinion, and in particular, does not represent anything like a MIRI consensus; I've gotten push-back from almost everyone I've spoken with about this, although in most cases I believe I eventually convinced them of the narrow terminological point I'm making.<br>In the AI x-risk community, I think there... | 2024-03-26 |
https://www.lesswrong.com/posts/FBKqrxwEmhaMjHY3b/barefoot-faq | FBKqrxwEmhaMjHY3b | Barefoot FAQ | dkl9 | Disclaimer: the post is provided as-is, without warranty of any kind, express or implied.<br>I am often barefoot, including in public. People seeing this give many of the same responses repeatedly. Here are my answers.<br>Why are you barefoot?<br>Sith it's funny.<br>But really, why?[1]<br>Walking barefoot lets me experience what's ar... | 2024-03-26 |
https://www.lesswrong.com/posts/tuArR8Jqp4aKyqiko/what-s-your-best-ai-safety-quip | tuArR8Jqp4aKyqiko | What's Your Best AI Safety "Quip"? | False Name, Esq. | Motivated by thinking gay rights were advanced by asking "When did you choose to be straight?" Which emphasised that what isn't a choice and doesn't harm others shouldn't be proscribed. Here, we're seeking a memetic way of framing the fact that the alignment problem is unsolved.<br>Author's "null quip": "Can you get a 5-y... | 2024-03-26 |
https://www.lesswrong.com/posts/48qKTPaNetw77SpWH/what-is-the-nature-of-humans-general-intelligence-and-it-s | 48qKTPaNetw77SpWH | What is the nature of humans general intelligence and it's implications for AGI? | Will_Pearson | Humans seems to have some form of generality. We seem capable of a solving a large range of problems and the people that are capable on one aspect seem more capable in general. However the nature of this generality is important. There are at least two options that I've thought of.<br>1)A general intelligence is intrinsica... | 2024-03-26 |
https://www.lesswrong.com/posts/GLpFovxZdwXYwmbkJ/failures-in-kindness | GLpFovxZdwXYwmbkJ | Failures in Kindness | silentbob | There's a particular kind of widespread human behavior that is kind on the surface, but upon closer inspection reveals quite the opposite. This post is about four such patterns.<br>Computational Kindness<br>One of the most useful ideas I got out of Algorithms to Live By is that of computational kindness. I was quite surprise... | 2024-03-26 |
https://www.lesswrong.com/posts/hCNt7dc7QXuKB2gsR/economics-roundup-1 | hCNt7dc7QXuKB2gsR | Economics Roundup #1 | Zvi | I call the section ‘Money Stuff’ but as a column name that is rather taken. There has been lots to write about on this front that didn’t fall neatly into other categories. It clearly benefited a lot from being better organized into subsections, and the monthly roundups could benefit from being shorter, so this will pro... | 2024-03-26 |
https://www.lesswrong.com/posts/ddj5HtnCHHMQGiQEM/timelines-to-transformative-ai-an-investigation | ddj5HtnCHHMQGiQEM | Timelines to Transformative AI: an investigation | zershaaneh-qureshi | Cross-posted on the EA Forum.<br>This post is part of a series by Convergence Analysis’ AI Clarity team.<br>Justin Bullock and Elliot Mckernon have recently motivated AI Clarity’s focus on the notion of transformative AI (TAI). In an earlier post, Corin Katzke introduced a framework for applying scenario planning methods to ... | 2024-03-26 |
https://www.lesswrong.com/posts/p7zn7M62SFFyQ6ZQF/enhancing-biosecurity-with-language-models-defining-research | p7zn7M62SFFyQ6ZQF | Enhancing biosecurity with language models: defining research directions | michael-chen | null | 2024-03-26 |
https://www.lesswrong.com/posts/MGNbfuvuaQLJk3jkC/legality-as-a-career-harm-assessment-heuristic | MGNbfuvuaQLJk3jkC | Legality as a Career Harm Assessment Heuristic | jkaufman | A question many people in the effective altruism movement have struggled with around earning to give is how to handle potentially harmful careers. It's obviously self-defeating if you cause more harm in earning your money than the good it does when you donate it, but we want a higher threshold than that. As humans we... | 2024-03-26 |
https://www.lesswrong.com/posts/oYnwTuxySiaZYDrur/my-interview-with-cade-metz-on-his-reporting-about-slate | oYnwTuxySiaZYDrur | My Interview With Cade Metz on His Reporting About Slate Star Codex | Zack_M_Davis | On 16 March 2024, I sat down to chat with New York Times technology reporter Cade Metz! In part of our conversation, transcribed below, we discussed his February 2021 article "Silicon Valley's Safe Space", covering Scott Alexander's Slate Star Codex blog and the surrounding community.<br>The transcript has been significan... | 2024-03-26 |
https://www.lesswrong.com/posts/XkK5FtbdNEkPiYvG6/perceptual-blindspots-how-to-increase-self-awareness | XkK5FtbdNEkPiYvG6 | Perceptual Blindspots: How to Increase Self-Awareness | declan-molony | “Your nose is located right above your mouth. Suppose you don’t brush your teeth for three days. Though this nose is right here, it won’t tell you [that] you have not brushed your teeth. The whole room will know you have not brushed your teeth, but you will not know. This is the human predicament. It’s very easy to see... | 2024-03-26 |
https://www.lesswrong.com/posts/gpicju3C9P7gCoqC7/meltdown-interface-for-llama-cpp-and-chatgpt | gpicju3C9P7gCoqC7 | Meltdown: Interface for llama.cpp and ChatGPT | nextcaller | I'm afraid linking what I've been working on for a while as my first post might not be greatly received, but I think you might find it interesting none the less.<br>I'm making a text interface to chat with local and remote models. It is made in 100% python, it uses tkinter/tcl which should be bundled with a normal python ... | 2024-03-26 |
https://www.lesswrong.com/posts/RjzGdZLZkQufAeLrT/retro-funder-profile-and-manifund-team-recs-acx-grants-2024 | RjzGdZLZkQufAeLrT | Retro funder profile & Manifund team recs (ACX Grants 2024: Impact Market) | saul-munn | null | 2024-03-26 |
https://www.lesswrong.com/posts/smDfvqD3p8nztZCoR/podcast-interview-series-featuring-dr-peter-park | smDfvqD3p8nztZCoR | Podcast interview series featuring Dr. Peter Park | jacobhaimes | Check out the Into AI Safety podcast on Spotify, Apple Podcasts, Amazon Music, YouTube Podcasts, and many other podcast listening platforms!<br>As always, the best things come in 3s: dimensions, musketeers, pyramids, and... 3 installments of my interview with Dr. Peter Park, an AI Existential Safety Post-doctoral Fellow w... | 2024-03-26 |
https://www.lesswrong.com/posts/h5mDx2Mt2P5m9v588/fractal-strategy-workshop-report | h5mDx2Mt2P5m9v588 | "Fractal Strategy" workshop report | Raemon | I just ran a workshop teaching the rationality concepts I've developed this year.<br>If you're interested in paying money for a similar workshop, please fill out this form.<br>Six months ago, I started thinking about improving rationality.<br>Originally my frame was "deliberate practice for confusing problems". For the past two... | 2024-04-06 |
https://www.lesswrong.com/posts/XSqntCNMafhcy9irf/third-party-testing-as-a-key-ingredient-of-ai-policy | XSqntCNMafhcy9irf | Third-party testing as a key ingredient of AI policy | zac-hatfield-dodds | (nb: this post is written for anyone interested, not specifically aimed at this forum)<br>We believe that the AI sector needs effective third-party testing for frontier AI systems. Developing a testing regime and associated policy interventions based on the insights of industry, government, and academia is the best way to... | 2024-03-25 |
https://www.lesswrong.com/posts/ufBxJb4wxrh4sdqhy/idea-safe-fallback-regulations-for-widely-deployed-ai | ufBxJb4wxrh4sdqhy | Idea: Safe Fallback Regulations for Widely Deployed AI Systems | Aaron_Scher | In brief<br>When told that misaligned artificial intelligence might destroy all of humanity, normal people sometimes react by asking “why can’t we just unplug the misaligned AI?” This intuitively appealing solution is unfortunately not available by default — the simple off switch does not exist. Additionally, society may ... | 2024-03-25 |
https://www.lesswrong.com/posts/yvrkBxb5Lp9XY3t7f/lessonline-may-31-june-2-berkeley-ca | yvrkBxb5Lp9XY3t7f | LessOnline (May 31—June 2, Berkeley, CA) | Benito | A Festival of Writers Who are Wrong on the Internet[1]<br>LessOnline is a festival celebrating truth-seeking, optimization, and blogging. It's an opportunity to meet people you've only ever known by their LessWrong username or Substack handle.<br>We're running a rationalist conference!<br>The ticket cost is $400 minus your LW k... | 2024-03-26 |
https://www.lesswrong.com/posts/pX8typh7pTAmBrPdo/testing-chatgpt-for-cell-type-recognition | pX8typh7pTAmBrPdo | Testing ChatGPT for cell type recognition | Metacelsus | Biologists (including myself) often need to identify types of cells based on their gene expression. For example, if I’m differentiating stem cells to make an ovarian organoid, and I perform single cell RNA sequencing, I might want to check the data to see which ovarian cell types are present.<br>Today, a Nature Methods pa... | 2024-03-25 |
https://www.lesswrong.com/posts/4rzmhtG9Xv7Minrbk/photo-curation-approach | 4rzmhtG9Xv7Minrbk | Photo Curation Approach | jkaufman | I take a lot of pictures, maybe 10k annually. Most of them aren't that great, but if you take enough you'll get some good ones, and even the discards can be a useful reference. How do I handle these?<br>I have an Android phone, set to automatically upload any pictures to Google Photos. My wife does as well, and we have... | 2024-03-25 |
https://www.lesswrong.com/posts/Gf4WtPfrELwRtfaM9/semantic-disagreement-of-sleeping-beauty-problem | Gf4WtPfrELwRtfaM9 | Semantic Disagreement of Sleeping Beauty Problem | Ape in the coat | This is the tenth post in my series on Anthropics. The previous one is Beauty and the Bets.<br>Introduction<br>In my previous posts I've been talking about the actual object-level disagreement between halfers and thirders - which of the answers formally is correct and which is not. I've shown that there is one correct model ... | 2024-05-08 |
https://www.lesswrong.com/posts/AaS6YRAGBFrxt6ZMj/on-lex-fridman-s-second-podcast-with-altman | AaS6YRAGBFrxt6ZMj | On Lex Fridman’s Second Podcast with Altman | Zvi | Last week Sam Altman spent two hours with Lex Fridman (transcript). Given how important it is to understand where Altman’s head is at and learn what he knows, this seemed like another clear case where extensive notes were in order.<br>Lex Fridman overperformed, asking harder questions than I expected and going deeper than... | 2024-03-25 |
https://www.lesswrong.com/posts/hueNHXKc4xdn6cfB4/on-the-confusion-between-inner-and-outer-misalignment | hueNHXKc4xdn6cfB4 | On the Confusion between Inner and Outer Misalignment | Chris_Leong | Here’s my take on why the distinction between inner and outer-alignment frame is weird/unclear/ambiguous in some circumstances: My understanding is that these terms were originally used when talking about AGI. So outer alignment involved writing down a reward or utility function for all of human values and inner alignm... | 2024-03-25 |
https://www.lesswrong.com/posts/gb9PXJfzafkxg7orm/orthogonality-thesis-seems-wrong | gb9PXJfzafkxg7orm | Orthogonality Thesis seems wrong | donatas-luciunas | Orthogonality Thesis (as well as Fact–value distinction) is based on an assumption that objective norms / values do not exist. In my opinion AGI would not make this assumption, it is a logical fallacy, specifically argument from ignorance. As black swan theory says - there are unknown unknowns. Which in this context me... | 2024-03-26 |
https://www.lesswrong.com/posts/us8cqwudP5sePqWM2/on-attunement | us8cqwudP5sePqWM2 | On attunement | joekc | (Cross-posted from my website. Podcast version here, or search for "Joe Carlsmith Audio" on your podcast app.<br>This essay is part of a series that I'm calling "Otherness and control in the age of AGI." I'm hoping that the individual essays can be read fairly well on their own, but see here for brief summaries of the ess... | 2024-03-25 |
https://www.lesswrong.com/posts/xQfXBs4nXnTLmGAhw/a-bit-for-you | xQfXBs4nXnTLmGAhw | A Bit For You | Ronak_Mehta | This button will send a single bit.<br>This is no mindgame, no weird trolley-problem-monkey's-paw-dilemma.<br>*<br>This page, this post, pressing this button, are meant to be whatever they need to be for you in this moment. The purpose of this singular bit is entirely up to you. Take a second, and Consider The Button. What do y... | 2024-03-24 |
https://www.lesswrong.com/posts/sBdbtkBynjvG7Jhod/mandolin-harp-sensor-placement | sBdbtkBynjvG7Jhod | Mandolin Harp Sensor Placement | jkaufman | One of my goals in adding electronic "harp strings" to my mandolin is that I don't want to change anything about my normal mandolin technique when I'm not using them. I've been playing for decades, and like how I normally play. This means I can't place these "teeth" anywhere my hands normally pass through. With a bit... | 2024-03-24 |
https://www.lesswrong.com/posts/tJfzoyKr4zCyyon2q/ai-alignment-and-the-classical-humanist-tradition | tJfzoyKr4zCyyon2q | AI Alignment and the Classical Humanist Tradition | PeteJ | Hi guys, I’d like to share a proposal regarding AI alignment. The proposal is that training AI in the curriculum of Classical Virtue ethics could be a promising approach to alignment. A) Because general virtues with many exemplifications can help us teach the AI what we would really want it to do, even when we can't mi... | 2024-03-24 |
https://www.lesswrong.com/posts/BzxnMnS7RwgALtXJ7/could-llms-help-generate-new-concepts-in-human-language | BzxnMnS7RwgALtXJ7 | Could LLMs Help Generate New Concepts in Human Language? | Pekka Lampelto | Concepts are the central part of language. I would argue that concepts serve as condensed linguistic representations of concrete or abstract entities, aimed at enhancing the precision and efficiency of thinking and communication.<br>I found it fascinating to ponder how new concepts have emerged in human language. One coul... | 2024-03-24 |
https://www.lesswrong.com/posts/JxCubKy7tHAZfPbzk/a-dialog-with-the-axiom-of-choice | JxCubKy7tHAZfPbzk | A dialog with the axiom of choice | Disbeliever | preliminary remark: the axiom of choice ( Auswahlaxiom in Germany) can be formulated this way:<br>For all sets M there is a selection function, that assigns for all elements of the power set P(M) exept ∅ an element of the corresponding subset of M.<br>It is assumed to be true in many areas of mathematics. Besides its "job" o... | 2024-03-30 |
https://www.lesswrong.com/posts/EkehHTGs3M2ekarS7/unga-resolution-on-ai-5-key-takeaways-looking-to-future | EkehHTGs3M2ekarS7 | UNGA Resolution on AI: 5 Key Takeaways Looking to Future Policy | Heramb | The United Nations General Assembly passed its first resolution on AI on 21st March 2024 and passed with the support of all 193 member states. The text is available here to read.<br>For the purposes of this summary, I focused on key aspects of the UNGA Resolution with regard to future-facing safety measures from advanced ... | 2024-03-24 |
https://www.lesswrong.com/posts/H9EcZxW5eYrJBFdgN/are-motor-sports-like-f1-a-good-thing-to-calibrate-estimates | H9EcZxW5eYrJBFdgN | Are (Motor)sports like F1 a good thing to calibrate estimates against? | CstineSublime | Today a trend broke in Formula One. Max Verstappen didn't win a Grand Prix. Of the last 35 Formula One Grand Prix, Max Verstappen has won all but 5. Last season he had something like 86% dominance.<br>For context I believe that I am overall pessimistic when asked to give a probability range about something "working out". ... | 2024-03-24 |
https://www.lesswrong.com/posts/xahmJmH6BtqzPP3jD/self-play-by-analogy | xahmJmH6BtqzPP3jD | Self-Play By Analogy | Amica Terra | This post is adapted from a longer, more wide-ranging post on Substack where I attempt to collect some of my thoughts about AI as a relative outsider to the field. The section I have decided to share here, though, I believe to be novel.<br>Success in self-play by game AIs like AlphaZero has led to some interest in its pos... | 2024-03-24 |
https://www.lesswrong.com/posts/qLrskBQNKkqPBxNTn/nuclear-quantum-immortality-hacking | qLrskBQNKkqPBxNTn | Nuclear Quantum Immortality Hacking | evyatar-or | I will very briefly explain the abridged version of "Quantum Immortality" but you might want to read more about it.<br>The short version of Quantum Immortality:<br>"Imagine you are the cat in the Schrödinger experiment, if the many worlds interpretation of quantum mechanics is correct (a big 'if'), you will only detect a wor... | 2024-03-23 |
https://www.lesswrong.com/posts/mAjKbDP5AsSTd28mA/as-many-ideas-1 | mAjKbDP5AsSTd28mA | As Many Ideas | Screwtape | Do you ever run out of ideas? In the same way that we can practice not running out of breath while running by running more, we’re going to practice not running out of ideas by coming up with lots of ideas.<br>Someone presents a problem. They announce it to the room at large. Then everyone comes up with as many ideas as th... | 2024-03-23 |
https://www.lesswrong.com/posts/TeKZjxczbTEFnLjot/wittgenstein-and-the-private-language-argument | TeKZjxczbTEFnLjot | Wittgenstein and the Private Language Argument | TMFOW | This is a linkpost for an essay I wrote about Wittgenstein and the private language argument on substack. Links lead to other essays on substack, so don't click these if you don't want to be directed there.<br>...the difficult thing here is not to dig down to the ground; no, it is to recognize the ground that lies before ... | 2024-03-24 |
https://www.lesswrong.com/posts/eaovGWEdxhYSQGthK/how-to-make-food-water-testing-cheaper-more-scalable-eg-for | eaovGWEdxhYSQGthK | How to make food/water testing cheaper/more scalable? [eg for purity/toxin testing] | alex-k-chen | Infrared spectroscopy, HPLC, Surface-enhanced Raman spectroscopy, and reverse ecology are examples. Figuring out the purity of one's foods/drugs (which is not done enough) can be a Pareto-efficient improvement on the attention economy of "pivotal actors", and can be done by environmental health specialists (whose skill... | 2024-03-23 |
https://www.lesswrong.com/posts/vdH6NNcaHAgqH4oRB/define-agent-embedded | vdH6NNcaHAgqH4oRB | Define “Agent” (Embedded) | Apollonia | I would be very grateful for as many different attempts at rigorous or semi-rigorous definitions of an “agent” as possible.
Specifically, a definition of an (embedded) Agent that makes it intuitively clear the nature of the boundary between Agent and Environment.
(I have read up on Agency, Reductive Agency, Causality, ... | 2024-03-24 |
https://www.lesswrong.com/posts/XneTcdfGiPZM9ahec/prototyping-pluck-sensors | XneTcdfGiPZM9ahec | Prototyping Pluck Sensors | jkaufman | Harp guitars
are a neat weird instrument that never really took off:
You play it like a normal guitar, with six fretted strings, but there
are also some number of extra harp strings you can pluck.
I see four main downsides:
They're bulky and hard to transport.
You need to tune the strings to match the song.
You don't h... | 2024-03-23 |
https://www.lesswrong.com/posts/3DZkuJMX8Ee7XR75D/why-the-insects-scream | 3DZkuJMX8Ee7XR75D | Why The Insects Scream | omnizoid | Crossposted on my blog.
1 Beneath the giants’ heels
The giants ruled the world and
They reshaped it a lot
Nature warped beneath their hands
Vast palaces were built, but not
Much heed was paid to those below
Those trampled by their feet
They didn’t really care to know
What those tiny beings could be
One day some giant s... | 2024-03-22 |
https://www.lesswrong.com/posts/JkmWBS6LZtXpCYQjz/dangers-of-closed-loop-ai | JkmWBS6LZtXpCYQjz | Dangers of Closed-Loop AI | gworley | In control theory, an open-loop (or non-feedback) system is one where inputs are independent of outputs. A closed-loop (or feedback) system is one where outputs are input back into the system.
In theory, open-loop systems exist. In reality, no system is truly open-loop because systems are embedded in the physical world... | 2024-03-22 |
https://www.lesswrong.com/posts/vnuGLpEMHCRZsNaqj/what-does-autodidact-mean | vnuGLpEMHCRZsNaqj | What does "autodidact" mean? | bhauth | Alice takes a chemistry class at a university. She gets a professor who basically just reads from the textbook during lectures. She reads the textbook on her own, talks to her classmates, and finds some relevant Wikipedia articles and youtube videos.
Bob studies chemistry on his own. He buys the textbook Alice uses bec... | 2024-03-22 |
https://www.lesswrong.com/posts/3m5qCdX7zuBsyAPpb/linkpost-vague-verbiage-in-forecasting | 3m5qCdX7zuBsyAPpb | [Linkpost] Vague Verbiage in Forecasting | TrevorWiesinger | “What does a ‘fair chance’ mean?”
It is a question posed to a diverse group of professionals—financial advisers, political analysts, investors, journalists—during one of Good Judgment Inc’s virtual workshops. The participants have joined the session from North America, the EU, and the Middle East. They are about to get... | 2024-03-22 |
https://www.lesswrong.com/posts/FwFdRmT6oCFWoHS7f/wolf-and-rabbit | FwFdRmT6oCFWoHS7f | Wolf and Rabbit | richard-henage | Rabbit said to Wolf, "I shall make a new animal. It shall stand on two legs and run free across meadows and mountains."
Wolf said, "I will send large beasts to eat your new animal."
Rabbit said, "My animal shall band together and fight off even large beasts. They shall travel together to stay safe."
Wolf said, "I will ... | 2024-03-22 |
https://www.lesswrong.com/posts/cZHezHezooJ4ryiro/benchmarking-llm-agents-on-kaggle-competitions | cZHezHezooJ4ryiro | Benchmarking LLM Agents on Kaggle Competitions | Aidan O'Gara | tl;dr: I prompted ChatGPT to participate in a Kaggle data science competition. It successfully wrote scripts that trained models to predict housing prices, and ultimately outperformed 71% of human participants.
I'm not planning to build a benchmark using Kaggle competitions, but I think a well-executed version could b... | 2024-03-22 |
https://www.lesswrong.com/posts/QpEz8zuRYGbiivaWC/american-acceleration-vs-development | QpEz8zuRYGbiivaWC | American Acceleration vs Development | maxwell-tabarrok | Progress studies and metascience focus on reforms which will accelerate growth in the US. The US is a small portion of the world’s population and these few are already the world’s most prosperous. Development economics and effective altruism put more effort into health and growth interventions in the developing world.
... | 2024-03-22 |
https://www.lesswrong.com/posts/oMkTwXwi2aqgib66m/the-pyromaniacs | oMkTwXwi2aqgib66m | The Pyromaniacs | ted-sanders | In a dry field, a band of feral children crowd around a dim glow. The fire is small, but it’s the first they’ve ever made. Until now, most doubted it possible. Fire, from rubbed sticks? But the flicker and crackle is proof.
One child raises her voice.
“This is dangerous. It might set the field ablaze. Let’s stop.”
The ... | 2024-03-22 |
https://www.lesswrong.com/posts/vvg6DmJSprDLNhW3v/my-simple-agi-investment-and-insurance-strategy | vvg6DmJSprDLNhW3v | My simple AGI investment & insurance strategy | lc | TL;DR:
Options traders think it's extremely unlikely that the stock market will appreciate more than 30 or 40 percent over the next two to three years, as it did over the last year. So they will sell you the option to buy current indexes for 30 or 40% above their currently traded value for very cheap.But slow takeoff, ... | 2024-03-31 |
https://www.lesswrong.com/posts/ECnLBSxw4TvpWPnae/ai-model-registries-a-regulatory-review | ECnLBSxw4TvpWPnae | AI Model Registries: A Regulatory Review | deric-cheng | This article is the third in a series of ~10 posts comprising a 2024 State of the AI Regulatory Landscape Review, conducted by the Governance Recommendations Research Program at Convergence Analysis. Each post will cover a specific domain of AI governance (e.g. incident reporting, safety evals, model registries, etc.).... | 2024-03-22 |
https://www.lesswrong.com/posts/JJXWWcSuXZkkofFAJ/templates-i-made-to-run-feedback-rounds-for-ethan-perez-s | JJXWWcSuXZkkofFAJ | Templates I made to run feedback rounds for Ethan Perez’s research fellows. | ResentHighly | TL;DR: I'm releasing my templates to make running feedback rounds easy for research teams that might otherwise neglect to set it up.
The main questions on my feedback form template
Why I wrote this post:
Feedback is my job: My role on research projects mentored by Ethan is somewhere between a people manager and a resea... | 2024-03-28 |
https://www.lesswrong.com/posts/BYAo8Yvxg2tsCY2Me/vernor-vinge-who-coined-the-term-technological-singularity | BYAo8Yvxg2tsCY2Me | Vernor Vinge, who coined the term "Technological Singularity", dies at 79 | Kaj_Sotala | On Wednesday, author David Brin announced that Vernor Vinge, sci-fi author, former professor, and father of the technological singularity concept, died from Parkinson's disease at age 79 on March 20, 2024, in La Jolla, California. The announcement came in a Facebook tribute where Brin wrote about Vinge's deep love for ... | 2024-03-21 |
https://www.lesswrong.com/posts/sY3a4Rfa48CgteBEm/chatgpt-can-learn-indirect-control | sY3a4Rfa48CgteBEm | ChatGPT can learn indirect control | Raymond D | Here's a very neat twitter thread: the author sends various multimodal models screenshots of the conversation he's currently having with them, and asks them to describe the images. Most models catch on fast: the author describes this as them passing the mirror test.
I liked the direction, so I wanted to check if ChatGP... | 2024-03-21 |
https://www.lesswrong.com/posts/7GmDs4BqrFW3kk4nP/how-i-select-alignment-research-projects | 7GmDs4BqrFW3kk4nP | How I select alignment research projects | ethan-perez | Youtube Video
Recently, I was interviewed by Henry Sleight and Mikita Balesni about how I select alignment research projects. Below is the slightly cleaned up transcript for the YouTube video.
Introductions
Henry Sleight: How about you two introduce yourselves?
Ethan Perez: I'm Ethan. I'm a researcher at Anthropic and ... | 2024-04-10 |
https://www.lesswrong.com/posts/BaEQoxHhWPrkinmxd/announcing-neuronpedia-platform-for-accelerating-research | BaEQoxHhWPrkinmxd | Announcing Neuronpedia: Platform for accelerating research into Sparse Autoencoders | hijohnnylin | This posts assumes basic familiarity with Sparse Autoencoders. For those unfamiliar with this technique, we highly recommend the introductory sections of these papers.
TL;DR
Neuronpedia is a platform for mechanistic interpretability research. It was previously focused on crowdsourcing explanations of neurons, but we’ve... | 2024-03-25 |
https://www.lesswrong.com/posts/ukTLGe5CQq9w8FMne/inducing-unprompted-misalignment-in-llms | ukTLGe5CQq9w8FMne | Inducing Unprompted Misalignment in LLMs | sven | Emergent Instrumental Reasoning Without Explicit Goals
TL;DR: LLMs can act and scheme without being told to do so. This is bad.
Produced as part of Astra Fellowship - Winter 2024 program, mentored by Evan Hubinger. Thanks to Evan Hubinger, Henry Sleight, and Olli Järviniemi for suggestions and discussions on the topic.... | 2024-04-19 |
https://www.lesswrong.com/posts/Be53gAEysXeCXhaB2/comparing-alignment-to-other-agi-interventions-extensions-1 | Be53gAEysXeCXhaB2 | Comparing Alignment to other AGI interventions: Extensions and analysis | martinsq | In the last post I presented the basic, bare-bones model, used to assess the Expected Value of different interventions, and especially those related to Cooperative AI (as distinct from value Alignment). Here I briefly discuss important enhancements, and our strategy with regards to all-things-considered estimates.
I de... | 2024-03-21 |
https://www.lesswrong.com/posts/DhjcdzTyqHte2v6bu/deep-learning-is-function-approximation | DhjcdzTyqHte2v6bu | "Deep Learning" Is Function Approximation | Zack_M_Davis | A Surprising Development in the Study of Multi-layer Parameterized Graphical Function Approximators
As a programmer and epistemology enthusiast, I've been studying some statistical modeling techniques lately! It's been boodles of fun, and might even prove useful in a future dayjob if I decide to pivot my career away fr... | 2024-03-21 |
https://www.lesswrong.com/posts/asfqtMijjTjggedyD/transformative-ai-and-scenario-planning-for-ai-x-risk | asfqtMijjTjggedyD | Transformative AI and Scenario Planning for AI X-risk | elliot | This post is part of a series by the AI Clarity team at Convergence Analysis. In our previous post, Corin Katzke reviewed methods for applying scenario planning methods to AI existential risk strategy. In this post, we want to provide the motivation for our focus on transformative AI.
Overview
We argue that “Transforma... | 2024-03-22 |
https://www.lesswrong.com/posts/5FQbd2QZLthcp9DxJ/the-comcast-problem | 5FQbd2QZLthcp9DxJ | The Comcast Problem | RamblinDash | I've noticed that a lot of companies provide really valuable services yet almost inevitably are hated by consumers. I call this "the Comcast Problem," though it isn't limited to Comcast. Companies face this problem when they provide access to things consumers want, but they aren't themselves the goal.
When I consume gr... | 2024-03-21 |
https://www.lesswrong.com/posts/Btu349AKoEEhaqLk5/a-teacher-vs-everyone-else | Btu349AKoEEhaqLk5 | A Teacher vs. Everyone Else | ronak69 | A repairer wants your stuff to break down,
A doctor wants you to get ill,
A lawyer wants you to get in conflicts,
A farmer wants you to be hungry,
But there is only a teacher who wants you to learn.
Of course you see what is wrong with the above "argument / meme / good-thought". But the first time I came across this me... | 2024-03-21 |
https://www.lesswrong.com/posts/y9if8ieQGNwZRaXCA/static-vs-dynamic-alignment | y9if8ieQGNwZRaXCA | Static vs Dynamic Alignment | Gracie Green | In this paper, I outline the following claims:
We can divide AI Alignment into two types: "static" or "dynamic". Static alignment (reflecting desires at training) and dynamic alignment (adapting to current desires) are often conflated.Static alignment is more likely. Static alignment requires a simpler training procedu... | 2024-03-21 |
https://www.lesswrong.com/posts/gvNnE6Th594kfdB3z/on-green | gvNnE6Th594kfdB3z | On green | joekc | (Cross-posted from my website. Podcast version here, or search for "Joe Carlsmith Audio" on your podcast app.
This essay is part of a series that I'm calling "Otherness and control
in the age of AGI." I'm hoping that the individual essays can be read
fairly well on their own, but see
here
for brief summaries of the ess... | 2024-03-21 |
https://www.lesswrong.com/posts/iH5Sejb4dJGA2oTaP/ai-56-blackwell-that-ends-well | iH5Sejb4dJGA2oTaP | AI #56: Blackwell That Ends Well | Zvi | Hopefully, anyway. Nvidia has a new chip.
Also Altman has a new interview.
And most of Inflection has new offices inside Microsoft.
Table of Contents
Introduction.
Table of Contents.
Language Models Offer Mundane Utility. Open the book.
Clauding Along. Claude continues to impress.
Language Models Don’t Offer Mundane Ut... | 2024-03-21 |
https://www.lesswrong.com/posts/CCBaLzpB2qvwyuEJ2/deepmind-evaluating-frontier-models-for-dangerous | CCBaLzpB2qvwyuEJ2 | DeepMind: Evaluating Frontier Models for Dangerous Capabilities | Zach Stein-Perlman | To understand the risks posed by a new AI system, we must understand what it can and cannot do. Building on prior work, we introduce a programme of new “dangerous capability” evaluations and pilot them on Gemini 1.0 models. Our evaluations cover four areas: (1) persuasion and deception; (2) cyber-security; (3) self-pro... | 2024-03-21 |
https://www.lesswrong.com/posts/E5y6L6BybbA6K4skY/an-affordable-co2-monitor-1 | E5y6L6BybbA6K4skY | An Affordable CO2 Monitor | dylan-mahoney | I found a seemingly-high-quality CO2 monitor here for less than $50, or on Amazon or from the manufacturer's website for $70.
The most recent discussions of different CO2 monitors on the market I could find on LessWrong[1] were a few years old, and it seems that prices have come down somewhat since then. I found this p... | 2024-03-21 |
https://www.lesswrong.com/posts/xxqmk9xjJwCexewqR/where-are-the-contra-dances | xxqmk9xjJwCexewqR | Where are the Contra Dances? | jkaufman | About ten years ago
I made a heatmap of
contra dances, which I've
kept up to date.
I recently had a request from someone who'd like to print out a copy
for a poster, and while the one I built on the Google Maps API doesn't
have a way to generate a high-resolution version it seemed like fun to
make one from scratch:
Thi... | 2024-03-21 |
https://www.lesswrong.com/posts/GoWQT2cPuvLBgXuF9/many-kinds-of-work-one-could-do-to-make-ai-go-better-and-a | GoWQT2cPuvLBgXuF9 | Slim overview of work one could do to make AI go better (and a grab-bag of other career considerations) | Chi Nguyen | null | 2024-03-20 |
https://www.lesswrong.com/posts/NrozRodaByBxqHRru/how-does-ai-solve-problems-1 | NrozRodaByBxqHRru | How does AI solve problems? | dom-polsinelli | Epistemic Status: This is mostly just something that has been bouncing around in my mind, this is not a detailed well research essay with many citations about modern interpretability. I go back and forth on this being so obvious everyone here already implicitly agrees and it being something that was handily debunked a ... | 2024-03-20 |
https://www.lesswrong.com/posts/ApZJy3NKfW5CkftQq/on-the-gladstone-report | ApZJy3NKfW5CkftQq | On the Gladstone Report | Zvi | Like the government-commissioned Gladstone Report on AI itself, there are two sections here.
First I cover the Gladstone Report’s claims and arguments about the state of play, including what they learned talking to people inside the labs. I mostly agree with their picture and conclusions, both in terms of arguments... | 2024-03-20 |
https://www.lesswrong.com/posts/mMEbfooQzMwJERAJJ/natural-latents-the-concepts | mMEbfooQzMwJERAJJ | Natural Latents: The Concepts | johnswentworth | Suppose our old friends Alice and Bob decide to undertake an art project. Alice will draw a bunch of random purple and green lines on a piece of paper. That will be Alice’s picture (A). She’ll then make a copy, erase all the purple lines, and send the result as a message (M) to Bob. Bob then generates his own random pu... | 2024-03-20 |