| Column | Type | Details |
| --- | --- | --- |
| id | string | length 36 |
| source | string | 15 distinct values |
| formatted_source | string | 13 distinct values |
| text | string | length 2 to 7.55M |
6b718cf9-81a2-4b69-9672-120ca97cb9cd
StampyAI/alignment-research-dataset/eaforum
Effective Altruism Forum
AIのタイムライン ─ 提案されている論証と「専門家」の立ち位置 *This is a Japanese translation of “*[***AI Timelines: Where the Arguments, and the "Experts," Stand***](https://forum.effectivealtruism.org/posts/7JxsXYDuqnKMqa6Eq/ai-timelines-where-the-arguments-and-the-experts-stand)*”* by [Holden Karnofsky](https://forum.effectivealtruism.org/users/holdenkarnofsky)2021年9月8日 *オーディオ版が*[*Cold Takes*](https://www.cold-takes.com/where-ai-forecasting-stands-today)*で利用可能です(あるいはStitcher, Spotify, Google Podcasts, etc.で「Cold Takes Audio」を検索してみてください) 。*[[1]](#fn95qf2aqwti4) ![Image](https://res.cloudinary.com/cea/image/upload/f_auto,q_auto/v1/mirroredImages/ZcGLsL6kuHMGWsBjp/mupnjnvj99buwg4dwrqx)この記事でははじめに、この連載中の以前の記事で扱った複数の視点から、変革的AIの開発時期がいつ頃になるのかを要約的に説明します。 次いで「この話題について専門家の間に揺るぎないコンセンサスがないのはなぜなのか、またこの事実は私たちにとって何を意味するのか」という問題を検討します。 私の見積もりはこうです。**15年以内(2036年迄)に変革的AI(transformative AI)が出現する確率は10%以上、40年以内(2060年迄)であれば約50%、今世紀中(2100年迄)であれば約2/3の確率がある。** (「変革的AI」ということで「私たちが質的に異なる、新たな時代に突入させることになるほど強力なAI」のことを私は意味しています。私が [PASTA](https://www.cold-takes.com/transformative-ai-timelines-part-1-of-4-what-kind-of-ai/) と呼ぶものに特に焦点をあてます。これは、科学的・技術的進展の速度を増加させるためのすべての人間活動を本質的に自動化しうるAIシステムのことです。PASTAは、[生産性の爆発的な向上](https://www.cold-takes.com/transformative-ai-timelines-part-1-of-4-what-kind-of-ai/#explosive-scientific-and-technological-advancement)とともに[逸脱したAIに由来するリスク](https://www.cold-takes.com/transformative-ai-timelines-part-1-of-4-what-kind-of-ai/#misaligned-ai-mysterious-potentially-dangerous-objectives)の可能性をもたらすために、今世紀を[最も重要な世紀](https://www.cold-takes.com/roadmap-for-the-most-important-century-series/)とするのに十分なもので[ありうる](https://www.cold-takes.com/transformative-ai-timelines-part-1-of-4-what-kind-of-ai/#impacts-of-pasta)と、私は論じています。 これが、AIの発展予測にアプローチする様々な、異なる視点からの技術的な報告を踏まえて、私が辿り着いた結論の概要です。そうした報告の多くは、長期主義的な助成金提供を考えるために、変革的AIの発展予測に関する徹底した描像を描こうとする過程で、ここ数年間で[オープン・フィランソロピー](https://www.openphilanthropy.org/)が作成してきたものです。 こちらは、私が検討してきた、変革的AIの予測についての異なる視点を**ひとつの表にまとめたもの**です。[以前の一連の投稿](https://www.cold-takes.com/roadmap-for-the-most-important-century-series/#forecasting-transformative-ai-this-century)で展開したより詳細な議論と、裏付けとなる技術的な報告へのリンクも合わせて載せています。 | **予測の観点** | **詳細を論じた重要記事(タイトルは省略されている)** | **私が引き出した結論** | | --- | --- | --- | | **変革的AIに関する確率推定** | | [**専門家対象の調査**](https://www.cold-takes.com/are-we-trending-toward-transformative-ai-how-would-we-know/#surveying-experts)。AI研究者の見積もりは? 
| [AIの専門家から得たエビデンス](https://arxiv.org/pdf/1705.08807.pdf) | 専門家対象の調査から得られた示唆によれば[1](https://forum.effectivealtruism.org/posts/7JxsXYDuqnKMqa6Eq/ai-timelines-where-the-arguments-and-the-experts-stand#fn1)2036年迄は約20%の確率、2100年迄は70%の確率で変革的AIが登場する。(少数の回答者への)設問の表現を僅かに変えただけでも、推定されるタイミングはかなり後ろにずれた。 | | [**生物学的アンカー・フレームワーク**](https://www.cold-takes.com/forecasting-transformative-ai-the-biological-anchors-method-in-a-nutshell/)「AIのトレーニング」にかかるコストの通常のパターンに基づくと、人間の脳ほどの大きさのAIモデルを人間に行える最も困難な課題を遂行できるまで訓練するには、どれほどのコストが必要になるのか。また、誰かがAIにそのような訓練を施すことができるくらいにコストが下がるのはいつごろか。 | [ブレインコンピューティング](https://www.openphilanthropy.org/blog/new-report-brain-computation)に基づく[生物学的アンカー・フレームワーク](https://drive.google.com/drive/u/1/folders/15ArhEPZSTYU8f012bs6ehPS6-xmhtBPP) | 2036年迄の確率は10%より大きく、2055年迄は約50%、2100年迄は約80% | | [証明責任](https://www.cold-takes.com/forecasting-transformative-ai-whats-the-burden-of-proof/)の観点 | | どの任意の世紀においても、その世紀が「最も重要な」世紀である確率は低い。(詳しくは[こちら](https://forum.effectivealtruism.org/posts/7JxsXYDuqnKMqa6Eq/ai-timelines-where-the-arguments-and-the-experts-stand#most-important-century-skepticism)) | [岐路](https://static1.squarespace.com/static/5506078de4b02d88372eee4e/t/5f36b015d9a3691ba8e1096b/1597419543571/Are+we+living+at+the+hinge+of+history.pdf)、[「岐路」論文への反論](https://forum.effectivealtruism.org/posts/j8afBEAa7Xb2R9AZN/thoughts-on-whether-we-re-living-at-the-most-influential) | AIの詳細を尋ねる以前にも、この世紀が「特別」だと考えられる理由は多数ある。その多くは以前の記事で扱ってきたし、その他の理由は次の行で扱われている。 | | (a)変革的AIの開発作成に人びとが取り組んできた年数と(b)それに対するこれまでの「投資」の規模(AI研究者の数と彼らの計算量)、(c)人びとが変革的AIをすでに開発したかどうか(これまでしてないか)に関する基本的情報だけを前提として、変革的AIのタイムラインはどう予測されるのか。(詳しくは[こちら](https://forum.effectivealtruism.org/posts/7JxsXYDuqnKMqa6Eq/ai-timelines-where-the-arguments-and-the-experts-stand#semi-informative-priors)) | [半情報事前確率](https://www.openphilanthropy.org/blog/report-semi-informative-priors) | 主な推定値2036年迄が8%2060年迄が13%2100年迄が20%[2](https://forum.effectivealtruism.org/posts/7JxsXYDuqnKMqa6Eq/ai-timelines-where-the-arguments-and-the-experts-stand#fn2)私見では、この報告はAIの歴史が短く、AIへの投資が急速に増加しているという事実を反映している。したがって、変革的AIが今すぐ開発されるとしても、それほど驚くべきではない。 | | 経済モデルの分析と経済史に基づくと世界経済の年間成長率30%以上として定義される「爆発的成長(explosive growth)」が2100年までに起こる確率はどのくらいか。これは、この結論を疑ったほうがいいくらいに「正常な」値から逸脱しているだろうか。(詳しくは[こちら](https://forum.effectivealtruism.org/posts/7JxsXYDuqnKMqa6Eq/ai-timelines-where-the-arguments-and-the-experts-stand#economic-growth)) | [爆発的成長](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth)、[人類の歩み](https://www.openphilanthropy.org/blog/modeling-human-trajectory) | 「[人類の歩み](https://www.openphilanthropy.org/blog/modeling-human-trajectory)」は過去のデータのみに基づいて未来を予測し、2043-2064年迄に爆発的成長が起こると示唆している。「[爆発的成長](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth)」は次のように結論付けている。「経済学的考察によっては、変革的AI(TAI)が今世紀中に開発される可能性を棄却する適切な理由は見つかりませんでした。実のところ、十分に発達したAIシステムが爆発的成長をもたらすと予測する妥当な経済学的見解も存在します。」 | | 「過去に...人びとはAIについてどのような予測を立ててきたのか。また、これまで立てられてきた予測から観察できるパターンに合わせて、今日の我々が抱いている見解を修正すべきだろうか。... 
過去、AIは繰り返し持ち上げられ過ぎてきたし、それゆえ今日の予測も、楽観的過ぎる可能性が高いという見解に出会ったことがある....」(詳しくは[こちら](https://forum.effectivealtruism.org/posts/7JxsXYDuqnKMqa6Eq/ai-timelines-where-the-arguments-and-the-experts-stand#history-of-)) | [AIの発展に関する過去の予測](https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/what-should-we-learn-past-ai-forecasts) | 「AIの過剰な喧伝は1956-1973年の間に行われていたようだ。それでも、この期間になされた最も有名なAI予測の一部について言えば、それが過大広告だというのはたいてい、大袈裟である。」 | 透明性を確保するために、多くの技術的な報告が [Open Philanthropy](https://www.openphilanthropy.org/) による分析であること、私は Open Philanthropyの共同最高経営責任者(co-CEO)であることを注記しておく。 以上の考察を踏まえても、読者の一部はまだ落ち着かない気持ちを抱いているのではないかと予測する。私の議論が理にかなっていると考えていたとしても、次のように考えるかもしれない。**これが正しいなら、なぜもっと議論され、人びとに受け止められていないのだろうか。専門家たちはどんな意見をもっているのか。** 現時点の専門家の意見を、私は以下のように要約する。 * 私の主張は専門家の間のコンセンサスのどれにも**矛盾**しない。(実際、一列目が示しているように、私が提示した確率は、AI研究者たちの予測と思われるものから乖離しているわけではない。)しかし[専門家達がこの問題について真剣に考えていないことを示す徴候](https://www.cold-takes.com/are-we-trending-toward-transformative-ai-how-would-we-know/#surveying-experts)がいくつか存在する。 * 私が典拠としてきたオープン・フィランソロピーの技術的な報告は、外部の専門家から相当程度のレビューを経ています。機械学習の専門家が「[生物学的アンカー](https://drive.google.com/drive/u/1/folders/15ArhEPZSTYU8f012bs6ehPS6-xmhtBPP)」を、神経科学者が「[ブレインコンピューティング](https://www.openphilanthropy.org/blog/new-report-brain-computation)」を、経済学者が「[爆発的成長](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth)」を、不確実性と/または確率の分野で関連する話題を扱っている学者が「[半情報事前確率](https://www.openphilanthropy.org/blog/report-semi-informative-priors)」をそれぞれレビューしています。[2](https://forum.effectivealtruism.org/posts/7JxsXYDuqnKMqa6Eq/ai-timelines-where-the-arguments-and-the-experts-stand#fn2)(こうしたレビューの一部には重要な論点で意見の相違がありますが、そうした論点のどれも、専門家の間や文献に見つけられる明確なコンセンサスに、当該の報告が矛盾する事例にはなっていないように思われます。) * しかし例えば気候変動に対して行動を起こす必要を支持するのと同様の仕方で、「2**036年迄に変革的*****AI*****が開発される確率は少なくとも10%ある**」とか「**私たちが人類にとって最も重要な世紀に生きている確率はかなり高い**」などの主張を支持する積極的で、揺るぎないコンセンサスが、専門家たちの間にあるわけでもありません。 つまるところ私の主張は、**その研究に専心する専門家がいない分野を話題**とするものです。**これは、このこと自体が既に恐ろしい事実であり**、早晩変わって欲しいと私が願うことです。 とはいえ、そうなる前にも、「最も重要な世紀」仮説に基づいて行動しようとすべきなのでしょうか。 以下で私が論じるのは次の項目です。 * 「AI発展予測分野(AI forecasting field)」はどんなものでありうるか。 * この話題に関する今日の議論は少なすぎるし、一様で、蛸壺化しており(これには私も同意します)、したがって成熟し、より堅固な分野が登場するまで我々は[「最も重要な世紀」仮説](https://www.cold-takes.com/roadmap-for-the-most-important-century-series/)に基づいた行動をすべきではない(私はこれには同意しかねます)という「懐疑的見解」 * 成熟し、より堅固な分野が登場するまでの間にも、「最も重要な世紀」仮説を真剣に受け取るべき理由は、以下の通りです。 + 専門家の間に揺るぎないコンセンサスが形成されるのを待つ時間はない。 + 優れた反論があるとしても ── あるいは未来の専門家が優れた反論を展開する可能性があるとしても ── そのような反論を我々はまだ見つけていない。仮説がより真剣に受け取られれば受け取られるほど、そのような反論が現れる可能性もより高くなる。(またの名を[カニンガムの法則(Cunningham’s Law)](https://bigthink.com/david-ryan-polgar/want-the-right-answer-online-dont-ask-questions-just-post-it-wrong)という。それによれば「正しい答えを得る最善の方法は間違った答えをインターネットに投稿することだ」。) + 専門家の間の揺るぎないコンセンサスに一貫してこだわり続けることは、危険な推論パターンだと考えます。私の考えでは、自分勝手な思い込みや蛸壺化に陥るリスクがいくらかあっても、最も重要なタイミングで正しいことができれば問題はありません。 **AIの発展予測に必要なのは、どの分野の専門的知識か** ----------------------------- [上に](https://forum.effectivealtruism.org/posts/7JxsXYDuqnKMqa6Eq/ai-timelines-where-the-arguments-and-the-experts-stand#SummaryTable)挙げた技術的な報告が分析した問いには、例えば以下のものがあります。 * AIの能力は、時間とともに進化しているのか。(AI、AI史) * AIモデルを動物/人間の脳と比較するとどうなるか。(AI、神経科学) * AIの能力を動物の能力と比較するとどうなるか。(AI、動物行動学) * 過去のAIシステムの訓練についての情報に基づくと、難しいタスクのための大規模なAIシステムの訓練にかかる出費について、どのような推定が立てられるか。(AI、曲線あてはめ) * これまでにこの分野につぎ込まれてきた年数/人員/資金に基づくと、変革的AIについて、最小限の情報からどのような推定が得られるか。(哲学、確率論) * 理論や歴史的傾向に基づくと、今世紀に爆発的経済成長が起こる確率はどれほどか。(成長経済学、経済史) * 過去「AIの過剰な喧伝(AI hype)」はどのようなものだったのか。(歴史学) 
「最も重要な世紀」に変革的AIがもつ広範囲に渡る含意について語るとき、私は「[デジタル化した人間](https://www.cold-takes.com/digital-people-faq/#feasibility)や[銀河中の植民地化](https://www.cold-takes.com/how-digital-people-could-change-the-world/#space-expansion)の実行可能性」などを論じてきました。これらは物理学や神経科学、工学、心の哲学等々に触れる話題です。 **変革的AIの登場がいつになると予測できるのかという問題、あるいは、私たちが最も重要な世紀にいるかどうかという問題の専門家になるための職や資格は存在しません。** (特に、この予測に関しては、もっぱらAI研究者に頼るべきだというどんな主張も、私には受け入れがたいです。[この話題についてAI研究者はそれほど真剣に考えてないように思われる](https://www.cold-takes.com/are-we-trending-toward-transformative-ai-how-would-we-know/#surveying-experts)だけでなく、これまでになく強力なAIモデルの構築を専門とする人びとに頼って、変革的AIがいつ登場するかを教えてもらおうとするのは、太陽光エネルギー関連の研究開発企業 ── あるいは、あなたの見方次第では、原油掘削会社 ── に頼って、二酸化炭素の排出や気候変動を予測してもらうようなものです。AIの研究者たちが全体のピースになることは確かですが、しかし予測というのは、最先端のシステムを発明、構築することとは区別される活動です。) 加えて、こうした問題がひとつの学術分野の体裁をとるかどうかも、私には定かでありません。変革的AIを予測しようとすること、あるいは、私たちが最も重要な世紀にいる確率を見極めようとすることは、 * アカデミックな政治学(「政府と憲法はいかに相互に作用しあうのか」)よりも、[538モデル](https://projects.fivethirtyeight.com/2020-election-forecast/)(「バイデンとトランプのどちらが代表選を勝ち抜くのか」)に似ている。 * アカデミックな経済学(「なぜ景気後退が存在するのか?」)よりも、金融市場での取引(「この価格は将来、増えるのか、減るのか?」)に似ている。[3](https://forum.effectivealtruism.org/posts/7JxsXYDuqnKMqa6Eq/ai-timelines-where-the-arguments-and-the-experts-stand#fn3) * アカデミックな開発経済学(「貧困の原因は何であり、貧困はどのような要因によって減るのか」)よりも、[GiveWell](https://www.givewell.org/) の研究(「1ドル当りで人びとに役立つことが最も多大であるのはどの慈善活動か」)に似ている。[4](https://forum.effectivealtruism.org/posts/7JxsXYDuqnKMqa6Eq/ai-timelines-where-the-arguments-and-the-experts-stand#fn4) つまり、変革的AIの発展予測や「最も重要な世紀」に関する専門知にとっての自然な「住処となる学術機関(institutional home)」がどのような外観を呈するのかが、私には明らかではないのです。とはいえ、この種の問いに献身する大規模で強固な学術機関が存在しないと言うに留めておくのが、適当でしょう。 **専門家の間に揺るぎないコンセンサスが欠けている場合****私たちはどう振る舞うべきか** ---------------------------------------------- ### **懐疑的見解** 専門家の間に揺るぎないコンセンサスが欠けているため、一部の(実はほとんどの)人びとはどのような議論が提示されようとも、疑いの眼差しを向けるだろうと予測します。 極めて一般的に見られる懐疑的な反応の内、私がある程度の共感を抱いているのは次のものです。 1. 議論全体が[大胆](https://www.cold-takes.com/forecasting-transformative-ai-whats-the-burden-of-proof/#formalizing-the-)過ぎる。 2. 最も重要な世紀に生きているなどという派手な主張をしているが、これは**自分勝手な思い込みに陥っている人間の行動と合致している**。 3. [注目すべき](https://www.cold-takes.com/all-possible-views-about-humanitys-future-are-wild/)、[不安定な](https://www.cold-takes.com/this-cant-go-on/)時代に生きていると考えられる仕方は様々あるのだから、[証明責任](https://www.cold-takes.com/forecasting-transformative-ai-whats-the-burden-of-proof/)がそんなに高いものであるべきではないとあなたは論じているが...そうした主張や、AIについてのあなたの主張、あるいは正直なところ、こうした大胆な話題に関してはどんなことについても、自分がそれを評価できるとは思わない。 4. こうした議論に従事する人があまりに少ないことが心配だ。つまりこの議論が**小さな、同質的なグループ内の内輪の議論**になっていないかを懸念している。全体的に現状は、賢い人たちが自分たちが歴史上、どの位置を占めるのかについての物語を ── それを合理化するためにチャートや数字をふんだんに使って ── 仲間内で語り合っているだけのように感じられる。「現実」感がない。 5. 
というわけで、何百あるいは何千人だろうか、そのくらいの数の専門家が互いに批判し合い、評価し合うまでに分野が成熟し、気候変動について私たちが目撃しているのと同程度のコンセンサスに専門家たちが達したら、そのときにまた声をかけてほしい。 あなたがこんな風に感じるのもわかるし、私自身、ときたま同じように感じてきました ── 特に1-4番目の点についてはそうです。しかし**5番目の論点が正しくないと考えられる3つの理由**を指摘します。 ### **理由1 専門家の間の揺るぎないコンセンサスを待つ時間はない。** 変革的AIの到来は COVID-19パンデミックのよりゆっくりとした、しかしよりリスクの高いバージョンのようなものとして起こるのではないかと私は心配しています。今日利用可能な最善の情報と分析結果を観れば、何かしら大きなことが起こるという予測を支持する事実は存在します。しかしこの状況はかなりの範囲にわたって馴染みのないものです。私たちの制度が常日頃から扱っているパターンに合わないのです。しかも、どの追加の活動一年分も貴重です。 変革的AIの到来は、気候変動にあったダイナミクスの速度が増したバージョンだと考えることもできます。温室効果ガスの排出が([18世紀中盤](https://ourworldindata.org/co2-and-other-greenhouse-gas-emissions)ではなく)最近になって始まったばかりだったとして[5](https://forum.effectivealtruism.org/posts/7JxsXYDuqnKMqa6Eq/ai-timelines-where-the-arguments-and-the-experts-stand#fn5)、また、気候科学という分野が確立されていなかったとしたら、どうなるか想像してみてください。排出量の削減に努める前に研究分野が確立されるのを何十年も待つというのは全くもって良くない考えでしょう。 ### **理由2**[**カニンガムの法則**](https://bigthink.com/david-ryan-polgar/want-the-right-answer-online-dont-ask-questions-just-post-it-wrong)**(「正しい答えを得る最善の方法は、誤った答えを投稿することだ」)に従うのが、こうした議論に含まれる欠点を見つける方法としては最も見込みがある。** 私は真剣ですよ。 数年前に、私と[何人かの同僚たち](https://www.cold-takes.com/roadmap-for-the-most-important-century-series/#acknowledgements)は、「最も重要な世紀」仮説がもしかすると真でありうるのではないかと考えました。しかしこの仮説に基づいて行動を起こし過ぎるその前に、この仮説に致命的な欠陥がないかどうかを確かめたいとも考えました。 過去数年間で私たちがしてきたことは、**あたかも「最も重要な世紀」仮説が誤っていることを明らかにするためにできるあらゆることをしてきたのだと**解釈することもできます。 第一に、私たちは重要な論証についてAI研究者や経済学者等々、なるべく色々な人びとと話してきました。しかし * この連載中に当の論証(そのほとんど、またはすべてが、[他人から拝借した](https://www.cold-takes.com/roadmap-for-the-most-important-century-series/#acknowledgements)ものだ)について曖昧な理解しかもっていませんでした。そうした論証を歯切れよく、具体性をもって述べることはできなかったのです。 * 後で裏付けをとるつもりだったが、[6](https://forum.effectivealtruism.org/posts/7JxsXYDuqnKMqa6Eq/ai-timelines-where-the-arguments-and-the-experts-stand#fn6) 決定的な結論を下せず、批判に晒すために提示することができなかった重要な、事実に関わる論点が多数存在する。 * 全体的に見て私たちは、他の人びとが決着をつけられる機会を与えられるほど十分に、具体例を説明できたわけではない。 以上の事情から私たちは、重要な論証の多くについて、技術的な内容の報告を作成することに力を注ぎました。(これらの報告は現在公開されています。この投稿の最上部にある表を参照してください。)これによって私たちは論証を公開することができ、決定的な反論を迎える用意ができました。 それから私たちは、外部専門家に意見を求めました。[7](https://forum.effectivealtruism.org/posts/7JxsXYDuqnKMqa6Eq/ai-timelines-where-the-arguments-and-the-experts-stand#fn7) あくまで自分自身の意見を言わせてもらえるなら、「最も重要な世紀」仮説は、以上すべての過程を経てもなお妥当な仮説として残り続けているように思われます。実際、様々な観点から、より詳細を詰めていった後で、私は以前よりも強く、この仮説が正しいと考えるようになっています。 しかしそう思えるのは、**真の**専門家たち ── 破壊的な反論を手にしているが私たちにはまだ見つけられていない人びと ── には問題全体があまりに愚かしくみえ、[あえて真剣に関わろうとしていない](https://philiptrammell.com/blog/46/)からだとしましょう。あるいは、いま生きている誰かが**いつか**こうした話題に関する専門家となって、問題の論証を撃破するとしましょう。それが起こるために〔つまり、真の専門家が真剣に議論に参加し始めたり、誰かが決定的な反論を思いつくために〕私たちに何ができるでしょうか。 私が思いつく最善の答えは「この仮説がもっと有名になり、より広く受け入れられ、より影響力をもつなら、もっと批判的に検討されるようになるだろう」というものです。 この連載はその方向で ── 「最も重要な世紀」仮説に関するより広範囲からの信用を得る方向へ ── 舵を切ろうと試みるものです。仮説が正しかったとしたら、この試みも善いことであるでしょう。私の唯一の目標が、私の信念に挑戦し、それが偽であることを知ることにあったとしても、この試みは次に取るべき最善の一手であるようにも思われます。 もちろん、もしあなたに「最も重要な世紀」仮説が正しいように思えないなら、この仮説を受け入れたり、推し進めたりしろと言うつもりはありません。それでも、もしあなたが踏みとどまる理由が、専門家の間に揺るぎないコンセンサスがないという**それだけ**であるとしたら、現状を無視し続けるのはおかしいように私には思えます。もしみんながみんなこの仕方で振る舞ったとしたら(つまり、どんな仮説も、揺るぎないコンセンサスに支えられていなければ無視するのだとしたら)正しい仮説も含めて一体どんな仮説が周縁的な位置から、広く受け入れるようになるのか分からないように私には思われます。 ### **理由3 これほど全般的な懐疑主義は悪手であるように思われる。** 私が[GiveWell](http://www.givewell.org/)に力を入れていた頃、事あるごとに、人から大体次の趣旨のことを言われたものです。「あらゆる議論を、GiveWellがトップの慈善団体におく水準に保つ ── ランダム化比較試験、揺るぎない経験的データ等々を求める ── ことはできない。善いことを行う最高の機会には、あまり明白でないようなものもある  ── そのため、GiveWellのこの基準では、[インパクトをもつための最大の潜在的機会を一部、逃してしまう](https://www.openphilanthropy.org/blog/hits-based-giving#Anti-principles_for_hits-based_giving)ことになる。」 
これは正しいと私も考えます。推論やエビデンスに関する基準についての自分の一般的なアプローチをチェックして「自分のアプローチでは上手くいかないが、自分のアプローチが成功してほしいと真に思うシナリオはどのようなものか」と尋ねるのが重要であるように思われます。私の見解では、**最も重要なタイミングで正しいことを行えるなら、自分勝手な思い込みに陥ったり、蛸壺化する一定程度のリスクを負うことに問題はありません**。 専門家の間に揺るぎないコンセンサスがないこと ── そして自分勝手な思い込みや蛸壺化することへの懸念 ── は、「最も重要な世紀」仮説をすぐさま受け入れるよりも、**その粗を隈なく探す**良い理由になります。まだ見つかっていない欠陥がないかどうかを尋ね、私たち自身をつけあがらせる偏見を探し出し、この論証で最も疑問の余地があるように思われる部分を研究する、などのことを行うことができます。 しかし、この問題をあなたにとって理に適う/実際的な程度に探求したことがあるとしたら ── そして「専門家の間に揺るぎないコンセンサスがない」とか「自分勝手な思い込みに陥っていたり、議論が蛸壺化していないか心配だ」といった考慮事項**以外の欠陥**を見つけていないとしたら ── 「最も重要な世紀」仮説を見限ることで、本質的にはあなたは、**〈機会が生まれたときに、途方もなく重要な問題の存在に気づき、それに対して行動をとる初期の人間になり損ねる〉のは確実だ**。私が思うに、善いことをたくさん行う潜在的な機会を放棄するという点で、それはあまりに多くのことを手放すことになる。 1. **[^](#fnref95qf2aqwti4)**本文中の注に関しては原文を参照してください。
c6e8eef7-4245-497e-8e3e-83e1601c1a90
trentmkelly/LessWrong-43k
LessWrong
The Future of Science (Talk given at an event on Sunday 19th of July_. Richard Ngo is responsible for the talk, Jacob Lagerros and David Lambert edited the transcript. _ If you're a curated author and interested in giving a 5-min talk, which will then be transcribed and edited, sign up here_.) _ Richard Ngo: I'll be talking about the future of science. Even though this is an important topic (because science is very important) it hasn’t received the attention I think it deserves. One reason is that people tend to think, “Well, we’re going to build an AGI, and the AGI is going to do the science.” But this doesn’t really offer us much insight into what the future of science actually looks like. It seems correct to assume that AGI is going to figure a lot of things out. I am interested in what these things are. What is the space of all the things we don’t currently understand? What knowledge is possible? These are ambitious questions. But I’ll try to come up with a framing that I think is interesting. One way of framing the history of science is through individuals making an observation and coming up with general principles to explain it.   So in physics, you observe how things move and how they interact with each other. In biology, you observe living organisms, and so on. I'm going to call this “descriptive science”. More recently, however, we have developed a different type of science, which I'm going to call “generative science”. This basically involves studying the general principles behind things that don’t exist yet and still need to be built. This is, I think, harder than descriptive science, because you don't actually have anything to study. You need to bootstrap your way into it. A good example of this is electric circuits. We can come up with fairly general principles for describing how they work. And eventually this led us to computer science, which is again very general. We have a very principled understanding of many aspects of computer science, which is a science of thin
cc6853d6-2fc2-4849-8001-1175f9a47af0
StampyAI/alignment-research-dataset/lesswrong
LessWrong
Moral Mazes and Short Termism Previously: [Short Termism](https://thezvi.wordpress.com/2017/03/04/short-termism/) and [Quotes from Moral Mazes](https://thezvi.wordpress.com/2019/05/30/quotes-from-moral-mazes/) Epistemic Status: Long term My list of [quotes from moral mazes](https://thezvi.wordpress.com/2019/05/30/quotes-from-moral-mazes/) has a section of twenty devoted to short term thinking. It fits with, and gives internal gears and color to, [my previous understanding](https://thezvi.wordpress.com/2017/03/04/short-termism/) of of the problem of short termism. Much of what we think of as a Short Term vs. Long Term issue is actually an adversarial [Goodhart’s Law](https://www.lesswrong.com/posts/EbFABnst8LsidYs5Y/goodhart-taxonomy) problem, or a [legibility](https://en.wikipedia.org/wiki/Legibility) vs. illegibility problem, at the object level, that then becomes a short vs. long term issue at higher levels. When a manager milks a plant (see quotes 72, 73, 78 and 79) they are not primarily trading long term assets for short term assets. Rather, they are trading unmeasured assets for measured assets (see 67 and 69). This is why you can have companies like Amazon, Uber or Tesla get high valuations. They hit legible short-term metrics that represent long-term growth. A start-up gets rewarded for their own sort of legible short-term indicators of progress and success, and of the quality of team and therefore potential for future success. Whereas other companies, that are not based on growth, report huge pressure to hit profit numbers. The overwhelming object level pressure towards legible short-term success, whatever that means in context, comes from being judged in the short term on one’s success, and having that judgment being more important than object-level long term success. The easiest way for this to be true is not to care about object-level long term success. If you’re gone before the long term, and no one traces the long term back to you, why do you care what happens? That is exactly the situation the managers face in Moral Mazes (see 64, 65, 70, 71, 74 and 83, and for a non-manager very clean example see 77). In particular: > 74. We’re judged on the short-term because everybody changes their jobs so frequently. > > And: > 64. The ideal situation, of course, is to end up in a position where one can fire one’s successors for one’s own previous mistakes. > > Almost as good as having a designated scapegoat is to have already sold the company or found employment elsewhere, rendering your problems [someone else’s problems](https://en.wikipedia.org/wiki/Somebody_else%27s_problem). The other way to not care is for the short-term evaluation of one’s success or failure to impact long-term success. If not hitting a short-term number gets you fired, or prevents your company from getting acceptable terms on financing or gets you bought out, then the long term will get neglected. The net present value payoff for looking good, which can then be reinvested, makes it look like by far the best long term investment around. Thus we have this problem at every level of management except the top. But for the top to actually be the top, it needs to not be answering to the stock market or capital markets, or otherwise care what others think – even without explicit verdicts, this can be as hard to root out as needing the perception of a bright future to attract and keep quality employees and keep up morale. So we almost always have it at the top as well. 
Each level is distorting things for the level above, and pushing these distorted priorities down to get to the next move in a giant game of adversarial telephone (see section A of quotes for how hierarchy works). This results in a corporation that acts in various short-term ways, some of which make sense for it, some of which are the result of internal conflicts. Why isn’t this out-competed? Why don’t the corporations that do less of this drive the ones that do more of it out of the market? On the level of corporations doing this direct from the top, often these actions are a response to the incentives the corporation faces. In those cases, there is no reason to expect such actions to be out-competed. In other cases, the incentives of the CEO and top management are twisted but the corporation’s incentives are not. One would certainly expect those corporations that avoid this to do better. But these mismatches are the natural consequence of putting someone in charge who does not permanently own the company. Thus, dual class share structures becoming popular to restore skin in the correct game. Some of the lower-down issues can be made less bad by removing the ones at the top, but the problem does not go away, and what sources I have inside major tech companies including Google match this model. There is also the tendency of these dynamics to arise over time. Those who play the power game tend to outperform those who do not play it barring constant vigilance and a willingness to sacrifice. As those players outperform, they cause other power players to outperform more, because they prefer and favor such other players, and favor rules that favor such players. This is especially powerful for anyone below them in the hierarchy. An infected CEO, who can install their own people, can quickly be game over on its own, and outside CEOs are brought in often. Thus, even if the system causes the corporation to underperform, it still spreads, like a meme that infects the host, causing the host to prioritize spreading the meme, while reducing reproductive fitness. The bigger the organization, the harder it is to remain uninfected. Being able to be temporarily less burdened by such issues is one of the big advantages new entrants have. One could even say that yes, they *do get wiped out by this,* but it’s not that fast, because it takes a while for this to rise to the level of a primary determining factor in outcomes. And [there are bigger things to worry about](https://thezvi.wordpress.com/2017/10/29/leaders-of-men/). It’s short termism, so that isn’t too surprising. A big pressure that causes these infections is that business is constantly under siege and forced to engage in public relations (see quotes sections L and M) and is constantly facing [Asymmetric Justice](https://thezvi.wordpress.com/2019/04/25/asymmetric-justice/) and the [Copenhagen Interpretation of Ethics](https://blog.jaibot.com/the-copenhagen-interpretation-of-ethics/). This puts tremendous pressure on corporations to tell different stories to different audiences, to avoid creating records, and otherwise engage in the types of behavior that will be comfortable to the infected and uncomfortable to the uninfected. Another explanation is that those who are infected don’t only reward each other *within* a corporation. They also *do business with* and *cooperate with* the infected elsewhere. 
Infected people are *comfortable* with others who are infected, and *uncomfortable* with those not infected, because if the time comes to play ball, they might refuse. So those who refuse to play by these rules do better at object-level tasks, but face alliances and hostile action from all sides, including capital markets, competitors and government, all of which are, to varying degrees, infected. I am likely missing additional mechanisms, either because I don’t know about them or forgot to mention them, but I consider what I see here sufficient. I am no longer confused about short termism.
edfd6fa4-d146-4f01-960c-86d9e880831e
trentmkelly/LessWrong-43k
LessWrong
Links and short notes, 2025-01-20 Much of this content originated on social media. To follow news and announcements in a more timely fashion, follow me on Twitter, Threads, Bluesky, or Farcaster. Contents * My writing (ICYMI) * Jobs and fellowships * Announcements * News * Events * Other opportunities * We are not close to providing for everyone’s “needs” * The printing press and the Internet * The ultimate form of travel * Five hot takes about progress * What could have been, for SF * Quick thoughts on AI * Links and bullets * Charts * Pics My writing (ICYMI) * How sci-fi can have drama without dystopia or doomerism. “Concise but incredible resource” (@OlliPayne). “100 percent with Jason on this. If your sci-fi has technology as the problem it will put me to sleep” (@elidourado) Jobs and fellowships * HumanProgress.org is hiring a research associate with Excel/Python/SQL skills “to manage and expand our database on human well-being” (@HumanProgress) * The 5050 program comes to the UK “to help great scientists and engineers become great founders and start deep tech startups,” in partnership with ARIA (@sethbannon) Announcements * Core Memory, a new sci/tech media company from Ashlee Vance (@ashleevance) * “AI Summer”, a new podcast from Dean Ball (RPI fellow) and Timothy B. Lee (@deanwball) * Inference Magazine, a new publication on AI progress, with articles from writers including RPI fellow Duncan McClements (@inferencemag) News * Matt Clifford has published an AI Opportunities Action Plan for the UK, and the PM has agreed to all its recommendations, including “AI Growth Zones” with faster planning permission and grid connections; accelerating SMRs to power AI infrastructure; procurement, visas & regulatory reform to boost UK AI startups; and removing barriers to scaling AI pilots in government (@matthewclifford) * The Manhattan Plan: NYC plans “to build 100,000 new homes in the next decade to reach a total of 1 MILLION homes in Manhattan” (@NYCMayor). “We’ve come
369f9e65-777a-4ed2-803e-adfd20851d98
trentmkelly/LessWrong-43k
LessWrong
Getting Nearer Reply to:  A Tale Of Two Tradeoffs I'm not comfortable with compliments of the direct, personal sort, the "Oh, you're such a nice person!" type stuff that nice people are able to say with a straight face.  Even if it would make people like me more - even if it's socially expected - I have trouble bringing myself to do it.  So, when I say that I read Robin Hanson's "Tale of Two Tradeoffs", and then realized I would spend the rest of my mortal existence typing thought processes as "Near" or "Far", I hope this statement is received as a due substitute for any gushing compliments that a normal person would give at this point. Among other things, this clears up a major puzzle that's been lingering in the back of my mind for a while now.  Growing up as a rationalist, I was always telling myself to "Visualize!" or "Reason by simulation, not by analogy!" or "Use causal models, not similarity groups!"  And those who ignored this principle seemed easy prey to blind enthusiasms, wherein one says that A is good because it is like B which is also good, and the like. But later, I learned about the Outside View versus the Inside View, and that people asking "What rough class does this project fit into, and when did projects like this finish last time?" were much more accurate and much less optimistic than people who tried to visualize the when, where, and how of their projects.  And this didn't seem to fit very well with my injunction to "Visualize!" So now I think I understand what this principle was actually doing - it was keeping me in Near-side mode and away from Far-side thinking.  And it's not that Near-side mode works so well in any absolute sense, but that Far-side mode is so much more pushed-on by ideology and wishful thinking, and so casual in accepting its conclusions (devoting less computing power before halting). An example of this might be the balance between offensive and defensive nanotechnology, where I started out by - basically - just liking nanotechnology;
62c67a0b-edcc-44ef-a461-317c176b95ea
trentmkelly/LessWrong-43k
LessWrong
Should I take an IQ test, why or why not? I've seen discussion of IQ tests around LW. People imply there's a benefit to taking the test. I assume it is related to belief in belief or something. Can anyone flesh out this argument?
53d7a5d7-e949-4b59-b9e0-dbbfaceb4dc4
trentmkelly/LessWrong-43k
LessWrong
Asymmetric Weapons Aren't Always on Your Side Some time ago, Scott Alexander wrote about asymmetric weapons, and now he writes again about them. During these posts, Scott repeatedly characterizes asymmetric weapons as inherently stronger for the "good guys" than they are for the "bad guys". Here is a quote from his first post: > Logical debate has one advantage over narrative, rhetoric, and violence: it’s an asymmetric weapon. That is, it’s a weapon which is stronger in the hands of the good guys than in the hands of the bad guys. And here is a quote from his more recent one: > A symmetric weapon is one that works just as well for the bad guys as for the good guys. For example, violence – your morality doesn’t determine how hard you can punch; they can buy guns from the same places we can. > An asymmetric weapon is one that works better for the good guys than the bad guys. The example I gave was Reason. If everyone tries to solve their problems through figuring out what the right thing to do is, the good guys (who are right) will have an easier time proving themselves to be right than the bad guys (who are wrong). Finding and using asymmetric weapons is the only non-coincidence way to make sustained moral progress. One problem with this concept is that just because something is asymmetric doesn't mean that it's asymmetric in a good direction. Scott talks about weapons that are asymmetric towards those who are right. However, there are many more types of asymmetries than just right vs. wrong - physical violence is asymmetric towards the strong, shouting people down is asymmetric towards the loud, and airing TV commercials is asymmetric towards people with more money. Violence isn't merely symmetric - it's asymmetric in a bad direction, since fascists are better at violence than you. This in turn means that various sides will all be trying to pull things in directions that are asymmetric to their advantage. Indeed, a basic principle in strategy is to try to shift conflicts into areas where you are strong
27d83e4d-90c1-40a3-ac32-8b4688a9accd
trentmkelly/LessWrong-43k
LessWrong
LINK: Ben Goertzel; Does Humanity Need an "AI-Nanny"? Link: Ben Goertzel dismisses Yudkowsky's FAI and proposes his own solution: Nanny-AI   Some relevant quotes: > It’s fun to muse about designing a “Friendly AI” a la Yudkowsky, that is guaranteed (or near-guaranteed) to maintain a friendly ethical system as it self-modifies and self-improves itself to massively superhuman intelligence.  Such an AI system, if it existed, could bring about a full-on Singularity in a way that would respect human values – i.e. the best of both worlds, satisfying all but the most extreme of both the Cosmists and the Terrans.  But the catch is, nobody has any idea how to do such a thing, and it seems well beyond the scope of current or near-future science and engineering. > Gradually and reluctantly, I’ve been moving toward the opinion that the best solution may be to create a mildly superhuman supertechnology, whose job it is to protect us from ourselves and our technology – not forever, but just for a while, while we work on the hard problem of creating a Friendly Singularity. > > In other words, some sort of AI Nanny…. > The AI Nanny > Imagine an advanced Artificial General Intelligence (AGI) software program with > > * General intelligence somewhat above the human level, but not too dramatically so – maybe, qualitatively speaking, as far above humans as humans are above apes > * Interconnection to powerful worldwide surveillance systems, online and in the physical world > * Control of a massive contingent of robots (e.g. service robots, teacher robots, etc.) and connectivity to the world’s home and building automation systems, robot factories, self-driving cars, and so on and so forth > * A cognitive architecture featuring an explicit set of goals, and an action selection system that causes it to choose those actions that it rationally calculates will best help it achieve those goals > * A set of preprogrammed goals including the following aspects: > * A strong inhibition against modifying its preprogrammed goals > *
1e3e49fc-d10e-4bcd-89a8-7c2e646a7438
trentmkelly/LessWrong-43k
LessWrong
A Premature Word on AI Followup to:  A.I. Old-Timers, Do Scientists Already Know This Stuff? In response to Robin Hanson's post on the disillusionment of old-time AI researchers such as Roger Schank, I thought I'd post a few premature words on AI, even though I'm not really ready to do so: Anyway: I never expected AI to be easy.  I went into the AI field because I thought it was world-crackingly important, and I was willing to work on it if it took the rest of my whole life, even though it looked incredibly difficult. I've noticed that folks who actively work on Artificial General Intelligence, seem to have started out thinking the problem was much easier than it first appeared to me. In retrospect, if I had not thought that the AGI problem was worth a hundred and fifty thousand human lives per day - that's what I thought in the beginning - then I would not have challenged it; I would have run away and hid like a scared rabbit.  Everything I now know about how to not panic in the face of difficult problems, I learned from tackling AGI, and later, the superproblem of Friendly AI, because running away wasn't an option. Try telling one of these AGI folks about Friendly AI, and they reel back, surprised, and immediately say, "But that would be too difficult!"  In short, they have the same run-away reflex as anyone else, but AGI has not activated it.  (FAI does.) Roger Schank is not necessarily in this class, please note.  Most of the people currently wandering around in the AGI Dungeon are those too blind to see the warning signs, the skulls on spikes, the flaming pits.  But e.g. John McCarthy is a warrior of a different sort; he ventured into the AI Dungeon before it was known to be difficult.  I find that in terms of raw formidability, the warriors who first stumbled across the Dungeon, impress me rather more than most of the modern explorers - the first explorers were not self-selected for folly.  But alas, their weapons tend to be extremely obsolete. There are many ways to run a
35729ccc-89e1-424e-8578-0aff8aac96db
trentmkelly/LessWrong-43k
LessWrong
Meetup : Vancouver, Canada Discussion article for the meetup : Vancouver, Canada WHEN: 31 July 2011 03:00:00PM (-0700) WHERE: Commune Cafe, 1002 Seymour Street, Vancouver, BC, Canada We're holding the first Vancouver meetup on Sunday, July 31st starting at 3pm at the Commune Cafe on Seymour Street. We'll definitely be there from 3pm-6pm, but it'll end when it ends. I've recently moved to Vancouver from the San Francisco Bay Area, where I lived at the household that hosts the Tortuga/Mountain View meetup. The rationalist community in Silicon Valley is vibrant and growing, and I loved being part of it. As Cosmos wrote of the New York group: > Before this community took off, I did not believe that life could be this much fun or that I could possibly achieve such a sustained level of happiness. > > Being rational in an irrational world is incredibly lonely. Every interaction reveals that our thought processes differ widely from those around us, and I had accepted that such a divide would always exist. For the first time in my life I have dozens of people with whom I can act freely and revel in the joy of rationality without any social concern - hell, it's actively rewarded! Until the NYC Less Wrong community formed, I didn't realize that I was a forager lost without a tribe... Activities of the rationalist community at Tortuga included meetups, hiking trips, guest speakers, transhumanist movies, skill-training sessions, parties and impromptu pillow fights. I want to find or build a similar community in Vancouver. We have a base of several people, and will be reaching out through Less Wrong and in other ways to find like-minded people. I'm anticipating holding weekly meetups. The Commune Cafe in downtown Vancouver is a good place to start, but we have a great place in North Vancouver that we could use if that location works for everyone. At this first meetup, we'll get to know each other, and talk about what we want to get out of holding meetups and forming a community, and figure out how
724ada9b-fd3b-4d65-91ee-5db600cbb8db
StampyAI/alignment-research-dataset/blogs
Blogs
Reading books vs. engaging with them Let’s say you’re interested in a 500-page serious nonfiction book, and you’re trying to decide whether to read it. I think most people imagine their choice something like this: | | | | | --- | --- | --- | | **Option** | **Time cost** | **% that I understand and retain** | | Just read the title | Seconds |  1% | | Skim the book | 3 hours |  33% | | Read the book quickly | 8 hours |  67% | | Read the book slowly | 16 hours |  90% | I see things more like this: | | | | | --- | --- | --- | | **Option** | **Time cost** | **% that I understand and retain** | | Just read the title (and the 1-2 sentences people usually say to introduce the book) | Seconds |  10% | | Skim the book | 3 hours |  12% | | Read the book quickly | 8 hours |  13% | | Read the book slowly | 16 hours |  15% | | Read reviews/discussions of the book (ideally including author replies), but not the book | 2 hours |  25% | | Read the book slowly 3 times, with 3 years in between each time | 48 hours |  33% | | Read reviews/discussions of the book; locate the parts of the book they’re referencing, and read those parts carefully, independently checking footnotes, and referring back to other parts of the book for any unfamiliar terms. Write down who I think is being more fair; lay out the exact quotes that give the best evidence that my judgment is right. (But never read the whole book) | 16 hours |  33% | | Write my own summary of each of the book’s key points, what the best counterargument is, where I ultimately come down and why. (Will often involve reading key parts of the book 5-10 times) | 50-100 hours |  50% | I’m guessing these numbers are pretty weird-seeming, so here are some explanations: * **Just read the title (and the 1-2 sentences people usually say to introduce the book): "seconds" of time investment, 10% understanding/retention.** 10% probably sounds like a lot for a few seconds of thought! I think this works because the author has really sweated over how to make the title and elevator pitch capture as much as possible of what they’re saying. So if all I want is the "general gist," I don't think I need to read the book at all. * **Skimming or reading the book: hours of time investment, only 12-15% understanding/retention.** This is based on my own sense of how much I retain when I "simply read" the book (and don't engage much with critiques of it, don't write about it, etc.) - and my perhaps unfair impressions of how much others seem to retain when they do this. If person A says they've read a book and person B says they haven't but they've heard people talking about it, I often don't find that person A seems to know any more about the book than person B. * **Read reviews/discussions of the book (ideally including author replies), but not the book: 2 hours of time investment, 25% understanding/retention.** Good reviewers know the context/field for the book better than I do, and probably read the book more carefully than I did. Hopefully they picked out the really key good and bad parts, and if those are the only parts I retain, that’s probably more than I could hope for with just a slow reading. * **Read the book slowly 3 times, with 3 years in between each time: 48 hours of time investment, 33% understanding/retention.** This implies that the 2nd and 3rd readings are actually more educational than the 1st: the first only gets me from 10% (which I got from reading the title) to 15%, the next two bring me to 33%. 
I think that’s right - it’s hard to notice the important parts before I have the whole arc of the argument and have sat with it. Hearing other people talk about it and seeing some random observations related to it also help. * **Write my own thorough review of a particular debate between the book's critics and its author: 16 hours of time investment, 33% understanding/retention.** (The table has more detail on what this involves.) This is the same time investment as reading the book slowly, and I'm saying that is worth something like 5x as much (since once I've read the title, reading the book slowly only takes me from 10% to 15% understanding/comprehension, whereas this activity takes me from 10% to 33% understanding/comprehension). * **Write my own summary of each of the book’s key points, what the best counterargument is, where I ultimately come down and why: 50-100 hours of time investment, 50% understanding/comprehension.** I know hugely more about the books I've done this with than the books I haven't. But even here I'm only estimating 50% understanding/comprehension. I don’t think it is really possible to understand more than 50% of a serious book without e.g. spending a lot of independent time in the field. TLDR - I think the **value of reading a book once (without active engagement) is awkwardly small, and the value of big time investments like reading a book several times - or actively engaging with even part of it - is awkwardly large compared to that.** Also, the maximum amount of understanding you can get is awkwardly small. And a lot of the best options get you a “raw deal” on sounding educated: * If you read reviews and not the book, someone else can say they read the book and you can’t, even though you spent just as much time and retained more of the book. * If you digest the heck out of the book, you still can’t say anything in casual conversation except “I read the book,” which is also what someone can say who spent way less time and retained WAY less. Ultimately, if you live in the headspace I’m laying out, you’re going to read a lot fewer books than you would otherwise, and you’ll probably be embarrassed of how few books you read. (But if more people [described their engagement with a book in detail instead of using the binary “I read X,”](https://www.cold-takes.com/honesty-about-reading/) maybe that would change.) **Edited to add clarification:** this piece is about trying to casually inform oneself in areas one isn't an expert in, via reading books (and often other pieces) directed at a general audience. A reader pointed out that when you have a lot of existing expertise, the situation looks quite different, and often skimming or reading is the best thing to do. (Although in this case I would add that one is probably mostly reading reports, academic papers, notes from colleagues, etc. rather than books). For email filter: florpschmop
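To make the comparison above concrete, the sketch below restates the post's own estimates as marginal understanding per hour over the ~10% baseline you get from the title alone. The option labels and the 75-hour midpoint for the last row are my shorthand, not the author's.

```python
# Marginal understanding per hour for the options in the table above,
# measured against the ~10% baseline from reading the title alone.
# The percentages and hours are the post's own rough estimates.

baseline = 10  # % understanding from the title alone
options = {
    "skim the book":                   (3, 12),
    "read the book slowly":            (16, 15),
    "read reviews/discussions":        (2, 25),
    "review one critic/author debate": (16, 33),
    "write my own summary":            (75, 50),  # midpoint of 50-100 hours
}

for name, (hours, pct) in options.items():
    marginal = pct - baseline
    print(f"{name}: {marginal} points gained, {marginal / hours:.2f} points/hour")
```

On these numbers, reviewing a critic/author debate yields roughly 1.4 points per hour versus about 0.3 for a single slow read, which is where the "worth something like 5x as much" comparison comes from.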
702d1f26-d1f1-43ba-8c41-d88eba9af097
trentmkelly/LessWrong-43k
LessWrong
Seeking book about baseline life planning and expectations In an attempt to find useful "base rate expectations" for the rest of my life (and how actions I might take now could set me up to be much better off 10, 20, 30, 40, 50, 60, and 70 years from now) I'm looking for a book that describes the nuts and bolts of human lives.  I want coherent discussion from an actuarial/economic/probabilistic/calculating perspective, but I'd like some soulfulness too.  The ideal book would be published in 2010 and have coverage of the different periods of people's lives and cover different aspects of their lives as well.  In some sense the book would be like a nuts and bolts "how to live your life" manual.  Hopefully it would have footnotes and generally good epistemology :-) To take an example of the kind of content I would hope for (in a domain where I already have worked out some of the theory myself) the ideal book would explain how to calculate the ROI of different levels of college education realistically.  Instead of a hand-waving argument that "on average you'll make more with education" it would also talk about the opportunity costs of lost wages, and how expected number of years of work impacts on what amount of training makes sense, and so on.  To be clear, I don't want a book that is simply about deciding when, how, and for how long it makes sense to train for a job.  Instead I want something that talks about similar issues that I haven't already thought about but that are important, so that I can be usefully educated in ways I wasn't expecting.  My goal is to find someone else's scaffold to help me project when and why I should (or shouldn't) buy a minivan, how much to budget for dentistry in my 50's, and a breakdown of the causes of bankruptcy the way insurance companies can predict causes of death. I was hoping that the book How We Live: An Economic Perspective on Americans from Birth to Death would give me what I want (and it is still probably my fallback book if I can't find anything better) but it was written in 1983,
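For the college-ROI example mentioned above, here is a toy sketch of the kind of calculation the post is asking a book to cover: tuition plus forgone wages as the cost, and a wage premium over the remaining working years as the benefit. All of the figures, and the choice to ignore discounting and wage growth, are illustrative assumptions rather than figures from any source.

```python
# A toy version of the education-ROI calculation described above: treat
# tuition plus forgone wages as the cost, and the wage premium over the
# remaining working years as the benefit. The figures and the decision to
# ignore discounting are illustrative assumptions.

def education_roi(years_of_study, annual_tuition, forgone_annual_wage,
                  annual_wage_premium, remaining_working_years):
    cost = years_of_study * (annual_tuition + forgone_annual_wage)
    working_years_after = remaining_working_years - years_of_study
    benefit = annual_wage_premium * max(working_years_after, 0)
    return (benefit - cost) / cost  # net gain per dollar of cost

# Example: the same 4-year degree with many vs. few working years left.
print(education_roi(4, 15_000, 30_000, 20_000, remaining_working_years=47))
print(education_roi(4, 15_000, 30_000, 20_000, remaining_working_years=10))
```

The second call illustrates the point about expected years of work: with only ten working years left, the same degree has negative ROI under these toy numbers.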
cbb3e106-0f63-46a9-948c-ca1105ce3a6f
StampyAI/alignment-research-dataset/alignmentforum
Alignment Forum
What organizations other than Conjecture have (esp. public) info-hazard policies? I believe Anthropic has said they won't publish capabilities research? OpenAI seems to be sort of doing the same (although no policy AFAIK). I heard FHI was developing one way back when... I think MIRI sort of does as well (default to not publishing, IIRC?)
81a19415-7cf3-455e-afcb-13aca084ad0e
trentmkelly/LessWrong-43k
LessWrong
Information Theory vs Harry Potter [LINK] http://www.inference.phy.cam.ac.uk/mackay/itila/Potter.html Somebody hasn't heard of HPMOR...
90009446-0b9c-40a3-ae9d-be76761ae4c8
trentmkelly/LessWrong-43k
LessWrong
[SEQ RERUN] Brain Emulation and Hard Takeoff Today's post, Brain Emulation and Hard Takeoff was originally published on November 22, 2008. A summary:   > A project of bots could start an intelligence explosion once it got fast enough to start making bots of the engineers working on it, that would be able to operate at greater than human speed. Such a system could also devise a lot of innovative ways to acquire more resources or capital. Discuss the post here (rather than in the comments to the original post). This post is part of the Rerunning the Sequences series, where we'll be going through Eliezer Yudkowsky's old posts in order so that people who are interested can (re-)read and discuss them. The previous post was Emulations Go Foom, and you can use the sequence_reruns tag or rss feed to follow the rest of the series. Sequence reruns are a community-driven effort. You can participate by re-reading the sequence post, discussing it here, posting the next day's sequence reruns post, or summarizing forthcoming articles on the wiki. Go here for more details, or to have meta discussions about the Rerunning the Sequences series.
6f7a21b5-6584-4629-a1f7-cd9190de21bf
trentmkelly/LessWrong-43k
LessWrong
Long Review: Economic Hierarchies Naked and Afraid from the Discovery Channel didn’t live up to its potential. To be fair, a handful of scraggly naked people trying to make it in Kenya’s wilderness made for interesting television, as they scratched themselves, got infections, and looked pretty uncomfortable. But the interpersonal drama seemed contrived since their goal was mere survival, and the division of labor was not highly interesting. I didn’t learn what I wanted--too much complacent nakedness, not enough competence-porn. The show I'd like to pitch is one about progress and knowledge. 900 scraggly people, they don’t have to be naked, but for the sake of argument, let’s say they are naked, are plopped in the wilderness with a bunch of raw materials and the mandate to build the highest level of civilization possible in three years, outcompeting another group of 900 scraggly naked people. Boom! Instant natural experiment: knowledge, society, organization, bottlenecks on development. From it we could dream up better models of how to bounce back from a civilizational setback, settle charter cities, and craft efficient institutional structures. We could recruit some of the best minds in hundreds of fields not to consult but to build publicly, with all data and streams tracked and uploaded to the internet for analysis. Of course, our prediction markets about the show would be filled with bets about what milestones would and would not be reached. Plus, we would be entertained and given insights at the same time. That’s my pitch for how we will inspire people about progress, productivity, and the mysteries of society’s organization. I want the world to see bureaucracy and technology stripped down to their barest essentials, not contrived nakedness in the wilderness. The details of the show might help us understand what governance and incentives would make for the fastest civilization building. A sense of wonder about these things and a desire to cultivate comparative advantage drove me to read Econo
2ec6f2db-6e54-49bd-b12f-d3b0be76e352
StampyAI/alignment-research-dataset/eaforum
Effective Altruism Forum
Training a GPT model on EA texts: what data? I plan to finetune GPT-J, a large language model similar to GPT-3 created by EleutherAI, on effective altruism texts. GPT-J is known to be better at mathematical, logical, and analytic reasoning than GPT-3 due to extensive training on academic texts. The goals are: 1. **Accurately reflect how the EA community thinks** 2. Represent texts widely read in the EA community 3. ~~Helps the language model think well~~ My proposed training mix: * 60% EA Forum posts above a certain karma threshold + Bias towards newer posts according to a ?? curve + Weight the likelihood of inclusion of each post by a function of its karma (how does that map to views?) * Books (3.3MB) + The Alignment Problem (1MB) + The Precipice (0.9MB) + Doing Good Better (0.5MB) + The Scout Mindset (0.5MB) + 80,000 Hours (0.4KB) * Articles and blog posts on EA + [EA Handbook](https://forum.effectivealtruism.org/handbook) + [Most Important Century Sequence](https://forum.effectivealtruism.org/s/isENJuPdB3fhjWYHd) + [Replacing Guilt Sequence](https://forum.effectivealtruism.org/s/a2LBRPLhvwB83DSGq) (h/t Lorenzo) + [Winners of the First Decade Review](https://forum.effectivealtruism.org/s/HSA8wsaYiqdt4ouNF) + ... what else? * [EA Forum Topic Descriptions](https://forum.effectivealtruism.org/topics/all) (h/t Lorenzo) * OpenPhilanthropy.org (h/t Lorenzo) * GivingWhatWeCan.org (h/t Lorenzo) + including comments * ??% Rationalism + ??% Overcoming Bias + ??% Slate Star Codex + ??% HPMOR What sources am I missing? Please suggest important blog posts and post series I should add to the training mix, and explain how important to, or popular within, EA they are. Can you help me estimate how much mindshare each of the items labelled "??" occupies in a typical EA? I'm new to EA, so I would strongly appreciate input.
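As one possible reading of the "weight the likelihood of inclusion of each post by a function of its karma" item above, here is a minimal sampling sketch. The karma threshold, the log(1 + karma) weighting, the recency half-life, and the post field names are all placeholder assumptions, not choices the post has made.

```python
import math
import random

# A rough sketch of karma-weighted sampling for the forum portion of the
# training mix described above. The threshold, log(1 + karma) weighting,
# and recency half-life are illustrative assumptions only.

def inclusion_weight(karma, age_days, karma_threshold=30, half_life_days=730):
    """Weight a post by karma (diminishing returns) and recency (exponential decay)."""
    if karma < karma_threshold:
        return 0.0
    karma_term = math.log1p(karma)
    recency_term = 0.5 ** (age_days / half_life_days)
    return karma_term * recency_term

def sample_posts(posts, k, rng=random):
    """Sample k posts (with replacement) proportionally to their weights."""
    weights = [inclusion_weight(p["karma"], p["age_days"]) for p in posts]
    return rng.choices(posts, weights=weights, k=k)

# Example: posts would come from a forum dump with karma and age fields.
posts = [
    {"id": "a", "karma": 250, "age_days": 90},
    {"id": "b", "karma": 45, "age_days": 1200},
    {"id": "c", "karma": 10, "age_days": 30},  # below threshold, never sampled
]
print([p["id"] for p in sample_posts(posts, k=5)])
```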
e05f525a-0ec2-4cc3-9930-3d6c576ceb87
trentmkelly/LessWrong-43k
LessWrong
[POLL] LessWrong group on YourMorals.org Here's the news article on this: http://www.yourmorals.org/blog/2011/11/how-to-use-groups-at-yourmorals-org/ And here's the group that the LW community just created: http://www.yourmorals.org/setgraphgroup.php?grp=623d5410f705f6a1f92c83565a3cfffc I think it will be very interesting to see what we can all get on this.
13c7b17d-b4d0-4b3d-8d7c-825e0895d8e8
trentmkelly/LessWrong-43k
LessWrong
How We Picture Bayesian Agents I think that when most people picture a Bayesian agent, they imagine a system which: * Enumerates every possible state/trajectory of “the world”, and assigns a probability to each. * When new observations come in, loops over every state/trajectory, checks the probability of the observations conditional on each, and then updates via Bayes rule. * To select actions, computes the utility which each action will yield under each state/trajectory, then averages over state/trajectory weighted by probability, and picks the action with the largest weighted-average utility. Typically, we define Bayesian agents as agents which behaviorally match that picture. But that’s not really the picture David and I typically have in mind, when we picture Bayesian agents. Yes, behaviorally they act that way. But I think people get overly-anchored imagining the internals of the agent that way, and then mistakenly imagine that a Bayesian model of agency is incompatible with various features of real-world agents (e.g. humans) which a Bayesian framework can in fact handle quite well. So this post is about our prototypical mental picture of a “Bayesian agent”, and how it diverges from the basic behavioral picture. Causal Models and Submodels Probably you’ve heard of causal diagrams or Bayes nets by now. If our Bayesian agent’s world model is represented via a big causal diagram, then that already looks quite different from the original “enumerate all states/trajectories” picture. Assuming reasonable sparsity, the data structures representing the causal model (i.e. graph + conditional probabilities on each node) take up an amount of space which grows linearly with the size of the world, rather than exponentially. It’s still too big for an agent embedded in the world to store in its head directly, but much smaller than the brute-force version. (Also, a realistic agent would want to explicitly represent more than just one causal diagram, in order to have uncertainty over causal structu
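For contrast with the causal-model picture the post goes on to describe, here is a minimal sketch of the brute-force "behavioral" picture from the opening list: enumerate a few world-states, Bayes-update on each observation, and pick the action with the highest posterior-weighted utility. The hypotheses, likelihoods, and utilities are invented toy values, not anything from the post.

```python
import numpy as np

# A minimal sketch of the "behavioral" Bayesian agent described above:
# enumerate hypotheses, update with Bayes' rule on each observation,
# then pick the action with the highest posterior-weighted utility.
# All numbers below are toy values for illustration.

hypotheses = ["world_A", "world_B", "world_C"]
prior = np.array([0.5, 0.3, 0.2])

# P(observation | hypothesis) for a single binary observation.
likelihood = {
    "obs_1": np.array([0.9, 0.4, 0.1]),
    "obs_0": np.array([0.1, 0.6, 0.9]),
}

# Utility of each action under each hypothesis.
utility = {
    "act_x": np.array([10.0, 0.0, -5.0]),
    "act_y": np.array([2.0, 2.0, 2.0]),
}

def update(belief, observation):
    """Bayes' rule: posterior is proportional to likelihood times prior."""
    unnormalized = likelihood[observation] * belief
    return unnormalized / unnormalized.sum()

def best_action(belief):
    """Pick the action with the largest belief-weighted expected utility."""
    return max(utility, key=lambda a: float(utility[a] @ belief))

belief = prior
for obs in ["obs_1", "obs_1", "obs_0"]:
    belief = update(belief, obs)
print(dict(zip(hypotheses, belief.round(3))), best_action(belief))
```

Even in this tiny example the tables are the whole cost: every hypothesis needs an explicit likelihood and utility entry, which is the exponential blow-up that the sparse causal-model representation discussed next is meant to avoid.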
80cb512e-8510-41cc-97f1-683a00dc877b
trentmkelly/LessWrong-43k
LessWrong
My weekly review habit Every Saturday morning, I take 3-4 hours to think about how my week went and how I’ll make the next one better. The format has changed over time, but for example, here’s some of what I reflected on last week: * I noticed I’d fallen far short of my goal for written output. I decided to allocate more time to reading this week, hoping that it would generate more ideas. And I reorganized my morning routine to make it easier to start writing in the morning. * I looked at some stats from RescueTime and Complice about what I’d spent time on and accomplished. I noticed that my time spent on Slack was nearing dangerous levels, so I decided to make a couple experimental tweaks to get it down: * I tried out Contexts, a replacement for the macOS window switcher, which I configured to only show windows from my current workspace—hoping that this would prevent me from cmd+tabbing over to Slack and getting distracted. * I decided to run an experiment of not answering immediately when coworkers called me in the middle of a focused block of time, and keeping a paper “todo when done focusing” list to remind myself to call them back, check Slack, etc. * I noticed that it felt hard for me to get useful info from the time-tracking data in RescueTime and Complice, so I revisited what questions I actually wanted to answer and how I could make them easy to answer. * I realized that I should be using Google Calendar, not RescueTime or Complice, to track my time spent in meetings, so I added that to my time-tracking data sources. * I also made several tweaks to the way I used Complice to make it easier to see various stats I was interested in. And so on. By the end of the review I had surfaced lots of other improvements for the coming week. ---------------------------------------- While each individual tweak is small, over the weeks and years they’ve compounded to make me a lot more effective. Because of that, this weekly review is the most useful habit (o
The Ethics of ACI ACI is a universal intelligence model based on the idea "behaves the same way as experiences". It may seem counterintuitive that ACI agents have no ultimate goals, nor do they have to maximize any utility functions. People may argue that ACI has no ethics, thus it can’t be a general intelligence model. ACI uses Solomonoff induction to determine future actions from past input and actions. We know that Solomonoff induction can predict facts of "how the world will be", but it is believed that you can't get "value" only from "facts". What’s the value, ethics, and foresight of ACI? If an agent's behavior is only decided by its past behavior, who have decided its past behaviors?   ACI learns ethics from experiences The simple answer is, ACI learns ethics from experiences. ACI takes "right" behaviors as training samples, the same way as value learning approaches. (The difference is, ACI does not limit the ethics to values or goals.) For example, in natural selection, the environment determines which behavior is "right" and which behavior would get a possible ancestor out of the gene pool, and a natural intelligent agent takes the right experiences as learning samples. But, does that mean ACI can't work by itself, and has to rely on some kind of "caretaker" that decides which behavior is right?  However, rational agent models also rely on the construction of utility functions, just like reinforcement learning heavily relies on reward designing or reward shaping, AIXI’s constructive, normative aspect of intelligence is "assumed away" to the external entity that assigns rewards to different outcomes. You have to assign rewards or utility for every point in the state space, in order to make a rational agent work. It's not a solution for the curse of dimensionality, but a curse itself. Instead, ACI’s normative aspect of intelligence is also "assumed away" to the experiences. In order to be utilized by ACI agents, any ethical information must be able to be represented in
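The induction step itself can be made concrete with a toy, computable stand-in for the Solomonoff-style procedure described above: a Bayesian mixture over a small, hand-written hypothesis class with a simplicity prior. Everything in the sketch (the hypothesis class, the complexity scores, the history format) is my own invention for illustration and is not part of ACI's definition:

```python
# Toy stand-in for simplicity-weighted induction over recorded "right" behavior:
# hypotheses are simple policies; prior weight ~ 2^(-complexity), Solomonoff-style.
history = [("red", "stop"), ("green", "go"), ("red", "stop"), ("green", "go")]

hypotheses = [
    # (name, complexity score, policy: observation -> action)
    ("always stop",  1, lambda obs: "stop"),
    ("always go",    1, lambda obs: "go"),
    ("stop iff red", 2, lambda obs: "stop" if obs == "red" else "go"),
    ("go iff red",   2, lambda obs: "go" if obs == "red" else "stop"),
]

def posterior_weight(complexity, policy):
    # Simplicity prior times the likelihood of having produced the recorded
    # "right" actions (here: 1 if consistent with every past pair, else 0).
    prior = 2.0 ** (-complexity)
    consistent = all(policy(obs) == action for obs, action in history)
    return prior if consistent else 0.0

def choose_action(observation):
    votes = {}
    for _, complexity, policy in hypotheses:
        w = posterior_weight(complexity, policy)
        if w > 0.0:
            votes[policy(observation)] = votes.get(policy(observation), 0.0) + w
    return max(votes, key=votes.get)

print(choose_action("red"))  # -> "stop": only "stop iff red" survives the history filter
```

The normative content in this toy ("stop on red is the right behavior") lives entirely in the recorded experiences rather than in a separately specified utility function, which is the point being made above.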
A Pragmatic Epistemology For the past three thousand years epistemology has been about the truth, the whole truth, and nothing but the truth. Philosophers and scientists have continuously attempted to pinpoint the nature of truth, to find general logico-syntactic criteria for generating justified inferences, and to discover the true nature of reality. I happen to think that truth is overrated. And by that I don't mean that I'm a stereotypical postmodernist, prepared to say that all views are on equal footing (because after all, who can really say what's true and what isn't?). Instead I mean that I don't even think that the truth is a useful or coherent concept when stretched to accommodate what philosophers have tried to make it accommodate. It's not a malleable enough concept to have the generality that philosophers are asking of it. We need something else in its place. A view similar to this is reservationism, which was first introduced[1] by Moldbug in A Reservationist Epistemology. If you haven't read it, I suggest at least skimming it before reading the rest of this post, but the basic idea is that you can try to cram reason into an explicit General Theory of Reason for as long as you like, but at best it will always be a special case of "common sense." I have mixed feelings about Moldbug's post. On the one hand, it's delightfully witty and I agree with the general thrust of the argument. On the other hand, I think you can go a bit farther to explain his "common sense" notion than he lets on, and the abrasiveness and vagueness of his writing are likely to cloak some of the finer points. And despite giving (likely unintentional) hints about what we might replace "truth" with, he never does criticise the concept of truth, although he obviously criticises general theories of truth.  Since I do depart from Moldbug, I'll call myself a pragmatist rather than a reservationist. I'll also give my pragmatism a slogan: "It's just a model."[2] What's just a model? Bayesianism, falsificationism,
[CORE] Concepts for Understanding the World Background: I'm recently doing a big project to increase my scholarship and modeling power for both rationality and traditional "serious" topics. One thing I found very useful is taking notes with a clear structure. The structure I'm using currently is as follows: - write down useful concepts, - write down (as a separate category) useful heuristics & things to do in various situations, - do not write facts, opinions or anything else (I rely on unaided memory to get more filtering). Heuristic: learn concepts before facts! Note that you can be mistaken about facts, but you can't harm your epistemology by learning concepts. Even if a concept turns out to be useless or misleading, you are better off knowing about it, understanding how it's misleading, and being able to avoid the trap when you see it. Let's share concepts! Please give (at a minimum) a name and a reference (link). A short description in plain language is also welcome.
The simple picture on AI safety

At every company I've ever worked at, I've had to distill whatever problem I'm working on to something incredibly simple. Only then have I been able to make real progress.

In my current role, at an autonomous driving company, I'm working in the context of a rapidly growing group that is attempting to solve at least 15 different large engineering problems, each of which is divided into many different teams with rapidly shifting priorities and understandings of the problem. The mission of my team is to make end-to-end regression testing rock solid so that the whole company can deploy code updates to cars without killing anyone. But that wasn't the mission from the start: at the start it was a collection of people with a mandate to work on a bunch of different infrastructural and productivity issues. As we delved into mishmash after mishmash of complicated existing technical pieces, the problem of fixing it all became ever more abstract. We built long lists of ideas for fixing pieces, wrote extravagant proposals, and drew up vast and complicated architecture diagrams. It all made sense, but none of it moved us closer to solving anything.

At some point, we distilled the problem down to a core that actually resonated with us and others in the company. It was not some pithy marketing-language mission statement; it was not a sentence at all -- we expressed it a little differently every time. It represented an actual comprehension of the core of the problem. We got two kinds of reactions: to folks who thought the problem was supposed to be complicated, our distillation sounded childishly naive. They told us the problem was much more complicated. We told them it was not. To folks who really wanted to solve the problem with their own hands, the distillation was energizing. They said that yes, this is the kind of problem that can be solved.

I have never encountered a real problem without a simple distillation at its core. Some problems have complex solutions. Some problems a
[Preprint] Pretraining Language Models with Human Preferences

Surprised no one posted about this from Anthropic, NYU and Uni of Sussex yet:

* Instead of fine-tuning on human preferences, they directly incorporate human feedback in the pre-training phase, conditioning the model on <good> or <bad> feedback tokens placed at the beginning of the training sequences.
* They find this to be Pareto-optimal out of five considered pre-training objectives, greatly reducing the amount of undesired outputs while retaining standard LM pre-training downstream performance AND outperforming RLHF fine-tuning in terms of preference satisfaction.

This conditioning is very reminiscent of the [decision transformer](https://arxiv.org/abs/2106.01345), where scalar reward tokens are prepended to the input. I believe [CICERO](https://about.fb.com/news/2022/11/cicero-ai-that-can-collaborate-and-negotiate-with-you/) also does something similar, conditioning on ELO scores during dialogue generation training.

From a discussion with [James Chua](https://www.lesswrong.com/users/james-chua) on [AISS](https://www.aisafetysupport.org/)'s slack, we noted similarities between this work and [Charlie Steiner](https://www.lesswrong.com/users/charlie-steiner)'s [Take 13: RLHF bad, conditioning good](https://www.lesswrong.com/posts/AXpXG9oTiucidnqPK/take-13-rlhf-bad-conditioning-good). James is developing a [library ("conditionme")](https://github.com/thejaminator/conditionme) specifically for rating-conditioned language modelling and was looking for some feedback, which prompted the discussion. We figured potential future work here is extending the conditioning to scalar rewards (rather than the discrete <good> vs <bad>), which James pointed out requires some caution with the tokenizer, which he hopes to address in part with conditionme.
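The core data-side trick is easy to sketch: score each pretraining sequence with a preference/reward model, prepend a control token accordingly, then train a standard causal LM on the tagged text. The sketch below is my own illustration, not the paper's code; the toy scorer, threshold, and token strings are all invented:

```python
# Minimal sketch of conditional pretraining data construction: prepend
# <good>/<bad> control tokens based on a preference score, then train an
# ordinary causal LM on the tagged sequences.
GOOD, BAD = "<good>", "<bad>"
THRESHOLD = 0.5  # invented cutoff; the paper's scoring setup differs per task

def toy_preference_score(text: str) -> float:
    # Stand-in for a learned reward/preference model.
    return 0.0 if "rude" in text else 1.0

def tag_sequence(text: str) -> str:
    token = GOOD if toy_preference_score(text) >= THRESHOLD else BAD
    return f"{token} {text}"

corpus = [
    "Thanks for the report, here is a fix.",
    "That is a rude and unhelpful reply.",
]
tagged = [tag_sequence(t) for t in corpus]
# -> ["<good> Thanks for the report, here is a fix.",
#     "<bad> That is a rude and unhelpful reply."]

# At inference time, you condition generation on the <good> prefix,
# e.g. prompt = "<good> " + user_prompt, so the model imitates the preferred
# slice of its training distribution rather than the whole thing.
```

Extending this to scalar rewards, as discussed above, mostly amounts to prepending binned reward tokens the same way instead of the two discrete tokens, which is where the tokenizer caution comes in.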
[LINK]Real time mapping of neural activity in a larval zebra fish https://plus.google.com/109794669788083578017/posts/gLgSnkCtgrR > Brain function relies on communication between large populations of neurons across multiple brain areas, a full understanding of which would require knowledge of the time-varying activity of all neurons in the central nervous system. Here we use light-sheet microscopy to record activity, reported through the genetically encoded calcium indicator GCaMP5G, from the entire volume of the brain of the larval zebrafish in vivo at 0.8 Hz, capturing more than 80% of all neurons at single-cell resolution. Demonstrating how this technique can be used to reveal functionally defined circuits across the brain, we identify two populations of neurons with correlated activity patterns. One circuit consists of hindbrain neurons functionally coupled to spinal cord neuropil. The other consists of an anatomically symmetric population in the anterior hindbrain, with activity in the left and right halves oscillating in antiphase, on a timescale of 20 s, and coupled to equally slow oscillations in the inferior olive. Page down at the link to see the animation.
Complex Systems for AI Safety [Pragmatic AI Safety #3] *This is the third post in* [*a sequence of posts*](https://www.alignmentforum.org/posts/bffA9WC9nEJhtagQi/introduction-to-pragmatic-ai-safety-pragmatic-ai-safety-1) *that describe our models for Pragmatic AI Safety.* It is critical to steer the AI research field in a safer direction. However, it’s difficult to understand how it can be shaped, because it is very complex and there is often a high level of uncertainty about future developments. As a result, it may be daunting to even begin to think about how to shape the field. We cannot afford to make too many simplifying assumptions that hide the complexity of the field, but we also cannot afford to make too few and be unable to generate any tractable insights. Fortunately, the field of complex systems provides a solution. The field has identified commonalities between many kinds of systems and has identified ways that they can be modeled and changed. In this post, we will explain some of the foundational ideas behind complex systems and how they can be applied to shaping the AI research ecosystem. Along the way, we will also demonstrate that deep learning systems exhibit many of the fundamental properties of complex systems, and we show how complex systems are also useful for deep learning AI safety research. A systems view of AI safety --------------------------- ### Background: Complex Systems When considering methods to alter the trajectory of empirical fields such as deep learning, as well as preventing catastrophe from higher risk systems, it is necessary to have some understanding of complex systems. Complex systems is an entire field of study, so we cannot possibly describe every relevant detail here. In this section, we will try merely to give a very high level overview of the field. At the end of this post we present some resources for learning more. Complex systems are systems consisting of many interacting components that exhibit emergent collective behavior. Complex systems are highly interconnected, making decomposition and reductive analysis less effective: breaking the system down into parts and analyzing the parts cannot give a good explanation of the whole. However, complex systems are also too organized for statistics, since the interdependencies in the system break fundamental independence assumptions in much of statistics. Complex systems are ubiquitous: financial systems, power grids, social insects, the internet, weather systems, biological cells, human societies, deep learning models, the brain, and other systems are all complex systems. It can be tricky to compare AI safety to making other specific systems safer. Is making AI safe like making a rocket, power plant, or computer program safe? While analogies can be found, there are many disanalogies. It’s more generally useful to talk about making complex systems safer. For systems theoretic hazard analysis, we can abstract away from the specific content and just focus on shared structure across systems. Rather than talk about what worked well for one high-risk technology, with a systems view we can talk about what worked well for a large number of them, which prevents us from overfitting to a particular example. The central lesson to take away from complex systems theory is that reductionism is not enough. It’s often tempting to break down a system into isolated events or components, and then try to analyze each part and then combine the results. 
This incorrectly assumes that separation does not distort the system’s properties. In reality, parts do not operate independently, and are subject to feedback loops and nonlinear interactions. Analyzing the pairwise interactions between parts is not sufficient for capturing the full system complexity (this is partially why a [bag of n-grams](https://towardsdatascience.com/evolution-of-language-models-n-grams-word-embeddings-attention-transformers-a688151825d2) is far worse than attention). Hazard analysis once proceeded by reductionism alone. In earlier models, accidents are broken down into a chain of events thought to have caused that accident, where a hazard is a root cause of an accident. Complex systems theory has supplanted this sort of analysis across many industries, in part because the idea of an ultimate “root cause” of a catastrophe is not productive when analyzing a complex system. Instead of looking for a single component responsible for safety, it makes sense to identify the numerous factors, including sociotechnical factors, that are contributory. Rather than break events down into cause and effect, a systems view instead sees events as a product of a complex interaction between parts. Recognizing that we are dealing with complex systems, we will now discuss how to use insights from complex systems to help make AI systems safer. ### Improving Contributing Factors “Direct impact,” that is  impact produced from a simple, short, and deterministic causal chain, is relatively easy to analyze and quantify. However, this does not mean that direct impact is always the best route to impact. If someone only focuses on direct impact, they won’t optimize for diffuse paths towards impact. For instance, EA community building is indirect, but without it there would be far fewer funds, fewer people working on certain problems, and so on. Becoming a billionaire and donating money is indirect, but without this there would be significantly less funding. Similarly, safety field-building may not have an immediate direct impact on technical problems, but it can still vastly change the resources devoted to solving those problems, in turn contributing to solving them (note that “resources” does not (just) mean money, but rather competent researchers capable of making progress). In a complex system, such indirect/diffuse factors have to be accounted for and prioritized. AI safety is not all about finding safety mechanisms, such as mechanisms that could be added to make superintelligence completely safe. This is a bit like saying computer security is all about firewalls, which is not true. [Information assurance](https://online.norwich.edu/academic-programs/resources/information-assurance-versus-information-security) evolved to address blindspots in information security, because it is understood that we cannot ignore [complex systems](http://web.mit.edu/smadnick/www/wp/2014-07.pdf), safety culture, protocols, and so on. Often, research directions in AI safety are thought to need to have a simple direct impact story: if this intervention is successful, what is the short chain of events towards it being useful for safe and aligned AGI? “How does this directly reduce x-risk” is a well-intentioned question, but it leaves out salient remote, indirect, or nonlinear causal factors. Such diffuse factors cannot be ignored, as we will discuss below. 
**A note on tradeoffs with simple theories of impact** AI safety research is complex enough that we should expect that understanding a theory of impact might require deep knowledge and expertise about a particular area. As such, a theory of impact for that research might not be easily explicable to somebody without any background in a short amount of time. This is especially true of theories of impact that are multifaceted, involve social dynamics, and require an understanding of multiple different angles of the problem. As such, we should not only focus on theories of impact that are easily explicable to newcomers. In some cases, pragmatically one should not always focus on the research area that is most directly and obviously relevant. At first blush, reinforcement learning (RL) is highly relevant to advanced AI agents. RL is conceptually broader than supervised learning such that supervised learning can be formulated as an RL problem. However, the problems considered in RL that aren’t considered in supervised learning are currently far less tractable. This can mean that in practice, supervised learning may provide more tractable research directions. However, with theories of impact that are less immediately and palpably related to x-risk reduction, we need to be very careful to ensure that research remains relevant. Less direct connection to the essential goals of the research may cause it to drift off course and fail to achieve its original aims. This is especially true when research agendas are carried out by people who are less motivated by the original goal of the research, and could potentially lead to value drift where previously x-risk-motivated researchers become motivated by proxy goals that are no longer relevant. This means that it is much more important for x-risk-motivated researchers and grantmakers to maintain the field and actively ensure research remains relevant (this will be discussed later). Thus, there is a tradeoff involved in only selecting immediately graspable impact strategies. Systemic factors cannot be ignored, but this does not eliminate the need for understanding causal (whether indirect/nonlinear/diffuse or direct) links between research and impact. **Examples of the importance of systemic factors** The following examples illustrate the extreme importance of systemic factors (and the limitations of direct causal analysis and complementary techniques such as [backchaining](https://en.wikipedia.org/wiki/Backward_chaining)): * Increasing wealth is strongly associated with a reduction in childhood mortality. But one cannot always credit the survival of any particular child to an increase of the wealth of their country. Nonetheless, a good way to reduce childhood mortality is still to increase overall wealth. * Community building, improving institutions, and improving epistemics can usually not be linked directly to specific outcomes, but in aggregate they clearly have large effects. * Smoking does not guarantee you will get cancer. If you smoke and get cancer, it is not necessarily because you smoked. Still, avoiding smoking is clearly a good way to avoid cancer. Contrariwise, exercise does not guarantee that you will be healthy, but it robustly helps. * Intelligence (e.g. as measured by IQ) has an enormous impact on the ability of people to perform various tasks. 
But it is implausible to point to a particular multiple choice test question that somebody answered correctly and say “they got this question because their IQ was above x.” Similarly, forecasting and rationality could increase the “IQ” of the superorganism, but it similarly could not be expected to produce one single definite outcome. Improving the rationality waterline helps with outcomes, even if we cannot create a simple chain of events showing that it will prevent a particular future catastrophe. * Any particular hurricane or wildfire cannot be attributed to the effects of climate change, but reducing climate change is a good way to reduce the prevalence of those extreme weather events. In the cases above, it is possible to use statistics to establish the relationship between the variables given enough data. Some can be causally established through randomized controlled trials. However, we do not have the ability or time to run an RCT on diffuse factors that reduce x-risk from AI. Unlike the situations above, we do not get to observe many different outcomes because an existential catastrophe would be the last observation we would make. This does not mean diffuse factors are unimportant; on the contrary, they are extremely important. We can instead identify time-tested factors that have been robustly useful in similar contexts in the past. On a more societal scale, the following diffuse factors are quite important for reducing AI x-risk. Note that these factors may interact in some cases: for instance, proactivity about risks might not help much if malevolent actors are in power. * **People having improved epistemics:** Irrationality could cause people to ignore warning signs, dismiss correct claims, and barrel ahead when they shouldn’t. * **Proactivity about (tail) risks:** Causing humanity as a collective to care more about tail risks would be a boon for safety. Work on mitigating tail risks is currently underincentivized due to the human tendency to ignore tail risks. * **Expanded moral circles:**The term “[moral circle](https://en.wikipedia.org/wiki/The_Expanding_Circle)” describes the beings that one considers to be morally relevant (e.g. people in your community, people across the world, future people, non-human animals, etc.). People do not need a large moral circle to want to avoid their own deaths, but it can strengthen the perceived importance of reducing x-risk. * **Keeping (misaligned) malevolent actors (**[**egoists/Machiavellians/psychopaths**](https://longtermrisk.org/files/Reducing_long_term_risks_from_malevolent_actors.pdf)**) out of power:**Contending with actively malevolent leaders is even more difficult than contending with apathetic leaders. Getting even-handed, cautious, and altruistic people into positions of power is likely to reduce x-risk. **Sociotechnical Factors** ![](https://res.cloudinary.com/lesswrong-2-0/image/upload/v1675538016/mirroredImages/n767Q8HqbrteaPA25/gkw7skazvgcmbg44cnlo.png)*An abstract template from Nancy Leveson illustrating the complex interplay between sociotechnical factors and an operating process.*We can now speak about specific diffuse factors that have shown to be highly relevant to making high-risk technological systems safer, which are also relevant to making present and future AI systems safer. 
The following sociotechnical factors (compiled from [Perrow](https://en.wikipedia.org/wiki/Normal_Accidents), [La Porte](https://polisci.berkeley.edu/sites/default/files/people/u3825/High%20Reliability%20Organizations%20-%20Unlikely,%20Demanding,%20and%20At%20Risk.pdf), [Leveson](http://sunnyday.mit.edu/safer-world.pdf), and others) tend to influence hazards: * **Rules and regulations**, perhaps including internal policies as well as legal governance. * **Social pressures**, including those from the general public as well as well-connected powerful people. * **Productivity pressures**, or pressure to deliver quickly. * **Incentive structures**within the organization, such as benefits to delivering quickly or retaliation for whistleblowing. * **Competition pressures from other actors** who may have different safety standards, or otherwise be able to move faster. * **Safety budget and compute allocation**: are safety teams capable of running the experiments they need to? Is a significant proportion of the budget and compute dedicated to safety? * **Safety team size**, which is related to budget. The number of researchers, engineers, and top researchers on the safety team matters a lot. * **Alarm fatigue**: if many false alarms are raised about safety issues which were never borne out, this could reduce willingness to care about safety. * **Reduction in inspection and preventative maintenance**, which is perhaps less relevant for a forward-looking problem like safety. However, if people do not keep a close eye on capabilities, this could allow for emergent capabilities (or actors) to take us by surprise. * **Lack of defense in depth**: overlapping systems that provide multiple layers of defense against hazards. * **Lack of redundancy**: multiple systems which accomplish similar safety tasks, so as to remove single points of failure. * **Lack of fail-safes**: features that allow a system to fail gracefully. * **Safety mechanism cost**: how much does it cost to make a system safe? * **Safety culture**, or the general attitude towards safety within an organization or field. According to Leveson, who has been consulted on the design of high-risk technologies across numerous industries, “*the most important [contributing factor] to fix if we want to prevent future accidents*” is safety culture. **Safety Culture** Safety culture is not an easy risk factor to address, though it is likely to be one of the most important. Many ML researchers currently roll their eyes when asked about alignment or safety: usually, one cannot simply go straight to discussing existential risks from superintelligences without suffering social costs or efforts potentially backfiring. This is a sign of a deficient safety culture. How do we improve safety culture? Safety needs to be brought to the forefront through good incentive structures and serious research. Pushing research cultures in a safer direction is bottlenecked by finding interesting, shovel-ready, safety-relevant tasks for people to do and funding them to complete those tasks. ![](https://res.cloudinary.com/lesswrong-2-0/image/upload/v1675538016/mirroredImages/n767Q8HqbrteaPA25/dqi3qijtmhdrf2kkwlhw.png) As suggested by [the speculative pyramid above](https://assets.pubpub.org/5nv701md/01521405455055.pdf), it is not realistic to immediately try to make safety into a community norm. Before this can be done, we need to make it clear what safety looks like and we need infrastructure to make AI safety research as easy as possible. 
Researchers need to accept arguments about risks *and* they need clear, concrete, low-risk research tasks to pursue. This involves creating funding opportunities, workshops, and prizes, as well as clearly defining problems through metrics. Some contributing [factors](https://arxiv.org/abs/1811.10840) that can improve safety culture are as follows: * **Preoccupation with failure**, especially black swan events and unseen failures. * **Reluctance to simplify interpretations** and explain failures using only simplistic narratives. * **Sensitivity to operations**, which involves closely monitoring systems for unexpected behavior. * **Commitment to resilience**, which means being rapidly adaptable to change and willing to try new ideas when faced with unexpected circumstances. * **Under-specification of organizational structures**, where new information can travel throughout the entire organization rather than relying only on fixed reporting chains. For mainstream culture, public outreach can help. One plausible way that AI systems could become more safe is due to a broader cultural desire for safety, or fear of lack of safety. Conversely, if AI safety is maligned or not valued in the general public, there may be other public pressures (e.g. winning the AI race, using AI to achieve some social good quickly) that could push against safety. Again, mainstream outreach should not be so extreme as to turn the research community against safety. Overton windows must be shifted with care. Currently, safety is being attacked by [critics](https://twitter.com/timnitGebru/status/1485399721409605632) who believe that it detracts from work on AI fairness and bias and does not heavily prioritize current power inequalities, which they view as the root cause of world problems. Criticisms have been connected to criticisms of longtermism, particularly absurd-seeming expected value calculations of the number of future beings, as well as the influence of EA billionaires. These criticisms threaten to derail safety culture. It is tricky but necessary to present an alternative perspective while avoiding negative side effects. Some technical problems are instrumentally useful for safety culture in addition to being directly useful for safety. One example of this is reliability: building highly reliable systems trains people to specifically consider the tail-risks of their system, in a way that simply building systems that are more accurate in typical settings does not. On the other hand, value learning, while it is also a problem that needs to be solved, is currently not quite as useful for safety culture optimization. **Composition of top AI researchers** We will now discuss another contributing factor that is important to improve: the composition of top AI researchers. In the future, experimenting with the most advanced AI systems will be extraordinarily expensive (in many cases, it already is). A very small number of people will have the power to set research directions for these systems. Though it’s not possible to know exactly who will be in this small group, it could comprise any number of the top AI researchers today. However, one thing is known: most top AI researchers are not sympathetic to safety. Consequently, there is a need to increase the proportion of buy-in among top researchers, especially including researchers in China, and also to train more safety-conscious people to be top researchers. It’s tempting to think that top AI researchers can simply be bought. This is not the case. 
To become top researchers, they had to be highly opinionated and driven by factors other than money. Many of them entered academia, which is not a career path typically taken by people who mainly care about money. Yann LeCun and Geoffrey Hinton both still hold academic positions in addition to their industry positions at Meta and Google, respectively. Yoshua Bengio is still in academia entirely. The tech companies surely would be willing to buy more of their time for a higher price than academia, so why are the three pioneers of deep learning not all in the highest-paying industry job? Pecuniary incentives are useful for externally motivated people, but many top researchers are mostly internally motivated. As discussed in the last post, a leading motivation for researchers is the interestingness or "coolness" of a problem. Getting more people to research relevant problems is highly dependent on finding interesting and well-defined subproblems for them to work on. This relies on concretizing problems and providing funding for solving them. Due to the fact that many top researchers are technopositive, they are not motivated by complaints about the dangers of their research, and they are likely to be dismissive. This is especially true when complaints come from those who have not made much of a contribution to the field. As a result, it is important to keep the *contribution to complaint ratio* high for those who want to have any credibility. "Contribution" can be a safety contribution, but it needs to be a legible contribution to ML researchers. Top researchers may also associate discussion of existential risk with sensationalist stories in the media, doom-and-gloom prophecies, or panic that "we're all going to die."

**Causes of Neglectedness**

There are a number of additional factors which contribute to the general neglectedness of AI safety. It is important to optimize many of these factors in order to improve safety. A more general list of these factors is as follows.

* **Corporate**: myopic desire for short-term shareholder returns, safety features may take a long time to pay off, some human values may be difficult to incorporate in prices or pecuniary incentives
* **Temperamental**: techno-optimism, distaste for discussing risks
* **Political**: AI safety is seen to compete with more politically popular causes like climate change and reducing inequality
* **Technical Background**: safety problems are outside of one's existing skill set or training, and likewise machine ethics and sociotechnical concerns do not comport as easily with their quantitative inclinations
* **Socioeconomic distance**: many AI researchers live in tech bubbles, which can cause researchers to devalue or implicitly underemphasize cosmopolitan approaches towards loading human values
* **Tail risks:** highly consequential black swans and tail risks are systematically neglected
* **Respectability**: distaste for talk of AGI, feeling an area is not prestigious, areas associated with people who hold other unpopular or weird-seeming ideas
* **Temporal**: future risks and future people are highly neglected

### Complex Systems for AI Safety

The study of complex systems emphasizes that we should focus on contributing factors (as events are the product of the interaction of many contributing factors), and it helps us identify which contributing factors are most important across many real-world contexts.
It also provides object-level insight about deep learning, since deep learning systems are themselves complex systems. Deep learning exhibits many hallmarks of complex systems:

* *Highly distributed functions*: partial concepts are encoded redundantly and highly aggregated
* *Numerous weak nonlinear connections*: connection parameters are nonzero (rather than sparse) and neural networks contain nonlinear activation functions
* *Self-organization*: optimizing a loss automatically specifies a model's internal content
* *Adaptivity*: few-shot models and online models are adaptive
* *Feedback loops*: [Self-play](https://en.wikipedia.org/wiki/AlphaZero), [human in the loop](https://arxiv.org/abs/1706.03741), [auto-induced distribution shift](https://arxiv.org/abs/2009.09153)
* *Scalable structure*: [scaling laws](https://arxiv.org/abs/2001.08361) show that models scale simply and consistently
* *Emergent capabilities*: numerous unplanned capabilities spontaneously "[turn on](https://arxiv.org/abs/2202.07785)"

As such, insights from complex systems are quite applicable to deep learning. Similarly, like all large sociotechnical structures, the AI research community can also be considered to be a complex system. The organizations operating AI systems are also complex systems. Complex systems theory is a *predictive*—not just explanatory—model for various problems, including AI safety. In fact, many important concepts in AI safety turn out to be specific instances of more general principles. Here are examples of *highly simplified* lessons from complex systems, mostly from [The Systems Bible](https://en.wikipedia.org/wiki/Systemantics) (1975):

* **Systems develop goals of their own the instant they come into being.**
  + *Explanation:* A system's goal is seldom merely the initial goal it was tasked with. Rather, other goals emerge from the organization of the system.
  + *Implications for AI:* One salient example is instrumental goals for self-preservation or power-seeking.
* **Intrasystem goals come first.**
  + *Explanation:* Systems often decompose goals into subparts for different intrasystem components to solve. During this decomposition, goals are often distorted. A common failure mode is that the system's explicitly written objective is not necessarily the objective that the system operationally pursues, and this can result in misalignment. A system's subgoals can supersede its actual goals. For example, a bureaucratic department (a subsystem) can capture power and have the company pursue goals unlike its original goals.
  + *Implications for AI:* A related phenomenon is already well known to the community as [mesa-optimization](https://arxiv.org/abs/1906.01820); it has been predicted on a more general level by systems theory for decades.
* **The mode of failure of a complex system cannot ordinarily be predicted from its structure.**
  + *Explanation:* Simply examining a complex system will not necessarily give you a good idea for how it might fail. Failures are usually identified from experience and testing.
  + *Implications for AI:* It is difficult to understand all the ways a neural network might fail simply by examining its weights or architecture or through armchair/whiteboard analysis. We can count on some failures being unpredictable. (Although failures are inevitable, catastrophes are not.)
  + *Implications for strategy:* An approach of "think about the problem really hard and make sure there are no holes in the solution" is unlikely to turn up a solution that truly has no holes.
Preventing failure in a complex system is not a math problem. In complex systems there are few symmetries, few necessary and sufficient conditions or boolean connectives (no root cause), circular relationships, numerous partial concepts (combinatorial explosion), self-organization, high distributivity. All of these properties make complex systems very difficult to analyze from an armchair/whiteboard or with proofs. * **The crucial variables are discovered by accident.** + *Explanation*:It is difficult to know what the most important parts of a system are by inspection. The highest points of leverage are not obvious. Likewise, the methods that will work best are often found by tinkering or serendipity. + *Implications for AI:*Many of the greatest breakthroughs in AI are not discovered purely by principled, highly structured investigation, but instead by tinkering. + *Implications for strategy*: Many current approaches to research bet on AGI being best represented as a mathematical object rather than a complex system, which seems unrealistic given current AI systems as well as other intelligent systems we know (e.g. humans, corporations). * **A large system, produced by expanding the dimensions of a smaller system, does not behave like the smaller system.** + *Explanation:*Purely scaling up a system does not only make it better at whatever it was doing before. We should expect to see new qualitative properties and emergent capabilities. + *Implications for AI:* We should expect to also see emergent capabilities that did not exist at all in smaller versions. For example, at low levels of capabilities, deception is not a good idea for an intelligence, but as it becomes more intelligent, deception may be a better strategy for achieving goals. + *Implications for strategy:*Scaling up an aligned system and expecting it to be fully aligned is not an airtight idea. Scaling, even of a highly reliable system, needs to be done carefully. * **(From Gilb) Gilb’s Laws of Unreliability: any system which depends on human reliability is unreliable.** + *Explanation:* Humans are not reliable. Reliance on them will create unreliability. + *Implications for strategy:* AI systems may be too explosive and fast-moving for depending heavily on human feedback or human-in-the-loop methods. We will need a more reliable strategy for preserving human values, perhaps through oversight from other AI systems. * **A complex system that works is invariably found to have evolved from a simple system that works.** + *Explanation:*Complex systems cannot be created from scratch and expected to work. Rather, they have to evolve from simpler functioning systems. + *Implications for strategy:* Working on safety for simpler systems, and attempting to (carefully) scale them up is more likely to be successful than starting by trying to build an aligned complex system from scratch. Although systems behave differently when scaled, the ones that work are evolved from smaller systems. If one is unable to align a simpler version of a complex system, it is unlikely that one can align the more complex version. On this view a top priority is making today’s simpler systems safer. Diversification --------------- There are many different facets involved in making complex systems work well; we cannot simply rely on a single contributing factor or research direction. The implication is that it makes sense to diversify our priorities. 
Since an individual has limited ability to become specialized and there are many individuals, it often makes sense to bet on the single highest expected value (EV) research approach. However, it would be a mistake to think of the larger system in the same way one thinks of an individual within the system. If the system allocates all resources into the highest EV option, and that sole option does not pay off, then the system fails. This is a known fact in finance and many other fields that take a portfolio approach to investments. Do not make one big bet or only bet on the favorite (e.g., highest estimated EV) avenue. The factor with the highest return on investment in isolation is quite different from the highest return on investment *profile* spanning multiple factors. The marginal benefit of X might be higher than that of Y, but the system as a whole is not forced to choose only one. As the common adage goes, "don't put all your eggs in one basket."

One example of obviously suboptimal resource allocation is that the AI safety community spent a very large fraction of its resources on reinforcement learning until relatively recently. While reinforcement learning might have seemed like the most promising area for progress towards AGI to a few of the initial safety researchers, this strategy meant that not many were working on deep learning. Deep learning safety researchers were encouraged to focus on RL environments because RL is "strictly more general," but just because one can cast a problem as a reinforcement learning problem does not mean one should. At the same time, the larger machine learning community focused more on deep learning than reinforcement learning. Obviously, deep learning appears now to be [at least as promising](https://www.metaculus.com/questions/4055/will-the-first-agi-be-based-on-deep-learning/) as reinforcement learning, and a lot more safety research is being done in deep learning. Due to tractability, the value of information, iterative progress in research, and community building effects, it might have been far better had more people been working on deep learning from an earlier date. This could readily have been avoided had the community leaders heeded the importance of heavily diversifying research.

If we address multiple fronts simultaneously rather than betting the community on a single area or strategy, we will pay lower costs from neglecting important variables. Since costs often scale superlinearly with the time a problem has been neglected, [which has serious practical implications](https://jessitron.com/2021/01/18/when-costs-are-nonlinear-keep-it-small/), it makes sense to apply resources to pay costs frequently, rather than only applying resources after costs have already blown up. The longer one waits, the more difficult it could be to apply an intervention, and if costs are convex (e.g. quadratic rather than logarithmic), costs are exacerbated further. Diversification implicitly keeps these costs lower.

AI safety is an area with extremely high uncertainty: about what the biggest problems will be, what timelines are, what the first AGI system will look like, etc. [At the highest levels of uncertainty](https://www.mckinsey.com/business-functions/strategy-and-corporate-finance/our-insights/strategy-under-uncertainty), it is most important to *improve the virtues of the system* (e.g., meritocratic structures, sheer amount of talent, etc.).
If your uncertainty level is slightly less, you *additionally* want to make a few big bets and numerous small bets created in view of a range of possible futures. Moreover, under high uncertainty or when work is inchoate, it is far more effective to follow an “[emergent strategy](https://online.hbs.edu/blog/post/emergent-vs-deliberate-strategy#:~:text=As%20a%20general%20rule%20of%20thumb,%20an%20emergent%20strategy%20may%20be%20the%20right%20choice%20for%20your%20business%20if%20the%20future%20is%20uncertain),” not define the strategy with a highly structured, perfected direction. With diversification, we do not need to decisively resolve all of the big questions before acting. Will there be a slow takeoff, or will AI go foom? Are the implicit biases in SGD beneficial to us, or will they work against us? Should we create AI to pursue a positive direction, or should we just try to maximize control to prevent it from taking over? So long as answers to these questions are not highly negatively correlated, we can diversify our bets and support several lines of research. Additionally, research can help resolve these questions and can inform which future research should be included in the overall portfolio. Seeing value in diversification saves researchers from spending their time articulating their tacit knowledge and highly technical intuitions to win the court of public opinion, as perhaps the question cannot be resolved until later. Diversification makes researchers less at odds with each other and lets them get on with their work, and it reduces our exposure to risks from incorrect assumptions. Diversification does not mean that one should not be discretionary about ideas. Some ideas, including those commonly pursued in academia and industry, may not be at all useful to x-risk, even if they are portrayed that way. Just because variables interact nonlinearly does not mean that resources should be devoted to a variable that is not connected with the problem. In addition, *individuals* do not necessarily need to have a diverse portfolio. There is a benefit to specialization, and so individuals may be better off choosing a single area where they are likely to reach the tail of impact through specialization. However, if everyone individually focused on what they viewed as the most important area of research overall, and their judgments on this were highly correlated, we would see a concentration of research into only a few areas. This would lead to problems, because even if these areas are the most important, they should not be single-mindedly pursued to the neglect of all other interventions. In complex systems, we should expect many multiplicatively interacting variables to be relevant to the overall safety of a system (we will discuss this model more in the next post). If we neglect other safety factors only to focus on “the most important one,” we are essentially setting everything else to zero, which is not how one reduces the probability of risk in a multiplicative system. For instance, we should not just focus on creating technical safety solutions, let alone betting on one main technical solution. There are other variables that can be expected to nonlinearly interact with this variable: the cost of such a system, the likelihood of AGI being developed in a lab with a strong safety culture, the likelihood of other actors implementing an unaligned version, and the likelihood of the aligned system in question being the one that actually leads to AGI. 
These interactions and interdependencies imply that effort must be expended to push on all factors simultaneously. This can also help provide what is called [*defense in depth*](https://en.wikipedia.org/wiki/Swiss_cheese_model): if one measure for driving down x-risk fails, other already existing measures can help handle the problem.

Like many outcomes, impact is long tailed, and the impact of a grant will be dominated by a few key paths to impact. Likewise, in a diverse portfolio, the vast majority of the impact will likely be dominated by a few grants. However, the best strategies will [*sample heavily*](https://www.benkuhn.net/outliers/) *from the long tail distribution*, or maximize exposure to long tail distributions. Some ways to increase exposure to the black swans are with broad interventions that could have many different positive impacts, as well as a larger portfolio of interventions. This contrasts with an approach that attempts to select only targeted interventions in the tails, which is often infeasible in large, complex systems because the tails cannot be fully known beforehand. Instead, one should prioritize interventions that have a sufficient chance of being in the tails.

Depending on what phase of AI development we are in, [targeted or broad](https://rucore.libraries.rutgers.edu/rutgers-lib/40469/PDF/1/play/) interventions should be more emphasized in the portfolio. In the past, broad interventions would clearly have been more effective: for instance, there would have been little use in studying empirical alignment prior to deep learning. Even more recently than the advent of deep learning, many approaches to empirical alignment were highly deemphasized when large, pretrained language models arrived on the scene (refer to our discussion of creative destruction in the last post). Since the deep learning community is fairly small, it is relatively tractable to work on broad interventions (in comparison to e.g. global health, where interventions will need to affect millions of people). At this stage, targeted interventions to align particular systems are not currently likely to deliver all the impact, nor are broad approaches that hope to align *all* possible systems. This is because there is still immense upside in optimizing contributing factors to good research, which will in turn cause both of these approaches to be dramatically more effective. The best interventions will look less like concrete stories for how the intervention impacts a particular actor during the creation of AGI and more like actions that help to improve the culture/incentives/buy-in of several possible actors.

This suggests that a useful exercise might be coming up with broad interventions that equip the safety research field to deal with problems more effectively and be better placed to deliver targeted interventions in the future. Note that some broad interventions, like interventions that affect safety culture, are not simply useful insofar as they accelerate later targeted interventions, but also in that they may increase the likelihood of those targeted interventions being successfully adopted. We also need to have targeted interventions, and they may need to be developed before they are known to be needed due to the risk of spontaneously emergent capabilities. There is also an argument that developing targeted interventions now could make it easier to develop targeted interventions in the future. As a result, a mixture of targeted and broad interventions is needed.
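To make the multiplicative-factors point above concrete, here is a toy calculation of my own (the factors and probabilities are invented): if the chance of a good outcome behaves roughly like a product of factor-level success probabilities, pushing one factor to near-certainty while the others stay weak buys much less than spreading the same improvement across all of them.

```python
# Toy model: outcome probability as a product of several contributing safety
# factors, each in [0, 1]. All numbers are invented for illustration.
from math import prod

factors = {
    "technical solution quality": 0.5,
    "safety culture at the leading lab": 0.5,
    "solution actually adopted": 0.5,
    "other actors don't deploy an unsafe system first": 0.5,
}

def p_good_outcome(f):
    return prod(f.values())

baseline = p_good_outcome(factors)                      # 0.5**4 ~= 0.062

# Strategy A: spend all effort pushing one factor to near-certainty.
one_big_bet = {**factors, "technical solution quality": 0.99}

# Strategy B: spread the effort, nudging every factor up moderately.
spread_out = {k: 0.7 for k in factors}

print(f"baseline     : {baseline:.3f}")
print(f"one big bet  : {p_good_outcome(one_big_bet):.3f}")   # ~0.124
print(f"spread effort: {p_good_outcome(spread_out):.3f}")    # ~0.240
```

Whether a given unit of effort really trades off this way between factors is an empirical question the post does not settle; the sketch only illustrates why setting the other factors aside is not free in a multiplicative model.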
Conclusion
----------

It can be daunting to even begin to think how to influence the AI research landscape due to its size and complexity. However, the study of complex systems illuminates some common patterns that can help make this question more tractable. In particular, in many cases it makes more sense to focus on improving contributing factors rather than only trying to develop a solution that has a simple, direct causal effect on the intended outcome. Complex systems are also useful for understanding machine learning safety in general, since the broader research community, deep learning systems, and the organizations deploying deep learning systems are all complex systems.

Resources on Complex Systems
----------------------------

Complex systems is a whole field of study that can't possibly be fully described in this post. We've added this section with resources for learning more.

* (If you only look at one, look at this:) [An introduction to contemporary hazard](https://www.youtube.com/watch?v=_ptmjAbacMk) analysis that justifies the methods far more completely than this post can.
* [A short video introduction to complex systems.](https://www.youtube.com/watch?v=vp8v2Udd_PM)
* [A short video introduction to emergence](https://www.youtube.com/watch?v=QItTWZc7hKs), a key property of complex systems.
* [Systemantics](http://www.bussigel.com/systemsforplay/wp-content/uploads/2013/12/Systemantics.pdf) by John Gall, one of the foundational texts of complex systems.
* [A class introduction to complex systems](https://pdodds.w3.uvm.edu/teaching/courses/2021-2022principles-of-complex-systems/).
Advertise while honoring the dead Roadside suggestions not to kill yourself driving seem to be getting more humorous around here, which suggests that someone is trying to improve them. The best advertisements for careful driving I’ve seen are the little white stick crosses tied to trees and telegraph poles with withered flowers and photographs. I doubt I’m alone in finding the death of a real person smashed into a telegraph pole on my usual route more of a prompt to be careful than an actor looking stern at me or a pun (‘slowing down won’t kill you’). Plus nothing makes an activity feel safe like a gargantuan authority calmly informing me of the risks of it. If the government’s advertising something, everyone knows about it, and if there’s no panic or banning, it’s probably safe. A bedraggled, unprepared memorial is a reminder that ‘they’ aren’t really protecting me. But how could a road authority use these? They could either increase the number or the visibility of them. The usual methods of increasing the number defeat the purpose, and inventing fatal crashes might make people cross. Making memorials more visible is hard, because they are put up by families, besides which the home-made look is valuable, so billboard versions wouldn’t do so well. One solution is just to give bereaved families a bit of the money they usually use on a billboard to construct a temporary memorial of their choice at the site. That way more people would do it, and they could afford more extravagant decoration, so enhancing visibility.
A case for donating to AI risk reduction (including if you work in AI) I work on Open Philanthropy’s AI Governance and Policy team, but I’m writing this in my personal capacity – several senior employees at Open Phil have argued with me about this! This is a brief-ish post addressed to people who are interested in making high-impact donations and are already concerned about potential risks from advanced AI. Ideally such a post would include a case that reducing those risks is an especially important (and sufficiently tractable and neglected) cause area, but I’m skipping that part for time and will just point you to this 80,000 Hours problem profile for now. * Contrary to a semi-popular belief that donations in global catastrophic risks merely “funge” with major donors, there are several ways for individual donors, including those giving small amounts, to reduce global catastrophic risks from AI. These include donating to: * Work that would be less impactful if they were funded by the major funders, or if it were majority-funded by those funders, or would generally benefit from greater funding diversity for reasons of organizational health and independence. * Work that major funders won’t be able to discover, evaluate, and/or fund quickly enough, e.g. time-sensitive events, individual projects, or career transitions. * Work that encounters legal restrictions on size of donation, like political campaigns, political action committees/donor networks. * Work in sub-areas that major funders have decided not to fund. * You can donate to that kind of work either directly (by giving to the organizations or individuals) or indirectly (by giving through funds like the AI Risk Mitigation Fund, the LTFF, Longview’s Emerging Challenges Fund, or JueYan Zhang’s AI Safety Tactical Opportunities Fund. * Advantages to giving directly: * You can give to political campaigns/PACs/donor networks as well as 501(c)(4) lobbying/advocacy organizations, which the funds might not be able to do, though I’m not sure about all of them. (For po
Is anyone else frustrated with 'un-informative' post titles?

The latest post that finally impelled me to ask this question:

* A problem and three ideas - LessWrong

This isn't specific to that post and, having noticed that it is a cross-post, I can appreciate that this is (at least somewhat) tricky to improve, but a lot of the titles of posts are very un-informative. Even qualifying that the above post is specific to AI or AI alignment still doesn't seem like a significant improvement, or an improvement that's 'good enough'. I notice that I often completely ignore posts with sufficiently vague titles. Upon (very shallow) introspection, I feel like the titles are pretty literally clickbait, i.e. 'click me to discover what this is about'. That seems like behavior that warrants 'punishing' (to some degree anyways).

One potential solution (mitigation) would be to include (one of) the tags in the post title, or maybe at least in the RSS feed for posts, à la Stack Overflow, e.g.:

* google chrome - navigator.clipboard is undefined - Stack Overflow

In the above, the 'actual' title of the question is just "navigator.clipboard is undefined" and it seems like the 'first' tag is automatically included in the page title.
Bias in capital project decision making

This is a story about an odd fact I noticed about capital project decision making in engineering, and how it might be related to cognitive biases.

Background

Although I don't work in the field, I was trained as a chemical engineer. A chemical engineer's job is a little different than you might imagine. A chemical engineer's primary job isn't to design chemical processes (they actually do relatively little chemistry) but to build, optimize and maintain industrial plants that produce chemicals (petrol products, cleaners, paint etc.) and materials that are produced similarly to chemicals (wood pulp, composite materials etc.). Questions similar to 'how fast should we mix the fluid in this reactor to make it most efficient?' or 'how can we reuse the waste heat from this process?' are much more common than questions similar to 'how can we create compound A from compound B?'.

Chemical engineers often have to make decisions about what capital improvement projects the firm will undertake, so they must answer questions such as 'install cheap pumps that wear out quickly or the expensive ones that don't?', 'what ethanol-producing bacteria is most efficient for producing ethanol?' and 'is it worth it to install a heat exchanger to recover the waste heat from this process or not?'. The standard technical way of judging the profitability of an option or project is to calculate the Net Present Value (NPV) of the expected cash flows to and from the firm for each different option (installing pump type A or B, using bacteria A, B or C, installing or not installing a heat exchanger). The option with the highest NPV is the most profitable. Calculating the NPV discounts future expected cash flows for the fact that they occur in the future and you have other productive things you could do with money, such as earning interest with it.

Oddly high discount rates

When I was in school, I noticed an odd thing: the interest rates that people used to evaluate projects on this basis, called
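To make the NPV comparison described above concrete, here is a minimal sketch in Python (my own illustrative example, not from the post; the pump options, cash flows, and discount rates are all made up):

```python
def npv(cash_flows, discount_rate):
    """Net present value of a series of yearly cash flows.

    cash_flows[0] is the flow at year 0 (typically the negative purchase cost);
    cash_flows[t] is the flow at the end of year t.
    """
    return sum(cf / (1 + discount_rate) ** t for t, cf in enumerate(cash_flows))


# Hypothetical example: cheap pump vs. expensive pump over a 5-year horizon.
cheap_pump = [-10_000] + [-4_000] * 5      # low purchase cost, high yearly upkeep
expensive_pump = [-25_000] + [-500] * 5    # high purchase cost, low yearly upkeep

for rate in (0.05, 0.15, 0.35):
    cheap, expensive = npv(cheap_pump, rate), npv(expensive_pump, rate)
    better = "cheap" if cheap > expensive else "expensive"
    print(f"discount rate {rate:.0%}: cheap={cheap:,.0f}  expensive={expensive:,.0f}  -> {better} pump wins")
```

Note how higher discount rates increasingly favor the option with the lower up-front cost, which is why the choice of rate matters so much for these capital decisions.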
Distance Functions are Hard

*[Epistemic status: Describes a failed research approach I had a while ago, and my only purpose here is to warn people off from that way of thinking. Every now and then I see someone working on an AIS subproblem say "if only we had a distance function for things in domain X", and my intuition is that they are probably doing a [wrong-way reduction](https://meaningness.com/wrong-way-reduction). But I only mean this as a soft guideline, and I'm only somewhat confident in my current thinking on this.]* ~~~ Terminology: We use the terms *distance* or *distance function* to denote any function
d : X × X → ℝ≥0 that intuitively tells us how “dissimilar” any two members of a set X are (regardless of whether d is a [metric](https://en.wikipedia.org/wiki/Metric_(mathematics))).

Counterfactual Worlds
---------------------

Consider the counterfactual "If Lincoln were not assassinated, he would not have been impeached". If we would like to say this has a truth value, we need to imagine what such a counterfactual world would have looked like: was it because Lincoln (somehow) survived his wounds, John Wilkes Booth (somehow) missed, or that the plot was (somehow) discovered the day before, etc. Somehow, we must pick out the world that is in some sense "closest" to our actual world, but it seems very difficult to compare any two such worlds in a principled way. To formalize [Functional Decision Theory](https://arxiv.org/pdf/1710.05060.pdf) (FDT), we likely need to have a better understanding of counterfactuals, although even in restricted mathematical contexts, we don't have a satisfactory understanding of why "If 0 = 1..." simply returns incoherence, yet "If the Modularity Theorem were false..." seemingly conjures up a possible world that we feel we can reason about. (Also, in terms of corrigibility, we are often interested in formalizing the notion of "low-impact" agents, and the naive idea one often has is to define a distance metric on counterfactual world-states, as in p. 5 of [Concrete Problems in AI Safety](https://arxiv.org/pdf/1606.06565.pdf)).

Algorithmic Similarity
----------------------

In the FDT framework, we do not view ourselves as a solitary agent, but as a *function* (or algorithm) that can be copied, modified, and read, and we wish to maximize the utility achieved by our algorithm. Minor details of our implementation that don't affect our behavior (such as whether we are written in Java or Python) should not be decision-relevant, and if some algorithm does the same thing as us "most" of the time, then we would probably (e.g.) want to cooperate with it in a Prisoner's Dilemma. Defining what it means for two algorithms to be similar remains an outstanding open problem.

At MSFP 2018, a small group (4-5) of us tried tackling this for a couple hours, had a few ideas that "felt" promising, but gradually realized that none of these made any sense, until ultimately we gave up with the feeling that we hadn't made any intellectual advances. I only say this to give outside-view evidence of intractability, but it's difficult for me to concisely communicate *why* it's hard (I could say "try it yourself for an hour and you'll see", but part of my point is that the hour would be better spent elsewhere). For those who insist on *inside-view* evidence, here's an outline of one of the ideas we had and why it turned out to be unworkable: We attempted to partition algorithm-space into equivalence classes that represent "conceptual similarity", which should not be harder than defining a distance function on the space.
By the [Curry-Howard correspondence](https://en.wikipedia.org/wiki/Curry%E2%80%93Howard_correspondence), we can rephrase this as asking when two proofs are similar (this felt easier for us to think about, but that's entirely subjective). Suppose we have some proof A of size n, and we want to find proofs that "don't use any fundamentally different ideas". The obvious approach is to think of which proofs we can get to with minor edits. If we make some edit of size ϵ⋅n for some small ϵ and the result is still a valid proof, it should be more or less the same. If we take the closure under minor edits that preserve validity, it would seem superficially plausible that this would result in proofs that are similar. However, suppose we discover a one-line proof B that's totally different from A: then we can append it to A as a minor edit, then gradually delete A with minor edits, until we have a drastically different proof (among other complications). Adversarial Examples -------------------- Given some data point x correctly classified by an ML model, a new point x′:=x+ϵ is an *[adversarial example](https://openai.com/blog/adversarial-example-research/)* if it is now misclassified, despite only differing from x by a tiny amount ϵ (i.e. making relatively small RGB changes to a few pixels). For *every* state-of-the-art image classifier tested, if you give me: * *Any* image classified correctly by that model * *Any* target class you would like to have the model misclassify the image as Then one can usually find some small perturbation of that image that the model believes is in the target class with high probability. In the classic example we can have GoogLeNet classify a [panda as a gibbon](https://imgur.com/a/Br8SGxZ) with 99% confidence. Moreover, these have been found to generalize very well across different models, even with very different architectures. Last year, a [paper](https://arxiv.org/pdf/1802.08195.pdf) came out taking this further, by obtaining adversarial examples with the best cross-generalization, and giving these to humans who had only a few seconds to classify the image. Interestingly, the humans were "fooled" in the sense that their snap judgments--those formed by their pure visual system--differed from how they classified the images when given more time for reflection. In terms of robustness to these examples, it seems, our perceptual system by itself is not qualitatively better than today's classifiers, but [our lens can see its own flaws](https://www.lesswrong.com/posts/46qnWRSR7L2eyNbMA/the-lens-that-sees-its-flaws). The paper was popularized in various places under a bolder headline, namely that there now existed full-blown adversarial examples *for humans* (reflection or not). This was showcased with a [picture](https://twitter.com/goodfellow_ian/status/966853052140470272?lang=en) from a different part of the paper showing an image of a (somewhat dog-like) cat being given a tiny amount of noise, and subsequently looking like a dog to a human with any amount of visual processing and top-down feedback. This sparked controversy, with many pointing out that a small change (in RGB values) to some visual concept does not necessarily correspond to a small change in concept-space. The paper itself punted on this: > it is philosophically difficult to define the real object class for an image that is not a picture of a real object. 
In this work, we assume that an adversarial image is misclassified if the output label differs from the human-provided label of the clean image that was used as the starting point for the adversarial image. We make small adversarial perturbations and we assume that these small perturbations are insufficient to change the true class. And in response to comments, co-author Ian Goodfellow [acknowledged on Twitter](https://twitter.com/goodfellow_ian/status/967200391673692162): > While everyone else was scrambling to finish running experiments for ICML, my co-authors and I were having intense debates about philosophy and semantics and how to write the paper. Some of our open office colleagues were entertained by how surreal this sounded. Making models robust against adversarial examples remains an outstanding and difficult topic with a considerable paper trail. The problem of merely *verifying* that a given model has no local adversarial examples (e.g. within a few RGB values of a given data point) has been the subject of [some](https://arxiv.org/pdf/1709.02802.pdf) [interesting](https://arxiv.org/pdf/1610.06940.pdf) formal verification work in the past couple years. But to even do this verification work, one needs a formal specification of what an adversarial example is, which in turn requires a formal specification of what a "small change" between (e.g.) images is, that somehow captures something about *conceptual* distance. It seems to me that even this smaller problem will be hard to solve in a philosophically satisfying way because of the inherent subjectivity/fuzziness in defining "distance in concept-space" or anything that even comes close. Distance Functions are Hard: The Evidence ----------------------------------------- What we are asking for, in all these instances, is some distance function precise enough to be mathematizable in some form, but robust enough to include many very fuzzy desiderata we have in mind. It seems natural to ask what distance functions of this form have been successfully developed before. The [Encyclopedia of Distances](https://www.amazon.co.uk/Encyclopedia-Distances-Michel-Marie-Deza/dp/3662528436) comes out to over 700 pages, split roughly in half between those distances used in pure math (especially, as one would expect, topology, geometry, and functional analysis), and those used in applied math, computing disciplines, and the natural sciences. Of the distance functions listed in the latter half, most were simply "the obvious thing one would do" given the preexisting mathematical structure around the topic in question (e.g. [Levenshtein distance](https://en.wikipedia.org/wiki/Levenshtein_distance) on strings). Others were less obvious, but usually because they used nontrivial mathematical machinery to answer specific mathematical questions, not to actually shed light on fuzzy philosophical questions one would have about it. Getting to the social science section, where no existing mathematical formalism existed on most of the topics in the first place, virtually none of the distances particularly helped to remedy this fuzziness by themselves. Though I do not claim to have spent that much time flipping through this tome, never did I see a distance notion that struck me as a profound non-mathematical insight, or that even gestured at an "art of coming up with distance functions". 
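For contrast, here is roughly what one of those "obvious thing one would do" distances looks like in practice: a standard dynamic-programming implementation of Levenshtein distance (my own sketch, not from the post). It is precise and easy to compute, but it captures nothing about similarity in concept-space:

```python
def levenshtein(a: str, b: str) -> int:
    """Minimum number of single-character insertions, deletions, and
    substitutions needed to turn string a into string b."""
    # prev[j] holds the distance between the processed prefix of a and b[:j].
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, start=1):
        curr = [i]
        for j, cb in enumerate(b, start=1):
            curr.append(min(
                prev[j] + 1,               # delete ca
                curr[j - 1] + 1,           # insert cb
                prev[j - 1] + (ca != cb),  # substitute (free if characters match)
            ))
        prev = curr
    return prev[-1]


assert levenshtein("kitten", "sitting") == 3
assert levenshtein("dog", "god") == 2  # close in edit distance, far in concept-space
```

The last example is the point: "dog" and "god" are two edits apart, which tells us nothing about how far apart they are as concepts.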
Conclusions ----------- I conclude, with medium confidence, that each of the questions posed in the first 3 sections will be particularly hard to answer in a satisfying way, and if they are, then probably this won't be by thinking about distance functions directly. As a general heuristic, I feel like if you've reduced a philosophical problem to "defining the appropriate distance function", then it's worth pausing to consider if you've made a [wrong-way reduction](https://meaningness.com/wrong-way-reduction). Chances are, the distance function you want is inherently value-laden, and so the problem of defining it inherits the difficulty of the value alignment problem itself. I also think this heuristic is especially salient if you're trying to capture something like "conceptual similarity/distance": if you could do this, then you'd have an objective map/taxonomy of (a large fraction of) concept-space.
Can you prove that 0 = 1? Of course, 0 is not equal to 1. But certain interpretations of quantum mechanics imply that a thing can be one way and be the opposite way at the same time. Any mathematical proof rests on axioms. What I'm looking for is a proof that 0 = 1 which rests on axioms that are being actively debated by mathematicians. (Then, I'll argue that the truth of 0 = 1 is in a quantum superposition with the debates being held by mathematicians.) Feel free to create your own proof, or to direct me to existing works. I may quote your answers in a soon-to-be-published post on this topic. This doesn't feel frivolous to me; I think that 0 = 1 hints towards a unifying theory for epistemology, math, consciousness, states of enlightenment, and physics. It also has implications for AI. If anyone has meta-advice for me, I'm open to it. I think the benefits balance the negatives of sharing these ideas, especially for ensuring that they're debated more widely and openly. But I could be wrong. (There are keywords I'm not using to be careful.)
What are some things you would do (more) if you were less averse to being/looking weird? I had forgotten I had asked this question already, and I asked it again here: https://www.lesswrong.com/posts/mengYutEGfzyeA6Xi/what-would-you-do-differently-if-you-were-less-concerned
Early Thoughts on Ontology/Grounding Problems These all seem to be pointing to different aspects of the same problem. * Cross-ontology goal translation: given a utility function over a latent variable in one model, find an equivalent utility function over latent variables in another model with a different ontology. One subquestion here is how the first model’s input data channels and action variables correspond to the other model’s input data channels and action variables - after all, the two may not be “in” the same universe at all, or they may represent entirely separate agents in the same universe who may or may not know of each other's existence. * [Correspondence](https://www.lesswrong.com/posts/FWuByzM9T5qq2PF2n/a-correspondence-theorem) [theorems](https://www.lesswrong.com/posts/XMGWdfTC7XjgTz3X7/a-correspondence-theorem-in-the-maximum-entropy-framework): quantum mechanics should reduce to classical mechanics in places where classical worked well, special relativity should reduce to Galilean relativity in places where Galilean worked well, etc. As we move to new models with new ontologies, when and how should the structure of the old models be reproduced? * The [indexing problem](https://www.lesswrong.com/posts/ABNjLr2H39g2oXqGb/the-indexing-problem): I have some system containing three key variables A, B, and C. I hire someone to study these variables, and after considerable effort they report that X is 2.438. Apparently they are using different naming conventions! What is this variable X? Is it A? B? C? Something else entirely? Where does their X fit in my model? * How do different people ever manage to point to the same thing with the same word in the first place? Clearly the word “tree” is not a data structure representing the concept of a tree; it’s just a pointer. What’s the data structure? What’s its type signature? Similarly, when I point to a particular tree, what’s the data structure for the concept of that particular tree? How does the “pointer” aspect of these data structures work? * When two people are using different words for the same thing, how do they figure that out? What about the same word for different things? * I see a photograph of a distinctive building, and wonder “Where is this?”. I have some data - i.e. I see the distinctive building - but I don’t know where in the world the data came from, so I don’t know where in my world-model to perform an update. Presumably I need to start building a little side-model of “wherever this picture was taken”, and then patch that side-model into my main world model once I figure out “where it goes”. * Distributed models and learning: a bunch of different agents study different (but partially overlapping) subsystems of a system - e.g. biologists study different subsystems of a bacteria. Sometimes the agents end up using different names or even entirely different ontologies - e.g. some parts of a biological cell require thinking about spatial diffusion, while some just require overall chemical concentrations. How do we combine submodels from different agents, different ontologies and different data? How can we write algorithms which learn large model structures via stitching together small structures each learned independently from different subsystems/data? 
![](https://lh5.googleusercontent.com/UDDQr56wDuPCUU-6kV12w4OHMbf6c3mBSoAgjGEjKUgprU23VQ_SxnhEaOjRlAHi3wCqLHiOsI0tuj8dgn2Ikmz9LEosh5QhIHVVItBt8JqJoVTuqI5QYD8CNUTcy-GGiOwoQVYa)[Abstraction](https://www.lesswrong.com/posts/vDGvHBDuMtcPd8Lks/public-static-what-is-abstraction) plays a role in these, but it’s not the whole story. It tells us how high-level concepts relate to low-level, and why very different cognitive architectures would lead to surprisingly similar abstractions (e.g. neural nets learning similar concepts to humans). If we can ground two sets of high-level abstractions in the same low level world, then abstraction can help us map from one high-level to the low-level to the other high-level. But if two neural networks are trained on different data, and possibly even different *kinds* of data (like infrared vs visual spectrum photos), then we need a pretty detailed outside model of the shared low-level world in order to map between them. Humans do not seem to need a shared low-level world model in order to pass concepts around from human to human. Things should ultimately be groundable in abstraction from the low level, but it seems like we shouldn’t *need* a detailed low-level model in order to translate between ontologies. In some sense, this looks like Ye Olde Symbol Grounding Problem. I do not know of any existing work on that subject which would be useful for something like “given a utility function over a latent variable in one model, find an equivalent utility function over latent variables in another model”, but if anybody knows of anything promising then let me know. Not Just Easy Mode ------------------ After poking at these problems a bit, they usually seem to have an “easy version” in which we fix a particular Cartesian boundary. In the utility function translation problem, it’s much easier if we declare that both models use the same Cartesian boundary - i.e. same input/output channels. Then it’s just a matter of looking for functional isomorphism between latent variable distributions. For correspondence theorems, it’s much easier if we declare that all models are predicting exactly the same data, or predict the same observable distribution. Again, the problem roughly reduces to functional isomorphism. Similarly with distributed models/learning: if a bunch of agents build their own models of the *same* data, then there are obvious (if sometimes hacky) ways to stitch them together. But what happens when they’re looking at different data on different variables, and one agent’s inferred latent variable may be another agent’s observable? The point here is that I don’t just want to solve these on easy mode, although I do think some insights into the Cartesian version of the problem might help in the more general version. Once we open the door to models with different Cartesian boundaries in the same underlying world, things get a lot messier. To translate a variable from model A into the space of model B, we need to “locate” model B’s boundary in model A, or locate model A’s boundary in model B, or locate both in some outside model. That’s the really interesting part of the problem: how do we tell when two *separate agents* are pointing to the same thing? And how does this whole "pointing" thing work to begin with? Motivation ---------- I’ve been poking around the edges of this problem for about a month, with things like correspondence theorems and seeing how some simple approaches to cross-ontology translation break. 
Something in this cluster is likely to be my next large project. Why this problem? From an [Alignment as Translation](https://www.lesswrong.com/posts/42YykiTqtGMyJAjDM/alignment-as-translation) viewpoint, this seems like exactly the right problem to make progress on alignment specifically (as opposed to [embedded agency](https://www.lesswrong.com/tag/embedded-agency) in general, or AI in general). To the extent that the “hard part” of alignment is translating from human concept-space to some AI’s concept-space, this problem directly tackles the bottleneck. Also closely related is the problem of an AI building a goal into a successor AI - though that’s probably somewhat easier, since the internal structure of an AI is easier to directly probe than a human brain. Work on cross-ontology transport is also likely to yield key tools for agency theory more generally. I can already do some neat things with embedded world models using the tools of abstraction, but it feels like I’m missing data structures to properly represent certain pieces - in particular, data structures for the “interface” where a model touches the world (or where a self-embedded model touches itself). The indexing problem is one example of this. I think those interface-data-structures are the main key to solving this whole cluster of problems. Finally, this problem has a lot of potential for relatively-short-term applications, which makes it easier to build a feedback cycle. I could imagine identifying concept-embeddings by hand or by ad-hoc tricks in one neural network or probabilistic model, then using ontology translation tools to transport those concept-embeddings into new networks or models. I could even imagine whole “concept libraries”, able to import pre-identified concepts into newly trained models. This would give us a lot of data on how robust identified abstract concepts are in practice. We could even run stress tests, transporting concepts from model to model to model in a game of telephone, to see how well they hold up. Anyway, that’s one potential vision. For now, I’m still figuring out the problem framing. Really, the reason I’m looking at this problem is that I keep running into it as a bottleneck to other, not-obviously-similar problems, which makes me think that this is the limiting constraint on a broad class of problems I want to solve. So, over time I expect to notice additional possibilities which a solution would unblock.
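As a toy illustration of the "easy mode" version of this, here is a hedged sketch of transporting concept embeddings between two models by fitting a least-squares linear map on anchor concepts whose correspondence is already known. Everything in it (the synthetic embeddings, the anchors, the assumption that a linear map suffices) is my own stand-in rather than anything proposed in the post; the post's point is precisely that the interesting cases lack this shared grounding:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: two models embed the same 50 "anchor" concepts
# (concepts whose correspondence we already know) in different spaces.
emb_a = rng.normal(size=(50, 32))                              # model A: 32-dim embeddings
true_map = rng.normal(size=(32, 64))                           # unknown "translation" to recover
emb_b = emb_a @ true_map + 0.01 * rng.normal(size=(50, 64))    # model B: 64-dim embeddings

# Fit a linear map from A-space to B-space by least squares on the anchors.
W, *_ = np.linalg.lstsq(emb_a, emb_b, rcond=None)

# Transport a new concept embedding from model A into model B's space.
new_concept_a = rng.normal(size=(1, 32))
new_concept_b_estimate = new_concept_a @ W

print("relative map error:", np.linalg.norm(W - true_map) / np.linalg.norm(true_map))
```

The sketch only works because the anchor concepts pin down a shared grounding in advance; without them there is nothing for the least-squares fit to latch onto, which is the hard version of the problem described above.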
PredictIt, a prediction market out of New Zealand, now in beta. From their website: > PredictIt is an exciting new, real money site that tests your knowledge of political and financial events by letting you make and trade predictions on the future. > > Taking part in PredictIt is simple and easy. Pick an event you know something about and see what other traders believe is the likelihood it will happen. Do you think they have it right? Or do you think you have the knowledge to beat the wisdom of the crowd? > > The key to success at PredictIt is timing. Make your predictions when most people disagree with you and the price is low. When it turns out that your view may be right, the value of your predictions will rise. You’ll need to choose the best time to sell! > > Keep in mind that, although the stakes are limited, PredictIt involves real money so the consequences of being wrong can be painful. Of course, winning can also be extra sweet. > > For detailed instructions on participating in PredictIt, How It Works. > > PredictIt is an educational purpose project of Victoria University, Wellington of New Zealand, a not-for-profit university, with support provided by Aristotle International, Inc., a U.S. provider of processing and verification services. Prediction markets, like this one, are attracting a lot of academic and practical interest (see our Research section). So, you get to challenge yourself and also help the experts better understand the wisdom of the crowd.
OpenAI: Leaks Confirm the Story Previously: OpenAI: Altman Returns, OpenAI: The Battle of the Board, OpenAI: Facts from a Weekend, additional coverage in AI#41. We have new stories from The New York Times, from Time, from the Washington Post and from Business Insider. All paint a picture consistent with the central story told in OpenAI: The Battle of the Board. They confirm key facts, especially Altman’s attempted removal of Toner from the board via deception. We also confirm that Altman promised to help with the transition when he was first fired, so we have at least one very clear cut case of Altman saying that which was not. Much uncertainty remains, especially about the future, but past events are increasingly clear. The stories also provide additional color and key details. This post is for those who want that, and to figure out what to think in light of the new details. The most important new details are that NYT says that the board proposed and was gung ho on Brett Taylor, and says D’Angelo suggested Summers and grilled Summers together with Altman before they both agreed to him as the third board member. And that the new board is remaining quiet while it investigates, echoing the old board, and in defiance of the Altman camp and its wish to quickly clear his name. THE NEW YORK TIMES COVERS EVENTS The New York Times finally gives its take on what happened, by Tripp Mickle, Mike Isaac, Karen Weise and the infamous Cade Metz (so treat all claims accordingly). As with other mainstream news stories, the framing is that Sam Altman won, and this shows the tech elite and big money are ultimately in charge. I do not see that as an accurate description what happened or its implications, yet both the tech elite and its media opponents want it to be true and are trying to make it true through the magician’s trick of saying that it is true, because often power resides where people believe it resides. I know that at least one author did read my explanations of events, and also I talked to a Tim
The Brain Preservation Foundation's Small Mammalian Brain Prize won The Brain Preservation Foundation’s Small Mammalian Brain Prize has been won with fantastic preservation of a whole rabbit brain using a new fixative+slow-vitrification process. * BPF announcement (21CM’s announcement) * evaluation * The process was published as “Aldehyde-stabilized cryopreservation”, McIntyre & Fahy 2015 (mirror) > We describe here a new cryobiological and neurobiological technique, aldehyde-stabilized cryopreservation (ASC), which demonstrates the relevance and utility of advanced cryopreservation science for the neurobiological research community. ASC is a new brain-banking technique designed to facilitate neuroanatomic research such as connectomics research, and has the unique ability to combine stable long term ice-free sample storage with excellent anatomical resolution. To demonstrate the feasibility of ASC, we perfuse-fixed rabbit and pig brains with a glutaraldehyde-based fixative, then slowly perfused increasing concentrations of ethylene glycol over several hours in a manner similar to techniques used for whole organ cryopreservation. Once 65% w/v ethylene glycol was reached, we vitrified brains at −135 °C for indefinite long-term storage. Vitrified brains were rewarmed and the cryoprotectant removed either by perfusion or gradual diffusion from brain slices. We evaluated ASC-processed brains by electron microscopy of multiple regions across the whole brain and by Focused Ion Beam Milling and Scanning Electron Microscopy (FIB-SEM) imaging of selected brain volumes. Preservation was uniformly excellent: processes were easily traceable and synapses were crisp in both species. Aldehyde-stabilized cryopreservation has many advantages over other brain-banking techniques: chemicals are delivered via perfusion, which enables easy scaling to brains of any size; vitrification ensures that the ultrastructure of the brain will not degrade even over very long storage times; and the cryoprotectant can be removed, yielding a perfusable aldeh
We have some evidence that masks work by Gavin Leech and Charlie Rogers-Smith Our work on masks vs COVID at the population level was recently reproduced with a bunch of additional experiments. These seem to cast doubt on our results, but we think that each of them is misguided. Since the post got some traction on LW and Marginal Revolution, we decided to respond. Nevertheless, thanks to Mike, who put a lot of work in, and who was the only person in the world to check our results, despite plenty of people trying to gotcha us on Twitter. “Observational Window” Best-guess summary of Mike’s analysis: he extends the window of analysis by a bit and runs our model. He does this because he’s concerned that we chose a window with low transmissibility to make masks look more effective than they are. However, he finds similar results to the original paper, and concludes that our results seem robust to longer periods. But as our paper notes, a longer window isn’t valid using this data. After September, many countries move to subnational NPIs, and our analysis is national. The way our NPI data source codes things means that they don't capture this properly, and so they stop being suitable for national analyses. Estimates of national mask effect after this don’t properly adjust for crucial factors, and so masks will "steal" statistical power from them. So this analysis isn’t good evidence about the robustness of our results to a longer window. “Regional Effects” > MH: "If mask wearing causes a drop in transmissibility, then regions with higher levels of mask wearing should observe lower growth rates." Best-guess summary of Mike’s analysis: A correlational analysis between the median wearing level of a region and the R0 (the expected number of new cases per initial case in a region) that our model infers. (What he calls ‘growth rates’, but which are not growth rates.) He claims that if wearing is effective then the correlation should be negative. The intuition is that if masks work, then countries with lots o
Meetup : Washington, D.C.: Fun & Games Discussion article for the meetup : Washington, D.C.: Fun & Games WHEN: 19 July 2015 03:00:00PM (-0400) WHERE: National Portrait Gallery We'll be meeting to play games and/or not and just hang out and talk, whichever seems more fun (notice fun comes before games in the title). As usual, we will congregate in the courtyard from 3:00 to 3:30 p.m., with the hard start at 3:30. If you want to get people together for a big game plan ahead! Post here looking for players and get an early start, because remember they'll be kicking us out a little before 7pm. Please remember to bring games! Upcoming Meetups: * Jul 26: Meta Meetup * Aug 2: Optical Illusions * Aug 9: Fun & Games Discussion article for the meetup : Washington, D.C.: Fun & Games
Junk Fees, Bundling and Unbundling

Joe Biden harped on junk fees during the State of the Union. While I do not think it is the problem of our time, I take things in the reference class of resort fees, or fees to have adjacent seats on an airplane, and other such unbundling (and bundling) surprisingly seriously. I am putting up my thoughts here so I have a reference to fall back upon.

Matt Yglesias has a post defending Biden’s particular choices as smart economics in addition to smart politics. I frame the core issues differently. I’d start with: The general principle of ‘no hidden charges’ becomes important when people are making online choices on the basis of headline costs, in ways that are structured to provide little extra information. The advantage of having a lower headline price is huge, and reputational effects aren’t powerful enough to fix this. More price transparency in these spots is a strictly better equilibrium. Even the companies charging junk fees would often prefer everyone not be allowed to do this. Given others are doing it, they can’t afford to be left behind.

Matt talks a bunch about ‘unbundling.’ You used to get your meal and checked bags free with your flight. Now they cost money. Which way is better, and is there a bias pushing us too far in one direction? How are different situations different?

Bundling Versus Unbundling

There are at least four advantages to unbundling.

1. An illusion: Fooling the customer into thinking your product is cheap.
2. Efficiency: Not having to provide things people don’t value. If it costs $5 for the airline to provide an extra meal, and you charge $0 for it, some people who value the meal at $1 (or $-2) will accept it, and perhaps only eat one little thing. Also, if there is a limited supply of a complementary asset like the overhead baggage compartment, the only efficient way to allocate that space is with a fee.
3. Obligation: Not making people feel obligated to use things they don’t value. The flip side of the same coin. If you provi
Meetup : Rationality Meetup Vienna Discussion article for the meetup : Rationality Meetup Vienna WHEN: 15 August 2015 03:00:00PM (+0200) WHERE: Kaisermühlenstraße 24/2, Wien FB event: https://www.facebook.com/events/670741589694482/ (join the "Rationality Vienna" group to see it!) Location http://web.student.tuwien.ac.at/~e0326238/rationality_meetup/directions.html Discussion article for the meetup : Rationality Meetup Vienna
Is it the case that when humans approximate backward induction they violate the markov property?

Backward induction is the process of reasoning backwards in time, from the end of a problem or situation, to determine a sequence of optimal actions. Backward induction is used to solve Bellman equations in dynamic programming, which is leveraged in reinforcement learning. The markov property holds of a process if the probability of each event depends only on the state attained in the previous event, formally if P(s_{t+1}) = f(s_t) for some f.

For an example of a human approximating backward induction, I'll use the chocolate example from Instrumental and Terminal Values: Alice wants chocolate, so she computes that she needs to go to the store, so she computes that she needs to drive, so she computes that she needs to grab her keys. If it's not obvious how grabbing keys is related to obtaining chocolate, it should be obvious when you look at the whole chain. Formally perhaps we might say for any goal A_n we can compute a list A_1 ← A_2 ← ... ← A_{n-1} ← A_n, where A ← B denotes "A leads to B and we can compute that fact from looking at B" or "from the desire for B we can compute the desire for A".

But! As Eliezer points out, if you're at the A_grabkeys ← A_drive step, and an oracle tells you that the store is out of chocolate, you're not forced into some semi-myopia where you feel the need to start driving but you no longer know why. Instead, the whole chain might collapse at once, radically reconfigure itself from the initial desire for chocolate.

I feel like there's this possibility for a wrong interpretation of backward induction that's analogous to the markov property, where every desire in the chain is a function of the preceding desire to which it is instrumental. This wrong interpretation actually lies precisely in a blindspot of my previous formalization! Put much better, though a little difficult on the eyes: Formally perhaps we might say that for any goal A_n we can compute a list of shrinking lists A_1 ← (A_2, ..., A_n) ← (A_3, ..., A_n) ← ... ← (A_{n-1}, A_n) ← A_n, where A_k ← (A_{k+1}, ..., A_n) denotes "A_k
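For concreteness, here is a small sketch (my own toy example, not from the post) of backward induction on a finite-horizon problem in the chocolate spirit: values at the final step are computed first, and each earlier step consults only the step after it.

```python
# Toy finite-horizon decision problem solved by backward induction; every state,
# action, reward, and transition here is made up purely for illustration.
states = ["home", "car", "store"]
actions = {
    "home": ["grab_keys", "stay"],
    "car": ["drive", "stay"],
    "store": ["buy_chocolate", "stay"],
}
transition = {  # deterministic next state for each (state, action)
    ("home", "grab_keys"): "car", ("home", "stay"): "home",
    ("car", "drive"): "store", ("car", "stay"): "car",
    ("store", "buy_chocolate"): "store", ("store", "stay"): "store",
}
reward = {("store", "buy_chocolate"): 10}  # chocolate is the only payoff

HORIZON = 3
value = {0: {s: 0.0 for s in states}}  # value[k][s]: best total reward with k steps left
policy = {}
for k in range(1, HORIZON + 1):
    value[k], policy[k] = {}, {}
    for s in states:
        # Bellman backup: each step only consults value[k - 1], the step after it.
        q = {a: reward.get((s, a), 0.0) + value[k - 1][transition[(s, a)]]
             for a in actions[s]}
        best = max(q, key=q.get)
        value[k][s], policy[k][s] = q[best], best

print(policy[3]["home"])  # -> 'grab_keys': the keys fall out of the desire for chocolate
```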
Improving Code Generation by Training with Natural Language Feedback 1 Introduction --------------- An important task for the field of software engineering is program synthesis, the automatic generation of computer programs from an input specification (*e.g.* a natural language task description or a set of input-output examples) (manna1971prog\_synth). Effective program synthesis can not only improve the efficiency of software developers (code\_completion\_productivity), but also increase the accessibility of writing code in general. Recently, pre-trained large language models (LLMs) have demonstrated impressive success on program synthesis (chen2021codex; li2022alphacode; austin2021program; Nijkamp2022CG; xu2022evaluation, inter alia) but still struggle to consistently generate correct code, even with large-scale pre-training (chen2021codex). We hypothesize that these failures can be largely attributed to modern LLM pre-training set-ups. For instance, code pre-training datasets consist mostly of unfiltered code scraped from the Internet, which contains a significant number of security vulnerabilities (kang2022llm\_bugs) and bugs (chen2021codex). This training signal also consists exclusively of offline demonstrations, without any signal from trial-and-error or interactive guidance that penalizes the model’s buggy outputs. As such, we hypothesize that supervising LLMs with explicit human-written feedback on the model’s own outputs can be more effective at training models to produce functionally correct code. In particular, an intuitive and rich form of feedback to provide to LLMs is natural language feedback. We argue that LLMs are naturally able to incorporate written feedback, which has been shown to significantly improve a code generation model’s pass rates when the feedback is provided at test time (Nijkamp2022CG; austin2021program). In our work, we build upon this observation by exploring the use of natural language feedback during the training process itself, rather than just during inference. We conjecture that such feedback provides expressive and targeted information about a code generation model’s current failings in a sample-efficient manner. More broadly, this approach also represents a weak version of *scalable oversight* (Bowman2022MeasuringPO), in that model overseers can improve a model merely by evaluating its outputs, without manually generating new demonstrations, in a way that takes advantage of the capabilities that are being supervised. To train LLMs with language feedback, we propose an algorithm called Imitation learning from Language Feedback (ILF; Algorithm [1](#alg1 "Algorithm 1 ‣ 2 Method")), which extends the work of scheurer2022training, who study the impact of learning from language feedback on text summarization models. scheurer2022training improves a summarization model by training the base model on improved summaries generated from the model’s original summaries and human-written feedback. Our work builds upon scheurer2022training in a number of ways: (1) by formalizing the algorithm and generalizing it into a form that can be applied to any task (our ILF algorithm in Section [2.2](#S2.SS2 "2.2 Imitation Learning From Language Feedback ‣ 2 Method")), (2) by detailing how the reward function can be adapted for code generation, and (3) by demonstrating a proof-of-concept of ILF for code generation.111We open-source our code and annotated data at <https://github.com/nyu-mll/ILF-for-code-generation>. 
ILF improves the correctness of programs generated by a baseline code generation model πθ by training a separate model πRefine to use language feedback to repair the incorrect πθ-generated programs. (We refer to the repaired programs as *refinements*.) We then improve πθ by fine-tuning it on the πRefine-generated refinements that pass unit tests, yielding a final improved model πθ∗. This procedure may be run iteratively to continue improving the model, which we show can be seen as minimizing the expected KL divergence from a target ground truth distribution (Section 2).

We demonstrate a proof of concept of ILF for code generation by showing that it improves a CodeGen-Mono 6.1B model's pass@1 rate on the Mostly Basic Python Problems (MBPP) benchmark (odena2021mbpp) by 38% relative (10% absolute) over its zero-shot performance. It also outperforms fine-tuning on the MBPP-provided code by 64% relative (14% absolute, see Section 3.2). We further find that the refinements generated during ILF do indeed leverage the human-written feedback (Section 3.1) – when the feedback is unhelpful or irrelevant, we observe steep drops in code correctness. The quality of the feedback is also crucial – LLM-generated feedback yields far lower final pass rates than human-written feedback (Section 3.3). Despite the success of our approach, we still observe concrete limitations – for instance, πRefine is less effective at incorporating feedback when the feedback addresses multiple bugs (Section 3.5), which suggests headroom for future work or more capable LLMs to base πRefine on. Overall, our results – as well as our additional results on text summarization, using a similar technique in scheurer2023training – suggest that human-written feedback is a powerful, information-rich form of supervision for LLMs.

2 Method
---------

Algorithm 1: Imitation learning from natural language feedback for code generation.

1: Input: dataset D, initial LLM πθ, unit test verification function Eval, LLM πRefine : V∗ → [0,1] trained to incorporate feedback into code
2: C ← {(x0, t, u) | x0 ∼ πθ(⋅|t), Eval(x0, t) = 0, (t, u) ∈ D}
3: Cannotated ← {(x0, f, t) | (x0, t, u) ∈ C}  ▹ Humans write feedback f for x0 ∈ C.
4: R ← {(t, x1) | x1 ∼ πRefine(⋅|t, x0, f), Eval(x1, t) = 1, (x0, f, t) ∈ Cannotated}  ▹ πRefine generates refinements x1 that incorporate feedback f into x0.
5: πθ∗ ← Finetune(πθ, R)

### 2.1 Preliminaries

Here, we formally describe the problem we aim to tackle, before introducing our algorithm. Suppose we start with vocabulary V and a pre-trained language model πθ parameterized by θ. πθ : V∗ → [0,1] is a probability distribution over sequences of tokens x ∈ V∗, where V∗ is the Kleene closure of V. We also have a dataset of tasks D = {(t, u)}. A task (t, u) consists of a task description t ∈ T (*e.g.* "Write a function that computes the prime factorization of an input integer.") and a suite u = UnitTests(t) ∈ U of unit tests associated with task t.
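To make Algorithm 1 concrete before formalizing it, here is a minimal Python-style sketch of one ILF iteration. It is only an illustration of the procedure described above: the helper names (`sample_programs`, `collect_human_feedback`, `generate_refinements`, `eval_program`, `finetune`) and the dictionary field names are hypothetical stand-ins, not functions or fields from the released codebase.

```python
# Hypothetical sketch of one ILF iteration (Algorithm 1); all helpers are stand-ins.
def ilf_iteration(pi_theta, pi_refine, dataset, num_samples=30):
    # Step 2: collect incorrect programs sampled from the base model.
    incorrect = []
    for task in dataset:
        for program in sample_programs(pi_theta, task, n=num_samples):
            if eval_program(program, task["test_list"]) == 0:
                incorrect.append((task, program))
                break  # one incorrect program per task is annotated

    # Step 3: humans write natural language feedback for each incorrect program.
    annotated = [(task, program, collect_human_feedback(task, program))
                 for task, program in incorrect]

    # Step 4: pi_refine proposes refinements; keep only those that pass the tests.
    refinements = []
    for task, program, feedback in annotated:
        for refined in generate_refinements(pi_refine, task, program, feedback, n=num_samples):
            if eval_program(refined, task["test_list"]) == 1:
                refinements.append((task["text"], refined))
                break  # one correct refinement per task

    # Step 5: fine-tune the base model on (task description, correct refinement) pairs.
    return finetune(pi_theta, refinements)
```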
Finally, let Eval : V∗ × T → {0, 1} be a unit test verification function that indicates whether a program x ∼ πθ(⋅|t) passes all the unit tests in UnitTests(t):

    Eval(x, t) := 1 if x passes test suite UnitTests(t), and 0 otherwise.    (1)

We also define a fine-tuning function Finetune(πθ, D) that applies a gradient-based optimization algorithm to πθ using the associated loss objective calculated over dataset D.

### 2.2 Imitation Learning From Language Feedback

Our goal is to sample a diverse set of high-quality programs x1 ∼ πθ(⋅|t) for any given task t sampled from the task distribution p(t). We do so by fitting an auto-regressive LLM πθ to approximate a ground truth distribution π∗t(x1) that assigns a probability to x1 that is proportional to its quality, as measured by a reward function R. Fitting πθ to approximate π∗t can be seen as minimizing the expected KL divergence from π∗t to πθ over the task distribution p(t):

    min_θ E_{t∼p(t)}[ KL(π∗t, πθ(⋅|t)) ]    (2)

where

    π∗t(x1) ∝ exp(β R(x1, t))    (3)

In this work we use the unit test verification function Eval directly as our reward function R, but R can also be a function of any number of other signals, such as stack traces or compiler outputs. Minimizing the objective in Equation 2 is equivalent to supervised learning, *i.e.* minimizing the cross-entropy loss:

    L(θ) = −E_{t∼p(t)}[ Lθ(t) ],    (4)

where

    Lθ(t) = Σ_{x1} π∗t(x1) log πθ(x1|t).    (5)

Rather than computing this loss over the exponentially large space of all possible x1's, we instead use Monte-Carlo sampling over a small set of x1's drawn from π∗t. However, this is still intractable because we cannot sample directly from π∗t. Instead, we approximate π∗t using importance sampling with a proposal distribution qt(x1):

    Lθ(t) = Σ_{x1} qt(x1) [ π∗t(x1) / qt(x1) ] log πθ(x1|t)    (6)

which assigns higher weights to higher quality programs x1.

### 2.3 Proposal Distribution q

Intuitively, we aim to design qt to be as close as possible to π∗t, which we accomplish by incorporating pieces of natural language feedback f that give information about how to transform a low-reward program x0 into a higher-reward program x1. This can be achieved by (i) identifying a program x0 ∼ πθ(⋅|t) that does not currently pass the test suite (*i.e.* Eval(x0, t) = 0), (ii) asking for natural language feedback f about bugs in x0, (iii) using f to transform the original program x0 into a *refinement* x1 that incorporates the feedback and passes the test suite (*i.e.* Eval(x1, t) = 1), and (iv) assigning higher weight to x1.

We can formalize this procedure as follows. Let πψ(x1|t, x0, f) be a distribution over programs x1 that improve x0 by incorporating the feedback f, and let pF(f|t, x0, Eval(x0, t) = 0) be the distribution of pieces of feedback f for incorrect program x0 and task t. We can then define our proposal distribution as:

    qt(x1) = Σ_{x0, f} πθ(x0|t) × δ0(Eval(x0, t) | x0, t) × pF(f | t, x0, Eval(x0, t) = 0) × πψ(x1 | t, x0, f) × δ1(Eval(x1, t) | t, x1),    (7)

where δ0 and δ1 are the Dirac delta distributions centered at 0 and 1, respectively.
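Since Eval from Equation 1 is the only reward signal used in this work, it is worth being concrete about what such a unit test verification function can look like in practice. The following is a minimal, unsandboxed sketch for assert-style test suites like MBPP's, and one possible implementation of the `eval_program` helper assumed in the earlier sketch; it is an illustration rather than the paper's released implementation.

```python
def eval_program(program: str, unit_tests: list[str]) -> int:
    """Return 1 if `program` passes every assert-style unit test, else 0 (Equation 1).

    Note: a real implementation would sandbox execution and enforce a timeout;
    this sketch omits both for brevity.
    """
    namespace: dict = {}
    try:
        exec(program, namespace)      # define the candidate function(s)
        for test in unit_tests:       # each test is e.g. "assert add(2, 3) == 5"
            exec(test, namespace)
    except Exception:
        return 0
    return 1
```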
Then this proposal distribution is guaranteed to place higher probability mass on higher-quality programs (in terms of unit test pass rate) than πθ, since the term δ1(Eval(x1, t) | t, x1) equals 0 for incorrect programs x1. We approximate sampling from q by considering each of the terms in Equation 7 in order:

1. We first sample from πθ(x0|t) × δ0(Eval(x0, t) | x0, t) by rejection sampling from πθ. In other words, we sample programs x0 from πθ for task t and only keep those that fail the test suite (*i.e.* Eval(x0, t) = 0; step 2 of Algorithm 1).
2. We approximate sampling from pF(f | t, x0, Eval(x0, t) = 0) by having humans annotate programs x0 (paired with their corresponding task descriptions t and test suites u) with natural language feedback (step 3 of Algorithm 1).
3. We approximate sampling from πψ(x1 | t, x0, f) by sampling from πRefine, a model capable of generating refinements given the task description, original programs, and human-written feedback.
4. Finally, the term δ1(Eval(x1, t) | t, x1) corresponds to another filter: we only keep refined programs x1 that pass the test suite.

Next, we consider more concrete details of how this sampling is accomplished.

#### Training πRefine

ILF assumes the availability of feedback but not necessarily of the repaired code/refinements, for a variety of reasons. We assume that program synthesis may be a task for which writing high-level natural language feedback is often less laborious than performing program repair. Although writing feedback involves identifying at a high level what is wrong with the program and how it should be fixed, program repair may involve the additional steps of refactoring, looking through documentation, and testing. Moreover, past work (austin2021program; Nijkamp2022CG) has indicated that certain large LLMs can proficiently incorporate the feedback at inference time, assuming access to accurate and high-quality feedback.

As such, ILF assumes access to some model πRefine that is capable of producing a refinement given the original program and feedback. πRefine can take a variety of forms, but we fine-tune a pre-trained CodeGen-Mono 6.1B model as our πRefine. We create a training dataset for πRefine by further annotating a subset of Cannotated with refinements x1 that repair incorrect programs x0 by incorporating feedback f, such that Eval(x1, t) = 1 for (x0, f, t) ∈ Cannotated. Further details of our dataset and annotation procedure are in Section 3.

Figure 2: An example of a zero-shot LLM prompt for repairing incorrect code based on human-written feedback.

3 Experiments and Results
--------------------------

Having described our high-level approach, we now explain the experimental setup we use to test ILF.

#### Dataset

We train and evaluate our models on the Mostly Basic Python Problems (MBPP) dataset (odena2021mbpp). MBPP contains 974 Python programming tasks designed to be solvable by entry-level coders. Each task contains a natural language task description t (*e.g.*, "Write a function to return the prime factorization of the input."), a gold solution, and a suite u of three unit tests.
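As a concrete illustration, an MBPP-style record pairs a short task description with assert-based tests. The example below is constructed for illustration (it is not copied from the dataset), and the field names are only meant to suggest the dataset's structure:

```python
# Illustrative MBPP-style task record (constructed example, not an actual dataset entry).
task = {
    "text": "Write a function to return the prime factorization of the input.",
    "test_list": [
        "assert prime_factorization(12) == [2, 2, 3]",
        "assert prime_factorization(7) == [7]",
        "assert prime_factorization(1) == []",
    ],
    "code": (            # gold solution
        "def prime_factorization(n):\n"
        "    factors, d = [], 2\n"
        "    while d * d <= n:\n"
        "        while n % d == 0:\n"
        "            factors.append(d)\n"
        "            n //= d\n"
        "        d += 1\n"
        "    if n > 1:\n"
        "        factors.append(n)\n"
        "    return factors"
    ),
}
```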
Since the task descriptions are sometimes ambiguous, we include one unit test in the task description. The addition of the unit test helps to specify the input and output format of each task. We hold out the remaining unit tests for the evaluation of our generated programs. MBPP includes a designated prompt/training/validation/test split of the dataset, but we re-split the dataset into the following splits:

* MBPPRefine: These are tasks with IDs in the range 111-310 for which CodeGen-Mono 6.1B did not generate any correct completions. This split is used to train πRefine.
* MBPPTrain: These are tasks with IDs in the range 311-974 for which CodeGen-Mono 6.1B did not generate any correct completions. This split is first used to evaluate the correctness of refinements generated by πRefine. Then, the correct refinements in this split are used to train πθ to obtain πθ∗ (step 5 in Algorithm 1).
* MBPPTest: These are tasks with IDs in the range 11-110 that we use to evaluate the final performance of πθ∗. Unlike the previous two splits, we use *all* tasks in this split, rather than only the tasks for which CodeGen-Mono 6.1B did not originally generate correct programs. This allows us to better compare the baseline performance of πθ with that of πθ∗.

We use this modified split so that a larger portion of the dataset can be used to train the final model πθ∗, whereas smaller portions are allocated for training πRefine and evaluating πθ∗. We do not make use of the prompt split (IDs 1-10).

#### Models

Throughout this paper, we use a pre-trained CodeGen-Mono 6.1B model (Nijkamp2022CG) as our πθ. It is pre-trained sequentially on ThePile (gao2020pile), BigQuery (Nijkamp2022CG), and BigPython (Nijkamp2022CG). We selected this model because it is open-source, can be fine-tuned on a single 4×A100 (80 GB) node, and demonstrated pass@k scores comparable to Codex-12B (chen2021codex; Nijkamp2022CG). To implement our algorithm, we independently fine-tune two separate instances of CodeGen-Mono 6.1B to create πRefine and the final model πθ∗. We train πRefine using pairs of incorrect programs and human-written feedback as inputs, with human-written refinements as targets (using the format in Figure 2). In contrast, we train πθ∗ using natural language task descriptions from MBPP as the inputs and πRefine-generated refinements as the targets. Further training details are in Appendix A.1.

#### Evaluation

We evaluate all code generations in this paper using the *pass@k* metric introduced in kulal2019spoc. It estimates the rate at which ≥1 of k model samples passes all the unit tests. We use the empirical estimate of this quantity from chen2021codex, an unbiased estimator given by:

    pass@k := E_tasks[ 1 − C(n−c, k) / C(n, k) ]    (8)

for n total programs (where n ≥ k) and c correct programs for the given task, where C(⋅, ⋅) denotes the binomial coefficient.

#### Human Annotation

We hire annotators via Surge AI (www.surgehq.ai) to write both natural language feedback and refinements for incorrect programs generated by CodeGen-Mono 6.1B. For each task that CodeGen-Mono 6.1B generated no correct programs for, we ask the workers to first select one of the incorrect programs to write feedback and refinement for.
We specify that the workers should select a sample that seems relatively easy to correct (*i.e.* could be minimally corrected to pass the unit tests). Then, they are asked to write feedback that describes what is wrong with the current code and how to fix it. For the refinement, they are asked to copy over the original code and make the *minimum number of edits necessary* to incorporate the feedback and pass all the unit tests. The full set of worker instructions can be found in Appendix A.2. We keep all annotations for which the refinement passes all tests in the task's test suite, the feedback is correct (as manually verified by the authors), and the Levenshtein edit distance between the refinement and the original program is less than 50% of max(len(refinement), len(original program)). The final dataset consists of 195 triples of (incorrect program, human-written feedback, human-written refinement). On average, workers are paid $23 per annotated sample and take 27 minutes/sample, with a 10th percentile of 4 minutes and a 90th percentile of 43 minutes.

Although the ILF algorithm only requires the collection of human-written feedback for the tasks in MBPPTrain (assuming access to some πRefine that is already fine-tuned or can generate refinements via few-shot prompting), we collect both human-written feedback and refinement for all splits of the data so that we can conduct further analyses of our method. For instance, this allows us to compare fine-tuning on πRefine-generated refinements with fine-tuning on human-written refinements. When scaled to other pairs of model and task, ILF requires new feedback annotations, but it is possible that using ILF on one dataset will improve the model's abilities on another dataset for a similar task. We leave analyses of scaling ILF across different tasks and models to future work.

| Metric | Zero-Shot CodeGen-Mono 6.1B |
| --- | --- |
| Pass@1 | 31% |
| Pass@10 | 63% |
| 1+ Correct | 67% |

Table 1: Initial zero-shot CodeGen-Mono 6.1B performance on the entire MBPP dataset. "1+ Correct" refers to the percentage of tasks for which CodeGen-Mono 6.1B generated at least one program that passed all unit tests.

| Prompt Type | Pass@1 ↑ | Pass@10 ↑ |
| --- | --- | --- |
| Code + feedback | 2.0% | 13.8% |
| Code + unrelated feedback | 0.4% | 4.0% |

Table 2: Evaluations of 1-shot refinements generated by CodeGen-Mono 6.1B (before ILF) given either related or unrelated text feedback in the prompt. Feedback is provided only for tasks on which CodeGen-Mono 6.1B previously did not output any correct programs.

### 3.1 CodeGen-Mono 6.1B Incorporates Feedback

We first verify that our baseline model can use feedback to repair incorrect code, a pre-requisite for ILF to work. We evaluate CodeGen-Mono 6.1B's ability to generate refinements given pairs of (incorrect code, natural language feedback), both in a few-shot manner and after fine-tuning. Feedback is only required for tasks for which πθ is initially unable to produce a correct response, so we first evaluate CodeGen-Mono 6.1B zero-shot on all of MBPP, generating 30 programs per task with temperature 0.8. Table 1 shows the resulting pass rates. There were 321 tasks for which zero-shot CodeGen-Mono 6.1B yielded no correct samples (from Table 1: (100% − 67%) × 974 tasks ≈ 321).
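The pass rates reported throughout (including Tables 1 and 2 above) follow the estimator in Equation 8. A minimal implementation of that unbiased per-task estimator, in the numerically stable form used by chen2021codex, looks roughly like this:

```python
import numpy as np

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased estimate of pass@k for one task, given n samples of which c are correct.

    Equivalent to 1 - C(n - c, k) / C(n, k), computed in a numerically stable way.
    """
    if n - c < k:
        return 1.0  # every size-k subset must contain at least one correct sample
    return 1.0 - np.prod(1.0 - k / np.arange(n - c + 1, n + 1))

# Example: 30 samples for a task, 6 of them correct.
print(pass_at_k(n=30, c=6, k=1))   # ≈ 0.20
print(pass_at_k(n=30, c=6, k=10))  # ≈ 0.93
```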
We then annotate one incorrect program per task with both feedback and refinement, as described in Section 3.

#### Few-Shot Feedback Incorporation

We use the human feedback annotations to create few-shot feedback prompts, formatted as in Figure 2. We evaluate CodeGen-Mono 6.1B's ability to produce refinements that incorporate the feedback and pass the unit tests. However, producing a refinement that passes the unit tests does not guarantee that the feedback has been incorporated; there can be multiple solutions to a programming task, including ones that are functional but completely different and do not use the feedback to improve upon the original code. Alternatively, the model may already be able to repair programs without feedback. Thus, we also evaluate the pass rate after shuffling the feedback samples in the dataset, to evaluate whether the model's ability to repair code degrades when presented with unrelated feedback.

The results are shown in Table 2. CodeGen-Mono 6.1B's ability to incorporate relevant feedback on this particular set of programs is low, with pass@10 reaching only 13.8%. However, the gap in accuracy between CodeGen-Mono 6.1B-generated refinements on relevant versus irrelevant feedback is significant, with pass@10 decreasing by 71% (relative; 13.8% → 4.0%), indicating that the model is indeed using the feedback.

#### Training πRefine

Next, we examine whether we can improve our ability to repair programs given feedback by fine-tuning a separate model specifically to perform this task. Our training examples consist of triples of (incorrect program, human-written feedback, human-written refinement). We train the model to maximize the likelihood of the refinement given the program and feedback. The incorrect programs were generated by CodeGen-Mono 6.1B zero-shot on MBPP tasks, and the feedback and refinements were written by human annotators, as discussed in Section 3. We only included tasks for which none of CodeGen-Mono 6.1B's generated programs were correct, yielding 44 tasks in the training dataset (forming the split MBPPRefine) and 128 tasks in the evaluation dataset (forming the split MBPPTrain). We asked human annotators to write refinements of the original code that incorporated their own previously written feedback, passed the unit tests, and made only minimal edits to the code (see Section 3). The format of the training data also matched the few-shot prompt format (Figure 2) but without the in-context examples of refinements. We denote this model as πRefine, as described in Section 2.3.

| Metric | πRefine | Zero-shot CodeGen-Mono 6.1B |
| --- | --- | --- |
| Pass@1 | 19% | 0% |
| Pass@10 | 47% | 0% |
| 1+ correct | 61% | 0% |

Table 3: Pass rates of πRefine-generated refinements versus zero-shot CodeGen-Mono 6.1B programs for tasks in MBPPTrain.

Table 3 shows the pass rates for πRefine on the evaluation dataset, which were produced by sampling 30 refinements per task with temperature 0.8.
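In Hugging Face transformers terms, the sampling configuration just mentioned (30 refinements per task at temperature 0.8) corresponds roughly to the call below. This is a hedged sketch: the checkpoint path and the exact prompt construction are assumptions rather than details from the released code.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Hypothetical checkpoint path; prompt construction follows the format of Figure 2.
tokenizer = AutoTokenizer.from_pretrained("path/to/finetuned-pi-refine")
model = AutoModelForCausalLM.from_pretrained("path/to/finetuned-pi-refine")

refinement_prompt = "..."  # task description + incorrect program + feedback (format assumed)

inputs = tokenizer(refinement_prompt, return_tensors="pt")
outputs = model.generate(
    **inputs,
    do_sample=True,
    temperature=0.8,          # as stated in the text
    num_return_sequences=30,  # 30 refinements per task
    max_new_tokens=512,
    pad_token_id=tokenizer.eos_token_id,
)
prompt_len = inputs["input_ids"].shape[1]
refinements = [
    tokenizer.decode(o[prompt_len:], skip_special_tokens=True) for o in outputs
]
```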
Fine-tuning significantly improves CodeGen-Mono 6.1B's ability to incorporate feedback compared to 1-shot refinement, increasing pass rates more than three-fold (2% → 19% pass@1, 13.8% → 47% pass@10, from Tables 2 and 3). Furthermore, 61% of tasks had at least one correct refinement. This is particularly significant when considering the fact that we selected only tasks for which a non-finetuned CodeGen-Mono 6.1B model did not originally output any correct programs (the rightmost column in Table 3). For the 61% of validation tasks that πRefine generated a correct refinement for, we randomly selected one such correct program for each task to form the training dataset for our final model πθ∗, yielding a final training dataset of 78 examples.

| Method | Feedback Source | Fine-Tuning Data | Pass@1 | Pass@10 |
| --- | --- | --- | --- | --- |
| ILF | Humans | πRefine Refinements | 36% | 68% |
| Ablations | 1-shot InstructGPT | 1-shot InstructGPT Refinements | 19% | 55% |
| Ablations | 2-shot InstructGPT | 2-shot InstructGPT Refinements | 25% | 59% |
| Gold Standards | - | MBPP Gold | 22% | 63% |
| Gold Standards | - | Human Refinements | 33% | 68% |
| Baseline (zero-shot) | - | - | 26% | 59% |

Table 4: Final performance of πθ∗ on MBPPTest, compared to other ablations and baselines. All results are calculated using 30 output samples with temperature 0.8. All the methods are built on the CodeGen-Mono 6.1B model.

### 3.2 ILF Yields Pass Rates Higher Than Fine-Tuning on Gold Data or Human-Written Programs Alone

Given that our refinements improve over the initial programs, we now fine-tune on the refinements to improve our code generation model. As discussed earlier, we use the correct refinements (as evaluated by the unit tests) that πRefine generated for its evaluation dataset as the training dataset for πθ∗. Since πθ∗ is meant to generate code from a natural language task description (rather than to incorporate feedback into a refinement), the inputs of our training dataset are the MBPP prompts and the targets are the 78 πRefine-generated refinements described in the previous section. We also compare the performance of πθ∗ against that of CodeGen-Mono 6.1B evaluated in a zero-shot manner, CodeGen-Mono 6.1B fine-tuned on the gold programs from the MBPP dataset, and CodeGen-Mono 6.1B fine-tuned on our human-written refinements. For all fine-tuning experiments, we train on programs corresponding to the same set of task IDs as the ones used in πθ∗'s training dataset.

Additionally, we evaluate the impact of ablating the human annotations in our algorithm by using an LLM in place of humans to generate the feedback and refinements (replacing steps 3 and 4 in Algorithm 1). For the LLM, we use GPT-3.5 fine-tuned with Feedback Made Easy (FeedME; text-davinci-002 on the OpenAI API; details at beta.openai.com/docs/model-index-for-researchers). We refer to this model as InstructGPT, which is the series of OpenAI models that FeedME belongs to (openai_mir). We use InstructGPT to generate both the feedback and refinements on the original programs. We then fine-tune CodeGen-Mono 6.1B on the model-generated refinements.
The results of our ILF algorithm compared to the baselines and ablations are shown in Table 4. ILF yields the highest pass@1 and pass@10 rates, despite how few samples of feedback and refinements we use. The pass@1 rate in particular shows a significant improvement over the zero-shot baseline, representing a 10% absolute increase (38% relative increase). Pass@1 improvements are especially helpful for assisting with software engineering, where it is more helpful to suggest a single correct completion rather than 10 possible completions for the user to select from. Compared to the gold standards, ILF outperforms both fine-tuning on MBPP gold programs and fine-tuning on human-written refinements on the pass@1 metric, yielding 14% absolute (64% relative) and 3% absolute (9% relative) increases in pass@1 rates, respectively. However, training on human-written refinements yielded pass@10 rates comparable to ILF, which is unsurprising since πRefine was trained on human-written refinements. When human-written feedback and πRefine-generated refinements are ablated (the "Ablations" section of Table 4), ILF also outperforms training on both 1-shot and 2-shot InstructGPT-generated refinements, by 17% and 11% absolute (89% and 44% relative), respectively.

Figure 3: Histogram of the perplexities of the various training data sources, as measured using a pre-trained CodeGen-Mono 6.1B model.

Figure 4: Training dataset size versus CodeGen-Mono 6.1B pass rates on MBPP tasks 11-111 after fine-tuning on InstructGPT-generated refinements, versus the performance of πθ∗ (the model produced by our approach). X marks the performance of πθ∗, whereas the solid lines plot the performance of CodeGen-Mono 6.1B after fine-tuning on correct refinements generated by InstructGPT, using feedback also generated by InstructGPT. The dashed line indicates the zero-shot pass rate of a pre-trained CodeGen-Mono 6.1B model.

#### Analysis of Training Data Sources

However, we also note the surprising fact that merely training on a small sample of the MBPP gold programs did not make a significant difference in accuracy over zero-shot inference. We speculate that the gold programs from the MBPP dataset may be somewhat out-of-distribution for CodeGen-Mono 6.1B. To test this hypothesis, we computed the perplexity of the MBPP gold programs, the πRefine-generated refinements, and the human-written refinements using the pre-trained CodeGen-Mono 6.1B model. The results are shown in Figure 3. While the distributions of all three data sources look similar, the MBPP dataset contains more high-perplexity programs (*i.e.* programs with perplexity ≥ 10²) than either the πRefine-generated refinements or the human-written refinements.
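For readers who want to reproduce this kind of analysis, per-program perplexity under a causal LM can be computed roughly as follows. This is a hedged sketch, not the authors' released analysis code; it assumes the publicly available Salesforce/codegen-6B-mono checkpoint on Hugging Face and omits batching and device management.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumed public checkpoint; the paper's exact setup may differ.
tokenizer = AutoTokenizer.from_pretrained("Salesforce/codegen-6B-mono")
model = AutoModelForCausalLM.from_pretrained("Salesforce/codegen-6B-mono")
model.eval()

def perplexity(program: str) -> float:
    """Perplexity of a program string under the language model."""
    ids = tokenizer(program, return_tensors="pt").input_ids
    with torch.no_grad():
        # With labels=input_ids, the model returns the mean token-level
        # cross-entropy loss; exponentiating it gives perplexity.
        loss = model(ids, labels=ids).loss
    return torch.exp(loss).item()
```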
As a result, it is likely easier for CodeGen-Mono 6.1B to learn from the latter two datasets, since they are closer to CodeGen-Mono 6.1B's original distribution while still being functionally correct. Furthermore, ILF is particularly useful for settings where large amounts of gold code are not available. In this setting, ILF can be thought of as a method of not only generating more training data, but training data that is closer to the model's original outputs in data representation space and that specifically repairs the kinds of bugs that the original model generates. As a result, fine-tuning the model on πRefine-generated refinements does not require adjusting the weights as much as fine-tuning the model on the MBPP gold programs would, even though both training datasets contain the same number of functionally correct programs.

### 3.3 Scaling Up Model Feedback Does Not Offer the Same Benefits As Human Feedback

Since high quality human feedback can be expensive to collect, we also evaluated how much model feedback might yield the same benefit as our sample of human-written feedback. To do so, we randomly select k tasks from the set of MBPP tasks for which CodeGen-Mono 6.1B did not originally output a correct answer, and prompt InstructGPT to generate both the feedback and the refinement. We then evaluate the refinements for correctness and train CodeGen-Mono 6.1B on the correct refinements. We use k ∈ {50, 100, 200} and generate 30 output samples at temperature 0.8 for all stages of the experiment. We are limited to these k values due to the small number of tasks we have in MBPPTrain, but future work may investigate scaling up these experiments by using larger datasets or automatically generating new tasks and unit tests for the training dataset. Further training details are listed in Appendix A.1.

The results are shown in Figure 4. Although increasing the quantity of InstructGPT-generated feedback offers modest improvements in pass rates, these improvements do not yield pass rates as high as those of πθ∗, even though πθ∗ uses only a total of 122 pieces of feedback throughout its training process (44 for training πRefine and 78 for generating refinements to train πθ∗ on). However, as pre-trained large language models continue to improve dramatically in quality, we expect that this gap between human- and model-written feedback will increasingly narrow.

| Feedback Category | % of Human Feedback | % of InstructGPT Feedback |
| --- | --- | --- |
| Logic | 30% | 46% |
| Formatting | 36% | 14% |
| Missing step | 10% | 6% |
| Algebra | 10% | 8% |
| Recursion | 4% | 14% |
| Regex | 6% | 6% |
| Function semantics | 2% | 4% |
| Dynamic programming | 2% | 0% |
| Extra step | 0% | 12% |
| No feedback needed | 0% | 14% |
| Unrelated | 0% | 8% |

Table 5: The proportion of the feedback that addressed each type of bug, for feedback sourced from humans and InstructGPT. Each sample of feedback can be tagged with multiple categories, so the quantities in each column do not necessarily add up to 100%.

Figure 5: The number of bugs addressed in the feedback versus the pass rate of πRefine's refinements.
| | Human | InstructGPT |
| --- | --- | --- |
| Avg. num. of bugs addressed* | 1.8 | 1.1 |
| Avg. num. of words | 68.9 ± 48.2 | 24.2 ± 28.6 |

Table 6: Descriptive statistics for the human- versus InstructGPT-generated feedback. The * indicates that the metric was computed on the random sample of 50 that we manually inspected, whereas the other metrics are computed from the full dataset.

### 3.4 Human Feedback Is More Informative Than InstructGPT Feedback

To better understand why human feedback produced greater improvements in pass rate than InstructGPT feedback, we randomly selected 50 samples of feedback for each source (*i.e.* human or InstructGPT) and annotated the number and types of bugs that each feedback sample addressed. The results are shown in Tables 5 and 6. We observed that InstructGPT often gave no feedback (*e.g.* "The code is correct" or "Great job!"), provided feedback that was irrelevant or incorrect, or restated the task description instead of addressing what should be repaired about the code. Despite this, InstructGPT's refinements were often correct even if the feedback itself wasn't. Human-written feedback addressed more bugs on average and never gave irrelevant feedback. We provide further examples of the differences between human and InstructGPT feedback in Appendix A.3.

### 3.5 πRefine Struggles To Incorporate Feedback Addressing Many Bugs

Lastly, we explored whether the number of bugs addressed in the feedback affected πRefine's ability to repair the original code sample. The results are shown in Figure 5. The greater the number of bugs addressed, the lower the average pass rate of πRefine's refinements. This suggests that a promising direction for future work might consist of automatically decomposing the feedback into multiple steps and having πRefine incorporate the feedback one step at a time. Indeed, Nijkamp2022CG show that the CodeGen models are often more effective at following instructions when the instructions are given across multiple turns, and recent Chain-of-Thought work (wei2022chain) illustrates a similar prompting technique.

4 Related Work
---------------

#### LLMs for Program Synthesis

Our work builds on a large body of literature that explores the use of pre-trained LLMs for neural program synthesis. Many general purpose LLMs, although not pre-trained specifically for code generation, have demonstrated impressive proficiency at solving code challenges since they are pre-trained on large corpora of text such as The Pile (gao2020pile) that contain a small percentage of code content (austin2021program; gpt-j; gpt-neox-20b; Nijkamp2022CG). Yet other recent LLMs for program synthesis are trained solely on source code files (wang2021codet5; CERT; li2022alphacode; xu2022evaluation), or on both text and source code documents – sometimes either in succession (chen2021codex; Nijkamp2022CG; Bai2022TrainingAH), in a mixed corpus (bigscience2022bloom), or on mixed natural language-programming language documents (feng-etal-2020-codebert).
#### Learning from Human Feedback Our algorithm is inspired by a number of past works that have trained models to learn from feedback. A common technique is reinforcement learning from human feedback (RLHF ziegler2019finetuning; stiennon2020learning\_to\_summarize; ouyang2022instructgpt), which trains models to satisfy human preferences. However, our algorithm is closer to works that use natural language feedback, rather than comparisons between different choices. elgohary-etal-2020-speak; austin2021program; Nijkamp2022CG all demonstrate that code LLM performance generally improves when prompted with natural language feedback, though Nijkamp2022CG observes that the feedback is more effective when it is given one step at a time. Our work differs from these in that ILF learns from the feedback at training time, not at inference time. Bai2022TrainingAH also uses natural language feedback during the training process, but as part of an RLHF algorithm instead where the feedback is used to solicit different responses from the digital assistant, the responses are ranked by crowdworkers, and the rankings are used to train the preference model. However, they note that this form of learning from natural language feedback does not measurably improve their code generation model more than simply prompting. Outside of program synthesis, we show in our other work (scheurer2023training) that ILF is also effective for text summarization. In addition to re-formulating the reward function R(⋅) for summarization, scheurer2023training additionally demonstrates that an instruction-finetuned LLM can evaluate its own outputs and select the best one. Similar to our results on code generation, scheurer2023training shows that ILF outperforms all supervised fine-tuning baselines on text summarization. This aligns with numerous other works that have explored supervision via natural language in other ways, such as via explanations (camburu2018snli; hase2021can; pruthi2021evaluating; lampinen2022can, inter alia) and as part of RL systems (fidler2017teaching; luketina2019survey; lin2020interactive\_rl, inter alia). 5 Conclusion ------------- We have shown that ILF can significantly improve the quality of a code generation model, even with just a small sample of human-written feedback and refinements. This approach is theoretically justified as minimizing the expected KL divergence between πθ and a target ground-truth distribution, where we acquire signal from the latter via human-written natural language feedback. This approach is also appealing because it is not model-specific (in the sense that ILF can be used with any type of base model πθ, assuming the existence of a sufficiently capable LLM to act as πRefine), and can be conducted in multiple rounds to continuously improve the model. Furthermore, it is notable that our approach generates training data that is not only correct, but targets the specific kinds of bugs that the model is likely to output. In essence, it provides an *online* training signal that is missing from the offline pre-training set-up of modern LLMs. Our approach is also remarkably sample-efficient, yielding 38% and 64% relative increases in pass@1 rate over the zero-shot baseline and fine-tuning on MBPP data, despite fine-tuning on only 78 examples. Our work opens up multiple avenues for promising future work. For instance, ILF can be applied iteratively over the course of multiple rounds whenever new information arrives (*e.g.* new Python syntax) or new bugs are discovered. 
As the pace of progress of modern LLM research continues to accelerate, it may soon be feasible to partially or fully automate the generation of natural language feedback (similar to ‘RL from AI feedback’ (RLAIF; bai2022constitutional) and our experiments in Section [3.3](#S3.SS3 "3.3 Scaling Up Model Feedback Does Not Offer the Same Benefits As Human Feedback ‣ 3 Experiments and Results")), greatly reducing both the time and cost necessary for collecting feedback. This direction of work is also particularly appealing because the learning signal is *process-based* rather than outcome-based, which has been shown to mitigate reward hacking and improve the correctness of intermediate reasoning steps (uesato2022solving). Although further work is required to extend our method, ILF represents an exciting step forward in training LLMs with feedback that is rich, interactive, and sample-efficient. Acknowledgements ---------------- We are grateful to Nitarshan Rajkumar, Jason Phang, Nat McAleese, Geoffrey Irving, Jeff Wu, Jan Leike, Cathy Yeh, William Saunders, Jonathan Ward, Daniel Ziegler, Seraphina Nix, Quintin Pope, Kay Kozaronek, Peter Hase, Talia Ringer, Asa Cooper Stickland, Jacob Pfau, David Lindner, Lennart Heim, Kath Lumpante, and Pablo Morena for helpful discussions and feedback about the design and implementation of this work. We are additionally thankful to Scott Heiner and Edwin Chen for extensive help with setting up our human annotation workflow and interface. EP thanks the National Science Foundation and Open Philanthropy for fellowship support. JAC is supported by a doctoral grant from the Spanish MECD. AC, SB, and KC are supported by National Science Foundation Awards 1922658 and 2046556. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation. KC is additionally supported by 42dot, Hyundai Motor Company (under the project Uncertainty in Neural Sequence Modeling) and the Samsung Advanced Institute of Technology (under the project Next Generation Deep Learning: From Pattern Recognition to AI). This project has also benefited from financial support to SB by Eric and Wendy Schmidt (made by recommendation of the Schmidt Futures program), Open Philanthropy, and Apple. We also thank the NYU High-Performance Computing Center for in-kind support and OpenAI for providing access to and credits for their models via the API Academic Access Program.
a23c78a3-2943-43e6-9505-b0e7ff35cbab
trentmkelly/LessWrong-43k
LessWrong
Has there been a "memetic collapse"? I want to know if there has actually been a "memetic collapse" along the lines described here and here. Does anyone have evidence or arguments in either direction? Or even ideas for how we would be able to tell?
c796e02d-d06d-4b9e-a42b-1885c33b5299
trentmkelly/LessWrong-43k
LessWrong
Welcome to the Washington, DC Slate Star Codex Meetup What kind of events does your group usually run? What does it usually do? We meet 1-2 times per month to eat snacks and talk about SSC and other rationality-adjacent topics. Sometimes we have dinner or game nights. Join the Google Group if you would like monthly updates: https://groups.google.com/forum/#!forum/dc-slatestarcodex
f235df6e-c17f-423d-85f0-a759c92ac00e
trentmkelly/LessWrong-43k
LessWrong
Rationality Quotes April 2014 Another month has passed and here is a new rationality quotes thread. The usual rules are: * Please post all quotes separately, so that they can be upvoted or downvoted separately. (If they are strongly related, reply to your own comments. If strongly ordered, then go ahead and post them together.) * Do not quote yourself. * Do not quote from Less Wrong itself, HPMoR, Eliezer Yudkowsky, or Robin Hanson. If you'd like to revive an old quote from one of those sources, please do so here. * No more than 5 quotes per person per monthly thread, please. And one new rule: * Provide sufficient information (URL, title, date, page number, etc.) to enable a reader to find the place where you read the quote, or its original source if available. Do not quote with only a name.
56227391-125c-44db-b36e-b93673442fdf
trentmkelly/LessWrong-43k
LessWrong
Looking for information on scoring calibration There are lots of scoring rules for probability assessments. Log scoring is popular here, and squared error also works. But if I understand these correctly, they are combined measurements of both domain-ability and calibration. For example, if several people took a test on which they had to estimate their confidence in their answers to certain true or false questions about history, then well-calibrated people would have a low squared error, but so would people who know a lot about history. So (I think) someone who always said 70% confidence and got 70% of the questions right would get a higher score than someone who always said 60% confidence and got 60% of the questions right, even though they are both equally well calibrated. The only pure calibration estimates I've ever seen are calibration curves in the form of a set of ordered pairs, or those limited to a specific point on the curve (eg "if ey says ey's 90% sure, ey's only right 60% of the time"). There should be a way to take the area under (or over) the curve to get a single value representing total calibration, but I'm not familiar with the method or whether it's been done before. Is there an accepted way to get single-number calibration scores separate from domain knowledge?
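One way to see the combined-score issue concretely (this is just an illustrative sketch of the 70%-vs-60% example above, not a proposed answer to the question):

```python
# Quick check of the claim above: equally calibrated forecasters,
# different knowledge levels, different Brier (squared-error) scores.
def brier(p_correct, confidence):
    # Average squared error when a fraction p_correct of answers are right
    # and every answer is given the same confidence.
    return p_correct * (1 - confidence) ** 2 + (1 - p_correct) * confidence ** 2

print(brier(0.7, 0.7))  # 0.21 - knows more history, better (lower) Brier score
print(brier(0.6, 0.6))  # 0.24 - knows less history, equally well calibrated
```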
96095f4e-8281-47a9-962d-f02e986b769d
trentmkelly/LessWrong-43k
LessWrong
Addendum to applicable advice Original post: http://bearlamp.com.au/addendum-to-applicable-advice/ (part 1: http://bearlamp.com.au/applicable-advice/) ---------------------------------------- If you see advice in the wild and think something along the lines of "that can't work for me", that's a cached thought. It could be a true cached thought or it could be a false one. Some of these thoughts should be examined thoroughly and defeated. Being able to be any kind of person - the kind of person that advice works for - is an amazing skill to have. This is hard. You need to examine the advice and decide how that advice happened to work, and then you need to modify yourself to make that advice applicable to you. All too often in this life we think of ourselves as immutable. And our problems fixed, with the only hope of solving them being to find a solution that works for the problem. I propose it's the other way around. All too often the solutions are immutable, we are malleable, and the problems can be solved by applying known advice and known knowledge in ways that we need to think of and decide on. ---------------------------------------- Is it really the same problem if the problem isn't actually the problem any more, but rather the problem is a new method of applying a known solution to a known problem? (what does this mean) Example: Dieting is an easy example. This week we have been talking about Calories in/Calories out. It's pretty obvious that CI/CO is true on a black-box system level. If food goes in (calories in) and work goes out (calories out - BMR, incidental exercise, purposeful exercise), that is what determines your weight. Ignoring the fact that drinking a litre of water is a faster way to gain weight than any other way I know of. And we know that weight is not literally health but a representation of what we consider healthy, because it's the easiest way to track how much fat we store on our body (for a normal human who doesn't have massive bulk muscle mass). CIC
1f4bb8ad-ca83-4fdf-af7e-491a015bd08a
trentmkelly/LessWrong-43k
LessWrong
[SEQ RERUN] We Change Our Minds Less Often Than We Think Today's post, We Change Our Minds Less Often Than We Think was originally published on 03 October 2007. A summary (taken from the LW wiki):   > We all change our minds occasionally, but we don't constantly, honestly reevaluate every decision and course of action. Once you think you believe something, the chances are good that you already do, for better or worse. Discuss the post here (rather than in the comments to the original post). This post is part of the Rerunning the Sequences series, where we'll be going through Eliezer Yudkowsky's old posts in order so that people who are interested can (re-)read and discuss them. The previous post was A Rational Argument, and you can use the sequence_reruns tag or rss feed to follow the rest of the series. Sequence reruns are a community-driven effort. You can participate by re-reading the sequence post, discussing it here, posting the next day's sequence reruns post, or summarizing forthcoming articles on the wiki. Go here for more details, or to have meta discussions about the Rerunning the Sequences series.
adb4a5df-645b-4bce-976b-457046fdb234
trentmkelly/LessWrong-43k
LessWrong
Revelation and mathematics The mind of the enlightenment era mathematician might have been the ultimate tool ever devised for creating pointless and convoluted connections between pi and everything else. i - Secret based religion There are certain Buddhist traditions, e.g. Dzogchen, in which "enlightenment" or some other desired state or status is predicated upon knowing some hidden knowledge. This is most popular in Buddhist faiths but by no means confined to them; some gnostic traditions are also fond of secret knowledge. This raises a question for the would-be believer: > Why not readily give out this knowledge? If all I have to do to /reach enlightenment/attain nirvana/understand the nature of God/ is to read a few sentences, why not readily give them to everyone? The answer to this has something to do with "mind preparedness": one is not ready to understand until they have some prerequisite baggage. But still, why not give them to everyone first, then tell them to go get the prerequisite knowledge? After all, the prerequisites might be different for everyone; this way, as soon as they have them, things will instantly click, and they won't have to /meditate/chant/pray/ for longer than necessary. To which the crazier believers answer something like: > Because God will blindeth the unworthy who lookedth uponith thy sacredest texts. But the saner ones say something like: > Look, this is the most profound knowledge on Earth, but if we give it to you before you are prepared, you won't see that. You will get used to it and thus it will forever lose importance, it will become a banality in your mind. Only reading with fresh eyes makes it have the power it does, and you can only do that once, so it'd better be after you've learnt enough to grasp its value". Sound like a load of rubbish? Ok, I agree, but the methodology these sects invented for causing a feeling of revelation might be quite generic and ingenious. Let me give a brief summary of how this goes: * Have some hidden knowledge, va
c50b3a49-7a83-4740-a9eb-a3571166a9d6
trentmkelly/LessWrong-43k
LessWrong
Mathematical Mindset I agree that optimization amplifies things. I also agree that a mathematical mindset is important for AI alignment. I don't, however, think that a "mathematical mindset" is the same as a "proof mindset". Rather, I think that the latter is closer to being a "programming mindset" -- or, indeed, a "security mindset". And that a "mathematical mindset" is largely missing from AI-alignment discourse at present. Whereas others see a division between two clusters, of the form, science/physics vs. mathematics/programming/logic I, by contrast, see a hierarchical progression that looks something like: science < programming < physics < mathematics < logic <... where, in this context, these words have meanings along the following lines: science: things being made of parts; decomposition programming: things being made of moving parts; constant-velocity motion; causal networks physics: things being made of moving spatial parts; accelerated motion, rotation, fluidity; substance mathematics: models being made of parts; transubstantiation; metaphysics; theorization logic: concepts being made of parts; time reversal; ontology Of course I'm not using these words standardly here. One reason for this is that, in this discussion, no one is: we're talking about mindsets, not about sociological disciplines or even clusters of particular ideas or "results". But the really important reason I'm not following standard usage is because I'm not trying to invoke standard concepts; instead, I'm trying to invent "the right" concepts. Consequently, I can't just use standard language, because standard language implies a model of the world different from the one that I want to use. It is commonly believed that if you want to introduce a new concept that is similar or related (but--of course--nonidentical) to an old concept, you shouldn't use the same word for the new concept and the old, because that would be "confusing". I wish to explicitly disagree with this belief. This view presupp
94d21533-a7a3-4bfd-83cc-9d98abf2753b
trentmkelly/LessWrong-43k
LessWrong
Big Community Solstice In 2011, Ray started a tradition in the broader rationalist community of having a gathering around the winter solstice. As the person who started the thing, if he says we should do it differently I'm going to pay attention. But on the other hand I disagree with this pretty strongly: > "Big Solstice" is not Solstice. > The NYC Solstice Celebration is not Solstice. > > The Bay Solstice Celebration is not Solstice > > (according to me). > > Big NYC Solstice was an _advertisement_ I created for Actual Solstice. > > Actual Solstice is held on December 21st, or whenever Solstice is this year, with your close family and friends. It is a holiday. It is something you have a lot of ownership of. > >   —FB post on the direction of Solstice Reading the whole post, Ray describes Big Community Solstice and Little Family Solstice, and why he thinks the latter should be the focus. I'm atheist, in a family that's mixed, but that has a strong internal tradition of a family Christmas celebration. Ray has talked about how this kind of family tradition is what got him wanting to make a Solstice holiday, and I see where he's coming from. But I think Big Community Solstice fills a much more important role than Small Family Solstice. The former fills the role of church, and specifically the kind of meaningful and serious church experience you have when people take their religion seriously and honestly believe. Mainstream versions of this aren't open to or attractive to atheists because they're built around religion. They also don't emphasize the most important (but challenging!) parts of the religion, let alone the ideas I think are most important in general. Big Community Solstice can fill a large need here, and that's why I host one. The latter, however, fills the role of family Christmas celebrations. These are much less dependent on religion than you might think. Yes, there's Christian imagery and the best songs are pretty seriously religious, but since it's primarily about
22ea2dac-49a4-4a69-ad69-7e88fd26926f
trentmkelly/LessWrong-43k
LessWrong
Using the Karma system to call for a show of hands - profitable? Not saying it's an efficient use of time for the Karma hoarder, but I do wonder if it generally is a reliable way to gain karma. We sometimes see a call for a show of hands here where a comment is up voted by those that agree and a later comment is down voted for balance. This is purely anecdotal but it seems to me most of the time down-votes don't balance out the up-votes. Does anyone else have this experience? This seems a question we can answer approximately by having a bot mine the text of the archives. I feel that making the bot would be made easier if we had as many samples of such use of the Karma system as possible. However if I'm the only one with this observation or if those with this observation are in the minority it's probably not worth the effort (at least for someone with my skill set). Some LWers may be relying on others who don't agree with the motion but want to be "fair" when it comes to Karma to down vote the balance. Perhaps there are just fewer people who don't agree with the motion but down vote the balance post, because it contributes to enforcing norms of how they think the Karma system should be used, than there are people who agree with the motion but don't down vote.   As to explanations, off the top of my head: * Selection bias. * Trivial inconvenience to access the down voted balance * A fraction of posters simply forgets to down-vote * Some posters might up-vote unthinkingly because they like the suggestion not because they agree with the motion. * People don't see a problem with a slightly positive imbalance if they think asking for the call was a good idea. If they think it's a bad idea they are, due to LW norms, far less likely to down-vote. Especially if this particular pair of posts is balanced. Edit: It appears I was ignorant of the implicitly accepted social convention that basically amounts to downvoting the balance being optional for those who don't want to reward the person taking the poll (or perhaps don't w
8846afe2-7617-4751-b330-88ec19547767
trentmkelly/LessWrong-43k
LessWrong
Book Review: On Intelligence by Jeff Hawkins (and Sandra Blakeslee) On Intelligence is a book I've read as part of my quest to understand neuroscience. It attempts to develop a unified theory of the neocortex meant to serve as a blueprint for Artificial Intelligence. I think of the book as being structured into three parts. Part one: Artificial Intelligence and Neural Networks OR skip ahead to part two if you want to read about the cool neuroscience rather than about me lamenting the author's lack of epistemic rigor This part is primarily about a single claim: building AI requires understanding the human brain. Depending on how you count, Jeff says this nine times in just the prologue and first chapter. To justify it, he tells us the story of how he came into contact with the field of artificial intelligence. Then and now, he laments that people in the field talk about intelligence without trying to understand the brain, whereas neuroscientists talk about the brain without trying to develop a high-level theory of intelligence. Neural networks are a small step in the right direction, but he quickly got disillusioned with them as they don't go nearly far enough; their connection to the brain is quite loose and high-level. The conclusion is apparent: someone has to bring neuroscience into AI, and only then will the field succeed. And since no-one else is doing it, Jeff steps up; that's what the book is for. The picture he lays out makes a lot of sense if you take the claim as a given. The flaw is that he neglects to argue why it is true. I think it's pretty hard to make excuses here. This isn't a dinner conversation; it's a 250-page book that explicitly sets out to reform an entire field. It's a context where we should expect the highest level of epistemic rigor that the author is capable of, especially given how much emphasis he puts on this point. However, after rereading this part of the book, the only evidence I can find that supports AI requiring an understanding of the brain is the following: * The observation that current
e8c6e634-192c-4b16-b12d-6fc649d7300b
trentmkelly/LessWrong-43k
LessWrong
Self-experiment: A supraphysiological dosage of testosterone. Unfortunately, I haven't found a good way to conduct a long-term self-experiment in which I observe the effect a supraphysiological weekly dosage of testosterone has on me. However, I think I found a decent alternative. Firstly, some stats: 21y/o male, sub-saharan descent. 173cm at 61kg of bodyweight. I have used steroids before, in a bodybuilding context, but that was quite some time ago. I don't bodybuild as of now. Experimental preparations are as follows: I have sort-of double-blinded myself. I have prefilled one syringe with testosterone enanthate at a concentration of ~~200mg/ml~~ 300 mg/ml. Another syringe I prefilled with ordinary bacteriostatic water. I have - 20 minutes ago - injected my left buttcheek with the content (1ml) of one of the two syringes; I don't know which one, though. I will wait at least 3 weeks (the time it takes for this version of testosterone to clear my body) before stabbing myself with the other syringe. In both cases, I will record any subjective effects here. I will measure my cognitive abilities every three days using: 1. Zetamac (mental math website, for processing speed) 2. Dual n-back (for working memory) 3. No. of uni slides I can work through in an arbitrary hour (for measuring productivity; I study urban planning). How does that sound? Can you suggest any improvements (preferably improvements which I can implement without letting others know of my plans)?  Edit: Corrected the concentration (300mg/ml) and added the amount of testosterone I injected (1ml).
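Not part of the original post, but since the protocol above specifies repeated measurements in two blinded arms, here is a rough sketch of how the resulting log could be compared once unblinded. It assumes a hypothetical CSV file (`selflog.csv`) with made-up columns `arm`, `zetamac_score`, `dual_n_back_level`, `slides_per_hour`; with only around seven sessions per arm any test will be badly underpowered, so treat this purely as an illustration.

```python
# Hedged sketch (not from the original post): compare the two three-week arms
# after unblinding. Column names and file path are hypothetical.
import csv
import random
from statistics import mean

def load_scores(path, metric):
    groups = {"A": [], "B": []}
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            groups[row["arm"]].append(float(row[metric]))
    return groups["A"], groups["B"]

def permutation_test(a, b, n_iter=10_000, seed=0):
    """Two-sided permutation test on the difference of means."""
    rng = random.Random(seed)
    observed = mean(a) - mean(b)
    pooled = a + b
    hits = 0
    for _ in range(n_iter):
        rng.shuffle(pooled)
        diff = mean(pooled[:len(a)]) - mean(pooled[len(a):])
        if abs(diff) >= abs(observed):
            hits += 1
    return observed, hits / n_iter

if __name__ == "__main__":
    for metric in ["zetamac_score", "dual_n_back_level", "slides_per_hour"]:
        a, b = load_scores("selflog.csv", metric)
        diff, p = permutation_test(a, b)
        print(f"{metric}: mean difference {diff:.2f}, permutation p = {p:.3f}")
```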
20d483d7-2053-4a1f-92e2-a54d613d6086
StampyAI/alignment-research-dataset/eaforum
Effective Altruism Forum
[Fiction] Improved Governance on the Critical Path to AI Alignment by 2045. Summary: This post showcases [my finalist entry in the Future of Life Institute's AI worldbuilding contest](https://worldbuild.ai/W-0000000088/).  It imagines: 1. How we might make big improvements to decisionmaking via mechanisms like [futarchy](https://blog.ethereum.org/2014/08/21/introduction-futarchy/) and [liquid democracy](https://en.wikipedia.org/wiki/Liquid_democracy#:~:text=Liquid%20democracy%20is%20a%20form,both%20direct%20and%20representative%20democracy.), enhanced by [Elicit-like research/analysis tools](https://ought.org/updates/2022-04-08-elicit-plan). 2. How changes could spread to many countries via [competition](https://chartercitiesinstitute.org/intro/) to achieve faster growth than rivals, and via snowball effects of reform. 3. How the resulting, more "[adequate](https://equilibriabook.com/)" civilization could recognize the threat posed by alignment and coordinate to solve the problem. ([Cross-posted from LessWrong](https://www.lesswrong.com/posts/qo2hqf2ha7rfgCdjY/a-bridge-to-dath-ilan-improved-governance-on-the-critical)) ![](https://39669.cdn.cke-cs.com/rQvD3VnunXZu34m86e5f/images/41b95a599e25db1539037ad6ab3638a4d8588e4b08f0fd5b.jpg)Part of a mural illustrating our scenario, created by [Diana Gurvich](https://www.instagram.com/mr_dirtlord/)!Motivation for our scenario: ============================ Human civilization's current ability to coordinate on goals, make wise decisions quickly, and capably execute big projects, seems [inadequate](https://equilibriabook.com/) to handle the challenge of safely developing aligned AI.  Evidence for this statement can be found practically all around you, but [the global reaction to covid-19](https://forum.effectivealtruism.org/posts/dYiJLvcRJ4nk4xm3X/covid-how-did-we-do-how-can-we-know-1) is especially clarifying.  [Quoting Gwern](https://www.gwern.net/newsletter/2020/07#links): > The coronavirus was x-risk on easy mode: a risk (global influenza pandemic) warned of for many decades in advance, in highly specific detail, by respected & high-status people like Bill Gates, which was easy to understand with well-known historical precedents, fitting into standard human conceptions of risk, which could be planned & prepared for effectively at small expense, and whose absolute progress human by human could be recorded in real-time happening rather slowly over almost half a year while highly effective yet cheap countermeasures like travel bans & contact-tracing & hand-made masks could—and in some places did!—halt it. Yet, most of the world failed badly this test; and many entities like the CDC or FDA in the USA perversely exacerbated it, interpreted it through an identity politics lenses in willful denial of reality, obstructed responses to preserve their fief or eek out trivial economic benefits, prioritized maintaining the status quo & respectability, lied to the public “don’t worry, it can’t happen! go back to sleep” when there was still time to do something, and so on. If the worst-case AI x-risk happened, it would be hard for every reason that corona was easy. When we speak of “fast takeoffs”, I increasingly think we should clarify that apparently, a “fast takeoff” in terms of human coordination means any takeoff faster than ‘several decades’ will get inside our decision loops. Don’t count on our institutions to save anyone: they can’t even save themselves. 
Around LessWrong, proposed AI x-risk-mitigation strategies generally attempt to route around this problem by aiming to first invent an aligned superintelligent AI, then use the superintelligent AI to execute a "pivotal action" that prevents rival unaligned AIs from emerging and generally brings humanity to a place of existential security. This is a decent Plan A -- it requires solving alignment, but we have to solve that eventually in almost every successful scenario (including mine).  It doesn't require much else, making it a nice and simple plan.  One problem might be that executing a massive "pivotal action" might work less well if AI capabilities develop more smoothly and capabilities are distributed evenly among many actors, a la "slow takeoff" scenarios. But some have argued that we might be neglecting "Plan B" strategies built around global coordination.  The post "[What An Actually Pessimistic Containment Strategy Looks Like](https://www.lesswrong.com/posts/kipMvuaK3NALvFHc9/what-an-actually-pessimistic-containment-strategy-looks-like)" considers Israel's successful campaign to stop Iran from developing nuclear weapons, and argues that activist efforts to slow down AGI research at top tech companies might be similarly fruitful.  Usually (including in my worldbuilding scenario), it's imagined that the purpose of such coordination is to buy a little more time for technical alignment safety work to happen.  But for a more extreme vision of permanently suppressing AI technology, we can turn to [the fictional world of Dath Ilan](https://www.lesswrong.com/posts/AvANsxR88iiZziKPt/how-dath-ilan-coordinates-around-solving-alignment), or to [Nick Bostrom's "easy nukes" thought experiment](https://forum.effectivealtruism.org/posts/FtEPgeoThqpSMsnn6/nuclear-strategy-in-a-semi-vulnerable-world) exploring how humanity could survive if nuclear weapons were absurdly easy to make. The idea that we should push for improved governance in order to influence AI has its problems.  It takes a long time, so it might be very helpful in 2070 but not by 2030.  (In this respect it is similar to other longer-term interventions like [gene-editing to create more scientific geniuses](https://fantasticanachronism.com/2021/03/23/two-paths-to-the-future/) or [general EA community-building](https://forum.effectivealtruism.org/posts/TruJuwtdfszFJgzwB/longtermist-ea-needs-more-phase-2-work) investments.)  And of course you still have to solve the technical challenge of AI alignment in the end.  But improving governance also has a lot to recommend it, and it's something that can ideally be done in parallel with technical alignment research -- complementing rather than substituting, worked on by different people who have different strengths and interests. Finally, another goal of the story was expressing the general value of experimentation and governance competition.  I think that existing work in the cause area of "improving institutional decisionmaking" is too heavily focused on capturing the commanding heights of existing prestigious institutions and then implementing appropriate reforms "from the inside".  This is good, but it too should be complemented by the presence of more radical small-scale experimentation on the "outside" -- things like charter cities and experimental intentional communities -- which could test out wildly different concepts of [ideal governance](https://www.cold-takes.com/ideal-governance-for-companies-countries-and-more/). 
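An aside that is not part of the original contest entry: since the scenario leans heavily on futarchy-style decision markets, here is a toy sketch of the basic rule (adopt a policy when conditional markets expect it to raise an agreed welfare measure). The class, the numbers, and the `margin` parameter are hypothetical illustrations, not anything from the entry.

```python
# Toy futarchy decision rule (illustrative only).
# Assumes prices from two conditional prediction markets: the expected value of
# an agreed welfare index if the policy is adopted, and if it is rejected.
from dataclasses import dataclass

@dataclass
class ConditionalMarkets:
    welfare_if_adopted: float   # market price, roughly E[welfare index | adopt]
    welfare_if_rejected: float  # market price, roughly E[welfare index | reject]

def futarchy_decision(m: ConditionalMarkets, margin: float = 0.0) -> str:
    """Adopt the policy iff the markets expect it to raise the welfare index
    by more than `margin` (a buffer against noise or manipulation)."""
    gain = m.welfare_if_adopted - m.welfare_if_rejected
    return "adopt" if gain > margin else "reject"

# Hypothetical example: markets expect the index at 104.2 with the law, 101.7 without.
print(futarchy_decision(ConditionalMarkets(104.2, 101.7), margin=1.0))  # -> "adopt"
```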
Below, I've selected some of the most relevant passages from my contest submission.  To get more of the sci-fi utopian flavor of what daily life would be like in the world I'm imagining (including two wonderful short stories written by my friend Holly, a year-by-year timeline, and more), [visit the full page here](https://worldbuild.ai/W-0000000088/).  Also, the Future of Life Institute would love it if you [submitted feedback](http://worldbuild.ai/feedback) on my world and the other finalists -- how realistic do you find this scenario, how much would you enjoy living in the world I describe, and so forth. Excerpts from my team's contest submission: =========================================== ![](https://39669.cdn.cke-cs.com/rQvD3VnunXZu34m86e5f/images/b406be8dcb37eae82ea648fcc874510578d5dc90551cb7de.jpg)Illustrating governance innovation, the Flash Crash War, the Delhi Accords & subsequent golden age.Artificial General Intelligence (AGI) has existed for at least five years but the world is not dystopian and humans are still alive! Given the risks of very high-powered AI systems, how has your world ensured that AGI has at least so far remained safe and controlled? --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- Ultimately, humanity was able to navigate the dangers of AGI development because the early use of AI to automate government services accidentally kicked off an “arms race” for improved governance technology and institution design. These reforms improved governments’ decision-making abilities, enabling them to recognize the threat posed by misalignment and coordinate to actually solve the problem, implementing the “Delhi Accords” between superpowers and making the Alignment Project civilization’s top priority. In a sense, all this snowballed from a 2024 Chinese campaign to encourage local governments to automate administrative processes with AI. Most provinces adopted mild reforms akin to Estonia’s e-governance, but some experimented with using AI economic models to dynamically set certain tax rates, or using Elicit-like AI research-assistant tools to conduct cost-benefit analyses of policies, or combining AI with prediction markets. This goes better than expected, kickstarting a virtuous cycle: * Even weak AI has a natural synergy with many government functions, since it makes predicting / planning / administering things cheap to do accurately at scale. * Successful reforms are quickly imitated by competing regions (whether a neighboring city or a rival superpower) seeking similar economic growth benefits. * After adopting one powerful improvement to fundamental decisionmaking processes, it’s easier to adopt others (ie, maybe the new prediction market recommends switching the electoral college to a national-popular-vote with approval voting). One thing leads to another, and soon most of the world is using a dazzling array of AI-assisted, prediction-market-informed, experimental institutions to govern a rapidly-transforming world.   The dynamics of an AI-filled world may depend a lot on how AI capability is distributed. In your world, is there one AI system that is substantially more powerful than all others, or a few such systems, or are there many top-tier AI systems of comparable capability? 
-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- Through the 2020s, AI capabilities diffused from experimental products at top research labs to customizable commercial applications much as they do today. Thus, new AI capabilities steadily advanced through different sectors of the economy. The 2030s brought increasing concern about the power of AI systems, including their military applications. Against a backdrop of rapidly improving governance and a transforming international situation, governments started rushing to nationalize most top research organizations, and some started to restrict supercomputer access. Unfortunately, this rush to monopolize AI technology still paid too little attention to the problem of alignment; new systems were deployed all the time without considering the big picture. After 2038’s Flash Crash War, the world woke up to the looming dangers of AGI, leading to much more comprehensive consolidation. With the Delhi Accords, all top AI projects were merged into an internationally-coordinated Apollo-Program-style research effort on alignment and superintelligence. Proliferation of advanced AI research/experimentation outside this official channel is suppressed, semiconductor supply chains are controlled, etc. Fortunately, the world transitioned to this centralized regime a few years before truly superhuman AGI designs were discovered. As of 2045, near-human and “narrowly superhuman” capabilities are made broadly available through API for companies and individuals to use; hardware and source code are kept secure. Some slightly-superhuman AGIs, with strict capacity limits, are being cautiously rolled out in crucial areas like medical research and further AI safety research. The most cutting-edge AI designs exist within highly secure moonshot labs for researching alignment.   How has your world avoided major arms races and wars? ----------------------------------------------------- Until 2038, geopolitics was heavily influenced by arms races, including the positive "governance arms race" described earlier. Unfortunately, militaries also rushed to deeply integrate AI. The USA & China came to the brink of conflict during the “Flash Crash War”, when several AI systems on both sides of the South China Sea responded to ambiguous rival military maneuvers by recommending that their own forces be deployed in a more aggressive posture. These signaling loops between rival AI systems led to an unplanned, rapidly escalating cycle of counter-posturing, with forces being rapidly re-deployed in threatening and sometimes bizarre ways. For about a day, both countries erroneously believed they were being invaded by the other, leading to intense panic and confusion until the diplomatic incident was defused by high-level talks. Technically, the Flash Crash War was not caused by misalignment per se (rather, like the 2010 financial Flash Crash, by the rapid interaction of multiple complex automated systems). Nevertheless, it was a fire-alarm-like event which elevated "fixing the dangers of AI systems" to a pressing #1 concern among both world leaders and ordinary people. Rather than the lukewarm, confused response to crises like Covid-19, the world's response was strong and well-directed thanks to the good-governance arms race. 
Prediction markets and AI-assisted policy analysts quickly zeroed in on the necessity of solving alignment. Adopted in 2040, the Delhi Accords began an era of intensive international cooperation to make AI safe. This put a stop to harmful military & AI-technology arms races.   In the US, EU, and China, how and where is national decision-making power held, and how has the advent of advanced AI changed that? ----------------------------------------------------------------------------------------------------------------------------------- The wild success of China's local-governance experiments led to freer rein for provinces. Naturally, each province is unique, but each now uses AI to automate basic government services, and advanced planning/evaluation assistants to architect new infrastructure and evaluate policy options. The federal government's remaining responsibilities include foreign relations and coordinating national projects. The National People's Congress now mostly performs AI-assisted analysis of policies, while the Central Committee (now mostly provincial governors) has regained its role as the highest governing body. In the United States, people still vote for representatives, but Congress debates and tweaks a basket of metrics rather than passing laws or budgets directly. This weighted index (life expectancy, social trust, GDP, etc) is used to create prediction markets where traders study whether a proposed law would help or hurt the index. Subject to a handful of basic limits (laws must be easy to understand, respect rights, etc), laws with positive forecasts are automatically passed. This system has extensively refactored US government, creating both wealth and the wisdom needed to tackle alignment. The EU has taken a cautious approach, but led in other areas: * Europe has created an advanced hybrid economy of "human-centered capitalism", putting an automated thumb on the scale of nearly every transaction to favor richer social connections and greater daily fulfillment. * Europe has also created the most accessible, modular ecosystem of AI/governance tech for adoption by other countries. Brazil, Indonesia, and others have benefited from incorporating some of the EU's open-source institutions.   What changes to the way countries govern the development, deployment and/or use of emerging technologies (including AI) played an important role in the development of your world? ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- After the world woke up to the dangers of powerful misaligned AI in 2038, nations realized that humanity is bound together by the pressing goal of averting extinction. Even if things go well, the far-future will be so strange and wonderful that the political concept of geopolitical “winners” and “losers” is impossible to apply. This situation, like a Rawlsian veil of ignorance, motivated the superpowers to cooperate with the 2040 Delhi Accords. Key provisions: * Nationalizing and merging top labs to create the Alignment Project. * Multi-pronged control of the “AI supply chain” (inspired by uranium & ICBM controls) to enforce nonproliferation of powerful AI — nationalizing semiconductor factories and supercomputer clusters, banning dangerous research, etc. * Securing potential attack vectors like nuclear command systems and viral synthesis technology. 
* API access and approval systems so people could still develop new applications & benefit from prosaic AI. * Respect for rights, plus caps on inequality and the pace of economic growth, to ensure equity and avoid geopolitical competition. Although the Accords are an inspiring achievement, they are also provisional by design: they exist to help humanity solve the challenge of developing safe superintelligent machines. The Alignment Project takes a multilayered approach -- multiple research teams pursue different strategies and red-team each other, layering many alignment strategies (myopic oracle wrappers, adversarial AI pairs, human-values-trained reward functions, etc). With luck, these enable a “limited” superintelligence not far above human abilities, as a tool for further research to help humanity safely take the next step.   What is a new social institution that has played an important role in the development of your world? ---------------------------------------------------------------------------------------------------- New institutions have been as impactful over recent decades as near-human-level AI technology. Together, these trends have had a multiplicative effect — AI-assisted research makes evaluating potential reforms easier, and reforms enable society to more flexibly roll out new technologies and gracefully accommodate changes. Futarchy has been transformative for national governments; on the local scale, "affinity cities" and quadratic funding have been notable trends. In the 2030s, the increasing fidelity of VR allows productive remote working even across international and language boundaries. Freed from needing to live where they work, young people choose places that cater to unique interests. Small towns seeking growth and investment advertise themselves as open to newcomers; communities (religious beliefs, hobbies like surfing, subcultures like heavy-metal fans, etc) select the most suitable town and use assurance contracts to subsidize a critical mass of early-adopters to move and create the new hub. This has turned previously indistinct towns to a flourishing cultural network. Meanwhile, Quadratic Funding (like a hybrid of local budget and donation-matching system, usually funded by land value taxes) helps support community institutions like libraries, parks, and small businesses by rewarding small-dollar donations made by citizens. The most radical expression of institutional experimentation can be found in the constellation of "charter cities" sprinkled across the world, predominantly in Latin America, Africa, and Southeast Asia. While affinity cities experiment with culture and lifestyle, cities like Prospera Honduras have attained partial legal sovereignty, giving them the ability to experiment with innovative regulatory systems much like China’s provinces.   What is a new *non*-AI technology that has played an important role in the development of your world? ----------------------------------------------------------------------------------------------------- Improved governance technology has helped societies to better navigate the “bulldozer vs vetocracy” axis of community decision-making processes. Using advanced coordination mechanisms like assurance contracts, and clever systems (like Glen Weyl’s “SALSA” proposal) for pricing externalities and public goods, it’s become easier for societies to flexibly make net-positive changes and fairly compensate anyone affected by downsides. 
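A short aside that is not in the original entry: the quadratic funding mechanism mentioned above has a standard, easily sketched formula (a project's "ideal" funding is the square of the sum of the square roots of its individual contributions, with a matching pool topping up the difference). The project names and amounts below are made up purely for illustration.

```python
# Minimal quadratic funding sketch (illustrative; projects and amounts are hypothetical).
from math import sqrt

def quadratic_match(contributions_by_project, matching_pool):
    ideal = {
        p: sum(sqrt(c) for c in cs) ** 2
        for p, cs in contributions_by_project.items()
    }
    raw_match = {p: max(ideal[p] - sum(cs), 0.0)
                 for p, cs in contributions_by_project.items()}
    total = sum(raw_match.values())
    scale = min(1.0, matching_pool / total) if total > 0 else 0.0
    return {p: m * scale for p, m in raw_match.items()}

# Hypothetical example: many small donors beat one large donor on equal totals.
projects = {
    "library":    [5, 5, 5, 5, 5, 5, 5, 5],  # 8 donors, $40 total
    "skate_park": [40],                       # 1 donor,  $40 total
}
print(quadratic_match(projects, matching_pool=100))
# The library receives nearly the whole match; the single-donor project gets none.
```

Mechanisms like this, alongside the externality-pricing systems just described, are part of the "improved governance tech" the next answer builds on.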
This improved governance tech has made it easier to build lots of new infrastructure while minimizing disruption. Included in that new infrastructure is a LOT of new clean power. Solar, geothermal, and fusion power provide most of humanity’s energy, and they do so at low prices thanks to scientific advances and economies of scale. Abundant energy enables all kinds of transformative conveniences: * Cheap desalinization changes the map, allowing farming and habitation of previously desolate desert areas. Whole downtown areas of desert cities can be covered with shade canopies and air-conditioned with power from nearby solar farms. * Carbon dioxide can be captured directly from the air at scale, making climate change a thing of the past. * Freed from the pressing need to economize on fuel, vehicles like airplanes, container ships, and self-driving cars can simply travel at higher speeds, getting people and goods to their destinations faster. * Indoor farming using artificial light becomes cheaper; instead of shipping fruit from the opposite hemisphere, people can enjoy local, fresh fruit year-round.   What’s been a notable trend in the way that people are finding fulfillment? --------------------------------------------------------------------------- The world of 2045 is rich enough that people don’t have to work for a living — but it’s also one of the most exciting times in history, running a preposterously hot economy as the world is transformed by new technologies and new ways of organizing communities, so there’s a lot to do! As a consequence, careers and hobbies exist on an unusual spectrum. On one end, people who want to be ambitious and help change the world can make their fortune by doing all the pressing stuff that the world needs, like architecting new cities or designing next-generation fusion power plants. With so much physical transformation unleashed, the world is heavily bottlenecked on logistics / commodities / construction. Teams of expert construction workers are literally flown around the world on private jets, using seamless translation to get up to speed with local planners and getting to work on what needs to be built using virtual-reality overlays of a construction site. Most people don't want to hustle that much, and 2045's abundance means that increasing portions of the economy are devoted to just socializing and doing/creating fun stuff. Rather than tedious, now-automated jobs like "waiter" or "truck driver", many people get paid for essentially pursuing hobbies -- hosting social events of all kinds, entering competitions (like sailing or esports or describing hypothetical utopias), participating in local community governance, or using AI tools to make videos, art, games, & music. Naturally, many people's lives are a mix of both worlds. ![](https://39669.cdn.cke-cs.com/rQvD3VnunXZu34m86e5f/images/5aa6497813169dbfbc248499a5aca7d57104e83c5ddcaf7e.jpg)If you liked what you've read so far, remember to visit the [official competition entry page](https://worldbuild.ai/W-0000000088/) to read the two day-in-the-life short stories and provide feedback! A Note of Caution ================= The goal of this worldbuilding competition was essentially *to tell the most realistic possible story under a set of unrealistic constraints:* that peace and prosperity will abound despite huge technological transformations and geopolitical shifts wrought by AI. 
In my story, humanity lucks out and accidentally kick-starts a revolution in good governance via improved institution design – this in turn helps humanity make wise decisions and capably shepherd the safe creation of aligned AI. But in the real world, I don’t think we’ll be so lucky.  Technical AI alignment, of course, is an incredibly difficult challenge – even for the cooperative, capable, utopian world I’ve imagined here, the odds might still be against them when it comes to designing “superintelligent” AI, on a short schedule, in a way that ends well for humanity. Furthermore, while I think that a revolutionary improvement in governance institutions is indeed possible (it’s one of the things that makes me feel most hopeful about the future), in the real world I don’t think we can sit around and just wait for it to happen by itself.  Ideas like futarchy need support to [persuade organizations](https://astralcodexten.substack.com/p/the-passage-of-polymarket?s=r), [find winning use-cases](https://forum.effectivealtruism.org/posts/dQhjwHA7LhfE8YpYF/prediction-markets-in-the-corporate-setting), and [scale up to have the necessary impact](https://rethinkpriorities.org/publications/issues-with-futarchy). Nobody should hold up my story, or the other entries  in the FLI’s worldbuilding competition, as a reason to say “See, it’ll be fine – AI alignment will work itself out in the end, just like it says here!”  Rather, my intent is to portray: * In 2045, an inspiring, utopian end state of prosperity, with humanity close to achieving a state of existential security. * From 2022-2044, my vision of what’s on the most-plausible critical path taking us from the civilization we live in today to the kind of civilization that can capably respond to the challenge of AI alignment, in a way that might be barely achievable if a lot of people put in a lot of effort.
20645e5b-0a0f-4534-84ee-e1f4427cc44c
trentmkelly/LessWrong-43k
LessWrong
Torture vs. Dust Specks "What's the worst that can happen?" goes the optimistic saying.  It's probably a bad question to ask anyone with a creative imagination.  Let's consider the problem on an individual level: it's not really the worst that can happen, but would nonetheless be fairly bad, if you were horribly tortured for a number of years.  This is one of the worse things that can realistically happen to one person in today's world. What's the least bad, bad thing that can happen?  Well, suppose a dust speck floated into your eye and irritated it just a little, for a fraction of a second, barely enough to make you notice before you blink and wipe away the dust speck. For our next ingredient, we need a large number.  Let's use 3^^^3, written in Knuth's up-arrow notation: * 3^3 = 27. * 3^^3 = (3^(3^3)) = 3^27 = 7625597484987. * 3^^^3 = (3^^(3^^3)) = 3^^7625597484987 = (3^(3^(3^(... 7625597484987 times ...)))). 3^^^3 is an exponential tower of 3s which is 7,625,597,484,987 layers tall.  You start with 1; raise 3 to the power of 1 to get 3; raise 3 to the power of 3 to get 27; raise 3 to the power of 27 to get 7625597484987; raise 3 to the power of 7625597484987 to get a number much larger than the number of atoms in the universe, but which could still be written down in base 10, on 100 square kilometers of paper; then raise 3 to that power; and continue until you've exponentiated 7625597484987 times.  That's 3^^^3.  It's the smallest simple inconceivably huge number I know. Now here's the moral dilemma.  If neither event is going to happen to you personally, but you still had to choose one or the other: Would you prefer that one person be horribly tortured for fifty years without hope or rest, or that 3^^^3 people get dust specks in their eyes? I think the answer is obvious.  How about you?
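Not part of the original post: the up-arrow arithmetic above is easy to sanity-check in code. Below is a small sketch of Knuth's up-arrow notation defined recursively; 3^3 and 3^^3 evaluate instantly and match the numbers in the post, while 3^^^3 is mentioned only to show why it cannot be evaluated directly.

```python
# Knuth up-arrow notation, defined recursively (illustrative sketch).
#   a (1 arrow) b  = a ** b
#   a (n arrows) b = a (n-1 arrows) [a (n arrows) (b-1)],  with a (n arrows) 1 = a

def up_arrow(a: int, n: int, b: int) -> int:
    """Compute a followed by n up-arrows followed by b."""
    if n == 1:
        return a ** b
    if b == 1:
        return a
    return up_arrow(a, n - 1, up_arrow(a, n, b - 1))

print(up_arrow(3, 1, 3))   # 3^3  = 27
print(up_arrow(3, 2, 3))   # 3^^3 = 3^27 = 7625597484987
# up_arrow(3, 3, 3) would be 3^^7625597484987: a power tower of 3s roughly
# 7.6 trillion levels tall. No computer can evaluate it, which is exactly why
# the thought experiment uses it.
```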
d8976eb0-b368-43fb-b28a-c7bbb86bc833
trentmkelly/LessWrong-43k
LessWrong
Thank you for triggering me This essay is the first in a series on why turning towards what activates us is the path to setting ourselves free. The rest of the series will feature modalities & tools for working with our triggers and welcoming suppressed emotions. Turning inward — taking our shadows, insecurities, and relationship dynamics into our own hands — and resolving our inner conflict day in and day out is the first step in walking the path towards collective flourishing. ---------------------------------------- Whenever I commit my energy to writing about an area of my life I’m grappling with, it’s as if I’m tempting the universe to pressure test how deeply I’ve integrated it in my life. As I wrote A lifetime of should-ing, I found myself in the throes of shoulding myself. As I shaped Year of doing the damn thing, every ounce of my being seemed to resist doing the damn thing. After I published The mourning of a new dawn, I was hit by the most intense waves of grief I’ve experienced in a long time. A few weeks ago, I decided to write about turning towards our triggers and how my relationship with triggers has evolved meaningfully. I used to think that “self-improvement” was about mindset work and focusing on the positive. All I needed to do was control my thoughts and then I wouldn't feel nervous, anxious, incompetent, fill-in-negative-emotion. Now, I know that tuning into our bodies and shining a light on our unwelcome feelings is the path towards understanding ourselves more deeply. As I began drafting this essay, the universe took it as a sign to stress test whether I was really ready to thank my triggers and whether I’ve truly embodied the wisdom of being activated. ---------------------------------------- The past month has brought about a lot of change. Our move to a new neighborhood coincided with the start of the year, a time of hitting reset and starting fresh. I had anticipated that the new year and a new environment would create space for cultivating new routines and b
6a8907d1-2b34-41e8-838c-84eaf4bd9272
trentmkelly/LessWrong-43k
LessWrong
Am I secretly excited for AI getting weird? This post is arguably darker than my other one. I don't make any persuasive arguments about AI forecasting here; if you don't feel like looking at doominess, feel free to skip this. I've noticed a few instances of what look like people assuming that those who are visibly concerned about AI risk don't really buy into the full weight of what they're saying.  Recently, I came across this (hi, niknoble!): > As a specific example of what I suspect is a bit of cognitive dissonance, look at the recent post on AGI by porby, which predicts AGI by 2030. I loved reading that post because it promises that the future is going to be wild. If porby is right, we're all in for an adventure. Based on the breathless tone of the post, I would surmise that porby is as excited by his conclusion as I am. For example, we have this excerpt: > > > This is crazy! I'm raising my eyebrows right now to emphasize it! Consider also doing so! This is weird enough to warrant it! > > > > Would you have predicted this in 2016? I don't think I would have!   > > Does this strike you as someone who dreads the arrival of AGI? It seems to me like he is awaiting it with great anticipation. > > But then in the comments on the post, he says that he hopes he's wrong about AGI! If you're reading this porby, do you really want to be wrong? This is an excellent example of the kind of thing I'm talking about, so I'm going to use it. I think my writing and speaking style defaults to a kind of lightness that can be misleading. So let me try to write something a little darker.  Well, do you? Because I don't think P(doom | AGI) is anywhere close to 0, especially for AGI developed on very short timescales: YES, I DO WANT TO BE WRONG. The kind of "excitement" I feel about near-term AGI is adjacent to hearing the tornado siren, looking at the radar, seeing the warned cell moving straight east, walking out on my porch to look at a black wall of rain a mile or two away, and seeing the power flashes straight wes
f4697608-033b-4e57-b366-e9390d5d2029
trentmkelly/LessWrong-43k
LessWrong
Monthly Shorts 4/2022 Conflict Elon Musk was once asked about the regulatory situation of providing satellite internet without the local country’s permission. His response was uniquely Muskian: Elon Musk @elonmusk @thesheetztweetz They can shake their fist at the sky September 1st 2021 1,290 Retweets11,916 Likes Now, it turns out, there are also other options. Dictators can, for example, launch electronic warfare measures against SpaceX’s operations. Fortunately…it turns out that SpaceX is better than the Russians and so Ukranian internet access continues. Fun piece on military inter-service conflict (in favor), if that’s your jam. One of the things I’ve had to grapple with, at my age, is understanding just how meaningful 9/11 is to people older than me. Two months of car crash deaths get shown on TV, and everybody goes completely mad. I go to a panel on national security work, and every single panelist and the moderator says that their inspiration to enter government service was 9/11. The Census Bureau handed over information on Arab neighborhoods to DHS (the story is more complicated than that: DHS seems to be both lying and incompetent and the Census Bureau did something both understandable and legally required, but this is the short version). We passed the Patriot Act, setting up massive denial of civil liberties by means both legal (new authorizations) and structural (empowering a type of agency that cares very little for such things at the expense of Justice and State, which do). DHS has seized over $500 million in currency from people who didn’t follow said signage. State and local taxation is usually regressive in America. Code and Consequences > This request was intended to inform the implementation work. Instead, all hell broke loose. > > … > > After the 2019 CNSTAT meeting made clear that evaluators were not accounting for the biases of the published data, the Census Bureau attempted to inform stakeholders that they were not comparing their analyses to ground truth.
9ffc1389-c69a-4cf4-89f9-e44af2657a6c
trentmkelly/LessWrong-43k
LessWrong
We Agree: Speeches All Around! In the Catalan autobiography of James I, the Llibre dels Fets, King James often describes the advice given to him by different nobles and princes of the Church (read bishops). Oftentimes they disagree; sometimes he turns out right, and sometimes they turn out the wiser counsellors. Scholars often regard this frequent decision-making dialogue as evidence that James wanted not only to give an account of the great accomplishments of his life, but also to provide insight for future kings and ministers of Aragon-Catalonia. There is much to say about the nature of this advice, the strategic and tactical reasoning, the difficulty of passing down rational statesmanship, and interrogation into just how “rational” this statesmanship actually was. I am not going to focus on those issues. Instead, I want to bring to light a common knowledge dynamic I noticed in this book that resonated in my daily life. My day job requires a lot of meetings. Oftentimes in these meetings my colleagues and I will hit on an agreed course of action, but then instead of saying, “We are agreed. Let’s go!”, we will continue talking ourselves into the decision. Once a decision has been reached, each person inexplicably waxes poetic about their own reason for why they believe this is a good or right decision. This happens quite frequently, and I do not think anyone recognizes it as weird. To be clear, this is not part of some in-house “Guideline For Decision-Making”; it is a spontaneous event of human interaction. Up until this week, I thought this exercise was either an attempt to cover up uncertainty or a waste of time. But perhaps there is some utility here. Is this practice a way of creating more agreeance? Congratulating ourselves on being in charge? What’s the deal? Is it a way of rebuilding bonds that may have been strained over the course of discussion? Or is it just a ‘Midwestern USA' thing? James I helped me see the light. Before the invasion of the island of Mallorca, the Corts and councils convened to d
5e69b62b-18e6-4d70-b238-c1cc79e31b4b
trentmkelly/LessWrong-43k
LessWrong
Open thread, Sep. 28 - Oct. 4, 2015 If it's worth saying, but not worth its own post (even in Discussion), then it goes here. ---------------------------------------- Notes for future OT posters: 1. Please add the 'open_thread' tag. 2. Check if there is an active Open Thread before posting a new one. (Immediately before; refresh the list-of-threads page before posting.) 3. Open Threads should be posted in Discussion, and not Main. 4. Open Threads should start on Monday, and end on Sunday.
f6e54966-bd9d-4d51-952b-dcff629e9b13
trentmkelly/LessWrong-43k
LessWrong
I’m confused about innate smell neuroanatomy (This post is probably only of interest to neuroscientists. I’m mostly writing it in the hopes that someone more knowledgeable will chime in and help me out. There’s a comments section at the bottom, or email me.) (See updates at the very bottom—I might have an answer now.) tl;dr In animals, specific innate reactions are reliably triggered by corresponding specific smells—for example, odors associated with natural predators tend to trigger avoidance behavior, even in the absence of any prior experience of those odors. In order for this to work, I think odor information needs to get from the nose to either the hypothalamus or brainstem, without passing through any of a long list of regions that includes the amygdala and the whole cortex. I’m struggling to figure out what this pathway is, if any. I offer my best current guesses as to what’s going on. Background Why I expect direct projections of smell (like all other senses) to the “Steering Subsystem” It’s well-known that animals have numerous specific innate reactions that are triggered by specific smells. For example, odors associated with species-typical predators or unhealthy food may trigger avoidance, odors associated with species-typical healthy food may trigger approach and eating, odors emitted by conspecifics may trigger mating, aggression, or other behaviors, and so on. Meanwhile, I continue to believe that a large fraction of the brain, which I call the “Learning Subsystem”, including the whole cortical mantle, striatum, cerebellum, and some other stuff, “learn from scratch”, a term that I’m using in a very specific way defined here; and meanwhile I think the rest of the brain, which I call the “Steering Subsystem”, particularly including the hypothalamus and brainstem, is a repository of innate “business logic” such as “if I’m fertile, increase my sex drive”, as discussed here. For sensory input processing, there’s a nice story that goes along with that two-subsystems picture. The sensory input
25ecb46b-0c63-4893-b436-1a1c09fd4e26
StampyAI/alignment-research-dataset/youtube
Youtube Transcripts
220. June Ku on MetaEthical.AI hello and welcome to session 220 in the aict.com reading group tonight we have june coo with us presenting her work on mythical ai she is she described herself as the um as a computational meter ethicist and as far as i can google she's the only person in the world with that title so june thank you for coming uh yeah um i appreciate everyone coming here and uh so today i'm going to introduce my uh research on uh metatoyi and uh it's basically a technical proposal for uh how to compute an ethical goal function that would be suitable for like a smarter than human artificial intelligence or in slogan form how to how to get an ai that does what we should want it to do so my approach here is to basically directly uh directly tackle some key philosophical problems including uh meta ethics and then problem of intentionality or mental content and that's broken into two sections first semantics and then first the syntax and then the semantics so what is math ethics well ethics is about what to do uh meta ethics is kind of a layer more abstract than that it asks things like what is the meaning of ethical concepts and what is the status and metaphysics of ethical facts so for instance are ethical statements the sorts of things that can be true or false and if so what in the world would make it true of ours um so i think maybe a good intro to meta ethics is imagine that you're translating some foreign language and and you want to know what if anything you should translate into the concept should um one thing you don't want to do is just immediately jump to your ethical theory as an analytic group so for instance maybe you're a utilitarian in your ethical theory i i think you still shouldn't think that should is just synonymous with happiness maximizing because then you're going to run into issues if if someone says some people should suffer in hell then it seems like you're going to have to attribute to them this incoherent thought that suffering in hell somehow maximizes happiness when they want to just be retributivist about it so if you think of all the different things that people have held to be ethical or not and do so and somewhat coherently we know what they mean when even if we disagree with them then i think that starts suggesting that the actual content of the ethical theory is not that central to the meaning instead i would look at the inferential roles or conceptual intuitions surrounding the concept of should so for instance generally if i'm judging that i should do something that usually comes along with motivation to do it um if you're if you're translating me as saying i should do something and that never has any tied to my motivations you might start questioning what exactly right translation so similarly i think it at least purports to be factual we assert or deny ethical statements we argue for or against them we can wonder and inquire into what we should do and when we're saying that someone should do something then there certainly is a sense in which we're trying to influence their behavior but it's not just any old type of influence so we don't we're not just trying to manipulate or brainwash to watch them instead it seems like we're trying to get them to recognize reasons that they should do it so so i think uh if they should do something then generally they should be able to correctly reason from it from something that they already assessed um and uh i think topically talking about what we should do invites this kind of open-ended reflection if 
if i say you should do x because of y then then we can ask in turn well okay but should i do y and it kind of always makes sense to ask that question um and and finally deliberating about what to do uh seems to not just tend to come along with our motivations not just correlation but i would argue uh this should actually be at least a causal tendency uh so that uh deliberation isn't just this epiphenomenal thing that has no causal effect on anything so i think what this stuff starts pointing to is that ethics presupposes a philosophy of action or some kind of normative psychology and you might notice that in general ethics seems to only apply to agents and usually human beings adult human beings and not to uh other animals um instead it seems to be restricted to agents who have some capacity to reflect on their desires and when they're reflecting on their desires they're assessing them according to some sort of standard and that assessment exerts some causal control over their desires and then similarly for any given standard we could assess them according to some other standard and that similarly exerts control over those standards and so i model all of this by positing higher order preferences or utility functions so these are things that are going to be isomorphic to mathematically isomorphic to normal utility functions um but instead of governing actions they're going to govern other preferences through normative judgments um so this leads to the statement of my meta ethics uh which i call norm descriptivism um which is that ethics reduces to which values best satisfy these higher order decision criteria criteria i just kind of use synonymously with the higher order preferences or utility functions and so my argument for this would be that this is the best way of systematizing and explaining the conceptual intuitions from the previous slide and on this view ethical facts just turn out to be the correct answers to the questions that we're asking in deliberation about what to do um so i guess to to go from the meta ethics to the ethics i guess you would want to figure out what are these questions that we're asking in deliberation and my approach to that is just to give a general theory of mental representations where whether it's a belief or a goal it's counting as a mental representation i'll give a general account of uh how mental representations work and um and then that would fill in the content of these deliberative questions and therefore of ethics so philosophers call this the problem of intentionality the problem of intentionality asks things like what are mental representations and what determines their content in this first section um we're going to start with just determining the logical form of an agent's representation and so my answer borrows a lot from daniel dennett and his intentional strategy so here's a quote from him saying what it is to be a genuine believer is to be an intentional system a system whose behavior is reliably and voluminously predictable via the intentional strategy so um as far as i know dennett doesn't get into how you might work this out in technical detail so so that's what i've been working on um so i'm going to define a sort of space of intentional strategies or decision algorithms just mathematically what does this space look like well a lot of it is going to be pretty familiar from standard decision theory you're going to have credences which are just assigning a subjective
probability from 0 to 1 to various logical causal formulas that'll include conditional probabilities as well utility functions or preferences where you assign some real or rational number to some formula being satisfied um there's going to be the inputs and outputs inputs are going to be a subset of the credences that correspond to like peripheral sensory brain events so sense data essentially um and then and then the outputs would be the actions governed by the decision algorithm so it'd just be like motor output and all of that is fairly standard so far but the main thing that's new is is these higher order preferences or utility functions sometimes i call them accepted norms and these are again very much like the utility functions they're also assigning real rational numbers to formulas but in this case these formulas are generally going to be referring to not the external world but to other utility functions or preferences within the agent so all of that defines a decision state and then we're going to have state transitions that describe the dynamics of how an agent moves from a given state to another one based on new inputs coming in um so so that that just sort of tells you all the all the possible intentional strategies or decision algorithms that we might attribute to our brain but uh given some some brain we want to pick the best one and so so we want some notion of what is the best intentional explanation uh it's got a few components um so first we're looking for one that best compresses the brain's behavior and the compression is kind of a way of favoring the simplest and best fitting decision algorithmic explanation of the brain's transition behavior next we've got some measures of rationality so this is going to include things like probabilistic coherence uh instrumental rationality and that's going to include uh the equivalent of instrumental rationality for the higher order preferences um and just basically just amounts to some kind of principle of charity uh in interpreting what the brain is doing if you could attribute to the brain some rational thing that it's doing and some crazy thing then all else being equal attribute to it the more rational thing and then finally we want these explanations to be ambitious so it's trying to account for as much of the brain data as it can ideally anything left over in the brain data is more just noise than it is a decision process um okay so so so far uh that's that's really telling us what is sort of the most useful model of of a brain but you might have this worry um about wanting a more realist criterion as opposed to like the instrumentalist criterion and um i i think dennett himself is a little wishy-washy on how realist or instrumentalist he wants to be but but basically i have this worry that couldn't you just be coming up with this decision algorithm as a useful predictive model but but that's not actually what the brain itself is doing and so i add uh a further condition that i borrow from david chalmers um so chalmers has this um uh paper on how when a physical system implements a computation and so in our case the physical systems that we're going to be interested in is going to be the brain brain states and their causal relations to further brain states uh so it's just actually going to be um like a judea pearl style causal model of the brain and then uh and then i've introduced what the decision states would be and the state transitions between them so the implementation function f is supposed to take a
brain state and tell you what decision state that brain state is in so it'll tell you the credences and utilities uh preferences things of that sort um and so the chalmers criterion is basically this equation we want to kind of make sure that whether we start at a given brain state and move to the next brain state caused by it and then interpret it with f into the decision state that this route gives you the same result as if you went this other route first you take that brain state interpret it into the decision state and then take the state transition to reach the final decision state um and so so we're going to require this not just for the brain states that we've actually observed but even counterfactually um in the causal model for for all the possible brain states um and there's more details in his paper does a rock implement every finite state automaton and and chalmers develops this theory as a way of saying no it doesn't which is hopefully the intuitive result that we want um okay so so that that covers how we would take a brain and try to figure out what formulas the syntax uh that we should attribute to it but then given the syntax what if anything do these logical expressions refer to or what are the truth conditions of the formulas so there's a few principles that are grounding the reference in my theory so first we're going to have these self-denoting input expressions so that subset of credences these are supposed to be the sense data and they're going to refer to the brain states that implement credence in them so they're kind of self-referring those brain states are kind of self-referring in that way so you might think of their content as something like this sense datum is occurring or if we want to make it even more simple uh just a pure demonstrative this and if we're trying to sort of build up a theory compositionally starting from some atoms and building up molecules then we kind of want to start with we kind of want to try to find something that's possible that's very simple and primitive and and i think this is a good candidate i think everything kind of also carries information about itself so it's not surprising that we could have things stand in for themselves um and uh also this kind of makes the whole project work because these are gonna serve as anchor points for logical and causal combinations of these expressions so if you if you have a bunch of these sense data referring to their own brain states then then we could start talking about a conjunction of them or positing a hidden cause that that causes that conjunction of sense data and that starts allowing us to refer to other things um another thing grounding the reference is uh these inferential roles for connectives so connectives just being things like conjunction or disjunction causation so imagine that you have an agent where we observe the following dispositions when they believe the proposition p and the proposition q then they tend to infer this new proposition p star q and when they believe p star q then they tend to infer that p and infer that q um so you you might uh um you might notice that this seems to match onto the truth table for conjunction and so this idea comes from ned block in his conceptual role semantics um and the idea seems to be that uh we could figure out that this star operation seems to uh be the conjunction because the inferences that these are involved in are basically matching the axioms for conjunction and so just to generalize to other connectives being
grounded in their axioms and so if if we're going through this process of uh attributing the syntax um then when we when we attribute uh these connectives in a way that deviates from these axioms these are going to be uh punished by the coherence score um okay and then and then finally just so those gives you some ways of building up um some references um and then there's this uh a more general idea of how to if you already have some old terms that you understand there's a this ramsey lewis uh sometimes carnet uh is thrown in method for defining uh new new terms originally it was for theoretical terms like like for a scientific theory um here we're gonna just use a simple example of car theory i think this also comes from black um and uh so so imagine this like a scientific theory and it's introducing in in this uh lavender color um it's interested introducing some new terms carburetor and ignition chamber uh using some old terms like fuel and air and we're assuming that we already understand the fuel and error and other terms but that this theory is introducing carburetor and ignition chamber so it might say the carburetor mixes fuel and air and sends the mixture to the ignition chamber which in turn blah blah blah now one one one thing you might be worried about is if we're maybe defining the carburetor in terms of its interactions with the ignition chamber and we're and we're defining the ignition chamber in terms of its interaction with the carburetor is that going to lead to some kind of vicious circularity and definitions uh it turns out that there's this nice technique that kind of shows that that's not really that big of a concern um what what you do is called rancification um and uh you take you take your uh theory and you replace any of the new terms with variables so here carburetor has become x and the ignition chamber has become y and then you just do existential quantification over it so now you're saying this is called the ramsay sentence now you're just saying there exists some x and there exists a y such that the x mixes fuel and air and sends mixture to the y which in turn blah blah blah and so this is a nice way of if you already have the old terms figuring out the meaning of new terms to refer to whatever fulfills the functional or causal goals positive for them so in this case we've done it with uh with objects um but if you use a second order logic this can generalize to predicates and relations as well and so i kind of want to apply this pretty globally and holistically um but uh the usual way it's it's talked about is as you have the entire theory being true so that that we have um like all of the sentences uh um are being true but when when you're moving more towards uh um filling in all the mental representations of an agent then then uh probably they're gonna have some false beliefs and we'd like to still uh apply this method even if some of them some genome are false so i'm kind of weakening the condition here to allow for some error and and then we'll just kind of try to find the the set of uh um beliefs that would minimize uh squared or the set of interpretations uh semantic interpretations of their beliefs that would uh minimize the squared error as we're filling in the functional causal roles um and let's see yeah i i can go into more technical details later but uh but uh but just hopefully gets you some intuitive idea of what principles i'm relying on um okay so to put this all together uh here here's basically how i propose computing readiness in five 
steps. First step: we start by assuming the AI is given a low-level causal model of the world and of the adult human brains within it. Second, we take those brains and attribute the syntax and dynamics of mental representations to them; part of that syntax, for the higher-order preferences, is their logical form — not yet what they refer to. Third, with the higher-order decision criteria, we iteratively apply those criteria to figure out what rational first-order utility functions these brains should have. So far these rational utility functions are still couched in the agent's language of thought, so the next step is to translate them, using the semantics, from the language of thought to external world states — for instance, the causal model the AI has. That makes them comparable, so finally we aggregate everyone's rational utility functions using some social choice or social welfare function. And I think that's it. There are some credits for images and some technical details in an appendix, which I'll keep on this slide. I'd appreciate any questions.

Okay, thank you very much, June, for your presentation. The first question will be from Stuart Armstrong.

Thanks for the presentation — there are a lot of interesting ideas in there. The first question is just a general meta one: where do you see your project as being the most incomplete, and where as being complete? Of these five steps, do you think some are basically done while others need a lot of work? What's your feeling on that?

A lot of my background has been in academic philosophy, but then I moved into software engineering, so I suppose I'm taking an engineering approach and trying to come up with something like a minimum viable product for computing friendliness. One thing that could be filled in more is the areas where I'm taking liberties with biological or psychological plausibility. I'm not requiring agents to reason perfectly by any means — it's a graded, aggregate coherence score — so that's one way I accommodate some psychological plausibility. But there might be further ways, or different types of logics better able to capture how humans actually reason. For instance, right now all the credences sit in one big belief box, but maybe psychology suggests there is some kind of massive modularity going on in the mind, and maybe that modularity should be explicitly represented in the model. So there's a whole category of ways in which, despite trying to accommodate more, I may still be assuming that human brains are more computer-like than they actually are. That's one big area where I can see a lot of room for improvement.

Would this be about your first step, for example?

Well, the first step is just the starting point — I'm simply assuming those are
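A minimal pipeline sketch of those five steps, in Python (my paraphrase, not the actual SetlX code; the step functions are passed in as placeholders for the machinery described in the earlier slides):

```python
def compute_friendliness(causal_model, adult_brains,
                         attribute_syntax, apply_higher_order_criteria,
                         translate_semantics, welfare_fn):
    """Sketch of the five-step proposal. Step 1 is the assumption that a true
    low-level `causal_model` and the `adult_brains` within it are simply given."""
    rational_utilities = []
    for brain in adult_brains:
        # Step 2: attribute the syntax/dynamics of mental representations.
        representations = attribute_syntax(causal_model, brain)
        # Step 3: iteratively apply higher-order decision criteria to get a rational
        # first-order utility function, still couched in the language of thought.
        lot_utility = apply_higher_order_criteria(representations)
        # Step 4: translate it from the language of thought to external world states.
        rational_utilities.append(translate_semantics(lot_utility, causal_model))
    # Step 5: aggregate everyone's rational utility functions.
    return welfare_fn(rational_utilities)
```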
inputs to the AI when computing this. I guess we could relax some of that as well: instead of a single causal model that we assume is an oracle telling us the truth about the world, if we have a probability distribution over causal models of the world, I think a lot of this should carry over straightforwardly — conceptually, at least, it should hopefully be pretty clear what to do.

Yes, I don't think standard uncertainty is a problem here; that's something we're quite used to dealing with. Would you like to do the first question, Søren?

Sure. One of the things we discussed in the reading group was a path towards feasibility — both in terms of actually implementing this, and in terms of making a simple end-to-end test with the software you have already developed. Would it be possible to make a world with two brains that want ice cream or chocolate, and then actually see the computed utility function from that?

Let's see. Right now it requires infinite computational power, mostly because I'm using Kolmogorov complexity in various places, which is uncomputable. You could substitute finite approximations — minimum message length and minimum description length are existing ways of getting finite approximations to Kolmogorov complexity. But even once you make it finite, there are virtually no performance optimizations whatsoever in most of my code; many of the algorithms are simple brute force, written more to answer "can we even solve this with infinite computational power?" — just to make clear what we're aiming at. It would certainly take a lot of work to pare that down. I do have some test coverage: on my website, any time you see a check mark next to a procedure, it points to a test of that procedure, and last I checked maybe 47 of the procedures have some tests. I would also like to build some very simple toy model to try this theory out. It's going to take some work, because there are lots of places where things are super-exponential, and some of the testing procedures rely on caching just to have any hope of being run at all. There are probably engineering tricks you could use so it doesn't actually have to compute all possible decision algorithms that might correspond to a brain — maybe it's enough to sample from that distribution. So yes, there are many places where you could start making this more computationally practical. Aside from a couple of places where it was useful for writing tests, there hasn't been much work yet towards that kind of end-to-end test, but that is certainly a direction I'm interested in going.

Okay, thank you for your answer. Stuart, would you take it?

Yeah. One of the great advantages of the way you've laid it out as a program is that it provides clarity as
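One standard way to make the uncomputable Kolmogorov-complexity scores finite, in the MML/MDL spirit June mentions, is to fall back on an off-the-shelf compressor. A rough sketch (zlib is just a stand-in here, not anything the SetlX code actually does):

```python
import zlib

def complexity_proxy(description: bytes) -> int:
    """Crude upper bound on Kolmogorov complexity: the length of a
    compressed serialization of a candidate interpretation."""
    return len(zlib.compress(description, 9))

def compression_score(interpretation) -> float:
    # Shorter (more compressible) intentional explanations score better,
    # in the spirit of the Occam's-razor term discussed in the talk.
    return -complexity_proxy(repr(interpretation).encode("utf-8"))
```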
to exactly what is assumed. One of the great disadvantages is that it doesn't let us see how big an assumption, or how much work, is hiding in one line of code versus another. Some lines may be entirely fine, some may need a little work, and some may be hiding a whole part of the problem. Now — this is the bit Søren knew I was going to bring up — on the Occam's razor result: you defined the best intentional explanation as having maximal compression, high rationality, and ambition (let's get the slide up). Well, let me give you an explanation that is absolutely fantastic on all those fronts: humans are fully rational in every single thing they do, and they always pick the perfect action. As I've shown in my result, this gives you the best compression, it is obviously fully rational, and the ambition is perfect — it explains everything.

Okay — you're talking about your no-free-lunch result? I've been wanting to dig into that further; I've only barely skimmed it. But I do wonder whether my setup is a little different from the one you consider, in particular because of the Chalmers criterion that has to be in place: whenever you attribute a decision algorithm, it has to correspond with the brain's transition behavior.

That does not seem to be a problem here. "Humans are fully rational" — we all agree, as a degenerate example, that it's wrong, but we need to find out why it's wrong. And when you do that, what happens is that the utility function expands to almost the whole of the brain: the whole brain can be seen as computing the utility function, or the reward function. Then you zoom in on a small set of neurons — the input-output — and they implement the decision procedure, which is basically "follow what the rest of the brain has computed." It does not seem to me that it would be that hard to designate a particular part as the rational decision-maker, because defining a fully rational decision-maker does not take much code, and you can assign it to just a few neurons that pass on the message: the rest of the brain says taking this action is the right thing in terms of utility, the intermediate neurons say "thank you, we will therefore take that action," and that's your rationality module. Then you seem to have a model that works, in your sense.

I think this is reminiscent of Putnam's paradox, which originally motivated Chalmers in this paper. Putnam was using a model of finite-state automata, and any given state there was treated as simple — it didn't involve any internal structure. One of the moves Chalmers makes, which I didn't really talk about on this slide but which is in "Does a Rock Implement Every Finite-State Automaton?", is moving to what he calls a combinatorial-state automaton: instead of allowing simple states to potentially encode the whole of some complex state, he explicitly models the internal structure within any given state, so within a physical system a state is implemented in a bunch of sub-states. So I do still
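Stuart's degenerate interpretation is easy to state as code. A toy sketch (entirely my illustration; `brain` is a hypothetical object whose `forward` method returns a score per available action) of why a naive compression-plus-rationality score might not rule it out:

```python
def degenerate_interpretation(brain):
    """Stuart's 'fully rational human' explanation: call (almost) the whole
    brain the utility function, and make the decision procedure a trivial
    pass-through that picks whatever that computation recommends."""
    def utility(action, state):
        return brain.forward(state)[action]   # the whole brain is "the utilities"
    def decide(state):
        # the "rationality module": a few pass-through neurons taking the argmax
        return max(brain.actions, key=lambda a: utility(a, state))
    return utility, decide
```

The decision procedure compresses extremely well and is perfectly rational by construction, so any penalty has to come from somewhere else — which is the gap Stuart is pointing at.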
wonder if that is able to get around this type of objection.

I have reasons to believe the result still applies with internals — well, I know the result applies when you know the internals of the agent; it just depends on how many. Would you mind if I shared something here? I know it's your talk — it's not a paper, it's a blog post. ("Should I stop sharing my screen?" "No, that's fine, I'll just put it in the chat.") Okay: "Learning human preferences: black box, white box, structured white box." Essentially, the problem is not just that you have to identify how the algorithm works inside, but what the correct labels are for the different parts of the brain: is this bit beliefs, is this motivation, is this rationality? Of course it's much more complicated in the human brain, but what are the criteria you would use for assigning these labels to the different parts of the brain, and whether that can be done — whether it can be automated — is the question. It can be automated if you make enough assumptions, but the kind of structural assumptions you've been talking about do not seem to be sufficient, or even close to sufficient.

Okay — so the white box includes knowing its internal structure?

Yes. And the thing is, what we need is what I here call structural assumptions — in other places I call them normative assumptions — which tell you, given that you know the internal structure, how to assign labels to it. There's a rather silly example where something is labeled "tuna," basically just to show the arbitrariness of the labels. What typically happens when humans approach these problems, in a way that resembles your description, is that we define these things with very few syntactic connections: the beliefs take input from the senses, the action selector is the bit that outputs to moving about, the beliefs and the preferences feed into it, and so on. But with those few structural assumptions there are generally trillions of ways of taking a complex algorithm and matching those roles. So there seems to be a need for some extra information or assumptions — what I call structural assumptions. It's not hopeless, but I want to talk about your approach, not mine. As it stands, I would say this is a huge gap in your approach that is just a few lines on your meta-ethics page — just as an illustration of how large problems can hide in small lines of code.

Okay, well, I look forward to reading it — I don't think I've seen this post before. I've been told I'm not the clearest communicator; in any case, if you want to talk about it further, please do let me know.

Shall we take the next question? Jack, you had a question.

Sort of a clarifying question. In your five steps, I think the third step is applying higher-order decision criteria to reach rational utility functions. For that to work, does there need to be an assumption that humans — or whatever brains you're modeling — have coherent utility functions? Because it's not immediately obvious to me that humans do have coherent utility functions, or that we should expect that to be true. Maybe it is, I just don't
know.

Let's see. I think this might be one of the places where, for the first version, it's probably going to end up trying to find the utility function that is closest to theirs and that is rational. That might be something to relax in later stages. The way I've encoded the utility functions matters here: if I had modeled the agents with, say, ordinal utility functions, there would be room to model them as irrational, but because I took the simpler approach of modeling them with cardinal utility functions, that's going to make it so that they all do have utility functions. I am interested in whether we can relax those assumptions and still make the project work, and my intuition says yes — for instance, maybe drop completeness, and still do things with the different possible ways of making the preferences complete and run the algorithms off of those. But that's probably for future work.

Gotcha, okay, thanks.

Okay, my next question — or point — instead of pointing out something that might be unexpectedly difficult, is to point out something I think might be unexpectedly easy: the grounding of, to use the GOFAI term, the symbols in the brain — translating syntax to semantics. This may be a lot easier than we think, because there seems to be an empirical handle on it: can we use the symbols in the brain to predict things about the outside world? What is the relationship between the symbols in the brain and the outside world, across the large set of environments that humans find themselves in?

Yes, I do think there should be a lot of fairly easy test cases there. I have a little bit of a remaining worry, though, because the most crucial symbols to ground are the ones that show up in the higher-order preferences, and those are a bit further removed from everyday action, so I wonder whether they will be less amenable to that sort of treatment. There we may want more of a theory — a theory that has been tested on the easy cases, but a theory guiding us in figuring out what those representations are.

Well, you've touched on a subtle issue there, which is that our symbols are well grounded by our experience in the environments we know. The symbols that are not particularly well grounded show up when you place the humans, or the agents, in a new situation where some of their symbols don't work the way they're used to. One of the examples I use: what if someone creates a species — human-like — a slave species that wants to be slaves, definitely wants to be slaves, but doesn't enjoy being slaves? They're recognizably human, they have preferences (to be slaves), they have enjoyments, and in this situation a lot of assumptions that normally go together start breaking down — or "splintering," as I've been calling it. This is the situation in which you generally find that these higher-order symbols, or the more abstract symbols that
you thought you had a good grasp on, suddenly you aren't so sure anymore, or they can be defined in multiple ways. In a way, this is what philosophers are doing all the time: they take common concepts and push them into extreme thought experiments where they break down. But a world with a potentially superintelligent AI is a world where the AI can push the environment into incredible new states, where the philosophical thought experiments become the actual thing. Do you have any way of dealing with that — when symbols become ambiguous, or when symbols that used to be synonyms are no longer synonymous?

So I haven't actually gone into that much technical detail, either in the meta-ethics paper, which some of you read last week, or so far in this presentation — I've been giving a somewhat simplified view — so maybe I should actually get into this appendix slide. In my first pass, and the way the meta-ethics paper talks, I had been assuming that these higher-order utility functions form some kind of neat hierarchy: maybe there's just one highest-order utility function, and then your job is relatively easy — figure out what lower-order utility functions it prescribes, and keep propagating that down until you reach the first-order utility function. But I don't think that's exactly psychologically plausible. A more realistic model has utility functions that can mutually influence each other, with no single one on top. In that case I want to apply them iteratively and simultaneously — each choosing outputs that satisfy its decision criteria — and keep updating until you reach some kind of stationary equilibrium. Even that comes with an assumption we might not be able to retain, namely that there is a fixed network topology of which accepted norms, or higher-order utility functions, govern which others; it's possible that this is actually path-dependent, depending on what input you feed it. So I actually think we have to move to an even more complicated version, where we simulate all continuations of an agent and apply a weighted social welfare function: continuations that better satisfy the decision criteria and preserve agential identity — which is a bit like personal identity — get more weight. Basically, continuations where the decision criteria are better satisfied and there have not been other, irrelevant changes to their values carry more weight, and then you apply a social welfare function to aggregate them.

These are close to the kind of thing I have been thinking about, and — I don't want to say that because you're thinking similarly to me you must be right — but it means at least that you have been thinking about the same issues and how to address them. Just to check one thing: there is no strict ordering — a weak
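The "simultaneously update until a stationary equilibrium" idea can be sketched as a fixed-point iteration (my gloss; in the real proposal each update rule applies one of the agent's higher-order decision criteria rather than an arbitrary function):

```python
def equilibrate(utility_fns, update_rules, max_iters=1000):
    """Repeatedly let each (higher- or lower-order) utility function revise the
    others according to its decision criteria, until nothing changes."""
    current = list(utility_fns)
    for _ in range(max_iters):
        revised = [rule(current) for rule in update_rules]
        if revised == current:      # stationary equilibrium reached
            return current
        current = revised
    # Non-convergence / path-dependence is exactly the worry raised in the talk.
    return current
```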
higher-order utility function can be overruled by strong lower-order preferences?

Yeah, I think that should be possible.

Say a mild religious preference concerning sexuality can be overruled by a strong object-level one, to pick an example?

Let's see — yes, I think so. And in that case I'm going to end up saying something very similar; I'd just take slight issue with the way you described it, as the first-order preferences overriding the meta-level preference. Instead, I would probably model that as a different meta-level preference, which says something like: when you have first-order values that strongly conflict with some weaker higher-order preference, allow the first-order one to override it — but all of that is going on within a second meta-level preference. One problem with the language here is that "higher-order preference" is ambiguous between a couple of things. One notion, which for the most part is not the one I'm using, is a first-order preference — a preference in the sense that it governs actions — that happens to have higher-order content: content that talks about other preferences. You could have a preference to change your preferences. I think those are different from higher-order preferences defined not by their content but by their causal-functional role in changing your other utility functions, because the first type really is just a first-order preference: it governs actions and only affects other preferences through actions. And when we're talking about things like being moved by moral arguments, I don't think we're talking about that pragmatic route.

Okay, I'll have a follow-up question on that after the next question.

Okay, so that would be my question. One thing I didn't see — and maybe that's because I didn't read it very carefully — was an explicit comparison with AIXI. I'd like to hear your thoughts about how this relates to AIXI, in particular the fact that AIXI is, so to speak, outside the universe. It seems possible to me that your construction also kind of assumes that the computer implementing this is outside the universe — or is that not a requirement?

Let's see. All of this is supposed to be going on within the mind of the AI; the AI is supposed to have a complete, true causal model of the world and the brains in it. I guess the Cartesian separation — viewing yourself as apart from the world — could theoretically come up for the brains, or for the AI itself; I'm not sure. In a later stage there are things where this might come up more. I think meta-semantics plus meta-ethics actually gives you most of meta-philosophy, so theoretically I should be able to use the resources I've built to do some kind of verification or validation of the philosophical process that led to the creation of the AI. In that case you probably would run into these worries with self-reference, in
that the AI would be modeling itself in the world, as being caused by some causal process from the brains, and trying to check whether they made the right philosophical decisions. All of this is still very speculative and hand-wavy, but that's where I imagine some of these issues might come in. As it is now, I don't think I've had to have the AI model itself anywhere in here: it's just modeling the brains, figuring out what they value, and aggregating those values. I don't think I've had to have the AI talk about itself in the world, so maybe it's just ambiguous whether it has a model of itself in the world.

Okay. Sorry, I'm having difficulty remembering exactly — when you were constructing the models there, and the different weighting — the main point is quite simply that most humans are not philosophers, and so we have not constructed higher-order preferences, or meta-preferences; we have especially not constructed meta-meta-preferences saying that a strong object-level preference should overrule a weaker one.

Well, certainly people haven't regimented their vocabulary to articulate these sorts of things as much as philosophers have. I'm not so sure that they lack these things, though.

What I was going to get to is this. First, what about someone who has not yet considered the problem, but would arrive at one solution if they did consider it? And second, what about someone who has not considered the problem and could arrive at one of two solutions, depending on which arguments they read first? Both of these are cases that are quite likely to happen. Would you say that their meta-preferences are undefined, or...?

That's exactly the sort of thing the material in the appendix was supposed to alleviate. There would be different continuations of the agent — in one you present one argument, in another a different argument — and as long as they both satisfy the decision criteria equally, they might just be on equal footing when you apply the social welfare function to aggregate them.

So are you saying that preferences that are not yet defined, but that could come up, are also included in the set of preferences to consider?

I think that's how I would like it to work. I did run into some technical issues, but certainly, at the very least, changes to existing preferences that might come up depending on different inputs are accommodated in my model. Did that make sense?
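A small sketch of that continuation-aggregation idea (my paraphrase of the appendix material, not June's actual code): each continuation — e.g., the agent after encountering the arguments in a different order — is weighted by how well it satisfies the agent's own higher-order decision criteria and preserves agential identity, and then a social welfare function aggregates them.

```python
def aggregate_continuations(continuations, criteria_score, identity_score, welfare_fn):
    """Weight continuations by criteria-satisfaction and agential identity,
    then hand the normalized weights to a social welfare function."""
    weights = [criteria_score(c) * identity_score(c) for c in continuations]
    total = sum(weights)
    return welfare_fn([(c, w / total) for c, w in zip(continuations, weights)])
```

Continuations that satisfy the criteria equally well end up on equal footing, which is the answer given above to the "which argument did they read first?" worry.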
One caveat: if we're talking about new preferences involving new symbols that the original agent didn't use, that is actually one place where my model doesn't do well. But as long as the continuation uses the same vocabulary as the original agent and just changes, for instance, how much it values things — or even has a new preference in the sense that the original agent could represent those outcomes fine but was totally indifferent, and now there's a new preference over something the original agent was at least able to represent — all of those types of changes are accommodated.

Okay. Well, I think this connects with my first point about new environments, so we can bunch them together as: what happens when there are new symbols — both when it's predictable that the person will always interpret the new symbols one way under any realistic introduction to the subject, and when they could interpret the new symbols in multiple different ways?

I'm trying to think how big a problem that is —

You don't have to solve everything right now.

I'm just trying to think whether it's a big problem or a medium problem. Within my model, I think the issue is this: think of that last step, where I'm translating the rational utility function from the language of thought to external world states. If I were going to try to accommodate these new symbols in some other possible world — can those be separate from the causal model of the world? It should still be possible from within the AI's world, so maybe as long as you can still translate them into external world states — they might have to be merely possible external world states or something — then maybe it will work, but it gets into weird metaphysical issues of how to line this stuff up. And there was another part I was a little unsure about that ties into this. The philosophical purist in me wants to say my values are grounded in my brain, so if you took my brain and put it in different circumstances, I want to say I have the same values. One thing we could do is put in a probability distribution over the worlds my brain could be in, but then that probability distribution ends up influencing the continuations, and the philosophical purist in me didn't want that happening. So I talked instead about all possible inputs, while making sure that the ones that just introduce non-cognitive changes — changes that don't come about from reasoning about your higher-order preferences — end up getting very low weight. So that philosophical purism does create a little extra difficulty here, and I'm not entirely sure whether to give it up.

I personally recommend giving it up — because if it's true you'll find that out anyway, and if it isn't it will be an obstacle in the design. I'd recommend hacking together something that kind of works and improving on it; if purism, or moral realism, or something like that turns out to work, it should naturally emerge in that context, but if it doesn't work and you try to impose it, something might break. That's just my advice.

Over to you, Søren.

Yes. One of the things that came up in the reading group was that this method is written in SetlX, which is somewhat of a niche programming language, and there's a sense that if, counterfactually, it had been written in Python, you might have gotten more engagement out of it. Also, the code seems to be optimized quite a bit for correctness, with the tests and
everything, and maybe optimizing for readability would have been a better choice — longer variable names and things like that.

I don't know if I would move to Python, but I am sympathetic to possibly porting it to a different language — probably something like Haskell is the one I'd be leaning most towards, because I wanted to keep it in a programming language with clear denotational semantics. My original idea was that once I had it written up in set theory, it wouldn't take too much to then write it out in standard mathematical notation, using LaTeX, for humans to read; or, if I wanted to keep it machine-readable and executable, which I do like, then maybe switch to something like Haskell, which also fits that pretty well — and certainly there are many more Haskell programmers than SetlX programmers. If I had infinite time there are definitely many things I could be doing. I chose SetlX because it has clear denotational semantics, I could imagine translating it into LaTeX later on, and it just seemed like less overhead than writing it in Haskell — would it have been worth writing in Haskell if it would then take me longer to release? Those were the sorts of calculations I was making.

Okay, back to me? Sure. You were talking about diachronic coherence and other rationality and coherence requirements. I'd suggest that some of these coherence requirements are actually themselves more akin to meta-preferences than to requirements. The kind of thing I'm thinking of is, for instance, temporal coherence. People enjoy eating when they're hungry and don't enjoy eating when they're full, and we've decreed that this is not a temporal inconsistency; there are other things where our desires go up and down — sometimes we want to watch romantic movies, sometimes tragedies — our desires and preferences in these areas fluctuate, and we still think we have an underlying coherence despite all that. But other things we decree to be temporal incoherences — like when we overeat and then purge, or contact an ex we really shouldn't. To pick a narrower example: if someone has a peak of sexual desire and sleeps with someone at that point, this is appropriate, fine, not an inconsistency; if someone has a peak of sexual desire and calls an ex inappropriately at that point, this is a bad decision. So the same impulse — I should have chosen a better example, but that's the one that sprang to mind — can be seen as either time-consistent or not time-consistent, depending on how we see it, and the way we see it comes from our meta-preferences. I apologize that these things are not fully worked out, but a lot of things like consistency are similar: there are people who exhibit the Allais paradox, people who violate various Dutch-book arguments with lotteries and other things,
but they can defend it. People buy extended warranties for things, and you can easily check that you lose money this way — there is no overall gain; if you don't buy extended warranties and just pay when something breaks, that's much cheaper over the long term. But some people value the security the extended warranty gives them. That feeling of security — you could call it an irrationality, or you could say it's a consistent preference, or meta-preference, that they are satisfying. So what I'm suggesting is that a lot of the coherence conditions can be seen as our own meta-preferences, and not meta-preferences that everyone has in the same way and to the same extent.

Let's see. In some of your examples — when you describe, say, the peak of sexual interest — it could be good or bad depending on the surrounding context, right?

It was more that having sex when your sexual desire is highest is a perfectly rational and recommended course of action, while calling up the ex you've had a difficult relationship with when your desire is at its peak is incoherent and predictably going to lead to mistakes. But the basic idea is the same: the same fluctuating background state can in some cases be seen as part of a single preference function — "have sex when it is most enjoyable" — or as a source of irrationality.

It reminds me of one response to the Dutch book, which is to say it's actually fine and to build the context into your preference. If they're leading me around a circle and I'm paying five dollars to go from A to B, five to go from B to C, and five to go from C back to A — if you build in enough context and say that even in the move from C to A, I just value moving to A after having already paid five dollars twice — if all of that can be in your preference, you could actually make it a coherent, rational utility function. It seems pretty implausible that anyone actually values things that way, but it reminded me of that. Some of these issues seem to be more about how we capture things within a single utility function — how context-sensitive our utility functions are.

As I say, I'm mostly just prodding at the various pieces to see how they work.

On how the utility functions actually work: I did want a little bit of psychological plausibility, because if you had to represent utility functions as specifying utilities for every single possible state — well, we are certainly not some giant lookup table listing every possible state. So I ended up using an additive utility function: given some state, you figure out which of the formulas you place utility in are made true, and you add up the associated utilities. This allows for things like: maybe you value p, so you put some utility on the formula p being true, and then you have another formula, p-and-q, and you place a different utility on that, and the utilities add up when both formulas are true. So maybe you generally like it when proposition p is true, but if q is around then you don't: you place five utility on having p, but negative five if you have q, so that if p and q both hold, you're just indifferent.
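A minimal sketch of that additive scheme (my illustration; a "formula" is represented here as a predicate over states, however the causal model ultimately defines truth):

```python
def additive_utility(state, weighted_formulas):
    """Sum the utilities of all formulas the state makes true.
    `weighted_formulas` is a list of (formula, utility-weight) pairs."""
    return sum(u for formula, u in weighted_formulas if formula(state))

# The p / q example from the talk: +5 on p, -5 on q,
# so a state where both hold nets out to indifference (0).
weights = [(lambda s: s["p"], 5), (lambda s: s["q"], -5)]
assert additive_utility({"p": True, "q": False}, weights) == 5
assert additive_utility({"p": True, "q": True}, weights) == 0
```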
So those are some of the things you can do with my current model. Certainly there are probably other ways to make it more realistic and more context-dependent, but a fair amount of that is already allowed just by the additive utility functions I have — in the limit you could add in enough formulas that it behaves like that giant lookup table of all possible states — so where you need to get more fine-grained, it allows a fair bit of that already, though I'm sure there are further improvements. But that did seem a little different from the measures of synchronic and diachronic coherence; I think those show up mostly where I work out agential identity. I didn't know if that's where you wanted to go with it, but that is supposed to be a measure, when you're running the continuations of an agent, of how much it is still the same agent or not, and it includes this —

Excellent — this is the sort of thing I've come to think about recently, and you seem to be ahead of me there. Agential identity is the — no, sorry, I'll come back to it; I'll let Søren bring in a question.

Okay. One part of the way the utility function is constructed seems to be almost equivalent to simulating putting humans — human brains — in different situations, and given that we're trying basically all the states the universe can be in, it's possible we are implementing not just mind crimes but every possible mind crime. Do you agree?

Certainly, if you naively found a computer with infinite computational power and ran this algorithm inside it — well, first it would probably crash, because there are tons of bugs, but if you fixed all the bugs and ran it, then yes, there would probably be tons of mind crime. So I'm not suggesting anyone actually fix the bugs and run it; that would be a big concern. The hope is this: a lot of what I'm trying to do here is define the ground truth for the value functions, and I'm not imagining that this would by any means be the actual algorithmic procedure the AI uses. But I do feel it can be important to have this defined somewhere in the AI, so that it knows what the ground truth is. Some of the other proposals, which I'm sympathetic to — like trying to get at our preferences via debate, and things of that sort, or some of the amplification work — are probably closer to what we might actually do in the near future, building up some kind of dataset from which we try to infer people's higher-order preferences. And presumably you could do things like that without committing mind crimes against the people you're simulating in your finite human brain.
So it might be that when you scale this down to run with actual finite power, you want to do a lot of that sort of thing. But I do think it's important that there be some recognition that we don't want to define the ground truth in terms of what comes out of those types of procedures — I have some worries there. If the AI can have these concepts, then maybe it can even take over creating new iterations of the best methods that finitely approximate this; but it needs some definition of this in order to know what it is finitely approximating. And of course, as we do the approximations, we also want to make sure we're not having it commit mind crimes either.

Thank you.

Okay. Suppose someone writes a psychology paper, or a philosophy paper, that is relevant to sorting out people's preferences. If your AI has been launched already, how do you imagine it taking this innovation into account? One of the biggest difficulties I have is knowing how much has to be put in by hand and how much doesn't — that's why I've been looking at things like diachronic consistency: is this in the human head, so we can delegate it to preferences, or is it something we just want to impose from outside? So: someone comes up with a psychology paper that illuminates some aspect of human preferences, or a philosophical thought experiment, and we'll assume for argument's sake that it's relevant. How would the AI take that into account if it happened during its run?

Let's see —

Or you can have it published beforehand, if you want. How would it take this data, which is already out there and relevant, but not in an AI-parsable form?

I guess it depends. Is the content of the paper a mechanism it posits about how our preferences work? Say it's the anchoring-bias paper: it points out a particular bias people hadn't realized was there before, and now they realize it's there and agree — yes, this is a bad bias, this is not a preference.

So before this, the anchoring bias was not known, and we'll presume we haven't put in enough for the AI to identify the anchoring bias from other principles — but someone writes this paper and a few people in the world agree it's a good idea. How would this enter the algorithm as data?

Well, the current version doesn't really work off data like that, right? The current version just assumes you have a low-level causal model of people's brains, and that's where it gets its data. But we could talk about what adjustments should be made to this algorithm when you discover a paper like that, and I do want, in the future, for the AI to have some self-modification abilities — so maybe we're talking about some future iteration that has the ability to take information from a new paper and integrate it with other information.

You wouldn't want to hand-specify "take this paper" or "take papers from this archive"; you want it to be able to take that kind of data and understand it — it might be a conversation between two top philosophers that is relevant for novel
situations, or something like that.

Yeah. For something like the rationality scoring metric you can make some adjustments: in general we apply a principle of charity, but if we know for a fact that humans are very prone to anchoring bias, maybe we don't penalize attributing anchoring bias as much as we would if it didn't fit some known pattern. That would be an example of a way you could change some of the scoring mechanisms here.

But that's doing it by hand.

Yeah — which is why it's not directly applicable to this version; maybe you want to talk about some future version.

Okay, these are some of the things I'm suggesting are worth bearing in mind. I think I have two more — no, three more — questions slash comments.

Over to the audience — well, I think we said an hour and a half for June, and we're close to the one-and-a-half-hour mark now, so June, if you need to leave, please feel free; otherwise I think we should give Stuart's questions priority towards the end.

Okay. The first one: do you have a population ethics, or a way of resolving that issue?

I guess I would file that under normative ethics, as opposed to meta-ethics.

The reason I ask is that you're getting the preferences from brains, and that is a certain population of brains that could be variable — so how are you going to deal with variable numbers of brains being created or destroyed?

Oh, I see. In terms of what I've coded here, I was trying to simplify it down to: take all the adult human brains at the point in time at which we're wondering whether to press the button to let the AI go. Presumably, if it's able to get the values these humans should have, and if these humans should value future generations, then just by scanning the existing humans' brains you should get the result that it should value future humans.

Okay, so I'm putting this in the category of delegating to current human preferences to resolve it.

Current human preferences, but idealized, right.

Yep — and that's the way I tend to go for a lot of these things too, especially issues of identity. The other thing: have you thought of a potential error-detection layer? A lot of our beliefs about human preferences, and about humans in general, are expressed in your algorithm, but we also have judgments like "this outcome is terrible" or "this outcome is stupid" that are hard to capture in this preference formalism and might be better captured as error checking. I was wondering whether you'd considered that as a way to go — some way of catching disastrous errors.

Certainly, for anything this abstract you would definitely want as much testing as possible to validate it. I don't know how far I've gotten in working out how you would actually do it, but I do think there should be plenty of behavioral checks if this were ever close to production.

I'm looking forward to that.

I have thought a little about, say, a shutdown button. And going back to the idea that a lot of meta-philosophy is meta-ethics, or can be
answered by meta-ethics plus meta-semantics: could you do something like take people's brains and figure out what their concept of ethics is — or which concept of theirs is closest to this meta-ethics, whether it plays the same role in their deliberations — and then apply it to the whole algorithm, and in effect ask everybody: if we have a way of figuring out the concepts various people have, does it actually match up with the meta-ethics that's been programmed into this?

I was thinking more along the lines of some form of compressed, accurate summary of where the AI wants to go, and checking that humans are not totally repulsed by that. But what you describe would be a separate thing along the same lines. I've sent you a somewhat silly example where I imagined a problem with CEV: the problem is that it follows the coherent-extrapolated-volition steps at every stage, arguably, but ends up in a terrible place — think of a sort of ultra-Buddhist that ends up destroying the world, in a way through altruism. It's hard to tell, because some of it depends on how you interpret distance between various utility functions and on the idealization processes, but it seems your approach may be vulnerable to things like that: a series of rational-seeming steps that end up in a very bad location. Some sort of checking, or an overall connection with the initial starting point, might be something worth thinking about.

I guess my theory is supposed to be able to capture pretty much any normal reasoning that humans do. So if you can write up this example and use it in an argument about what we should value, then theoretically my model should capture what's going on when you do that: you have some criteria you're subjecting your values to, and we should be able to tell whether those criteria are being applied correctly. Basically, if you're right that there is an argument here — is it fair to call it a chain of transitive normal reasoning? I don't know how to summarize it quickly — but if this constitutes an argument about how we should interpret our values, then within my model that should be capturable within some higher-order decision criteria, which we could then apply to get the result you want.

This is something that could be tested empirically, to some extent, because your process imagines idealized versions of humans, and the question is whether that construction is stable or unstable. If meta-norm arguments like this are included and have a strong effect, I'd expect a more stable outcome, where shifting minor preferences around doesn't make much difference; but if it's unstable, it could end up in many different places.

Yeah. I guess my model right now is probably agnostic on exactly how you specify the initial conditions — how you fill in the content of people's preferences — so there are probably some ways of setting it up that are stable and some that are not, and it would be really good
to know what sorts of features, in general, make it stable or not. There has also been — I haven't been following it as closely, because technically I'm not in academic philosophy anymore — some interesting work in experimental philosophy, where the whole idea is this: philosophers talk all the time about people having this or that intuition and use those in their arguments; experimental philosophers want to go out and actually test whether people have those intuitions, how strongly, and whether there's diversity among the people who have them. I haven't had a chance to read the paper properly, but skimming it, it looked like it was finding that there actually is a lot of universality — not necessarily in the final answers people give, but in the types of intuitions they bring up. Or it might be that research shows you can frame the problem one way to push people in one direction and frame it another way to push them the other way, but the susceptibility to framing itself seems to be fairly universal. So there has been some intriguing research suggesting there might be a lot of overlap among humans when it comes to ethical decision criteria, and that would certainly be better. I think my model can work even if that's not true — even if there is much more diversity — but it's a little easier if, for the most part, there is broad overlap in the content of these higher-order norms for actual humans.

I agree with you, and I think there is quite a lot of overlap between humans as well. I'll try to keep this brief: part of why I was thinking of tests is to distinguish moral systems that are defined by a stopping point — by stopping conditions — from those that are constructed up from a basis. If you want rational coherence between your preferences and meta-preferences, you can either do the building-up, or you can "do it until" you reach a point where coherence has been achieved. CEV is an example of "do it until," and "do it until" seems very dangerous, because it might do a random walk across all of ethics: all it really cares about is the stopping conditions. That means there are certain attractors in ethics-space, and when it hits one it stays there, but we don't know whether it will hit a good one early, or miss them all and end up somewhere bad, or somewhere simple. Whereas when I was talking about checking from the original thing to the end, I was saying: ensure there is not too much distance, in a sense — ensure that it is constrained to build up, rather than just wandering from the starting point until something happens.

Yeah, and I do model some of that. I look at a chain of agential identity between each continuation over a whole path, and we want to ensure not just that the beginning and end have decently high agential identity scores, but also that nowhere in the chain did it fall below a certain threshold.
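A small sketch of that chain check (my paraphrase): score each adjacent pair of continuations for agential identity and require every link, not just the endpoints, to clear a threshold.

```python
def chain_is_acceptable(continuations, identity_score, threshold):
    """Accept a path of continuations only if agential identity never drops
    below `threshold` at any link in the chain (not just start vs. end)."""
    return all(
        identity_score(a, b) >= threshold
        for a, b in zip(continuations, continuations[1:])
    )
```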
I also wrote some notes to myself in the code: I think I had a very crude stopping point, like "just stop at time ten million" or whatever, but obviously I'd like a more principled one — is there some way we could ask these agents, "are you at a stopping point?" Something like that might be the best way, but I didn't want to get into that complication for this version.

"Ask the agent whether we're at a stopping point" sounds like "continue until this condition is reached" anyway. I was wondering if we could talk more about this at some point, maybe next week?

Oh yeah, that would be great.

Cool, we'll sort that out. The other thing is that I'd encourage you to try to write out a lot of the ideas in prose, not just in algorithmic form, mainly because I've found it so useful myself to have to formulate my ideas in written form and try to get other people to understand them — no matter how imperfect I am at that, the process tends to clarify a lot of things. Thanks for your talk.

Cool — I appreciate you being here.

I would also like to say thank you very much, June, for joining us today. It's been a pleasure, I've certainly learned a lot, and I think everybody here has really enjoyed both the discussion and your presentation.
Is The Blood Thicker Near The Tropics? Trade-Offs Of Living In The Cold

A few centuries ago it was believed that the reason people near the tropics didn't achieve the level of affluence of their northern conspecifics was that the heat made the blood grow thicker, which slowed down their movements and their thoughts (thoughts at that time were held to take place not only in the head, but also in the heart). It's a funny theory, very catchy, as mechanistic as the time demanded and all that. No wonder it was appreciated for a while.

Many centuries have passed now, and we have a lot of better hypotheses for why there is less development in tropical areas than elsewhere. Here are a few:

* More diseases that consume family resources
* Lower average IQ
* Centuries of exploitation by Europe and the US
* Fewer institutions (there is a terrible paper by Daron Acemoglu, who I hear is otherwise a great economist, on that)
* A shorter east-west axis within a land area (Guns, Germs and Steel)
* More frequent natural disasters, in particular floods, leading to property damage

Probably all of those play a small role. I just want to say that, primitive as it is as an explanation, I still think that the heat, and the sunshine that comes with it, is a very strong factor, still today. Development is not my target though; my target is individual productivity and individual freedom, here thought of as the "amount of things per unit time someone could be doing", not political freedom.

So far I've spent three weeks in England, at the Future of Humanity Institute in Oxford, this month. During those 21 days, I have experienced strictly eight (8) minutes of sunlight. Outside it is freezing. So no wonder that all the interactions I had took place inside walls. Meanwhile, talking to friends back home at the Tropic of Capricorn, they had outdoor parties, picnics at the park, bike-riding days, shopping outside in the streets, free dancing at the street festivals, learning to do slacklining, the swimming pool, etc. In this grey, low-light world of English weather, with the added factor
The Ramblings of an Old Man Succumbing to Dementia My grandfather died several years ago, before I began to seriously consider cryonics. He deteriorated markedly as he approached death. Nevertheless, he was smart enough to want to be part of the new technologies "the kids were putting out these days." At age 90, with some help, he created a blog and posted this entry:   > As my memory weakens, I no longer perform much, but I deeply enjoy life. My wife, Genie, and my three daughters help me a lot, with food, walks, talks, and gifts. I usually feel good. Many other people say and do nice things for me. > > ... > > Life began as cells 3 ½ billion years ago and gradually spread out from one species to another. There is no evidence of any species living after death. Therefore, each human should enjoy life, itself. Long before our Earth formed, our Universe spread out 14 billion years ago, long after material existed, which may have been forever. On that basis, I think human lives are a result of amazing development. We can enjoy life deeply into old age and on almost to death. No human should weaken true enjoyment by physically attacking another human. A human may argue with another human with the purpose of keeping both lives enjoyable. >   > > When approaching death, a person should overcome huge pain by mental concentration or medicine and enjoy the remainder of life. This can be done by listening to music, relatives, friends, reading, and other actions. Life can be pleasant to the end, or about to the end. Enjoy yourself. Be nice to others. > >     The unedited entry (still up, along with other postings to his blog) concerns the mental topics that were occupying him at the end of life: mainly birdwatching and overpopulation. I post it here as evidence that even a very old person, suffering the mental and physical burdens of advanced old age, can still enjoy themselves and value life. From one perspective, it is the ramblings of an old man succumbing to dementia. From another, it is proof that life is never a
[SEQ RERUN] You Only Live Twice Today's post, You Only Live Twice was originally published on 12 December 2008. A summary (taken from the LW wiki):   > Yudkowsky's addition to Hanson's endorsement of cryonics. Discuss the post here (rather than in the comments to the original post). This post is part of the Rerunning the Sequences series, where we'll be going through Eliezer Yudkowsky's old posts in order so that people who are interested can (re-)read and discuss them. The previous post was We Agree: Get Froze, and you can use the sequence_reruns tag or rss feed to follow the rest of the series. Sequence reruns are a community-driven effort. You can participate by re-reading the sequence post, discussing it here, posting the next day's sequence reruns post, or summarizing forthcoming articles on the wiki. Go here for more details, or to have meta discussions about the Rerunning the Sequences series.
[SEQ RERUN] Against Devil's Advocacy Today's post, Against Devil's Advocacy was originally published on 09 June 2008. A summary (taken from the LW wiki):   > Playing Devil's Advocate is occasionally helpful, but much less so than it appears. Ultimately, you should only be able to create plausible arguments for things that are actually plausible. Discuss the post here (rather than in the comments to the original post). This post is part of the Rerunning the Sequences series, where we'll be going through Eliezer Yudkowsky's old posts in order so that people who are interested can (re-)read and discuss them. The previous post was Timeless Control, and you can use the sequence_reruns tag or rss feed to follow the rest of the series. Sequence reruns are a community-driven effort. You can participate by re-reading the sequence post, discussing it here, posting the next day's sequence reruns post, or summarizing forthcoming articles on the wiki. Go here for more details, or to have meta discussions about the Rerunning the Sequences series.
My simple AGI investment & insurance strategy TL;DR: * Options traders think it's extremely unlikely that the stock market will appreciate more than 30 or 40 percent over the next two to three years, as it did over the last year. So they will sell you the option to buy current indexes for 30 or 40% above their currently traded value for very cheap. * But slow takeoff, or expectations of one, would almost certainly cause the stock market to rise dramatically. Like many people here, I think institutional market makers are basically not pricing this in, and gravely underestimating volatility as a result, especially for large indexes like VTI which have never moved more than 50% in a single year. * To take advantage of this, instead of buying individual tech stocks, I allocate a sizable chunk of my portfolio to buying LEAPS (Long-term Equity AnticiPation Securities) on the broader stock market. If a slow takeoff does happen, and public companies capture some of the increased productivity, I'll at least be duly compensated for it when my skills become worthless. If it doesn't happen, this part of my portfolio will vanish, but that seems like an acceptable risk given the upside. I started doing this in January, and so far the mark price of the basket of options I've bought has doubled.[1] FAQ The options contracts you're talking about expire in "two to three years". Does this strategy only make sense if you think visible slow takeoff will begin before 2027? That's not quite necessary. If large parts of the economy get automated "only" in 2030, near-term AGI progress could start to impress market makers enough that they "wake up" and increase the price of these securities and options in anticipation of a boom. Which is why I choose to buy now instead of closer to my expected timelines, while Nvidia is only a two trillion dollar company and my alpha on this could run out any given year. But I think takeoff before 2027 is possible. As a layman, the simplest argument for shorter timelines I can empathize with i
Mental Model Theory - Illusion of Possibility Example

(I have written an overview of the mental model theory, which is in main, and the link is here. You should read this overview before you read this post. You should only read this post if you want more explicit details on the first example, which demonstrates the illusion of possibility.)

Consider the following problem:

> Before you stands a card-dealing robot. This robot has been programmed to deal one hand of cards. You are going to make a bet with another person on whether the dealt hand will contain an ace or whether it will contain a king. If the dealt hand is just a single queen, it's a draw. Based on what you know about this robot, you deduce correctly that only one of the following statements is true.
>
> * The dealt hand will contain either a king or an ace (or both).
> * The dealt hand will contain either a queen or an ace (or both).
>
> Based on your deductions, should you bet that the dealt hand will contain an Ace or that it will contain a King?

If you think that the ace is the better bet, then you would have made a losing bet. In short, this is because it is impossible for an ace to be in the dealt hand. To see why this is, I will list out all of the explicit mental models.

Below are the mental models that people will create in accordance with the principle of truth. (See the article in main for what this is.) You can see that the ace is in both rows, which makes it seem like the ace must obviously be more likely to be in the dealt hand.

| Scenario | Mental models |
| --- | --- |
| Statement 1 true | K, A, K ∩ A |
| Statement 2 true | Q, A, Q ∩ A |

But when we look at the full explicit set of potential models (including the models when one of the statements is false), we will realise that it is impossible for an ace to be in the hand. Note that ¬ stands for negation: (¬A) means that the hand does not have an ace.

The first possible scenario is when statement one is true and statement two is false. The mental models for this are in the below table:

| Statement 1 true | Statement 2 false |
| --- | --- |
| K, A, K ∩ A | ¬Q
LLMs are likely not conscious

I think the sparse autoencoder line of interpretability work is somewhat convincing evidence that LLMs are not conscious. In order for me to consciously take in some information (e.g. the house is yellow), I need to store not only the contents of the statement but also some aspect of my conscious experience. I need to store more than the minimal number of bits it would take to represent "the house is yellow". The sparse autoencoder line of work appears to suggest that LLMs essentially store "bits" that represent "themes" in the text they're processing, but close to nothing (at least in L2 norm) beyond that. And furthermore, this is happening in each layer. Thus, there doesn't appear to be any residual "space" left over for storing aspects of consciousness.
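To make the setup the argument leans on concrete, here is a minimal sparse autoencoder sketch. It is an illustration of the general idea, not any particular published implementation; real versions are trained with an L1 sparsity penalty, which is omitted here. The residual norm is the quantity playing the role of "what is stored beyond the theme features".

```python
# Minimal sparse autoencoder sketch (illustrative only).
# An activation vector is encoded into feature coefficients and
# reconstructed from learned feature directions; the L2 norm of the
# residual measures what is "left over" beyond those features.
import torch
import torch.nn as nn

class SparseAutoencoder(nn.Module):
    def __init__(self, d_model, n_features):
        super().__init__()
        self.encoder = nn.Linear(d_model, n_features)
        self.decoder = nn.Linear(n_features, d_model)

    def forward(self, activation):
        # In real training an L1 penalty on `features` enforces sparsity.
        features = torch.relu(self.encoder(activation))
        reconstruction = self.decoder(features)
        residual = activation - reconstruction
        return features, reconstruction, residual

sae = SparseAutoencoder(d_model=512, n_features=4096)
act = torch.randn(1, 512)
features, reconstruction, residual = sae(act)
print(features.gt(0).float().mean().item())  # fraction of active features
print(residual.norm().item())                # L2 norm of what is unexplained
```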
Saying "Everyone Is Biased" May Create Bias It looks like telling people "everyone is biased" might make people not want to change their behavior to overcome their biases: > In initial experiments, participants were simply asked to rate a particular group, such as women, on a series of stereotypical characteristics, which for women were: warm, family-oriented and (less) career-focused. Beforehand, half of the participants were told that "the vast majority of people have stereotypical preconceptions." Compared to those given no messages, these participants produced more stereotypical ratings, whether about women, older people or the obese. > Another experiment used a richer measure of stereotyping – the amount of clichés used by participants in their written account of an older person’s typical day. This time, those participants warned before writing that “Everyone Stereotypes” were more biased in their writings than those given no message; in contrast, those told that stereotyping was very rare were the least clichéd of all. Another experiment even showed that hearing the “Everyone Stereotypes” message led men to negotiate more aggressively with women, resulting in poorer outcomes for the women. The authors suggest that telling participants that everyone is biased makes being biased seem like not much of a big deal. If everyone is doing it, then it's not wrong for me to do it as well. However, it looks like the solution to the problem presented here is to give a little white lie that will prompt people to overcome their biases: > A further experiment suggests a possible solution. In line with the other studies, men given the "Everyone Stereotypes" message were less likely to hire a hypothetical female job candidate who was assertive in arguing for higher compensation. But other men told that everyone tries to overcome their stereotypes were fairer than those who received no information at all. The participants were adjusting their behaviour to fit the group norms, but this time in a virtuous direction.
Meetup : Canberra: Technology to help achieve goals Discussion article for the meetup : Canberra: Technology to help achieve goals WHEN: 27 February 2015 06:00:00PM (+1100) WHERE: 108 North Road, Acton, ACT Often we come across various pieces of technology - such as a neat app or website - that help us accomplish goals that we are trying to achieve. In this meetup, we will discuss any such tools that we have come across (so if you can, please come prepared with a few in mind), and see if others have ideas that we might find useful. As always, vegan snacks will be provided. General meetup info: * If you use Facebook, please join our group. * Structured meetups are (usually) held on the second Saturday and fourth Friday of each month from 6 pm until late in the CSIT building, room N101. Discussion article for the meetup : Canberra: Technology to help achieve goals
Thirty random thoughts about AI alignment

Why does this post exist?

In order to learn more about my own opinion about AI safety, I tried to write a thought every day before going to bed. Of course, I failed to do this every day, and this is the reason why I have only thirty arguments since January. However, as I am still happy with the result, I am sharing them.

Most of these thoughts have been inspired by many arguments I have read over the years. I tried to cite them in my thoughts, but sometimes I couldn't remember the source. For instance, I am pretty sure that the phrasing of the third thought is inspired by something I have read, but I cannot remember what. Anyways, I hope these thoughts will be useful to someone!

The thoughts

First thought

There has been a lot of progress in AI Safety this last decade. Initially, AI systems weren't checked against racism, sexism, and other discriminations at all. For instance, it took nearly two years before someone noticed that Word2Vec thought that doctor – man + woman = nurse. Now, AI Safety is taken way more seriously, with the number of people working on the Alignment problem growing from nearly 10 to around 300. However, although the field of AI Safety research grew extremely fast, it is still far behind AI capability research, with its 40,000 researchers. Not only that, but we still don't have any idea whatsoever about how we could have a chance to solve the alignment problem for superintelligences. Actually, the only solution we currently have for the alignment problem is to develop very smart and aligned AI recursively until one of them is smart enough to solve the Alignment problem, and aligned enough to tell us the solution. A solution to the alignment problem needs revolutionary discoveries in mathematics, ethics, philosophy, epistemology, game theory, provability theory, and much more. Taking this into account, it may not be an exaggeration to say that AI Safety is hundreds of years behind AI capability.

Second thought

The Elicit Latent Knowledge problem, or the
Safety First: safety before full alignment.

The deontic sufficiency hypothesis

It could be the case that these two goals are separable and independent:

* "AI safety": avoiding existential risk, s-risk, actively negative outcomes
* "AI getting-everything-we-want" (CEV)

This is what Davidad calls the Deontic Sufficiency Hypothesis. If the hypothesis is true, it should be possible to de-pessimize and mitigate the urgent risk from AI without necessarily ensuring that AI creates actively positive outcomes. Because, for safety, it is only necessary to ensure that actively harmful outcomes do not occur. And hopefully this is easier than achieving "full alignment". Safety first! We can figure out the rest later.

Quotes from Davidad's The Open Agency Architecture plans

This is Davidad's plan with the Open Agency Architecture (OAA).

A list of core AI safety problems and how I hope to solve them (2023 August)

> 1.1. First, instead of trying to specify "value", instead "de-pessimize" and specify the absence of a catastrophe, and maybe a handful of bounded constructive tasks like supplying clean water. A de-pessimizing OAA would effectively buy humanity some time, and freedom to experiment with less risk, for tackling the CEV-style alignment problem—which is harder than merely mitigating extinction risk.

Davidad's Bold Plan for Alignment: An In-Depth Explanation — LessWrong (2023 April)

> Deontic Sufficiency Hypothesis: This hypothesis posits that it is possible to identify desiderata that are adequate to ensure the model doesn't engage in undesirable behavior. Davidad is optimistic that it's feasible to find desiderata ensuring safety for a few weeks before a better solution is discovered, making this a weaker approach than solving outer alignment. For instance, Davidad suggests that even without a deep understanding of music, you can be confident your hearing is safe by ensuring the sound pressure level remains below 80 decibels. However, since the model would still be executing a pivotal process with significant influence, relying
Child Contracting The kids came to me with a contract dispute: they disagreed about what the deal had been. After some talking, and hearing what they each thought they'd agreed to, it turned out that the contract wasn't just metaphorical: "Anna rents the living room from Lily in exchange for Lily's use of bedroom. And Lily will do free cleaning." It turned out that Anna signed it after hearing a rough paraphrase. I told Lily that she would need to read any future contracts aloud, word for word, before Anna signed. I also explained about interpretation against the drafter, and how Lily should be careful to avoid ambiguity in writing down agreements. I'm not sure what "free cleaning" means here? Generally they both do very little cleaning, free or otherwise. Cleaning up this evening I came across a second contract: "Anna will let Lily use bedroom and bathroom. In exchange Lily will let Anna use the living room and Lily does free cleaning. If furniture is moved or added please put away". This one wasn't signed, so perhaps they couldn't agree?
Is "Strong Coherence" Anti-Natural? Related: * Contra "Strong Coherence" * why assume AGIs will optimize for fixed goals * Why The Focus on Expected Utility Maximisers? ---------------------------------------- Background and Core Concepts I operationalised "strong coherence" as: > Informally: a system has immutable terminal goals. > > Semi-formally: a system's decision making is well described as an approximation of argmax over actions (or higher level mappings thereof) to maximise the expected value of a single fixed utility function over states.   And contended that humans, animals (and learning based agents more generally?) seem to instead have values ("contextual influences on decision making"). The shard theory account of value formation in learning based agents is something like: * Value shards are learned computational/cognitive heuristics causally downstream of similar historical reinforcement events * Value shards activate more strongly in contexts similar to those where they were historically reinforced   And I think this hypothesis of how values form in intelligent systems could be generalised out of a RL context to arbitrary constructive optimisation processes[1]. The generalisation may be something like: > Decision making in intelligent systems is best described as "executing computations/cognition that historically correlated with higher performance on the objective functions a system was selected for performance on"[2].   This seems to be an importantly different type of decision making from expected utility maximisation[3]. For succinctness, I'd refer to systems of the above type as "systems with malleable values". ---------------------------------------- The Argument In my earlier post I speculated that "strong coherence is anti-natural". To operationalise that speculation: * Premise 1: The generalised account of value formation is broadly accurate * At least intelligent systems in the real world form "contextually activated cognitive heuristics that influe
How do you notice when you are ignorant of necessary alternative hypotheses? So I just wound up in a debate with someone over on Reddit about the value of conventional academic philosophy.  He linked me to a book review, in which both the review and the book are absolutely godawful.  That is, the author (and the reviewer following him) start with ontological monism (the universe only contains a single kind of Stuff: mass-energy), adds in the experience of consciousness, reasons deftly that emergence is a load of crap... and then arrives to the conclusion of panpsychism. WAIT HOLD ON, DON'T FLAME YET! Of course panpsychism is bunk.  I would be embarrassed to be caught upholding it, given the evidence I currently have, but what I want to talk about is the logic being followed. 1) The universe is a unified, consistent whole.  Good! 2) The universe contains the experience/existence of consciousness.  Easily observable. 3) If consciousness exists, something in the universe must cause or give rise to consciousness.  Good reasoning! 4) "Emergence" is a non-explanation, so that can't be it.  Good! 5) Therefore, whatever stuff the unified universe is made of must be giving rise to consciousness in a nonemergent way. 6) Therefore, the stuff must be innately "mindy". What went wrong in steps (5) and (6)?  The man was actually reasoning more-or-less correctly!  Given the universe he lived in, and the impossibility of emergence, he reallocated his probability mass to the remaining answer.  When he had eliminated the impossible, whatever remained, however low its prior, must be true. The problem was, he eliminated the impossible, but left open a huge vast space of possible hypotheses that he didn't know about (but which we do): the most common of these is the computational theory of mind and consciousness, which says that we are made of cognitive algorithms.  A Solomonoff Inducer can just go on to the next length of bit-strings describing Turing machines, but we can't. Now, I can spot the flaw in the reasoning here.  What frightens me is: what
Looking for humanness in the world wide social Social networks have shaped me since a young age. Growing up at the beginning of the millennium, I used to spend my time in phpBB and vBulletin forums. There, I befriended internet strangers, started my way into graphic design, and learned about torrents. Forums were my favorite third places—little corners on the web where I felt a deep affinity. I can still vividly remember the joy and excitement while exploring those discussion boards. The sheer amount of knowledge and people I’ve met in those places could not be found anywhere else. To this day I believe these experiences have profoundly shaped the path I’m still walking today. But when I look at the current form of social media, it all feels dumb: watching adults post nonsense or praise “influencer gurus” while doom-scrolling from dusk to dawn seems absurd. We should have had more important things to do with our lives, yet we’ve all gotten caught up in this utopian-dystopian era. What once felt like home evolved into alienated spaces. ---------------------------------------- The social archetype, once defined by its role in connecting like-minded strangers, gradually evolved into a space for staying in touch with friends and family. Platforms like Facebook, Twitter, and LinkedIn have all initially focused on cultivating personal connections. However, this social identity grew into a behemoth over time—at a pace that now feels startling. With the rise of the new media model, social networks transformed from intimate bonfires into vast, crowded arenas while losing much of their original charm. Like many others, I embraced the new social paradigm. But it wasn’t until I was deep into writing these lines that I realized just how detached I am from its very idea. I’ve long been inside the loop of social media—joining and lurking on new platforms, trying to play the game. Yet I’ve never truly felt part of the culture. If you look at my Instagram grid or Facebook wall, they seem pretty dormant. And I’m far from a
Boston meetup Nov 15 (and others) So far, only one of the less-wrong meet-ups that were discussed has been scheduled.  The Boston meet-up was scheduled for: Carberry's at 74 Prospect St Cambridge, MA (1.5 blocks northeast from the Central Square T station) Sunday November 15th at 2pm though it may move after an hour or two to the Clear Conscience Cafe a couple blocks away if things get too crowded.  My cell number is (610) 213 2487 so you can contact me if there is a problem. Regarding Philly, Florida and New Orleans there is still need for detail in the schedule.  I'm leaving New Orleans at 5:10 on the 14th so the 13th is probably better but I can do early on the 14th if people want.  There has been some interest in an event there but I would appreciate more interested people saying so and possibly contacting me via phone or email.  If several people are interested we will have a meet-up.  Just one or two and I can meet less formally.
The AI Countdown Clock I made [this clock](https://aicountdown.com), counting down the time left until we build AGI: ![](https://substackcdn.com/image/fetch/w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fbucketeer-e05bbc84-baa3-437e-9518-adb32be77984.s3.amazonaws.com%2Fpublic%2Fimages%2Faf4ca51c-0afa-41c8-998a-4678f8ab7754_2880x1592.png) It uses the most famous [Metaculus prediction](https://www.metaculus.com/questions/3479/date-weakly-general-ai-system-is-devised/) on the topic, inspired by several recent dives in the expected date. Updates are automatic, so it reflects the constant fluctuations in collective opinion. Currently, it’s sitting in 2028, i.e. the end of the next presidential term. The year of the LA Olympics. Not so far away. There were a few motivations behind this project: 1. **Civilizational preparedness.** Many people are working on making sure this transition is a good one. Many more probably should be. I don’t want to be alarmist, but the less abstract we can make the question, the better. In this regard, it’s similar to the [Doomsday Clock](https://en.wikipedia.org/wiki/Doomsday_Clock). 2. **Personal logistics.** I frequently find myself making decisions about long-term projects that would be deeply affected by the advent of AGI. Having kids, for example.  The prediction is obviously far from absolute, and I’m not about to stop saving more than 5 years and 11 months of living expenses. But it’s good to be reminded that the status quo is no longer the best model for the future. 3. **Savoring the remainder.** Most likely, AGI will be the beginning of the end for humanity. Not to say that we will necessarily go extinct, but we will almost definitely stop being “human,” in the recognizable/traditional sense. For many years, I’ve used the [Last Sunday](https://chrome.google.com/webstore/detail/the-last-sunday-reminder/aiojhapcgfgmiacbbjfgedhlcchmpelh?hl=en) as my new tab page. It shows you how many Sundays you have left in your life, if you live to an average age. I’ve gotten some strange looks, when it accidentally pops up during a presentation. I know it seems morbid, like a fixation on the end. But I don’t see it that way; it’s not about the end, but the finitude of the middle. That precious scarcity. I’ve spent a lot of time thinking about the meaning of being human, but this mostly dissolves that angst. It’s like: if I live in San Francisco, my daily decisions about what to do here are impossibly broad. But if I’m a tourist, visiting for a few days, then my priority is to do all of the most “San Francisco” things I can: see the Golden Gate, eat a burrito in the Mission, sit among purple-red succulents on a foggy beach cliff. So, if I only have X years left of being human, I should focus on doing the most human things I can. Whatever that means to me. Conveniently, this applies to the worlds with and without AGI, since in the latter, I’ll still die. But the shorter timeline makes it more real. Let me know if you have any feedback or suggestions.
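For what it's worth, the countdown logic itself is tiny. A rough sketch follows; the resolution date is a hard-coded placeholder standing in for the current Metaculus community prediction, which the live site refreshes automatically:

```python
# Sketch of the countdown logic. The date below is a placeholder for
# whatever the Metaculus community prediction currently is.
from datetime import datetime, timezone

PREDICTED_AGI_DATE = datetime(2028, 6, 1, tzinfo=timezone.utc)  # placeholder

def time_remaining(now=None):
    now = now or datetime.now(timezone.utc)
    delta = PREDICTED_AGI_DATE - now
    days, rem = divmod(int(delta.total_seconds()), 86400)
    hours, rem = divmod(rem, 3600)
    minutes, seconds = divmod(rem, 60)
    return f"{days}d {hours:02d}h {minutes:02d}m {seconds:02d}s"

print(time_remaining())
```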
Anybody want to meet in Leipzig, Germany? Hey guys, does anybody else here live in Leipzig, Germany? I'd love to meet up and find/found an LW community here!
A Primer on United States Treasuries

Original post here.

A few years into my career, I came to realize that most investment markets, especially those most regular people invest in (equities), have strong relationships to the United States Treasury (UST) market. Considering the recent moves in that market, I want to explain what the UST market is and how it relates to equities, loans and other investments.

A United States Treasury is a debt obligation backed by the full faith of the United States government. Anyone can buy a United States Treasury, but they are primarily purchased by institutional investors and governments. Today, USTs are considered the safest investments in the world.

One of the most common USTs is the UST 10 year bond. Today, an investment in the UST 10 year bond will yield you about 1.5% per year. This means that if you invest $1,000, you will get about $15 a year until it matures in 2031 and you get your $1,000 back.

The historical 10 year yield. Source

You can buy treasuries that mature in most months and many different years, but only a select few are tracked by the markets, and are the reference rates that are used by the markets to drive expectations and investment decisions.

The most common UST rates tracked by market participants. Source

So how do other investment assets relate to USTs? It is important to note that the relationships I am about to explain are just one input in determining the price of the assets in question. These relationships do not always hold. Treasury rates affect not only the asset classes below, but many other asset classes and financial instruments such as commodities, futures, and options.

Foreign Exchange

All else being equal, when USTs are rising across the board, it means that the United States dollar will strengthen, assuming that rates in the foreign currency in question are stable or dropping. For example, if UST rates are rising, and Canadian Government Bond rates are falling, the US Dollar will get stronger versus the Canadian Dolla
The role of Bayesian ML in AI safety - an overview I want to thank Sebastian Farquhar, Laurence Midgley and Johan van den Heuvel, for feedback and discussion on this post.  Some time ago I asked the question “What is the role of Bayesian ML in AI safety/alignment?”. The response of the EA and Bayesian ML community was very helpful. Thus, I decided to collect and distill the answers and provide more context for current and future AI safety researchers. Clarification: I don’t think many people (<1% of the alignment community) should work on Bayesian ML or that it is even the most promising path to alignment. I just want to provide a perspective and give an overview. I personally am not that bullish on Bayesian ML anymore (see shortcomings) but I’m in a relatively unique position where I have a decent overview of AI safety and the Bayesian ML literature and think an overview post like this might be helpful. A working definition There is no agreed-upon definition for Bayesian ML. I use the term for systems that broadly have any of the following properties 1. Implicitly or explicitly use Bayes theorem.  2. Approximate and quantify uncertainty for their estimates, e.g. return distributions instead of point estimates, and allow for the specification of prior distributions. 3. Systems that have a latent state that can be continuously updated with new data without being fully retrained, e.g. conjugate inference. This is the vaguest property since it plausibly also applies to pre-training and fine-tuning LLMs which are usually not seen as explicitly Bayesian algorithms.  Roles High-level - Future AI systems might be Bayesian This section is largely inspired by a response from Emtiyaz Khan and a different response from Sebastian Farquhar.  There are a lot of things that current ML systems do poorly in comparison to humans. They are often not as data-efficient as humans are, they don’t generalize well, they are often not robust to adversarial inputs, they often can’t learn during deployment and much more (none of t
[AN #138]: Why AI governance should find problems rather than just solving them Alignment Newsletter is a weekly publication with recent content relevant to AI alignment around the world. Find all Alignment Newsletter resources here. In particular, you can look through this spreadsheet of all summaries that have ever been in the newsletter. Audio version here (may not be up yet). Please note that while I work at DeepMind, this newsletter represents my personal views and not those of my employer. HIGHLIGHTS ‘Solving for X?’ Towards a problem-finding framework to ground long-term governance strategies for artificial intelligence (Hin-Yan Liu et al) (summarized by Rohin): The typical workflow in governance research might go something like this: first, choose an existing problem to work on; second, list out possible governance mechanisms that could be applied to the problem; third, figure out which of these is best. We might call this the problem-solving approach. However, such an approach has several downsides: 1. Such an approach will tend to use existing analogies and metaphors used for that problem, even when they are no longer appropriate. 2. If there are problems which aren’t obvious given current frameworks for governance, this approach won’t address them. 3. Usually, solutions under this approach build on earlier, allegedly similar problems and their solutions, leading to path-dependencies in what kind of solutions are being sought. This makes it harder to identify and/or pursue new classes of solutions 4. It is hard to differentiate between problems that are symptoms vs. problems that are root causes in such a framework, since not much thought is put into comparisons across problems 5. Framing our job as solving an existing set of problems lulls us into a false sense of security, as it makes us think we understand the situation better than we actually do (“if only we solved these problems, we’d be done; nothing else would come up”). The core claim of this paper is that we should also invest in a problem-finding approach, in which
The Hessian rank bounds the learning coefficient

TL;DR: In a neural network with d parameters, the (local) learning coefficient λ can be upper and lower bounded by the rank of the network's Hessian, d_1:

d_1/2 ≤ λ ≤ d_1/2 + (d − d_1)/4.

The lower bound is a known result. The upper bound is a claim by me, and this post contains the proof for it.[1] If you find any problems, do point them out.

Edit 16.08.2024: The original version of this post had a three in the denominator of the upper bound. Dmitry Vaintrob spotted an improvement to make it a four.

Introduction

The learning coefficient λ is a measure of loss basin volume and model complexity. You can think of it sort of like an effective parameter count of the neural network. Simpler models that do less stuff have smaller λ. Calculating λ for real networks people actually use is a pain. My hope is that these bounds help make estimating it a bit easier.

In a network with d parameters, the learning coefficient is always a number 0 ≤ λ ≤ d/2. An existing result in the literature says that if you've calculated the rank of the network's Hessian d_1,[2] you get a tighter lower bound d_1/2 ≤ λ. I claim that we can also get a tighter upper bound λ ≤ d_1/2 + (d − d_1)/4, where d − d_1 will be the dimension of the Hessian kernel, meaning the number of zero eigenvalues it has.[3]

This is neat because it means we can get some idea of how large λ is using only standard linear algebra methods. All we need to know is how many zero eigenvalues the Hessian has.[4] Singular Learning Theory introductions often stress that just looking at the Hessian isn't enough to measure basin volume correctly. But here we see that if you do it right, the Hessian eigenspectrum can give you a pretty good idea of how large λ is. Especially if there aren't that many zero eigenvalues.

Intuitively, the lower bound works because a direction in the parameters w that isn't free to vary to second order in the Taylor expansion won't become any more free to vary if you pile on a bunch of higher order terms. The Second order term stri
UK Foundation Model Task Force - Expression of Interest Ian Hogarth has just been announced as the Chair of the UK's AI Foundation Model Taskforce. He's the author of the FT article "We must slow down the race to God-like AGI", and seems to take X-risks from AI seriously. To quote his twitter thread: > And to that end I put out a call to people across the world. If you are an AI specialist or safety researcher who wants to build out state capacity in AI safety and help shape the future of AI policy then get in touch: > > We have £100m to spend on AI safety and the first global conference to prepare for. I want to hear from you and how you think you can help. The time is now and we need more people to step up and help. The google form to leave an expression of interest is here. (I am in no way affiliated with Ian or the UK Foundation Model Task Force)
Why I am not a Quaker (even though it often seems as though I should be) In the past year, I have noticed that the Society of Friends (also known as the Quakers) has come to the right answer long before I or most people did, on a surprising number of things, in a surprising range of domains. And yet, I do not feel inclined to become one of them. Giving credit where credit is due is a basic part of good discourse, so I feel that I owe an explanation. The virtues of the Society of Friends are the virtues of liberalism: they cultivate honest discourse and right action, by taking care not to engage in practices that destroy individual discernment. The failings of the Society of Friends are the failings of liberalism: they do not seem to have the organizational capacity to recognize predatory systems and construct alternatives. Fundamentally, Quaker protocols seem like a good start, but more articulated structures are necessary, especially more closed systems of production. This post reflects a lot of thought, but there's a lot of speculation which I hope I've managed to mark as such. I'm optimizing for clearly communicating my present state in the hopes of furthering dialogue, not saying things that are maximally defensible; I haven't worked out the relevant models in extreme detail. That said, I don't think I'm misreporting any facts, and corrections on any level are welcome. Some reasons to respect the Society of Friends * Liberalism is nice, and the Quakers instilled it in America. * They pioneered the radical practice of personal integrity. * Their social technology is designed to avoid overriding individual conscience and judgment, thus preserving information that is typically destroyed by more common systems oriented around momentum or dominance. * They don’t advertise much. PROTO-LIBERALS The Quakers first came to my attention when Scott Alexander of Slate Star Codex wrote about them. His review of Albion’s Seed describes them as proto-liberals with an outsized effect on the United States of America, basical
Exploratory survey on psychology of AI risk perception [Share your hot takes](https://forms.gle/LdA5gwLgwHFqpZgw7) on what leads people to see AI risk as important! I will test the most promising hypotheses in a future poll. The time you take to think about them is up to you, but it will probably take 8-15 minutes.
(4 min read) An intuitive explanation of the AI influence situation This is a 4-minute read, inspired by the optimized writing in Eukaryote's Spaghetti Towers post. My thinking is that generative AI has potential for severe manipulation, but that the 2010s AI used in social media news feeds and other automated systems are a bigger threat, and this tech tells us much more about the future of international affairs and incentives for governments to race to accelerate AI, has fewer people aware of it, has a substantial risk of being used to attack the AI safety community, and the defenses are easy and mandatory to deploy. This post explains why this tech is powerful enough to be a centerpiece of people's world models. The people in Tristan Harris's The Social Dilemma (2020) did a fantastic job describing the automated optimization mechanism in a quick and fun way (transcript). > [Tristan] A [stage] magician understands something, some part of your mind that we’re not aware of. That’s what makes the [magic trick] illusion work. Doctors, lawyers, people who know how to build 747s or nuclear missiles, they don’t know more about how their own mind is vulnerable. That’s a separate discipline. And it’s a discipline that applies to all human beings... > > [Shoshana] How do we use subliminal cues on the Facebook pages to get more people to go vote in the midterm elections? And they discovered that they were able to do that.  > > One thing they concluded is that we now know we can affect real-world behavior and emotions without ever triggering the user’s awareness. They are completely clueless. Important note: all optimization here is highly dependent on measurability, and triggering the user's awareness is a highly measureable thing.  If anything creeps someone out, they use the platform less; such a thing is incredibly easy to measure and isolate causation. To get enough data, these platforms must automatically reshape themselves to feel safe to use. This obviously includes ads that make you feel manipulated; it is not surprising that
Personal information management Several weeks ago, I began using personal wiki software Zim Wiki (free and cross-platform for Linux & Windows; I recommend nvALT on Mac OS X) to record all of my notes-to-self.  I've found it to be a nice software tool for implementing some of the effectiveness advice I've read on Less Wrong.  This post is a fairly personal overview of my usage. I looked at a lot of personal information managers before choosing Zim.  Here are the features that caused me to choose it over the other software I looked at: * Probably the most important feature: Jump-to-note capability with autocomplete.  Pressing Control-J gives a text box.  Start typing in the text box and it autocompletes with the names of any of the notes in my notebook (or allows me to create a new note).  This is the proverbial 10% of the feature set that provides 90% of the benefit over scattered text files.  Opening a specific note to add another thought or idea to it is a very common operation for me and this feature makes it very quick.  Only a few tools I've found seem to have comparable functionality: WikidPad (with Control-O), and the Notational Velocity family of information managers kind of have it.  (For Notational Velocity/nvALT, I recommend coming up with some kind of namespacing scheme so note names collide with note text less frequently in your searches.  For example, I prepend reminders for future situations with "f.", journal notes with "j.", policy notes with "p.", Less Wrong post drafts with "l.", etc.  Then command-L works as a pretty good "jump to note" shortcut.) * Pressing Control-D, then pressing return inserts a timestamp at the position of my cursor.  This has been useful for a variety of logging-type applications.  (I replicated the same thing with nvALT on OS X with aText.) * Zim is a desktop application.  This has a couple advantages: * I configured a keyboard shortcut to open it, or bring it to the front if it was already open, using a modified version of the Linux shell s
Meetup : London diaspora meetup, 10/01/2016 Discussion article for the meetup : London diaspora meetup, 10/01/2016 WHEN: 10 January 2016 02:00:00PM (+0000) WHERE: Shakespeare's Head, 64-68 Kingsway, London WC2B 6AH Parts of LessWrong London have been feeling like the association with LW no longer really captures what we're about. Several of us have pretty much stopped reading the site. So we're doing an experimental rebrand as a diaspora meetup group. The diaspora includes, but is not limited to: LessWrong, SlateStarCodex, parts of the Effective Altruism movement, the bit of tumblr that brands itself 'rationalist tumblrsphere'. If you feel like you want to hang out with the sort of people who are involved with those things: welcome! You are invited. You do not need to think you are clever enough, or interesting enough, or similar enough to the rest of us, to attend. You are invited. This meetup will be social discussion in a pub, with no set topic. If there's a topic you want to talk about, feel free to bring it. There will be some way to identify us. People start showing up around two, and there are almost always people around until after six, but feel free to come and go at whatever time. Discussion article for the meetup : London diaspora meetup, 10/01/2016
Some disjunctive reasons for urgency on AI risk (This has been sitting in my drafts folder since August 2017. Robin Hanson's recent [How Lumpy AI Services?](https://www.overcomingbias.com/2019/02/how-lumpy-ai-services.html) made me think of it again. I'm not sure why I didn't post it back then. I may have wanted to add more reasons, details and/or citations, but at this point it seems better to just post it as is. Apologies to those who may have come up with some of these arguments earlier.) Robin Hanson recently [wrote](http://www.overcomingbias.com/2017/08/foom-justifies-ai-risk-efforts-now.html), "Recently AI risk has become something of an industry, with far more going on than I can keep track of. Many call working on it one of the most effectively altruistic things one can possibly do. But I’ve searched a bit and as far as I can tell that foom scenario is still the main reason for society to be concerned about AI risk now." (By "foom scenario" he means a local intelligence explosion where a single AI takes over the world.) In response, I list the following additional reasons to work urgently on AI alignment. 1. Property rights are likely to not hold up in the face of large capability differentials between humans and AIs, so even if the intelligence explosion is likely global as opposed to local, that doesn't much reduce the urgency of working on AI alignment. 2. Making sure an AI has aligned values and strong controls against value drift is an extra constraint on the AI design process. This constraint appears likely to be very costly at both design and run time, so if the first human level AIs deployed aren't value aligned, it seems very difficult for aligned AIs to catch up and become competitive. 3. AIs' control of the economy will grow over time. This may happen slowly in their time frame but quickly in ours, leaving little time to solve value alignment problems before human values are left with a very small share of the universe, even if property rights hold up. 4. Once we have human-level AIs and it's really obvious that value alignment is difficult, superintelligent AIs may not be far behind. Superintelligent AIs can probably find ways to bend people's beliefs and values to their benefit (e.g., create highly effective forms of propaganda, cults, philosophical arguments, and the like). Without an equally capable, value-aligned AI to protect me, even if my property rights are technically secure, I don't know how I would secure my mind.
Existing work on creating terminology & names? I'm in the process of coming up with terminology for various theories, similar to lots of other work on LessWrong and The EA Forum. Naming things is a bit of a unilateralist action. While community members don't have to accept a specific naming proposal, they are likely to do so if they like the concept. I can't think of many cases where Eliezer or someone named a concept, and the community decided that that name was poor, and renamed it. However, I can't find much theory on how to figure out great names for things, or even what to consider when doing so. I would have expected there to be comprehensive discussion around Information Architecture, UX Design, or the Library Sciences on this topic, but couldn't identify much outside of card sorting and a few lists of rough heuristics. This was also an issue for me when I did more software engineering, and I was then also frustrated by the lack of discussion I could find. The best there was work on Software Patterns, which I used primarily for naming conventions. Some related links I could find: https://en.wikipedia.org/wiki/Naming_convention https://www2.staffingindustry.com/eng/Editorial/Archived-Blog-Posts/Adam-Pode-s-Blog/Probably-the-best-file-naming-convention-ever https://www.invisionapp.com/inside-design/naming-conventions/ https://ux.stackexchange.com/questions/48578/naming-features-of-an-app-or-site https://www.martyneumeier.com/strong-vs-weak-names
W2WNet: a two-module probabilistic Convolutional Neural Network with embedded data cleansing functionality 1 Introduction --------------- Since the milestone study by Alex Krizhevsky and colleagues in 2012 [alexnet], Deep Learning (DL), with particular emphasis on Convolutional Neural Networks (CNNs), is the state-of-the-art method for image classification in many different applications. Besides classification performance, the reason for the success of CNNs is twofold: i) the recent boost of graphical processing units (GPUs) and parallel processing, that allows to train very large models; ii) the ever-growing availability of massive annotated task-specific datasets. Nonetheless, in many realistic applications many concerns may be raised about the reliability of such datasets both in terms of image and labelling quality, and consequently on the robustness of the CNN models trained and tested on them. As regards to image quality, standard CNNs are supposed to be fed with high quality samples. Nevertheless, in practical scenarios different kinds of image degradation can heavily affect the performance of a CNN both in the training and in the inference phase. Problems concerning image acquisition devices, poor image sensor, lighting conditions, focus, stabilization, exposure time or partial occlusion of the cameras may lead to produce low quality samples, which have been demonstrated to be one of the chief reasons for troublesome learning procedures of CNN models in many applications [roy2018effects, moosavi2016deepfool, dodge2016understanding]. On the other hand, even though the CNN had been proficiently trained and validated on high quality data, noisy inputs can heavily affect the inference phase, and cause classification errors at run-time. A typical example are self-driving cars, where a partial occlusion of the image acquisition device may lead to misinterpret a road sign, with catastrophic consequences. In such settings, the well-known limitations of standard CNNs to broadcast information about how much the given input resembles the ones the model was trained on - and hence, whether the associated prediction should (or should not) be trusted - is also playing a major role. Besides image quality, also collecting and associating error-free labels to a massive number of representative images to adequately train CNNs may be extremely problematic in a number of real-world applications. If we take as an example the medical domain, where available data is typically small to begin with, image annotation is always a cumbersome and time-consuming task, that is extremely error-prone. In a number of applications, inter-observer variability is even so high as to necessitate consensus strategies to aggregate annotations from several medical experts [dataset\_labelling1], which is anyway prone to mislabelling. Conversely, in a number of non-medical real-world scenarios the collection of massive labelled image datasets is relatively easy and straightforward: for example, using semi-automatic tools based on web search engines and keywords [dataset\_labelling2]. Nonetheless, even in this case concerns may be raised on the reliability of the image labels. Take as an example the JFT dataset from Google, including 300M+ images labeled by an algorithm that uses complex mixture of raw web signals, connections between web-pages and user feedback [Distilling\_Hilton, Chollet]: JFT annotations have been found to be 20% wrong, even after some cleansing procedures [sun2017revisiting]. 
In the rest of this paper, we will refer to image degradation and to mislabelling errors respectively by the name of *measurement* and *labelling noise*. Even though recent studies have proposed many techniques to compensate the learning degradation due to *measurement noise* [dodge2016understanding, roy2018effects, moosavi2016deepfool] or *labelling noise*  [dataset\_labelling1, xiao2015learning, sun2017revisiting] specifically, very few researchers have developed solutions to mitigate the impact of generic noise, where the two effects may even coexist. Furthermore, there is still very little scientific understanding of how a CNN may behave in presence of noisy inputs at inference phase, i.e. when the final model is applied to a given application, and how to make a CNN model robust to unpredictable noise effects that may make the inputs considerably different to what the model was specifically trained on. In our study, we want to focus the attention on data-perturbation irrespective of whether it is a *measurement* or a *labelling noise*, and we will refer to *spurious* (vs. *meaningful*) samples to indicate images affected by any of the two types of noise. We therefore propose Wise2WipedNet (W2WNet), a CNN-based architecture able to i) model the distribution of spurious samples in a generic dataset, which may be corrupted by both *labelling* and *measurement* noise; ii) clearly identify the spurious samples within the training, by virtue of an adaptive pruning criterion that is fully embedded into the learning algorithm, and focus the training on the only meaningful ones; and iii) at inference time, classify never seen images into the learned categories plus one, clearly identifying noisy inputs by means of a statistically sound measure of prediction confidence (see figure [2](#S3.F2 "Figure 2 ‣ 3.3 The Wise & the Wiped: classification ‣ 3 Methods ‣ W2WNet: a two-module probabilistic Convolutional Neural Network with embedded data cleansing functionality")). Hence, our solution exploits the concept of prediction confidence in two ways: (i) during the training phase, to establish a separability criterion between the good quality (a.k.a. meaningful) and the spurious samples, that is embedded into the learning algorithm to make the network able to focus on the only meaningful ones; and (ii) during the inference phase, to improve the robustness of the model to ambiguous inputs. To assess the goodness of our approach in different types of settings, we evaluate *W2WNet* on several state-of-the-art public benchmarks, addressing different image classification tasks and types of noise. In addition to that, we also provide a real-world case study from the medical imaging domain. The rest of the manuscript is structured as follows. In Section [2](#S2 "2 Background ‣ W2WNet: a two-module probabilistic Convolutional Neural Network with embedded data cleansing functionality") we provide the background and state of the art of our work, and highlight our main contributions. In Section [3](#S3 "3 Methods ‣ W2WNet: a two-module probabilistic Convolutional Neural Network with embedded data cleansing functionality") we describe our proposed methodology and implementation details. In Section [4](#S4 "4 Experimental Results ‣ W2WNet: a two-module probabilistic Convolutional Neural Network with embedded data cleansing functionality") we provide and discuss experimental results, respectively on the public benchmarks and on the real-world case study. 
Finally, Section [5](#S5) provides our final considerations and concludes the paper.

2 Background
-------------

As discussed in Section [1](#S1), in many real-world cases it is far from obvious that high-quality images will be available to train a CNN. Most likely, the network will face many issues arising from artifacts introduced during image acquisition, transmission, or storage. This typically affects the training procedure, resulting in a degradation of the model performance [dodge2016understanding, roy2018effects]. Thus, a considerable amount of literature has been published on training CNNs with low-quality images and noisy datasets. In surveillance applications, for instance, face recognition from low-quality images is a key aspect, and many studies address learning from low-quality faces [face1, face2]. In [ullman2016atoms] the authors show that CNNs behave very differently from the human visual system (HVS) in handling minimal recognizable configurations (MIRCs), i.e. the smallest crops of an input image for which a human observer is still able to provide a categorization. More specifically, standard CNNs are generally worse than humans at handling MIRCs, which are typically very small, and hence blurry and poorly resolved. In [dodge2016understanding], the authors present the first large-scale evaluation of deep networks on natural images affected by different types and levels of image quality degradation. They show that the existing models are especially vulnerable to blur and noise. Finally, in [roy2018effects], the authors show the effects of degradation on different CNN models, proposing a network setup able to reduce the impact of specific types of perturbations.

As already discussed, besides *measurement* noise, manual mislabelling or faulty automatic annotations may also lead to problematic learning and lower classification performance [dataset\_labelling1, dataset\_labelling2]. Previous studies specifically addressing *labelling* noise can be categorized into three main groups:

1. Methods that focus on model selection or design. These methods aim at selecting the model, loss function and training procedures that are most robust to mislabelling [dataset\_labelling1]. The literature shows that most supervised loss functions are not fully robust to faulty labels [bartlett2006convexity], unless they are combined with overfitting-avoidance techniques [Survey\_label\_noise, dataset\_labelling2].
2. Data cleansing methods. The rationale is in this case to remove samples with incorrect labels. In this sense, voting among an ensemble of classifiers has been proven effective [dataset\_labelling1]. Other strategies identify mislabelled instances based on their impact on the training process. For example, [kohler2019uncertainty] prune and re-label training instances by setting a threshold on the classification uncertainty, based on Monte-Carlo (MC) dropout. The challenge for this group of methods is to distinguish the informative samples from the harmful mislabelled ones [dataset\_labelling2]. In this sense, cleansing methods built on top of an uncertainty measure are known to be highly dependent on the given application (i.e. type and level of noise) and even on the architecture of the classifier [kohler2019uncertainty, dataset\_labelling1].
For instance, [kohler2019uncertainty] set a fixed threshold on the uncertainty distribution retrieved from the training samples, without modelling the distributions of the uncertainties of the noisy and clean images. Hence, the optimal threshold needs to be tailored to the given application, which may limit the usability in real-world scenarios.

3. Methods that combine classifier training and labelling-noise modelling in a unified framework. This category integrates the two aforementioned families. For instance, probabilistic models have been exploited to model the labelling noise and thereby improve classifiers [kohler2019uncertainty]. Other methodologies aim at identifying and penalizing samples with incorrect labels during the training procedure [dataset\_labelling1].

While there is a large body of literature coping with either *measurement* or *labelling* noise individually, very little effort has been directed so far to handling both problems at once. Nonetheless, this is a non-trivial issue in most real-world applications, where a-priori knowledge about the type of noise affecting the data may not be available. Moreover, while *labelling* noise affects only the training phase, since supervised learning requires an appropriate labelling of the training samples, *measurement* noise may affect CNNs even at inference time. As already mentioned in Section [1](#S1), this may lead standard CNNs to catastrophic failures in several real-world applications.

Starting from these considerations, we propose a methodology (named *W2WNet*) able, on the one hand, to deal with both *measurement* and *labelling* noise and, on the other hand, to provide a statistically sound measure of prediction confidence at inference time. Our methodology follows in the footsteps of the earlier work by [kohler2019uncertainty], where the authors exploit uncertainty measures retrieved by MC dropout to identify and remove mislabelled samples. Nevertheless, our approach differs substantially from [kohler2019uncertainty] in the following respects: (i) we tackle both *measurement* and *labelling* noise in parallel; (ii) we propose an end-to-end framework, embedded into a single CNN model; (iii) we provide a pruning strategy for the spurious samples that is fully automated and adaptive to the given application; (iv) we exploit prediction uncertainty in two different ways: first, to model, recognize and remove the spurious samples from the training set; second, to broadcast information on the prediction confidence, which is exploited to make CNNs robust to noisy inputs at inference time.

3 Methods
----------

As represented in Figure [1](#S3.F1), our architecture includes two main modules:
1. the Wise, which serves a two-fold aim: on the one hand, to provide a reliable measure of the predictive uncertainty associated with each sample (Figure [1](#S3.F1)(a)); on the other hand, to model the distribution of the spurious samples so that they can be removed from the training dataset (Figure [1](#S3.F1)(c));
2. the Wiped, which is the expert system trained on the cleansed dataset and in charge of the actual classification phase (Figure [1](#S3.F1)(b)).

![Overview of the training phase of the proposed architecture.](https://media.arxiv-vanity.com/render-output/7815496/figures/w2w-crop.pdf)

Figure 1: Overview of the training phase of the proposed architecture.

### 3.1 The Wise: uncertainty estimation

As already mentioned, the *Wise* must be a noise-aware model, able to associate a corresponding uncertainty measure with each prediction. Recent trends in deep learning show a growing body of literature on uncertainty estimation for predictive classification models [lakshminarayanan2017simple, gal2016dropout, kohler2019uncertainty]. With specific regard to CNNs, the canonical softmax score is often erroneously regarded as a measure of prediction confidence, that is: the lower the output of the softmax, the higher the uncertainty of the corresponding prediction. Nonetheless, it has been shown that this is not true, as the softmax merely acts as a normalization [gal2016dropout, hendrycks2016baseline]. As a consequence, a traditional CNN might provide confident (yet wrong) predictions even on samples that are completely unrelated to what it was specifically trained for.

The most established way to incorporate uncertainty estimation into a CNN leverages the Bayesian formalism [gal2016dropout, kwon2020uncertainty]. From a Bayesian perspective, individual parameter values (i.e. the weights of the network) are replaced with prior distributions. Hence, the learning strategy is conceived as a probabilistic optimization problem, in which the posterior distribution over the parameters given the training data is computed. As a consequence, the output of the model is also a posterior predictive distribution of values, from which a statistic can be derived to serve as an uncertainty measure. Formally, the weights ω of a CNN are treated as random variables and, assuming the CNN to be exhaustively described by its weights ω, we can write the predictive distribution for a new input x∗ as [gal2016dropout, kwon2020uncertainty]:

$$
p(y^{\ast}\mid x^{\ast},X,Y)=\int_{\Omega}p(y^{\ast}\mid x^{\ast},\omega)\,p(\omega\mid X,Y)\,d\omega. \qquad (1)
$$

Since the term p(ω|X,Y), integrated over the whole parameter space Ω, makes the predictive posterior of a CNN analytically and numerically intractable [lakshminarayanan2017simple, kwon2020uncertainty], a variety of approximations have been proposed, including the Laplace approximation [laplaceapprox], Markov chain Monte Carlo (MCMC) methods [MCMC] and variational Bayesian methods [variational1, variational2].
Nevertheless, the reliability of the uncertainty measure derived from these approximation strategies strictly depends on two factors: (i) the quality of the approximation, which is constrained by the computational requirements; (ii) the choice of the Bayesian prior, which can ultimately lead to biased predictive uncertainties [lakshminarayanan2017simple]. In practical terms, Bayesian CNNs (BCNNs) are cumbersome to implement and hard to train, as they require a specific training pipeline handling a very large number of hyper-parameters, on top of the high computational cost of the approximation technique [lakshminarayanan2017simple]. An interesting insight by Gal and Ghahramani [gal2016dropout] suggested estimating predictive uncertainty with Monte Carlo dropout (MC dropout), which is based on using Dropout [srivastava2014dropout] at inference time. Since different neurons are randomly dropped across different model calls, the MC dropout method implements Bayesian sampling from a variational distribution of models. In other words, MC dropout can be seen as an ensemble methodology, where the predictions are averaged over an ensemble of CNNs sharing the same parameters. In such a setting, estimating the model uncertainty for a given sample is as simple as keeping the dropout mechanism switched on at inference time and performing multiple predictions for the same input [lakshminarayanan2017simple]. By using MC dropout, we can then rewrite equation ([1](#S3.E1)) with the following approximation:

$$
p(y^{\ast}\mid x^{\ast},X,Y)\approx\int_{\Omega}p(y^{\ast}\mid x^{\ast},\omega)\,q(\omega)\,d\omega\approx\frac{1}{T}\sum_{t=1}^{T}p(y^{\ast}\mid x^{\ast},\hat{\omega}_{t}). \qquad (2)
$$

Thanks to variational inference [gal2016dropout, rkaczkowski2019ara], we can approximate the posterior distribution p(ω|X,Y) in ([1](#S3.E1)) with a variational one, q(ω). Hence, by means of MC dropout, we sample $\hat{\omega}_{t}\sim q(\omega)$, where each $\hat{\omega}_{t}$ is the weight configuration resulting from one variational dropout call.

Starting from the above considerations, our Wise module (see Figure [1](#S3.F1)(a)) was implemented as a BCNN leveraging MC dropout. As anticipated in the previous section, the initial task of the Wise is to provide an uncertainty measure for the input samples, on top of which the model can distinguish the spurious samples from the meaningful ones. Downstream of the uncertainty estimation, the Wise is able to: (i) identify and remove the spurious samples, thus providing a *cleansed* dataset to train the Wiped; (ii) associate a confidence measure with the outcome of the *Wiped*'s classification, which can be exploited to express the reliability of the model's prediction on a given input. To build our BCNN-based Wise, we implement equation ([2](#S3.E2)) through a DenseNet121 model [densenet], inserting a dropout layer with rate 0.3 after each convolutional, pooling and fully-connected layer.
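For illustration purposes, the following minimal PyTorch sketch shows how the T stochastic forward passes of equation ([2](#S3.E2)) can be collected by simply keeping the dropout layers active at inference time. The snippet is purely illustrative: the function and variable names do not correspond to our actual implementation, and the model is assumed to already contain the 0.3-rate dropout layers described above.

```python
import torch
import torch.nn.functional as F

def mc_dropout_predict(model: torch.nn.Module, x: torch.Tensor, T: int = 100) -> torch.Tensor:
    """Collect T stochastic softmax outputs, i.e. the Monte Carlo samples of Eq. (2)."""
    model.eval()
    # Switch only the dropout layers back to training mode, so that every
    # forward pass samples a different thinned network (a draw from q(w)).
    for module in model.modules():
        if module.__class__.__name__.startswith("Dropout"):
            module.train()
    with torch.no_grad():
        probs = torch.stack([F.softmax(model(x), dim=1) for _ in range(T)])
    return probs  # shape: (T, batch_size, n_classes)
```

The Monte Carlo average of equation ([2](#S3.E2)) is then simply the mean of these T outputs, while the same samples feed the uncertainty measure of equation (3) introduced below.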
DenseNet-based architectures connect all layers directly with each other: each layer obtains additional inputs from all preceding layers and passes its own feature maps on to all downstream ones [densenet]. By exploiting this *feature reuse* paradigm, DenseNets typically offer excellent classification capabilities with a reduced number of parameters. As it was recently observed that models with fewer parameters are generally more resilient to image degradation [roy2018effects], we chose DenseNet121 as the best trade-off between classification performance and model compactness. Nonetheless, our *Wise* module can easily be adapted to any other state-of-the-art CNN architecture, by simply exploiting MC dropout instead of the plain softmax output. Before being fed to our model, which is randomly initialized, samples are pre-processed by zero-centered normalization. The Wise is trained with Stochastic Gradient Descent (SGD), setting the weight decay to 0.001. The number of training epochs of the Wise model is a key parameter, which is self-optimized as explained in the next section.

Finally, we need to define a statistically sound measure of uncertainty. To do so, we adopted the methodology proposed by Kwon and colleagues [kwon2020uncertainty]: starting from ([2](#S3.E2)), the predictive uncertainty of a BCNN may be computed as the sum of the predictive variances of each class [19]. This predictive variance can be further decomposed into an aleatoric component, which represents the intrinsic noise in the samples, and an epistemic component, which stems from the parameters and the architecture of the model:

$$
\underbrace{\frac{1}{T}\sum_{t=1}^{T}\left[\operatorname{diag}(\hat{p}_{t})-\hat{p}_{t}^{\otimes 2}\right]}_{\text{aleatoric}}+\underbrace{\frac{1}{T}\sum_{t=1}^{T}(\hat{p}_{t}-\bar{p})^{\otimes 2}}_{\text{epistemic}} \qquad (3)
$$

Here $\bar{p}=\frac{1}{T}\sum_{t=1}^{T}\hat{p}_{t}$, $\hat{p}_{t}=\mathrm{Softmax}\big(f(\hat{\omega}_{t},x^{\ast})\big)$, and $T$ is the number of forward passes for input $x^{\ast}$. T has been empirically set to 100 as the best trade-off between computational time and reliability, as stated in [ponzio].

### 3.2 The Wise: modelling of spurious samples distribution

The aforementioned uncertainty measure provides a way to distinguish between spurious and meaningful samples. The Wise has a two-fold functionality. On the one hand, during the training phase (see Figure [1](#S3.F1)), it should identify an epoch e_j at which the uncertainty of the spurious samples is significantly higher than that of the meaningful samples. Hence, the Wise's training should proceed until (i) the separation between high-uncertainty (i.e. spurious) and low-uncertainty (i.e. meaningful) samples is large enough, and (ii) this separation is sufficiently stable over the training epochs. On the other hand, the Wise must identify an uncertainty threshold UTh (see Figure [1](#S3.F1)(c)) that will be exploited at inference time to broadcast information on the level of confidence of the final prediction (see Figure [2](#S3.F2)).
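Before detailing the epoch-wise procedure, the following sketch summarizes how the per-sample uncertainty of equation ([3](#S3.E3)) and the cluster-based threshold UTh can be obtained from the T stochastic softmax outputs. The snippet is purely illustrative: NumPy and scikit-learn are used for brevity, and the helper names do not refer to our actual implementation.

```python
import numpy as np
from sklearn.cluster import KMeans

def predictive_uncertainty(probs: np.ndarray) -> np.ndarray:
    """Per-sample uncertainty of Eq. (3): sum over the classes of the aleatoric
    and epistemic predictive variances. probs has shape (T, N, C)."""
    p_bar = probs.mean(axis=0)                        # (N, C): Monte Carlo average
    aleatoric = (probs * (1.0 - probs)).mean(axis=0)  # diagonal of diag(p_t) - p_t p_t^T
    epistemic = ((probs - p_bar) ** 2).mean(axis=0)   # diagonal of (p_t - p_bar)(p_t - p_bar)^T
    return (aleatoric + epistemic).sum(axis=1)        # (N,): scalar uncertainty per sample

def spurious_threshold(uncertainties: np.ndarray) -> float:
    """2-means clustering of the training uncertainties: the centroid of the
    high-uncertainty (spurious) cluster is used as the threshold UTh."""
    kmeans = KMeans(n_clusters=2, n_init=10).fit(uncertainties.reshape(-1, 1))
    return float(kmeans.cluster_centers_.max())
```

In our architecture this clustering is re-computed at every training epoch, as detailed next.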
To pursue the stated goals, for a generic j-th training epoch the learning proceeds as follows:

1. The Wise computes a classification uncertainty value for each training sample, by means of equation ([3](#S3.E3)); thus, given N training samples, we obtain a vector of N uncertainty values, referred to as u_j in Figure [1](#S3.F1)(c);
2. The vector u_j is given as input to a K-means clustering with K=2, where the low-uncertainty and the high-uncertainty clusters should gather the meaningful and the spurious samples, respectively. Then, the difference between the two clusters' sizes is computed and normalized by the total number of training samples. Hence, after j training epochs, we obtain a signal δ made of j such values, whose evolution over time can be exploited to estimate the stability of the clustering at the given epoch. That is, the more stable δ is over the epochs, the fewer the samples that are re-assigned to a different cluster, and hence the more stable the clustering;
3. At this stage, we need a quantitative stability criterion to stop the *Wise*'s training. First, δ is low-pass filtered via a median filter with a window size of 11. Second, the standard deviation is computed over a sliding window of size 40 with a stride of 1, obtaining the signal referred to as std(Δ) at the bottom of Figure [1](#S3.F1)(c). To decide on the stability of the clustering at epoch e_j, and hence on whether to stop the training, a threshold STDTh, set to 0.01, is imposed on std(Δ) (see Figure [1](#S3.F1)(a)). In other words, we stop the training of the Wise if more than 99% of the training samples have been stably assigned to the same cluster for 40 consecutive epochs.

At inference time, the centroid of the spurious cluster will be exploited as an uncertainty threshold, referred to as UTh, in order to identify the samples on which the model's prediction is not sufficiently confident.

### 3.3 The Wise & the Wiped: classification

While providing a framework to estimate prediction uncertainty, standard BCNNs are often less accurate than their deterministic counterparts at inference time [shridhar2019comprehensive, ponzio]. To address this issue, as can be gathered from Figure [2](#S3.F2), in our model both the Wise and the Wiped take part in the inference phase. Given a classification task involving C classes and a generic test sample x∗, the Wise initially computes the corresponding uncertainty u∗ through equation ([3](#S3.E3)). Then, u∗ is compared with the threshold UTh, flagging the prediction on x∗ as either confident or not confident.
Besides this first categorization, the Wiped also assigns a classification label in the range [1, C] to x∗. Under the hood, the Wiped module is a canonical deterministic DenseNet121 model, trained only on the meaningful samples pre-identified by the *Wise*. The training procedure is the same as described in Section [3.1](#S3.SS1), with the only difference that the number of epochs is fixed at 100.

![Overview of the inference phase of the proposed architecture.](https://media.arxiv-vanity.com/render-output/7815496/figures/w2w_test-crop.pdf)

Figure 2: Overview of the inference phase of the proposed architecture.

4 Experimental Results
-----------------------

In this Section we present the experimental validation of our W2WNet. So far, there is no agreed-upon benchmark protocol for evaluating how learning methods handle *measurement* and *labelling* noise. Therefore, we started from two well-known public datasets, MNIST [mnist] and CIFAR10 [cifar], and artificially corrupted them in a controlled way. By doing so, we tried to replicate different types of real-world noisy scenarios:

1. Labelling noise (labels from a different classification task). In text processing, handwritten character classification is a typical mainstream task for CNNs. The MNIST dataset, which consists of 60000 black-and-white images of handwritten digits (0 to 9), was corrupted by adding a controlled percentage of alien samples randomly extracted from the EMNIST dataset [emnist], which contains handwritten alphabetic characters. Hence, the resulting corrupted dataset, referred to as *Sp-MNIST*, contains both digits (which are still the majority of the images) and letters, all with a white foreground and a black background. By doing so, we simulate a real-world situation where, due to text parsing errors, a pre-processing pipeline may feed spurious samples to a downstream classifier that was specifically trained for digit classification. This scenario is representative of any other instance of data corruption where the spurious samples share the same characteristics as the meaningful ones in terms of color range and encoding, but belong to a different classification task (in this case, letters rather than digits).
2. Labelling and measurement noise (labels from the same classification task). As anticipated in Section [1](#S1), in natural image classification datasets may be corrupted by both labelling and measurement noise. Mislabelling may occur due to errors during the automatic collection of a large number of annotations from the Internet (for example, by extracting tags from the surrounding text or keywords from search engines). On the other hand, measurement errors can always occur because of problems with the acquisition and storage of the images. To simulate such scenarios, we exploited the CIFAR10 dataset, which consists of 50000 32x32 RGB images of 10 classes of natural objects. As regards labelling, the dataset was artificially corrupted with two different types of noise patterns: symmetric and pair. In the former, original labels are randomly flipped to another label. In the latter, labels are systematically flipped to the subsequent one.
Both patterns are well known in the literature, as they occur in several image classification tasks [kohler2019uncertainty]. As regards measurement noise, we picked a random pool of images from CIFAR10 and applied three different types of transformations: (i) blurring, via a median filter with kernel size 11; (ii) random cropping; (iii) random scaling. These types of image degradation are also widely reported in the literature and known to be troublesome for CNN training in many classification tasks [dodge2016understanding]. As a result of our artificial corruptions, in the final dataset, referred to as *Sp-CIFAR10*, a known subset of images is either given a wrong label (which, differently from the previous case, belongs to the same classification task as the original dataset) or altered in terms of image definition, scale and dynamic range.

To push the capabilities of our methodology to its limits, for both of the above-mentioned settings we introduced increasing amounts of spurious samples (respectively 10, 20 and 30% of the size of the original dataset). A full characterization of the obtained validation datasets is reported in Table [1](#S4.T1). In this table, each dataset is referred to as Sp−name−N, where the Sp prefix indicates the presence of spurious samples, name is the acronym of the original dataset and N is the percentage of spurious samples with respect to the total size of the corresponding original dataset.

| Dataset | Train (Meaningful) | Train (Spurious) | Test (Meaningful) | Test (Spurious) |
| --- | --- | --- | --- | --- |
| MNIST | 60000 | - | 10000 | - |
| Sp−MNIST−10 | 60000 | 6000 | 10000 | 1000 |
| Sp−MNIST−20 | 60000 | 12000 | 10000 | 2000 |
| Sp−MNIST−30 | 60000 | 18000 | 10000 | 3000 |
| CIFAR10 | 50000 | - | 10000 | - |
| Sp−CIFAR10−10 | 50000 | 5000 | 10000 | 1000 |
| Sp−CIFAR10−20 | 50000 | 10000 | 10000 | 2000 |
| Sp−CIFAR10−30 | 50000 | 15000 | 10000 | 3000 |

Table 1: Validation benchmarks: number of images.

### 4.1 Data cleansing capability

As a matter of principle, our *W2WNet* should satisfy three requirements: (i) if spurious samples are present, it should remove as many of them as possible (i.e. high sensitivity); (ii) while removing spurious samples, it should remove as few meaningful ones as possible, as they might be essential for the training of the model (i.e. high specificity); (iii) it should be able to handle datasets that do not contain any spurious samples (the ideal case) and leave them untouched. To assess all of these specifications, we trained and tested our *W2WNet* both on the corrupted datasets (i.e. the ones with the Sp prefix in Table [1](#S4.T1)) and on the corresponding original ones, in the exact configuration of their reference papers. In Figure [3](#S4.F3) we show the results of our experiments. Bars show the average number of images removed per dataset, separately for the training and for the test phase. In the former case, *removed* means that the model tagged the images as spurious and hence excluded them from the training set.
In the latter case, *removed* means that the trained model tagged the images as spurious at inference time, by providing a low-confidence prediction.

![W2WNet removal rates in the validation datasets. Error bars represent the standard deviation of values among different classes.](https://media.arxiv-vanity.com/render-output/7815496/figures/removal_rates-crop.pdf)

Figure 3: *W2WNet* removal rates in the validation datasets. Error bars represent the standard deviation of values among different classes.

The first plot of Figure [3](#S4.F3) reports the percentage of spurious samples that were correctly identified and removed from the corrupted datasets (i.e. the sensitivity of the model). The last two plots report the number of meaningful samples mistakenly tagged as spurious, respectively on the corrupted datasets and on the original ones. As mentioned earlier, the lower these numbers, the higher the specificity of the model.

As can be gathered from the first plot, *W2WNet* was able to remove at least 30% and at best 70% of the spurious images, considering both the training and the test sets. Apart from the training of Sp−CIFAR10, where a decreasing trend of the bars is visible, the performance was quite stable as the number of spurious samples in the datasets increased. The relation between the sensitivity on the training and on the test sets was different for the two applications: higher on the training than on the test set for the Sp−CIFAR10 datasets, and the opposite for the Sp−MNIST ones. As can be gathered from the second plot, *W2WNet* proved to be reasonably specific on the corrupted datasets, removing at most 17% of the meaningful samples in the worst case (Sp−CIFAR10−30) and almost 0% in the best case (Sp−MNIST). Finally, looking at the last plot, the percentage of meaningful images that were on average mistaken for spurious in the original datasets was 5% and 10%, respectively, in MNIST and CIFAR10. A more thorough analysis revealed that in both cases these samples are very ambiguous images, which a human observer can hardly ascribe to any of the training categories (see Figure [4](#S4.F4)). Hence, we believe that tagging such images as spurious is entirely reasonable and, more importantly, it does not have a negative impact on the training, as will be shown later on. Overall, *W2WNet* is reasonably sensitive and specific in the identification of spurious samples, and the reliability of the uncertainty measure associated with the final prediction is supported by our results.

![Examples of images tagged as spurious.](https://media.arxiv-vanity.com/render-output/7815496/figures/difficult_images-crop.pdf)

Figure 4: Examples of images tagged as *spurious*, respectively from the MNIST (a) and CIFAR10 (b) datasets.

### 4.2 Classification performance

Finally, to assess the effectiveness of our solution in terms of its positive impact on classification performance, we compared *W2WNet* against a canonical deterministic counterpart on all the datasets reported in Table [1](#S4.T1).
For this purpose we exploited a deterministic DenseNet121 model, as it is also the backbone of our *W2WNet* architecture and hence fully equivalent to our model in terms of depth and classification potential. For the training of the deterministic CNNs, we followed the same procedure described in Section [3](#S3), with the only difference that the MC dropout rate was set to zero. The learning rate was set to 0.1 and 0.01, respectively, for the datasets derived from MNIST and CIFAR10.

As anticipated in Section [1](#S1), to the best of our knowledge there is no published literature on deep learning methods addressing coexisting *measurement* and *labelling* noise. Nonetheless, to better contextualize our validation, besides our approach and its deterministic counterpart we also provide results obtained by representative algorithms addressing either *measurement* or *labelling* noise. For the former category, we tested the methodology by Roy and colleagues [roy2018effects], which leverages a non-trainable, low-pass-filter-like CNN layer to reduce the impact of image degradation on classification performance. For the latter, we implemented the work by Kohler et al., in the configuration consisting of a single MC dropout-based classifier with 25 forward passes [kohler2019uncertainty]. For a fair comparison, both methods were implemented using a DenseNet121 model as the backbone.

The results of our experiments are reported in Figure [5](#S4.F5), where we show the mean classification accuracy obtained by the four models (our *W2WNet*, a deterministic DenseNet121, and the two approaches from the literature). As can be observed from the plot, for all approaches the mean classification accuracy decreases as the number of spurious samples in the dataset increases (from 10 to 30%, see also Table [1](#S4.T1)). This is consistent with the previous literature [kohler2019uncertainty]. When considering the corrupted datasets, our *W2WNet* outperforms the deterministic DenseNet121 by between 5% and 10%. In addition, *W2WNet* outperforms both baseline literature solutions, which behave similarly to DenseNet121. This is not surprising, as both methods are specifically tailored to a single type of noise. By a smaller margin, the accuracy of our *W2WNet* was also the highest on the non-corrupted datasets.

![Mean accuracy of W2WNet compared with representative works from the literature.](https://media.arxiv-vanity.com/render-output/7815496/figures/comparison_accuracies-crop.pdf)

Figure 5: Mean accuracy of *W2WNet* compared with representative works from the literature. Error bars represent the standard deviation of values among different classes.

### 4.3 Real-world case study: histological image classification

Histological image analysis is the gold standard for the diagnosis and grading of a large number of cancers [ponzio2019dealing].
Typically, when there is a suspicion of cancer, the patient undergoes a biopsy, in which a thin layer of tissue is resected, fixed on a slide, and stained (for example, with Hematoxylin and Eosin). Then, the pathologist analyzes the slide under the microscope looking for malignancies, which commonly cause alterations of the normal tissue architecture. The recent diffusion of digital scanners has imposed the transition from standard histological slides to very large, born-digital, multi-resolution images called Whole-Slide Images (WSIs, see Figure [6](#S4.F6)(a)), whose typical size may be 100,000×100,000 pixels. This is rapidly changing the workflow of clinical laboratories [wsi]: the traditional visual evaluation of the samples directly under the microscope is progressively shifting to Computer-Aided Diagnosis (CAD) systems, encouraging a complete automation of the downstream image analysis.

Recently, researchers have shown an increased interest in applying DL techniques (most often based on CNNs) to the automated assessment of WSIs. Nonetheless, obtaining good-quality training sets for the CNNs is an extremely cumbersome task, involving a number of steps: (i) manually dividing each WSI into regions of interest (ROIs), which should be homogeneous in terms of tissue architecture; (ii) manually labelling the ROIs based on the tissue category (e.g. cancer vs. no-cancer, see Figure [6](#S4.F6)(b)); (iii) cropping the ROIs into a regular grid of small tiles, which can be fed into a CNN together with their corresponding label (the same as that of the originating ROI, Figure [6](#S4.F6)(c)). Due to image artifacts, imprecision in the ROI delineation, or non-homogeneous content of the ROIs, the outcome of this procedure is typically a dataset that may contain a large number of spurious tiles: that is, a significant number of tiles may have content that is either too blurred (measurement noise) or unrelated to the label they were associated with (labelling noise), and thus potentially harmful for the training of the CNN. For example, in Figure [6](#S4.F6)(e), a number of tiles labelled as *cancer* contain a prevalence of background glass, which is obviously not meaningful to the *cancer* category. This makes it a significant case study for the exploitation of our *W2WNet*.

![Generation of a digital pathology dataset to train CNNs: typical automated procedure. (a) Whole Slide Image (WSI). (b) Identification and labelling of homogeneous Regions of Interest (ROIs). (c) Cropping ROIs into small tiles, which are all given the same label as the originating ROI.
(d) Meaningful tiles. (e) Spurious tiles (that is, tiles whose content is not fully representative of the given label).](https://media.arxiv-vanity.com/render-output/7815496/figures/roi_cropping-crop.pdf)

Figure 6: Generation of a digital pathology dataset to train CNNs: typical automated procedure. (a) Whole Slide Image (WSI). (b) Identification and labelling of homogeneous Regions of Interest (ROIs). (c) Cropping ROIs into small tiles, which are all given the same label as the originating ROI. (d) Meaningful tiles. (e) Spurious tiles (that is, tiles whose content is not fully representative of the given label).

More specifically, in our experiments we refer to the same case study described in our earlier work [ponzio], focused on Colorectal Cancer (CRC) categorization. In this case, there are three classes of interest: (i) Adenocarcinoma (AC), corresponding to recognizable CRC; (ii) Tubulovillous adenoma (AD), a precursor lesion of CRC; and (iii) Healthy tissue (H). As detailed in [ponzio], downstream of the automated ROI cropping and labelling procedure represented in Figure [6](#S4.F6), a total of 19644 non-overlapping annotated tiles was obtained from 27 different WSIs. After an ad-hoc re-examination of the tiles by a pathologist, 6144 of them were tagged as spurious, as their prevailing content (either blood vessels, adipose cells, background glass or stroma, see Figure [6](#S4.F6)(e)) was not deemed meaningful to any of the three classes of interest. For training and testing purposes, the initial cohort of 27 WSIs was randomly split into two disjoint subsets (18 for training and 9 for testing), roughly balanced with respect to the classes, and then fed into our *W2WNet* for data cleansing and classification.

The results of our experiments are shown in Figure [7](#S4.F7). As is visible from the plot on the left, our framework was able to identify 55% and 58% of the spurious samples in the training and test sets, respectively. The impact on classification is shown in the right plot, where we compare the mean classification accuracy of *W2WNet* with that obtained by its deterministic counterpart, a state-of-the-art DenseNet121 CNN trained from scratch with the SGD optimizer and a learning rate of 0.0001. In this case as well, the accuracy of our proposed solution was higher, by about 9% on average on the test set.

![Removal rate of spurious samples with W2WNet (left) and classification accuracy comparison of W2WNet and DenseNet (right) on the CRC dataset.](https://media.arxiv-vanity.com/render-output/7815496/figures/CRC-crop.pdf)

Figure 7: Removal rate of spurious samples with *W2WNet* (left) and classification accuracy comparison of *W2WNet* and DenseNet (right) on the CRC dataset. Error bars represent the standard deviation of values among different classes.

5 Conclusions
--------------

Unfortunately, *measurement* and *labelling* noise are unavoidable in many real-world applications of CNNs.
On the one hand, the training phase of a CNN may be affected by many types of image degradation, due to problems of acquisition, encoding or storage, and by mislabelling, due to faults in the manual annotation or in the automated labelling systems. On the other hand, at inference time, a CNN that was trained on a good-quality dataset may be fed low-quality images that are completely unrelated to the ones the model was trained on. In such cases, a standard CNN is neither able to provide a correct prediction, nor to communicate its inability to provide a reliable answer.

To address these issues, in this paper we proposed *W2WNet*, a CNN architecture exploiting Bayesian probabilistic inference to i) identify the distribution of spurious samples in a dataset that may be affected by both measurement and labelling noise; ii) clean the training dataset of the spurious samples and focus the learning strategy on the meaningful ones only; iii) at inference time, provide a statistically well-founded measure of prediction confidence on new inputs, clearly identifying the ones on which the network is too uncertain.

Our experiments on the MNIST and CIFAR10 datasets, artificially corrupted with a controlled number of spurious samples, have shown that *W2WNet* copes well with measurement and labelling noise, both in terms of sensitivity and specificity in the identification of the spurious samples. As a result, *W2WNet* improves on the classification accuracy of a DenseNet121 CNN, the deterministic counterpart of our classifier, as well as on that of state-of-the-art methods tailored to one specific type of noise. On top of that, we found that *W2WNet* outperformed the other techniques even in the classification of the non-corrupted datasets (i.e. the original MNIST and CIFAR10), thanks to its ability to discard a limited number of ambiguous images from such datasets.

Finally, we evaluated *W2WNet* on a real-world case study from medical image analysis, namely the classification of histological samples from WSIs. In this case as well, *W2WNet* was able to handle the presence of numerous spurious samples, generated by a typical dataset-generation pipeline in digital pathology [ponzio], and to improve on the performance of DenseNet121. In conclusion, we believe that our findings have important implications for the proficient exploitation of DL models in many real-world settings, where image-quality and labelling issues typically challenge the use of classic CNN architectures, both during the training and the inference phase.