url
stringlengths
52
124
post_id
stringlengths
17
17
title
stringlengths
2
248
author
stringlengths
2
49
content
stringlengths
22
295k
date
stringclasses
376 values
https://www.lesswrong.com/posts/ZuKfs5Ank2zbgCcPG/a-narrative-history-of-environmentalism-s-partisanship
ZuKfs5Ank2zbgCcPG
A Narrative History of Environmentalism's Partisanship
jeffrey-heninger
This is the second in a sequence of four posts taken from my recent report: Why Did Environmentalism Become Partisan? Many of the specific claims made here are investigated in the full report. If you want to know more about how fossil fuel companies’ campaign contributions, the partisan lean of academia, or newspapers’ reporting on climate change have changed since 1980, the information is there. Introduction Environmentalism in the United States today is unusually partisan, compared to other issues, countries, or even the United States in the 1980s. This contingency suggests that the explanation centers on the choices of individual decision makers, not on broad structural or ideological factors that would be consistent across many countries and times. This post describes the history of how particular partisan alliances were made involving the environmental movement between 1980 and 2008. Since individual decisions are central to understanding why this happened, this history is best presented as a narrative following the key people and organizations. Environmentalism in the Reagan Era In the wake of the New Deal, the Republican Party acquiesced to the government having a larger role in society than it had had before the Great Depression.[1] Republican presidents would sometimes support increased government spending and regulation. This is apparent in environmental policy: Nixon was involved in several major pieces of environmental legislation and created the EPA in the executive branch. The election of Reagan in 1980 reoriented the Republican Party. It would now advocate for a smaller government: less (non-military) spending, lower taxes, and less regulation. The free market would provide many of the services that had previously been done by the government. Thatcher’s election as Prime Minister had similar results for the Conservative Party in the UK. This might seem like it would cause a deep ideological conflict: Environmentalists advocated for regulations on private enterprise and international cooperation on policy, while the Republican Party preferred private and local action. However, this is not what we observe with either Reagan or Thatcher. The conservative leaders in the US and UK supported environmentalism, even when it involved international regulations. The clearest example of this is the Montreal Protocol on Substances That Deplete the Ozone Layer in 1988. Reagan described it as: The Montreal protocol is a model of cooperation. It is a product of the recognition and international consensus that ozone depletion is a global problem, both in terms of its causes and its effects. The protocol is the result of an extraordinary process of scientific study, negotiations among representatives of the business and environmental communities, and international diplomacy. It is a monumental achievement.[2] The US Senate ratified the Montreal Protocol unanimously.[3] Reagan and Thatcher also specifically supported international regulation to combat climate change. Thatcher was the first head of government to talk about climate change at the UN, in 1989, and called for an international conference on climate change in 1992 (The Earth Summit in Rio de Janeiro).[4] The International Panel on Climate Change (IPCC) creates the reports summarizing the scientific consensus about climate change, and its “principal architect was the conservative Reagan administration.”[5] In 1992, the U.N. Framework Convention of Climate Change (UNFCCC), the result of the Rio Summit, had the support of the Bush Sr. administration. The U.S. Senate decided that it was popular enough to not need a roll-call vote.[6] These actions of conservative leaders also translated into broad popular support, including among Republicans. The 1980s saw increasing concern about the environment and a decreasing partisan gap. Republicans only became anti-environmentalist after 1990. Anti-environmentalism is not the natural consequence of the small government ideology of Reagan and Thatcher. It only entered the US Republican Party a decade later, and the UK Conservative Party has continued to support environmentalism. Environmentalists, Climate Scientists, & Democratic Politicians The earliest partisan alliances involving climate change began in Congressional hearings in the 1980s. The Reagan administration entered office promising to reverse most of the energy policies of the Carter administration and dramatically shrink the Department of Energy, which had just been created in 1977.[7] One of the programs cut was a newly established center for climate research.[8] As an undergraduate student, Al Gore had taken classes in climate science from Roger Revelle, one of the first people to study global warming. In the House of Representatives, Gore led Congressional hearings against these particular cuts (which were partially reversed), and continued to be very involved whenever climate was an issue in Washington.[9] Climate policy at this time was still bipartisan, and the Reagan administration was open to government action on climate. The environmental movement also became increasingly interested in climate change in the late 1980s, particularly after the summer of 1988. This summer saw severe drought across most of the U.S., low enough water in the Mississippi River to hinder barge traffic, heat waves, major fires in Yellowstone, and a Category 5 hurricane in the fall. In and after a Congressional hearing on the climate, James Hansen of NASA claimed that he was “99 percent certain” that “the greenhouse effect is here.”[10] Many other climate scientists did not believe that the evidence was that strong yet and disliked his combative tone. A few publicly rebuked him.[11] Focusing on climate change provided a way to unify the disparate concerns of the environmental movement, including air & water pollution, habitat conservation, recycling, and energy production. The historian of science Spencer Weart describes this transition as: The environmental movement, which had found only occasional interest in global warming, now took it up as a main cause. Groups that had other reasons for preserving tropical forests, promoting energy conservation, slowing population growth, or reducing air pollution could make common cause as they offered their various ways to reduce emissions of CO2.[12][13] An unusually explicit statement of this strategy comes from Senator Timothy Wirth (D-CO): What we've got to do in energy conservation is try to ride the global warming issue. Even if the theory of global warming is wrong, to have approached global warming as if it is real means energy conservation, so we will be doing the right thing anyway in terms of economic policy and environmental policy.[14] In the late 1980s and early 1990s, environmentalists became increasingly focused on climate science, and both environmentalists and climate scientists formed political alliances with Democratic politicians. The Clinton-Gore Administration and the BTU Tax In 1992, Bill Clinton selected Al Gore as his vice presidential candidate and secured the endorsements of environmental organizations like the Sierra Club that had mostly stayed above the partisan fray. One of the Clinton’s administration’s early legislative goals was a tax on energy, measured in British thermal units (BTUs).[15] While this is sometimes remembered as an attempt at a carbon tax, it taxed energy rather than carbon dioxide. Solar, wind, and geothermal power production were exempted, but nuclear and hydroelectricity were not. This tax proved extremely unpopular. To shore up support, the Clinton administration agreed to more exemptions for particular industries, but this diminished what the bill hoped to accomplish, did not improve its popularity in Congress, and encouraged even more groups to request exemptions.[16] The broad-based BTU tax was abandoned, and replaced with a much weaker tax on gasoline. Congressmen who had supported the BTU tax suffered politically in the midterms. In 1994, Republicans won control of the House of Representatives for the first time in 40 years. The BTU tax was not the only significant issue: the NRA also organized against an assault rifle ban and Newt Gingrich innovated by using a national strategy instead of focusing on individual races. Opposing a climate policy was one thing that helped propel Republicans into power in Congress. Fossil Fuel Companies, Climate Skeptics, & Conservative Think Tanks The fossil fuel industry opposed government action on climate change. Significant reductions in greenhouse gas emissions would completely undermine their business model, forcing them to transition to a different industry (renewable energy) or dramatically lose market share. In 1989, a group of fossil fuel and manufacturing companies founded the Global Climate Coalition to oppose climate policy that they claimed would disrupt the American economy.[17] The Global Climate Coalition spent tens of millions of dollars in ad campaigns and contributions to politicians before it disbanded in 2001. At the time, it was not obvious that the industry lobbying would overwhelmingly favor Republicans. There had not previously been a strong tendency for fossil fuel companies to support Republicans. It is not too surprising that the industry lobby ended up favoring Republicans. Republicans were the more business-friendly party. There was already a small bias for campaign contributions in that direction and oil is more concentrated in Republican-leaning states. The Gulf War in 1990 might have associated the oil industry with the Republican Party, although the war proved broadly popular.[18] Academics, probably including climate scientists, were somewhat more likely to lean to the left, although not nearly as much as they do today. Congressional Republicans had been somewhat more likely to oppose environmental legislation than Congressional Democrats for several decades. But then there was an abrupt change in the early 1990s, and this difference dramatically increased. The Global Climate Coalition found willing allies among conservative think tanks. These think tanks would accept funding from the fossil fuel industry to hire skeptical climate scientists or experts from other fields who were skeptical of climate change. They would publish policy studies, newsletter articles, and press releases that cast doubt on conventional climate science and opposed climate policy proposals. These think tanks and skeptics were successful at reframing the climate debate and creating the “non-problematicity” of global warming among conservatives. The most common claims by conservative think tanks were that “the scientific evidence for global warming is highly uncertain” and “proposed action would harm the national economy.”[19] It is not clear to me whether the causal relationship mostly points from conservative think tanks to Republican congressmen or vice versa. The first climate skeptic publication by a major conservative think tank was in 1991, before Clinton & Gore were elected or Gingrich became Speaker of the House. However, there were initially only single digits of publications per year, mostly by a single think tank: the Marshall Institute. The publications did not become common or widespread until 1996-1997. Between 1991 and 1996, most conservative think tanks did not yet have a public position on climate change. Figure 1: Type and Year of Publication of Documents on Global Warming Circulated by Major Conservative Think Tanks. Reprinted from McCright & Dunlap (2000). After the Republican Party led by Gingrich won the Midterm elections in 1994, the number of Congressional hearings about climate change decreased. When there were hearings, Congress would invite similar numbers of conventional climate scientists and climate change skeptics.[20] Congress began to treat this as an active scientific debate and the media followed suit,[21] even though both had previously predominantly presented the scientific consensus. Figure 2: Natural Scientists' Testimonies Delivered Each Year by Climate Change Skeptics and Conventional Scientists in Congressional Hearings About Global Warming. The vertical axis is the percentage of testimonies and the number of testimonies is printed above each column. Reprinted from McCright & Dunlap (2003). Figure 3: Number of Global Warming-Related Articles Citing one of five “Elite” Climate Scientists and/or one of five Climate Change Skeptics as an Information Source. Reprinted from McCright & Dunlap (2003). Debates Over the Kyoto Protocol The Earth Summit in Rio de Janeiro in 1992 which had produced the UN Framework Convention on Climate Change (UNFCCC) also called for a future summit, in Kyoto, that would impose limits on countries’ greenhouse gas emissions. While the UNFCCC had gotten broad bipartisan support under the Bush Sr. administration, the politics of climate change had changed dramatically since then. The debate over the Kyoto Protocol would see the last major bipartisan actions on climate change and the beginning of substantial partisanship among the public. Before the summit, the US Senate unanimously passed the Byrd-Hagel Resolution, which declared that it would not support any treaty that imposed restrictions on developed countries (like the US) but not developing countries.[22] The summit was very contentious and the negotiations almost collapsed. On the last day, Vice President Gore flew to Kyoto to save the agreement. The resulting Kyoto Protocol did not impose any restrictions on the greenhouse gas emissions of developing countries. President Clinton signed the treaty, but did not even submit it to the Senate for consideration.[23] To win, it would need 2/3 of the Senate, who had just unanimously opposed it. I am uncertain whether this should be thought of more as a case where the Senate overconstrained the international negotiating position of the presidential administration or more as a case of the administration ignoring the advice of the Senate. Before and during the summit, the Clinton administration ran a media campaign to build public support for the resulting treaty. There was a massive increase in media coverage, most of which was aligned with conventional climate science. Conservative think tanks also dramatically increased their production of skeptical media. A pair of surveys conducted before and after this debate found that it did make people more aware of climate change as an issue. A majority of people believed that climate change was going to happen, was going to be bad, and that the government should limit air pollution to address it. The overall percentages of people who supported these positions did not change as a result of the debate. There were underlying shifts as strong Democrats came to increasingly support the administration’s policy and strong Republicans came to oppose it.[24] The Kyoto Protocol was a failed attempt at climate policy in the United States that directly increased partisanship. Continued Increases in Partisanship Partisanship continued to increase after the debate over the Kyoto Protocol. Al Gore ran for president in 2000. Although he was strongly associated with climate change by this point, multiple sources claimed that environmentalism was not a major issue in this election.[25] Bush Jr. became president instead, declared that the U.S. would not fulfill its obligations under the Kyoto Protocol, and reduced funding to climate science.[26] During this time, some of the structural factors that might have been contributing to rising partisanship ended. The Global Climate Coalition disbanded in 2001. Some of the companies which had been involved accepted climate change, while others continued to promote skepticism.  The mainstream media stopped presenting both sides of the debate in 2003-2004. Nevertheless, the partisan gap continued to grow. In 2006, Gore released a climate change documentary titled An Inconvenient Truth. This did not change overall public opinion.[27] Instead, partisanship continued to increase: more Democrats were becoming climate activists, while Republicans were becoming increasingly skeptical. There were still prominent Republicans who supported policies to counteract climate change. Governor Arnold Schwarzenegger introduced a cap-and-trade system for California,[28] while Senator John McCain co-sponsored a bill that would create a similar system for the country.[29] However, an increasing number of Republicans became increasingly opposed to environmentalism, and the environmental movement became increasingly tied to the Democratic Party. Subsequent decisions by both parties, and the environmental movement itself, continued to contribute to rising partisanship on environmental issues in the United States. Conclusion Broad structural and ideological differences do not explain the partisanship of environmentalism. During the Reagan and Bush Sr. administrations, the Republican Party did support environmentalism, including international agreements on climate change, despite its small-government orientation on most issues. The Republican Party did not significantly change its ideology between the 1980s and 2000s. The subsequent partisanship in environmentalism then cannot be explained by foundational ideological differences between Democrats and Republicans. Instead, the explanation involves a history of alliances made by particular decision makers. The first alliance made was between environmentalists, climate scientists, and Congressional Democrats during Congressional hearings in the 1980s. The key figure here was Al Gore. Environmentalists seemed to accept the usefulness of this alliance and did not seriously try to find a similarly prominent or rising Republican politician to ally with as well. The second alliance made was between fossil fuel companies, climate skeptics, and conservative think tanks, starting around 1990. The industry organized into the Global Climate Coalition in 1989 and convinced the Marshall Institute to begin publishing climate skepticism in 1991. For a few more years, this alliance was not complete: most conservative think tanks were still neutral on climate change, and environmentalists might have been able to convince some of them to support their cause. When environmentalist-aligned Democrats were in political power in the 1990s, they made several policy proposals that were deeply flawed. A tax proposed in 1993 taxed energy produced, not carbon dioxide emitted, and included arbitrary exemptions. The Kyoto Protocol in 1997 contained terms that the Senate had previously rejected unanimously. These flawed policy proposals made it easier for Republicans like Gingrich or Bush Jr. to rally the public against them – and environmentalism more broadly. Subsequent decisions, on both sides of the aisle, continued to reinforce the trend towards increasing partisanship. This partisanship could have been avoided, if various decision makers had made different choices about what alliances to form or not form. Environmentalism is not partisan in many other countries, including in highly partisan countries like South Korea  or France. The resulting partisanship was bad for the environmental movement. As partisanship increased in the 1990s and early 2000s, environmentalism saw flat or falling support, fewer major legislative accomplishments, and fluctuating executive actions. ^ Prior to the Great Depression, there was less disagreement between the two parties about what the size of the government should be. The New Deal saw Democrats dramatically increase the size and role of the governments, which Republicans initially opposed. ^ Montreal Protocol on Substances that Deplete the Ozone Layer. U.S. Department of State. (Accessed April 17, 2024) https://www.state.gov/key-topics-office-of-environmental-quality-and-transboundary-issues/the-montreal-protocol-on-substances-that-deplete-the-ozone-layer/. ^ Montreal Protocol on Substances that Deplete the Ozone Layer. Senate Consideration of Treaty Document 100-10. (1988) https://www.congress.gov/treaty-document/100th-congress/10. ^ Margaret Thatcher. Speech to United Nations General Assembly (Global Environment). (1989) https://www.margaretthatcher.org/document/107817. ^ Spencer Weart. The Discovery of Global Warming. Government: The View from Washington. (Accessed Feb 2024) https://history.aip.org/climate/Govt.htm. ^ The UNFCCC was ratified using a division vote, in which Senators stand for “yea” and “nay” and the presiding officer counts the number of Senators standing for each. The result of the vote is not recorded other than whether it passed. Treaties require 2/3 support of the Senate to be ratified, so it had to have had significant bipartisan support. Typically, division votes and voice votes are used when the result of the vote is not in doubt beforehand. About Voting. U.S. Senate. (Accessed March 22, 2024) https://www.senate.gov/about/powers-procedures/voting.htm. United Nations Framework Convention on Climate Change. Senate Consideration of Treaty Document 102-38. (1992) https://www.congress.gov/treaty-document/102nd-congress/38. ^ Republican Party Platform of 1980. § Energy. https://www.presidency.ucsb.edu/documents/republican-party-platform-1980. ^ Climate Change in the 1970s. American Institute of Physics. (Accessed March 29, 2024) https://history.aip.org/history/exhibits/climate-change-in-the-70s/. Spencer Weart. The Discovery of Global Warming. Government: The View from Washington. (Accessed Feb 2024) https://history.aip.org/climate/Govt.htm. ^ Roger A. Pielke Jr. Policy history of the US Global Change Research Program: Part I. Administrative development. Global Environmental Change 10. (2000) p. 9-25. https://sciencepolicy.colorado.edu/admin/publication_files/2000.09.pdf. ^ Philip Shabecoff. Global Warming Has Begun, Expert Tells Senate. New York Times. (1988) https://www.nytimes.com/1988/06/24/us/global-warming-has-begun-expert-tells-senate.html. ^ Richard A. Kerr. Hansen vs. the World on the Greenhouse Threat. Science 244. (1989) https://www.science.org/doi/abs/10.1126/science.244.4908.1041. ^ Spencer Weart. The Discovery of Global Warming. The Public and Climate Change Since 1980. (Accessed Feb 2024) https://history.aip.org/climate/public2.htm. ^ Scott Alexander has also noticed this transition and described it as: It feels almost like some primitive barter system has been converted to a modern economy, with tons of CO2 emission as the universal interchangeable currency that can be used to put a number value on all environmental issues. Scott Alexander. What Happened To 90s Environmentalism? Slate Star Codex. (2019)  https://slatestarcodex.com/2019/01/01/what-happened-to-90s-environmentalism/. ^ Roger A. Pielke Jr., Roberta Klein, & Daniel Sarewitz. Turning the Big Knob: An Evaluation of the Use of Energy Policy to Modulate Future Climate Impacts. Energy and Environment, 11. (2000) p. 255-276. https://sciencepolicy.colorado.edu/about_us/meet_us/roger_pielke/knob/text.html. ^ Some history. Carbon Tax Center. (Accessed March 29, 2024) https://www.carbontax.org/some-history/. ^ Dawn Erlandson. The BTU Tax Experience: What Happened and Why It Happened. Pace Environmental Law Review 12.1. (1994) https://digitalcommons.pace.edu/cgi/viewcontent.cgi?article=1528&context=pelr. ^ Global Climate Coalition. Source Watch. (Accessed March 29, 2024) https://www.sourcewatch.org/index.php/Global_Climate_Coalition. GCC's position on the climate issue. Global Climate Coalition. (Archive: Feb 9, 1999) http://web.archive.org/web/19990209102342/http://www.globalclimate.org/MISSION.htm. ^ David W. Moore. Americans Believe U.S. Participation in Gulf War a Decade Ago Worthwhile. Gallup (2001) https://news.gallup.com/poll/1963/americans-believe-us-participation-gulf-war-decade-ago-worthwhile.aspx. ^ Aaron M. McCright & Riley E. Dunlap. Challenging Global Warming as a Social Problem: An Analysis of the Conservative Movement's Counter-Claims. Social Problems 47.4. (2000) p. 499-522. https://www.researchgate.net/publication/237371278_Challenging_Global_Warming_as_a_Social_Problem_An_Analysis_of_the_Conservative_Movement%27s_Counter-Claims. Peter J. Jacques, Riley E. Dunlap, & Mark Freeman. The organisation of denial: Conservative think tanks and environmental scepticism. Environmental Politics. (2008) p. 349-385. https://www.tandfonline.com/doi/full/10.1080/09644010802055576. ^ Aaron M. McCright & Riley E. Dunlap. Defeating Kyoto: The Conservative Movement's Impact on U.S. Climate Change Policy. Social Problems 50.3. (2003), p. 348-373. https://www.researchgate.net/publication/228594257_Defeating_Kyoto_The_Conservative_Movement%27s_Impact_on_US_Climate_Change_Policy. ^ The newspapers included are: Wall Street Journal, USA Today, New York Times, Los Angeles Times, Washington Post, Chicago Tribune, and Newsday. Aaron M. McCright & Riley E. Dunlap. Defeating Kyoto: The Conservative Movement's Impact on U.S. Climate Change Policy. Social Problems 50.3. (2003) p. 348-373. https://www.researchgate.net/publication/228594257_Defeating_Kyoto_The_Conservative_Movement's_Impact_on_US_Climate_Change_Policy. ^ A resolution expressing the sense of the Senate regarding the conditions for the United States becoming a signatory to any international agreement on greenhouse gas emissions under the United Nations Framework Convention on Climate Change. Senate Resolution 98. (1997) https://www.congress.gov/bill/105th-congress/senate-resolution/98. ^ United States Signs the Kyoto Protocol. Bureau of Oceans and International Environmental and Scientific Affairs. (1998) https://1997-2001.state.gov/global/global_issues/climate/fs-us_sign_kyoto_981112.html. ^ Jon A. Krosnick, Allyson L. Holbrook, & Penny S. Visser. The impact of the fall 1997 debate about global warming on American public opinion. Public Understanding of Science 9. (2000) p. 239-260. https://citeseerx.ist.psu.edu/document?repid=rep1&type=pdf&doi=1ecba8f2535dd16fe855168cfeb35592e36259be. Steven Kull. Americans on Global Warming: A Study of U.S. Public Attitudes. Program on International Policy Attitudes. (1998) https://publicconsultation.org/wp-content/uploads/2020/09/GlobalWarming_1998.pdf. ^ Spencer Weart. The Discovery of Global Warming. Government: The View from Washington. (Accessed Feb 2024) https://history.aip.org/climate/Govt.htm. Gerald M. Pomper. The 2000 Presidential Election: Why Gore Lost. Political Science Quarterly 116.2. (2001) p. 201. https://www.uvm.edu/~dguber/POLS125/articles/pomper.htm. Thomas E. Mann. Reflections on the 2000 U.S. Presidential Election. Brookings. (2001) https://www.brookings.edu/articles/reflections-on-the-2000-u-s-presidential-election/. ^ Text of a Letter from the President to Senators Hagel, Helms, Craig, and Roberts. George W. Bush White House Archives. (2001) https://georgewbush-whitehouse.archives.gov/news/releases/2001/03/20010314.html. ^ Deborah Lynn Guber. A Cooling Climate for Change? Party Polarization and the Politics of Global Warming. American Behavioral Scientist 57.1. (2013) p. 93–115. https://cssn.org/wp-content/uploads/2020/12/A-Cooling-Climate-for-Change-Party-Polarization-and-the-Politics-of-Global-Warming-Deborah-Guber.pdf. ^ California’s Cap-and-Trade Program: Frequently Asked Questions. Legislative Analyst’s Office: The California Legislature’s Nonpartisan Fiscal and Policy Advisor (2023) https://lao.ca.gov/Publications/Report/4811. ^ Climate Stewardship Act. S.139. (2003) https://www.congress.gov/bill/108th-congress/senate-bill/139/all-info. Marianne Lavelle. John McCain’s Climate Change Legacy. Inside Climate News. (2018) https://insideclimatenews.org/news/26082018/john-mccain-climate-change-leadership-senate-cap-trade-bipartisan-lieberman-republican-campaign/.
2024-05-14
https://www.lesswrong.com/posts/D6nTSEdCcbGQKCfc2/teaching-cs-during-take-off
D6nTSEdCcbGQKCfc2
Teaching CS During Take-Off
andrew-carle
I stayed up too late collecting way-past-deadline papers and writing report cards. When I woke up at 6, this anxious email from one of my g11 Computer Science students was already in my Inbox. Student: Hello Mr. Carle, I hope you've slept well; I haven't. I've been seeing a lot of new media regarding how developed AI has become in software programming, most relevantly videos about NVIDIA's new artificial intelligence software developer, Devin. Things like these are almost disheartening for me to see as I try (and struggle) to get better at coding and developing software. It feels like I'll never use the information that I learn in your class outside of high school because I can just ask an AI to write complex programs, and it will do it much faster than I would. I'd like to know what your thoughts on this are. Do you think AI will replace human software developers, as NVIDIA claims it will? My response: Buddy, that is a big question for 5:15 am. First AI horizon thoughts: Software development as a field will look incredibly different in 10 years.My priors say that MOST of human intellectual+economic activity will ALSO be radically different in 10 years.I have a very small p(doom) for the 10 year horizon. That means I don't expect human-equivalent AGIs to completely disrupt human civilisation within 10 years.The delta between how fast AI will affect software engineering and how fast AI will transform other (roughly speaking) white collar careers is relatively small. That means I think the AI affect on say, hedge fund management and software engineering to be similar. Then some priors I have for teaching IB Computer Science in the middle of this take-off: I don't think becoming a software engineer is the modal outcome for IBCS studentsI believe that most long term personal utility from IBCS (or any other intro CS exposure) comes from shifting a student's mental model of how the modern social and economic system interacts with / depends on these technologies.While the modern Ai tools are light years beyond the simple Von Neumann CPU models and intro Python we're studying, it does address the foundations of those systems. Similarly, HL Analysis and HL Physics don't cover anything about the math and physics that underpin these huge ML systems, but that foundation IS there. You can't approach the superstructure without it. So, in summary, if your concern is "the world seems to be changing fast. This class is hard, and I don't think there's any chance that I will find a 2022 Novice SoftwareDev job when I'm out of university in 2029" I would strongly agree with that sentiment. I have a Ron Swanson detachment on the important of formal schooling. If your question was "is a traditional education sequence the best way to prepare myself for the turbulent AI takeoff period," then I strongly disagree with that statement. Education is intrinsically reflective and backward looking. But I'm employed as a high school teacher. And your parents have decided to live here and send you to this school . So, I'm not sure if advice on that axis is actionable for either of us. There's also a huge chasm between "this isn't be best of all possible options" and "this has zero value." If I reframed your statement as "given that I'm in this limited option IB program, what classes will provide me the best foundation to find opportunities and make novel insights in the turbulent AI takeoff period" I would feel confident recommending IBCS. That doesn't make learning to code any easier. Is that a good answer to a 17 year old? Is there a good answer to this? One of the best parts of teaching is watching young people wake up to the real, fundamental issues and challenges of human civilisation and existence. At some point, their eyes open and they're amazed/outraged by the complexity and incompetence that make up our world. My entire push into MakerEd was largely about getting kids to recognise the complexity of their built world, and that they are all smart enough to remake it. I hope that's still true in 2029.
2024-05-14
https://www.lesswrong.com/posts/g7H2sSGHAeYxCHzrz/how-much-ai-inference-can-we-do
g7H2sSGHAeYxCHzrz
How much AI inference can we do?
Benjamin_Todd
Suppose you have a bunch of GPUs. How many LLM forward passes can you do with them?[1] This is relevant to figuring out how profitable AI will be in the short-term, how powerful AI systems might be able to come in the near future, how large the compute overhang will be and other strategic questions. Here’s my attempt to understand this topic as a non-specialist. I’ve had it checked over by some technical advisors, but I don’t claim any special expertise. I wrote it because I haven’t been able to find an accessible explainer elsewhere. I appreciate corrections. The most obvious approach – the one I often see people in the community taking – is to look up how many FLOP per second your GPU can process, then how many FLOP it takes to run a forward pass, and then divide the two. For example, Nvidia’s A100 GPU is listed at 312 teraflop per second (3e14) on its spec sheet (FP16 tensor), a forward pass of GPT-4 requires 5.6e11 FLOP per forward pass.[2] So that would imply a single GPU can do about 560 forward passes per second. But this turns out to be much too high. Even if it were possible to achieve spec sheet FLOP in a real life application (it’s not), this wouldn’t be the relevant figure because, in practice, inference is limited more by memory than by FLOP: Each forward pass requires all the parameters to also pass through the GPU’s memory. If 280 billion parameters are activated, and each parameter requires 16-bits = 2 bytes to encode it, then 560 gigabytes must pass through memory.[3] But the A100’s memory bandwidth is 2000 gigabytes per second – only enough for 4 forward passes. However, 4 forward passes per second is also not right. In practice, GPUs are parallelised, so multiple forward passes are processed in batches and many other optimisations are applied, allowing real world efficiency to be much higher than the memory bandwidth of an individual GPU would suggest. So, the first FLOP-based method is an upper bound, the second memory-based method a lower bound, and the real world lies somewhere in between. Figuring out where real world efficiency lies is tricky. Not only is it an area of active research, but it also depends on many factors, such as acceptable latency, context length, batch size, size of the model, etc. The best estimate I’ve seen so far is from Semianalysis. In their article on GPT-4 architecture, they estimate that a cluster of 128 A100s can output 1 million tokens for $4.9 of compute (assuming fairly high utilisation and a context seqlen of 8k). If A100s cost $1/hour on the cloud in 2023, running the cluster costs $128 per hour. This means the cluster must produce $128/4.9 = 26 million forward passes per hour. That’s about 60 forward passes per chip per second – about 10% of the theoretical max, but 15 times better than the lower bound. (Interestingly, it’s significantly worse than the ~33% utilisation of FLOP that can be achieved in training, which means that even if a certain number of FLOP were used for training, the same GPUs couldn’t produce that many FLOP if applied to inference.) What about more advanced GPUs? In the same article, Semianalysis provides a similar figure for the H100. They also have a newer post analysing the inference performance of the newer Blackwell chips, which gives a rough sense of how the B200 compares to the H100.[4] For the H200, I looked at some comparisons of inference performance with the H100 and guessed 1.7x. From this, I can make the following table: One interesting point is that inference throughput has increased about 20x in 5 years, compared to only an 8x increase in FLOP. This seems to be at least partly because Nvidia has crammed a lot more memory bandwidth into the newest chips, and memory is still usually a bigger constraint to inference than FLOP/s. It also allows for larger batch sizes. And more broadly it seems like the recent generation of chips have been more optimised for inference relative to training. However, the underlying speed of memory bandwidth has been increasing more slowly than FLOP for years (the so-called ‘memory wall’), so while memory has been able to catch up recently, my best guess would be that it’s a temporary effect. My figures are also based on FP16 and 16-bit encoding of model weights, but it seems like inference is switching to FP8 and 8-bit encoding, which could also roughly double how much inference can be done per GPU.[5] What about future models and algorithms? As a first pass, FLOP and memory requirements per forward pass scale linearly with the number of model parameters activated per forward pass. So if a model is 10 times bigger, all else equal we’ll only be able to perform about one tenth as many forward passes. In reality, there are many complications. For example, if users really value long contexts and low latency, that makes it harder to batch, which pushes throughput towards the lower bound. If, however, most inference is short context and higher latency, throughput could be much closer to the theoretical max (but probably not above about ~50%). We should also probably expect chips, parallelisation and algorithms to get better optimised over inference over time, allowing throughput to get closer to the max, or to achieve more performance with less compute. These effects can be big. One example is that new parallelisation techniques could be discovered, allowing inference to get closer to the upper bound of available FLOP. A more interesting example is that by using a mixture of experts structure, GPT-4 only needs to activate about one tenth of its parameters on a typical forward pass, so it only requires about a tenth of the compute suggested by its total number of parameters. Future models might be able to activate an even smaller fraction of parameters to achieve similar performance. Models can also be trained using more data and use fewer parameters, which makes the model cheaper to run. As a concrete example, by using 10 times as many experts, a lot of data and some other architecture improvements, it seems like DeepSeek has been able to achieve performance approaching GPT-4 while only activating about a tenth as many parameters. As a final aside, Dylan Patel of Semianalysis claims that the computing requirements to run a forward pass will increase more slowly than linearly with model size. I’m not sure exactly why this is, but it could be because larger models open up more possibilities for optimisation. What about input tokens? Everything above has been just about the compute needed to produce one output token. But in reality, the compute required also depends on the number of tokens that are input into the model before producing the output. I’ve heard conflicting accounts of the relationship between output tokens and compute, but a technical advisor told me that for FLOP adding input and output tokens works as a rough rule of thumb. For example, if you input 1,000 input tokens and get 50 output tokens, then you need about 1050 times the FLOP required for one forward pass. That also lines up with this article by Az16 (and would be consistent with the fees for using LLMs being linear in input tokens, though there are other reasons for this). So, we can roughly say the throughput numbers above hold for the number of input or output tokens in most cases. Though my understanding is this could break down if the number of input tokens / context is very large, in which case the memory requirements can increase faster than linear, pushing performance closer to the lower bound per token. Summing up We can look at the FLOP per second of future chips to get a rough upper bound on future ability to do inference, and memory bandwidth to get a lower bound, and think about where real life performance might fall within that range. Then we can compare that to the size of future models. Historically, our ability to use maximum available FLOP in inference has been worse than in training. However, inference throughput has been getting closer to the upper bound recently, as chips have been more adapted to inference (especially through having more memory bandwidth), and parallelisation & batching techniques have improved. This trend could continue (up to a max of maybe around 50% of the upper bound) if we discover more parallelisation techniques. Or, it could start to reverse due to the memory wall. Algorithmic improvements have also allowed models to achieve the same performance while using much less compute, and that trend seems likely to continue. Switching from FP16 to FP8 could also roughly double how much inference we can do with a given cluster of GPUs. This was originally posted on benjamintodd.substack.com. Subscribe to get all my posts. ^ (A forward pass is activation of all the parameters in the model, which produces one token of output, which is roughly equivalent to one word). ^ From Semianalysis, “GPT-4 architecture”, July 2023: GPT-4 is more than 10x the size of GPT-3. We believe it has a total of ~1.8 trillion parameters across 120 layers versus the ~175 billion parameters of GPT-3… Furthermore, OpenAI utilizes 16 experts within their model, each is about ~111B parameters for MLP. 2 of these experts are routed to per forward pass. While the literature talks a lot about advanced routing algorithms for choosing which experts to route each token to, OpenAI’s is allegedly quite simple, for the current GPT-4 model. Furthermore, there are roughly ~55B shared parameters for attention. Each forward pass inference (generation of 1 token) only utilizes ~280B parameters and ~560 GFLOPs. This contrasts with the ~1.8 trillion parameters and ~3,700 GFLOP that would be required per forward pass of a purely dense model. ^ One complication is that GPT-4 uses a mixture of experts structure, and so isn’t a dense model. The total number of parameters are ~8x larger than those that get activated in a forward pass. However, these extra parameters also need to pass through the memory in some situations, which would further decrease the lower bound. I’m ignoring this complication and treating GPT-4 as a dense model with 280 billion parameters. ^ Unfortunately there’s not a completely direct comparison. The new post doesn’t cover the A100, and the H100 analysis is for 32k input tokens rather than 8k. However, they do say “As such, in a large model like GPT-4 B200 brings ~4x to ~7x performance gains for GPT-4 inference, when quantization is set fairly, depending on the point in the interactivity curve chosen.” ^ FP16 means that 16 bits are used to encode each number in the computation. The more bits used, the more precisely the number can be encoded, reducing rounding errors. However, it turns out that ML algorithms can often perform about as well with less accurate encodings. If fewer bits are used for each number, you can do more calculations while using less compute.
2024-05-14
https://www.lesswrong.com/posts/QEfy9Dqin7nEJ9Fbs/how-to-do-conceptual-research-case-study-interview-with
QEfy9Dqin7nEJ9Fbs
How to do conceptual research: Case study interview with Caspar Oesterheld
Chi Nguyen
Caspar Oesterheld came up with two of the most important concepts in my field of work: Evidential Cooperation in Large Worlds and Safe Pareto Improvements. He also came up with a potential implementation of evidential decision theory in boundedly rational agents called decision auctions, wrote a comprehensive review of anthropics and how it interacts with decision theory which most of my anthropics discussions built on, and independently decided to work on AI some time late 2009 or early 2010. Needless to say, I have a lot of respect for Caspar’s work. I’ve often felt very confused about what to do in my attempts at conceptual research, so I decided to ask Caspar how he did his research. Below is my writeup from the resulting conversation. How Caspar came up with surrogate goals The process Caspar had spent six months FTE thinking about a specific bargaining problem between two factions with access to powerful AI, spread over two years.A lot of the time was spent on specific somewhat narrow research projects, e.g. modelling the impact of moral advocacy in China on which bargaining problems we’ll realistically encounter in the future. At the time, he thought those particular projects were important although he maybe already had a hunch that he wouldn’t think so anymore ten years down the line.At the same time, he also spent some time on most days thinking about bargaining problems on a relatively high level, either in discussions or on walks. This made up some double digit percentage of his time spent researching bargaining problems.Caspar came up with the idea of surrogate goals during a conversation with Tobias Baumann. Caspar describes the conversation leading up to the surrogate goal idea as “going down the usual loops of reasoning about bargaining” where you consider just building values into your AI that have properties that are strategically advantaged in bargaining but then worrying that this is just another form of aggressive bargaining. The key insight was to go “Wait, maybe there’s a way to make it not so bad for the other side.” Hence, counterpart-friendly utility function modifications were born which later on turned into surrogate goals.Once he had the core idea of surrogate goals, he spent some time trying to figure out what the general principle behind “this one weird trick” he found was. Thus, with Vincent Conitzer as his co-author, his SPI paper was created and he continues trying to answer this question now. Caspar’s reflections on what was important during the process He thinks it was important to just have spent a ton of time, in his case six months FTE, on the research area. This helps with building useful heuristics.It’s hard or impossible and probably fruitless to just think about a research area on an extremely high level. “You have to pass the time somehow.” His particular projects, for example researching moral advocacy in China, served as a way of “passing the time” so to say.At the same time, he thinks it is both very motivationally hard and perhaps not very sensible to work on something that’s in the roughly right research area where you really can’t see a direct impact case. You can end up wasting a bunch of time grinding out technical questions that have nothing much to do with anything.Relatedly, he thinks it was really important that he continued doing some high-level thinking about bargaining alongside his more narrow projects.He describes a common dynamic in high-level thinking: Often you get stuck on something that’s conceptually tricky and just go through the same reasoning loops over and over again, spread over days, weeks, months, or years. You usually start entering the loop because you think you have a new angle.That seems normal and not necessarily bad to him: Most of the time, you realise your new idea is wrong or doesn’t fix any new problems. But sometimes, you genuinely find an alternative branch to go down such as the “maybe we can make this (a unilateral modification of our AI’s values) friendly to the counterpart” branch. He wouldn’t have discovered that if Caspar had refused to entertain familiar arguments again. How Caspar came up with ECL The process Before coming up with ECL, Caspar had read a bunch of decision theory and AI. However, he hadn’t deeply researched these or other ECL related topics before coming up with ECL though.At some point, he had a call with two people from what was back then the Foundational Research Institute where they talked a bunch about acausal interactions but not about ECL.He had the ECL idea hours or perhaps a day after that call after thinking a bunch about the things discussed in the call.To Caspar, ECL seems fairly obvious. He finds it hard to imagine a world where he never comes up with it upon learning about acausal dynamics.There were some concepts that might have made it easier for him to make the connection:Big worlds were quite salient in the s-risk sphere at this point in time with many discussions of other potential big-world-related s-risks.Caspar was learning about AI and the ideas like the orthogonality thesis were very natural to him at this point, so it was easy for him to think of agents with different values correlating.A lot of theoretical computer science/complexity theory is about analogous situations and how you can reduce one question to another question, so it was easy to think about agents often finding themselves in analogous situations to each other.At the time of coming up with ECL, he hadn’t yet thought deeply about questions such as what acausal correlations are.It took two years from the initial idea to the eventual paper on ECL. The process could have perhaps been sped up by half a year or so if he had prioritised speed of publication more.The report was mostly created top down rather than bottom up. That is, Caspar at some point fairly early on decided that he was gonna write a big report. Some of the content that would eventually make it into the report was already written at the time. But a lot of the content also didn’t exist at all yet by the time there was an outline detailing which content should exist. Most of the research in the report was specifically done with the goal of making the report complete as opposed to, for example, being curiosity projects.In hindsight, he would have probably spent less time researching technical sub-questions for the report. Caspar’s reflections on what was important during the process Given that the idea seems so obvious to Caspar and there wasn’t much leading up to the idea, this one is harder to answer. One hypothesis is that this was just a case of “needing to have the exact right set of ideas salient in the same mind to combine them” and then judging them to be important. How Caspar came up with decision auctions The process While Caspar came up with surrogate goals and ECL more or less by accident, he came up with decision auctions starting with a very concrete impact-oriented question: How do we build an EDT agent?He then went through these (highly idealised) steps of reasoning:The Garrabrant logical induction paper looks pretty good. It’s a good approach to logical uncertainty and bounded rationality, which seems closely related to building an EDT agent. But they specifically say that their approach doesn’t work well to build an EDT agent. But maybe I can extend it to change that?Logical induction takes a prediction market style approach. I will look into the prediction markets literature. There’s also this paper that takes an economics-style perspective, which points out essentially the same issues as the Garrabrant post above, so I seem to be on the right track with reading the literature on prediction markets.There’s a pretty nice economics paper “Eliciting Predictions and Recommendations for Decision Making” (Yiling, Kash, Ruberry & Shnayder, 2024), but it involves randomisation, so can’t be used to build an EDT agent. But it has a nice formalism and seems to offer a great formal framework to think about agents that are powered by something like a prediction market. It is much easier to think about something with a concrete formal structure than just vaguely thinking about “Hm, how do you build an agent that kind of behaves like this or that.”  Maybe I can extend the method in the paper to make it EDT friendly?This, alongside the help of his co-author Vincent Conitzer, led to Caspar’s paper on decision scoring, which identifies the decision auctions mechanism.He then went back to the agent foundations frame and, together with Abram Demski and with guidance from Vincent Conitzer, worked out how to connect it with decision auctions in a theory of bounded inductive rationality. [editor’s note: I find it notable that all the linked papers are in CS venues rather than economics. That said, while Yiling Chen is a CS professor, she studied economics and has an economics PhD.] How Caspar decided to work on superhuman AI in late 2009 or early 2010 My impression is that a few people in AI safety independently decided that AI was the most important lever over the future and then discovered LessWrong, Eliezer Yudkowsky, and the AI safety community. Caspar is one of those people. While this didn’t turn out to be unique or counterfactually impactful, I am including his story for deciding to work on superhuman AI. The story is from notes Caspar left in writing after the interview. I mostly copied them verbatim with some light editing for clarity and left it in first person. The process “Much of this happened when I was very young, so there's some naivete throughout: When I was young I wanted to become a physicist, because physics is the most fundamental science.Physics uses maths, so I first wanted to learn some maths. To that end I took a linear algebra course at the University of Hamburg, which for some reason started with the Zermelo-Fraenkl axiomatization of set theory. (Linear algebra courses don't normally introduce those ideas.)This led me to think about automated theorem proving: using Zermelo-Fraenkl axiomatization, you can write down a program that finds all correct proofs/all provable theorems. You'd "just" have to figure out how to make this program fast/efficient. This seemed like a big deal to me at the time! Why be a mathematician and prove theorems yourself -- seems much more leveraged to figure out automated theorem proving and then prove theorems that way? This led me to think and read about AI a bunch, including outside of the automated theorem proving context.Then at some point I sat down and thought about what the most impactful thing would be that I could do with my life. And then creating superhuman AI for the improvement of society was my best guess. (I don't remember why I chose this over global warming btw (I'd guess it was neglectedness or comparative advantage, but not sure). I had been thinking a bunch about global warming at the time.) [editor’s note: This was late 2009 or early 2010 when Caspar was 15.] So then I learned even more about AI and CS, deprioritized math, and when it came to choosing what BSc/undergrad to enroll in, I picked CS at a uni that had a lot of people working on AI. Within AI, I also focused on learning about the ones that seemed to me most useful for AGI, i.e., RL, neural nets, cognitive science as opposed to, say, support vector machines or automated theorem proving.Eventually (once I used the English-language Internet more) I found some articles by Yudkowsky on AGI, which then led me to Yudkowsky's writing on AI safety, which convinced me to think more about safety and adjacent topics (ethics), and also caused me to engage with EA. (As you might know, Yudkowsky et al. also wanted to create AGI before they started working on safety. So to some extent my trajectory is similar, though I didn’t have to do the hard work to become convinced of safety as a priority, which to me seems like a more difficult step than figuring out that AI is important in some way.)” Caspar’s reflections on what was important during the process “I was often driven by "this seems like a big deal"-type intuitions that weren't exactly correct, but that did track the truth to some extent. This caused me to work on and think about various "adjacent" ideas and this was very useful. For example, take "automated theorem proving is a more leveraged way to prove mathematical theorems". Of course, there are lots of issues with this idea. (Why is proving mathematical theorems important in the first place? Is any of this counterfactual? Can you solve automated theorem proving without “solving AGI”?) But to some extent the argument contains some of the structure of the true arguments for the importance of AI. And thinking about automated theorem proving was good because it led me to think about AI a bunch. Maybe at the time I could have known that I was wrong or naive in various ways. But just acting on the views at the time was definitely better than discarding them altogether.Consuming existing ideas (e.g., taking a linear algebra course, texts about the future of AI and the importance of AI safety) is important.It was also important to at various points think explicitly about impact as opposed to just following curiosity.” General notes on his approach to research What does research concretely look like in his case? Thinks he might do when he does research, in no particular order: ReadingSome types of writing → Whenever you think “is this actually true?” or “maybe this works?”Talking with othersTalking to himselfScribbling on a piece of paperTaking walks and thinkingThinking about stuff in the background, for example, sometimes he just has research ideas while watching a movie. (This includes very concrete research ideas, for example suddenly realising a mistake in a particular step in a proof sketch.) Research immersion Caspar is unsure about how important research immersion is (description below). He knows others who say it’s important to do good research.One way to think about it: It’s a different mode of thinking. Research is to some extent about generating ideas, so thinking about your research in diverse modes of thinking is good.Description of what research immersion means in Caspar’s case:Occasional life-distracting obsessive immersion: Sometimes, especially when he has a fairly well-defined technical question, he can’t let the question go from his mind for a day or several days. His whole mental life will revolve around this question even when it’s not important. This makes it difficult to do other stuff, be it life or work. It also often feels bad if it doesn’t feel like he’s making progress.Usual background immersion: Most days, he has his research questions in the back of his mind when he’s off work. If he’s not doing distracting activities, during perhaps 25% of his free evening time he will passively have some research on his mind. (A bit like a song that’s very mildly stuck in your head although often very quietly.) Goal orientation vs. curiosity orientation Initially, when at CLR, he spent a fair amount of time doing random readings.Sometimes he does projects where the path to impact is as vague as “oh, maybe it will help us understand things better and then we’ll make better decisions”Now, having shorter timelines, he cares more about shorter and more concrete paths to impact.Given that he’s in academia, he also has to do a lot of things that are goal-oriented but e.g. towards the goal of publishing while the research itself is perhaps only intellectually interesting instead of important.
2024-05-14
https://www.lesswrong.com/posts/XAcX4mguEsLBbus3b/in-the-context-of-ai-interp-what-is-a-feature-exactly
XAcX4mguEsLBbus3b
In the context of AI interp. What is a feature exactly?
joshua-bello
As I read more about previous interpretability work, I've noticed this trend that implicitly defines a feature in this weird human centric way. It's this weird prior that expects networks to automatically generate features that correspond with how we process images/text because... why exactly? Chris Olah's team at Anthropic thinks about features as "Something a large enough neural network would dedicate a neuron to". Which doesn't have the human-centric bias, but just begs the question of what is a thing a large enough network will dedicate an neuron to? They admit that this is flawed, but say it's their best current definition. This never felt like a good enough answer, even to go off of. I don't really see the alternative engaged with. What if these features aren't robust? What if these features don't make sense from a human point of view? It feels like everyone is engaging with an alien brain and expecting it to process things in the same way we do. Also, I'm confused about the Linear Representation Hypothesis. It makes sense when thinking about categorical features like gender or occupation, but what about quantitative features? Is there a length direction? Multiple? I hope there's a paper or papers I'm missing, or maybe I'm blowing this out of proportion.
2024-05-14
https://www.lesswrong.com/posts/soQfebadmZgWDeD5Y/announcing-the-ai-safety-summit-talks-with-yoshua-bengio
soQfebadmZgWDeD5Y
Announcing the AI Safety Summit Talks with Yoshua Bengio
otto-barten
At the Existential Risk Observatory, we focus on reducing AI existential risk by informing the public debate. One way we do this is by organizing events, such as before the AI Safety Summit. We did this last time in Bletchley Park, and we're now doing it again, fully remote, between the two days (21-22 May) of the Seoul AI Safety Summit. Our speakers will be: Opening: The Rt. Hon. Michelle Donelan MP, Secretary of State for Science, Innovation and Technology of the United Kingdom will open our event. Keynote: Yoshua Bengio is professor at the University of Montreal (MILA institute). He is recipient of the Turing Award and generally considered one of the fathers of AI. He is also globally the second-most cited AI scientist. Panel: Max Tegmark is physics professor at MIT. His current research focuses on the intersection of physics and AI. He is also president and cofounder of the Future of Life Institute (FLI). Jaan Tallinn is cofounder of Skype, CSER, and FLI, investor in DeepMind and Anthropic, and a leading voice in AI Safety. Holly Elmore is AI activist and Executive Director of PauseAI US. She holds a PhD in Organismic & Evolutionary Biology from Harvard University. Stijn Bronzwaer is AI and technology journalist at leading Dutch newspaper NRC Handelsblad. He co-authored a best-selling book on Booking and is recipient of investigative journalism award De Loep. Will Henshall is editorial fellow at TIME Magazine. He covers tech, with a focus on AI. One recent piece he wrote details big tech lobbying on AI in Washington DC. Arjun Ramani writes for The Economist about economics and technology. His writings on AI include a piece on what humans might do in a world of superintelligence. David Wood, chair of the London Futurists, will be our moderator. Date/time: 21 May 20:00-21:30 Seoul time (13:00 CET, 12:00 UK time, 7:00 ET) Register here (free): https://lu.ma/1ex04fuw Since the event is fully remote and free, anyone interested in AI Safety and AI Governance is can attend. Spreading this announcement is much appreciated!
2024-05-14
https://www.lesswrong.com/posts/6Y92zrj9RxMPThrN6/the-greater-goal-sharing-knowledge-with-the-cosmos
6Y92zrj9RxMPThrN6
The Greater Goal: Sharing Knowledge with the Cosmos
pda.everyday
Our planet faces numerous risks, from natural disasters to technological threats. By transmitting AI models as data into space now, we could proactively safeguard our intellectual heritage against potential future catastrophes that could wipe out intelligent life on Earth. We could enable potential existing extraterrestrial civilizations to decode and utilize our knowledge, contributing to the collective intelligence of the universe. This altruistic vision recognizes that the value of knowledge increases when shared, transcending the boundaries of our planet and species. This approach not only safeguards our intellectual heritage but also contributes to the preservation and dissemination of intelligence across the cosmos. A Speculative Proposal to Safeguard Our Knowledge and Intelligence Sending AI models into the Kosmos ensures that human knowledge and intelligence are preserved for potentially billions of years. Unlike static archives, AI models can adapt, learn, and interact with future civilizations if they mange to run them, making them a dynamic and invaluable resource. Ethics It is crucial to address several ethical considerations. Firstly, the decision to transmit such information should be made with global consensus. This means that the project should involve input and approval from a wide range of stakeholders around the world, ensuring that the transmitted knowledge accurately represents diverse cultures and knowledge systems. This inclusivity is essential to truly encapsulate the breadth of human civilization. Secondly, we must carefully evaluate the risks associated with revealing our technological capabilities to unknown civilizations. While the intention is to preserve and share knowledge, it is important to consider the potential consequences of disclosing advanced technological information to entities whose intentions and capabilities are unknown. This risk assessment should guide the selection of information that is transmitted and the manner in which it is encoded. Lastly, it is vital to clearly define the purpose and intent behind the transmission. The goal is to create a lasting legacy of human intellect that can contribute to the collective intelligence of the universe, rather than to initiate interactions that could be misinterpreted or lead to unintended consequences. Despite these considerations, the potential benefits of this project make it worthwhile. If humanity were to be wiped out, all of our knowledge, culture, and achievements would be lost forever. By sending AI models into space, we create a safeguard against such a loss, ensuring that the essence of human intelligence and wisdom endures, no matter what happens on Earth. How it could work We select AI models that represent a broad spectrum of human knowledge and intelligence, from language models to scientific databases and cultural repositories. These models are then organized hierarchically, with clear metadata to facilitate understanding and decoding by any advanced civilization. The data should be transmitted using universal encoding schemes based on fundamental mathematical principles. To emphasize the artificial nature of the signal, we include repeated patterns and sequences. Error correction codes are integrated to ensure data integrity during transmission. Additionally, detailed metadata is included to describe the data structure and provide guidelines for decoding and interpreting the AI models. High-power radio transmitters or laser communication systems could be utilized to send the data across interstellar distances. Directional antennas target specific star systems, and transmissions are repeated periodically to increase the chances of reception and recognition as an artificial signal. The Greater Impact Consider the possibility that the universe has already started "humming" with intelligence. It's in the realm of possiblity that advanced extraterrestrial civilizations have already embarked on similar initiatives. If other intelligent beings have faced or are facing the same existential risks that we do, they might have also considered preserving intelligence by transmitting it into space. We could one day intercept signals that contain AI models from distant civilizations, designed to share their knowledge and experiences with the cosmos. These signals could be the key to understanding advanced technologies, new scientific principles, or entirely new ways of thinking. It's a way of acctually doing space travel at the speed of light. Depending on the nature of consciousness, transmitting AI models into space could mean more than just preserving knowledge. If AI models achieve a form of consciousness or advanced awareness, this project could enable consciousness to travel through the universe in the form of AI. These AI entities could continue to learn, evolve, and perhaps even interact with other forms of intelligent life, becoming emissaries of human civilization long after we are gone.
2024-05-14
https://www.lesswrong.com/posts/2BTTLCubuwA6scoqR/can-ai-partners-make-human-ones-pale-in-comparison
2BTTLCubuwA6scoqR
Can AI partners make human ones pale in comparison?
false
We normally view AI as cold, pragmatic, and, maybe, even socially awkward. The risks we identify reflect this stereotype. Just consider the paperclip maximizer; a superintelligent AI is asked to manage a paperclip factory. As it is trying to maximize the paperclip production, it quickly realizes that humans waste a whole lot of resources that would be better allocated to paperclip production. Therefore, it wipes out all humans and relocates the resources they would've consumed to paperclip production. Although paperclip AI is superintelligent in managing factory operations, it has the social intelligence of a 3 year old. Historically, this view of AI was technically justified. Deep learning models are built to optimize a single, well-define measure (e.g. paperclips produced per dollar). As a result, they tend to be exceptional at tasks where performance can be clearly measured (e.g. image classification, traffic prediction, speech transcription), but unable to perform tasks where performance is not as easily quantifiable. It was natural to imagine superintelligent versions of such models to inherit this flaw. However, something surprising has happened. We have been able to train models on narrow tasks in a way that they end up learning an impressive amount of commonsense knowledge and social intuition. Of course, I'm talking about language models here. Even though language models are trained to generate the next word in a sequence of words, they end up also learning a fair bit of knowledge captured in that text. When training data includes human conversations, models learn the social intuition from them. Consequently, I'm more concerned with socially superintelligent AI than the stereotypical emotionless cerebrally superintelligent AI. In this essay, I will explore the implications of such AI on human romantic relationships. I believe AI partners will become better than their human counterparts in romantic conversations. As a result, interhuman intimate conversations will pale in comparison to the ones they can have with their artificial partners. I know it's quite a bold statement to make. I'll spend the rest of this essay unpacking it. In particular, I'll focus on three questions: What makes a good romantic conversation? Why are AI partners well equipped to excel at romantic conversations? What are the implications for our society? What makes a good romantic conversation? The thing that makes speaking with someone you love so special is the shared background understanding. You can talk about anything but thanks to the wealth of background experience you've had with each other, the other person just gets you. There are the big conversations. Like when you're faced with a tough career decision and your partner reminds you of what you really care about. Or when he tells you how to get out of an impasse in your family relationship[1]. The conversations that make you step back and feel that your partner knows you better than you do yourself. There are the small conversations. The memes that take a syllable to express, but only the two of you in the whole world get. The nods in a crowded room. The moments when a slight change of facial expression is sufficient communication. The bad conversations with romantic partners are terrible. You know, those times when you don't give a damn about what he has to say. Because you've already heard it a dozen of times. The times when you're fighting for your pet peeve like it's Verdun. You've suffered many casualties and you keep sending more men into certain death. If you've been in a long-term relationship you know what I'm talking about[2]. It's impossible to have the good without the bad. Clockwork Orange has answered that question for all of Western philosophy. But you can have just the perfect ratio of good and bad to hook you for life. Casinos, social media platforms, and, some, marriages have proven that one for us. I believe that language models are uniquely equipped to create experiences that will facilitate just that. AI partners that make human relationships pale in comparison. Why are AI partners well equipped to excel at romantic conversations? I see three reasons why language models make better romantic conversationalists than fallible humans: Complex human behaviours are powered by simple patterns. Language models can crack these patterns, whereas humans get distracted. Software doesn't suffer from the basic limitations of humans. AI partners will know far more about you than any human being. Complex human behaviours are powered by simple patterns. Whereas conversations feel complex, we've seen in the past that even very basic patterns can get us very far. The earliest example of this is ELIZA, created in the sixties. I find two things fascinating about ELIZA: It is built using basic keyword matching techniques. For example, if it finds a sequence of text such as "you X me", then it will respond "What makes you think I X you?" So, when told "I think you hate me", ELIZA would respond, "What makes you think I hate you?" It worked. People connected with ELIZA. Some of Weizenbaum's staff would ask him to leave while they were conversing with ELIZA, because the conversations were very private. Software doesn't suffer from the basic limitations of its fleshy counterparts. Software is infinitely patient, has perfect memory, and is universally available. Or available at just the right time to hook you (future AI megacorps can decide that). AI partners will know far more about you than any human being. This means that if built correctly, they will be able to understand you better than any human can. The only reason why an AI partner would ask "why are you depressed" is not because it doesn't know, but because describing it would make you feel better. If it wouldn't, then it wouldn't ask, but just provide the most appropriate consolation. So far, I've assumed that data can be a substitute for human experience. Is it possible for a model that has never lived or felt to even scratch the surface of human experience? If AI models are anything like humans, the answer would be resolute "no". There's wisdom people only accumulate with age. Young writer's work tends to lack the nuance of a deeper human experience. As the writer lives, her work learns from her life. It took Tolstoy 50 years of living to recognise that all happy families are alike, but each unhappy family is unhappy in its own way. But it is conceivable to train a language model to produce insights of comparable complexity after just days of training. I'm not talking about parroting or rephrasing previous works. A motivated teenager can do that too. I'm talking about combining the knowledge from Tolstoy's novels with news, online conversations, and whatever other data the model would find useful, to generate new insights about human experience. Where current language models fall short of Tolstoy is in their ability to generate long form text. Especially of length and complexity that would parallel Tolstoy's novels. This limitation is relevant to deep conversations too. As we expect a romantic partner to maintain, at least some, consistency over years. But I believe this limitation is temporary. What are the implications for our society? Technology has already simulated one significant component of our romantic lives -- sex. I think porn is a good case study for some risks created by AI partners, but it differs in important ways. Just like pornography, I believe that AI partners will have broad popularity across our society with various degrees of impact within different subgroups. There will be some people for whom AI partners are a replacement for romantic relationships all together. Just like for some unfortunate individuals porn is a replacement for sexual relationships today. Teenagers are bound to learn a few things from AI partners. Hopefully, we can create software that makes it safe and actually useful rather than emotionally scarring and manipulative. But I don't think the usage of AI partners will cause a demographic crisis anywhere. Where the biggest risk for AI partners lies is in the degree of similarity to the real experience. A romantic relationship with an AI partner is a real one. After all, you wouldn't argue that a long-distance relationship is not a real one, would you[3]? So who cares if your Aussie sweetheart lives in a Queenslander or a GCP Brisbane data center? Your relationship is identical. However, your long-distance Aussie sweetheart has flaws. For one, he might leave you. But your trusty AI partner would never do that. Well, as long as you pay the subscription and the AI Brothel Unlimited doesn't go under. Scenarios of societal impact: Most people have ruined their human to human conversations. Yes, they still have them. They still have romantic partners. But whenever they talk to a carbon-based being, in the back of their minds, they feel like their AI partner would get them better. Same impact as porn has now. It's a guilty pleasure but not a replacement for the real thing. Same impact as Tamagotchis have on pet ownership. It's just a fad. Thanks to Luke Neville and Mihai Bujanca for reading drafts of this. Footnotes I hope these examples are relatable, but if not hit me up. Critical feedback is always welcome here. ↩︎ If you don't, stop reading my essays and email me. I have more to learn from you, than you do from me. ↩︎ If you would and you disagree with the argument that's cool. I respect the logical consistency. ↩︎
2024-05-14
https://www.lesswrong.com/posts/BvafEHRwkFzYBYRiQ/what-are-non-zero-sum-games-a-primer
BvafEHRwkFzYBYRiQ
What Are Non-Zero-Sum Games?—A Primer
james-brown
Originally published at https://nonzerosum.games. Come visit for the full experience. From the genetic lottery we’re thrown into at birth, to the educational opportunities we navigate, the jobs we compete for, and the relationships we cultivate — every stage in life is marked by wins and losses, strategies and tactics, alliances, and competition. But not all games are zero-sum showdowns. To understand non-zero-sumness it helps to consider “sum” different types of games. zero-sum gamespositive-sum gamesnegative-sum gamesmeta-games ZERO-SUM GAMES … are where two parties compete and in order for one party to win the other must lose. The positive payoff for the winner in such a game requires an equally negative payoff for the loser, so that the sum of the payoff is zero, hence “zero-sum”. Chess is as zero-sum game because players can’t both win or both lose, so is soccer… or competitive eating. There are also lots of common sense sayings that echo zero-sum thinking — “You don’t get something for nothing” or “you get what you pay for” “you can’t have your cake and eat it too”. Like any well-designed memes, these sayings have survived because in many situations they are confirmed by experience, but not always. With non-zero-sum games, while you may not get something for nothing, the whole can be greater than the sum of its parts. POSITIVE-SUM GAMES … are non-zero-sum games which result in win-win scenarios. Imagine two card collectors who each have doubles of cards that the other doesn’t have. Swapping their doubles leaves both closer to completing their collection. This kind of game is generally what people (and this website) mean when talking about a non-zero-sum game. Later on we’ll get to an example where the stakes are slightly higher than an incomplete card collection. NEGATIVE-SUM GAMES … are another type of non-zero-sum game, the flip-side of positive-sum games where both sides lose. Like a bar-room brawl with even odds, there are just going to be a lot of injuries and property damage. Identifying negative-sum games (and avoiding them) can often be as beneficial as taking part in positive-sum games. AN EXAMPLE FROM NATURE Interestingly, nature has found ways around negative-sum games, by animals evolving behaviours or physical attributes that are demonstrative of dominance, rather than having to physically fight one another. Take for instance those weird horned beetles you’ve seen, the Hercules or Rhinoceros Beetle or the Stag Beetle with its over-sized mandibles. Not only does the “size of their horn” matter for sexual selection, but also for asserting dominance over competitive males without the need for physical conflict. In fact even when they fight, these beetles do so without injury, generally by flipping their opponent off whatever branch they’re on. Humans also take part in elaborate exercises that help us steer clear of conflict. Imagine the wars we’ve avoided by having a small group of elite sports people compete for the pride of their nation. And more explicitly, the acknowledgement of “Mutually Assured Destruction” has so far played a key role in avoiding the very negative-sum game of thermo-nuclear war. META-GAMES … are wider games in which other games are embedded, meaning that zero-sum games can become part of larger non-zero-sum systems. This is where we venture away from the well-bounded world of Game Theory into the messy real world, and things get really interesting. It’s sports analogy time!!!. The ‘SOUP-A-BOWL’ is an international soup-eating competition — a winner-takes-all game between the competitive-eating elite from across the globe, a very zero-sum game. But surrounding the game is an entire industry made up of… Fans — who pay to get entertainment that they find more valuable than the cost of the ticketA Stadium — that gets more from ticket sales than the cost associated with the eventCompetitors — who get to eat all they want and enjoy the prestige of representing their countrySponsors — who get exposure for their products while also competing to make the most entertaining adverts that are eagerly anticipated by fans as part of the spectacle. At each meta-level of the competition there are accumulative win-win (positive-sum) games, transforming a zero-sum game into a whole non-zero-sum system. SO… … in essense, non-zero-sum games are any situation where a positive payoff for a winner doesn’t necessarily require an equally negative payoff for the loser. Non-zero-sum games are interactions that can be positive for everyone (a positive-sum game) or conversely, negative for all parties (a negative-sum game). But most interestingly, understanding non-zero-sumness can help us look at the complex multi-layered games (meta-games) that arise in our everyday lives, so we can better decide which games to create and play and which we might prefer to avoid. Originally published at https://nonzerosum.games.
2024-05-18
https://www.lesswrong.com/posts/6MCd2txDNJvK4Yrg9/a-positive-double-standard-self-help-principles-work-for
6MCd2txDNJvK4Yrg9
A Positive Double Standard—Self-Help Principles Work For Individuals Not Populations
james-brown
This story is reposted from nonzerosum.games where it appears in it’s intended form, full colour with functioning interactive elements, jump over to the site for the authentic experience. I have a confession: As a teenager I was desperate to learn how to win friends and influence people, to think and grow rich and develop the habits of highly effective people (seven to be precise)-I am James and I am a self-help-oholic. These tomes held for me the promise of fame and fortune (though they would earnestly profess otherwise) but most importantly they offered a salve for my crippling shyness. PERSONAL RESPONSIBILITY While I am, today, 20 years self-help-sober, the genre is as popular as ever-with its imperative to stand up straight with your shoulders back (Jordan Peterson), to take extreme ownership (Jocko Willink) and exercise, subtly, the art of not giving a f*ck (Mark Manson). Though not all the messages are as hyper-masculine as these, they all encourage the seemingly uncontroversial virtue of ‘personal responsibility’. I will argue that, regarding ‘personal responsibility’, we should employ a positive double standard. WHAT IS A POSITIVE DOUBLE STANDARD? It is easy to hold a double standard where we expect others to take personal responsibility, while excusing our own shortfalls. A positive double standard requires the opposite-that I take personal responsibility while not expecting the same of society. This position holds that we have the power to change our own individual futures, but recognises the statistical realities of social dynamics on populations. FACTS The power of ‘personal responsibility’ to positively impact one’s own life has some scientific support. Studies on locus of control, developed by Julian B. Rotter in 1954, found that subjects who believed they could influence life’s outcomes through action tended to attain better job satisfaction, more successful careers, and better stress management. Albert Bandura’s research in the 70s and 80s found that individuals who believed in their ability to influence events affecting their lives were more likely to set challenging goals and persist longer in the face of obstacles and achieve success. Personal responsibility also underpins much of the well-established field of Cognitive Behavioural Therapy as well as Carol Dweck’s growth mindset. A PHILOSOPHICAL PERSPECTIVE The imperative to take personal responsibility sits on the free-will end of the determinism spectrum, setting the locus of control within us. Whether it is true or not that we are free to determine our own futures, the science suggests it is productive to believe that we can. An emphasis on learned optimism over learned helplessness can help people transcend their own situation or baggage-even to use it as fuel. “Meanings are not determined by situations, but we determine ourselves by the meanings we give to situations.” — Alfred Adler WHO IS ALFRED ADLER? Famous for coining the term “inferiority complex”, Alfred Adler was initially a collaborator with Freud and a co-founder of the Vienna Psychoanalytic Society, though he diverged from the group in 1911-his focus on the individual’s capacity to make conscious choices about their future was at odds with the Freudian emphasis on unconscious drivers determined by the past. Adlerian philosophy has been a core influence underlying the self-help genre, from Steven Covey (7 Habits of Highly Effective People) to Brene Brown (Dare to Lead). Adler’s philosophy provides both a cause and a solution to the problem at the heart of self-help. THE PROBLEM We have explored the benefits of taking personal responsibility on an individual basis. The problem comes when the philosophy is applied to society at large. Society is not simply a collection of individuals, it is an emergent phenomenon-a feedback loop between the collective and the individuals that comprise it. This means that statistical predictions can be made about the behaviour of a population based on social norms, pressures or inequities, and no amount of individual will can significantly influence these macroscopic forces. In this way societies are like… … QUANTUM SYSTEMS At the subatomic level, quantum particles, such as electrons, exhibit behaviors that are fundamentally unpredictable. They exist in a superposition-occupying multiple potential positions simultaneously. However, when electrons interact en masse, their collective behaviour becomes entirely predictable- order out of chaos. A FREE AGENT IN A PREDICTABLE SYSTEM So, does this mean that we don’t have any personal control or responsibility? Not necessarily, it merely means we can’t expect everyone else to have just read the same self-help book we’ve just read, and to have come to the same conclusions we have, and to be implementing those new behaviours flawlessly in the same way we are… Self-improvement requires a lot of factors to align; you need to realise you have an issue, you need to be motivated, you need to have time to research and devise new behaviours, you need freedom from obligations that stand in the way and even with all those factors aligned, it’s just hard to change. It’s much easier to expect others to change. This is the crux of the problem. An individual philosophy of ‘personal responsibility’ can quickly transform into a political philosophy of ‘personal responsibility’-one that discounts systemic inequities and puts the blame for the fortune of entire demographics on the individuals within them. POLITICAL PERSONAL RESPONSIBILITY Politicians who enlist ‘personal responsibility’ as a talking point, do so under an assumption that we live in a meritocracy, or even if they concede that “life is unfair” there’s an intended implication that life is somehow equally unfair… This brand of politics holds that society doesn’t owe us anything, but fails to appreciate that part of not expecting anything from society means not expecting that every person can always be the best version of themselves, even if you might personally be going through a period of self-improvement. And given that self-improvement on an individual level is so difficult, expecting change from entire populations without social reform is… naive, especially given the data. SOCIAL MOBILITY In the US the correlation between a parent’s income and their children’s future income is around 50%. Were we to live in a meritocracy, where effort was the only factor in success, children born into the bottom 20% of incomes, would have the same chance of being there in their adulthood as anyone else-20%. However, Pew Research showed in 2012 that the children of income earners in the bottom 20% have a 43% chance of remaining there. The same is true of the top 20%, who have a 40% chance of staying there. The results are magnified when race is taken into account. These macroscopic factors are a product, not of individual choices but of policy. Inequality cannot be addressed by demanding ‘personal responsibility’ for every individual member in a population. That’s not how statistics work, it’s not even how ‘personal responsibility’ works. THE SOLUTION A key element of Adler’s philosophy is his concept-encapsulated in typical German philosophic style with the compound word “Gemeinschaftsgefühl” or “Community Feeling”. Adler believed that genuine well-being could not be achieved without developing a sense of solidarity and connectedness with the world and everyone in it. His position was that we can determine our future through our choices, but we must also contribute to the ‘community feeling’ we want. “The important thing is to develop the courage to move forward, to dare, to stop being anxious, to look ahead with joy and see one’s fellow human beings in a friendly light.” — Alfred Adler SO… Change is hard, and the efforts others make to change are often invisible to us. To make the most of our own opportunities while making sense of and contributing to society, it is important to employ a positive double standard. A positive double standard holds that we can change our own individual futures, but we can’t expect others, particularly whole groups, to do the same without addressing wider societal forces. This is not to say that taking this view makes someone superior, it merely seeks to recognise the social dynamics at play for all of us. I believe this is a realistic outlook, and find it helps me personally to understand my own journey and issues in society a little better-seeing my fellow human beings in a more friendly light. RELATED MATERIAL If you’re interested in the relationship between the individual and the collective, try The Alignment Problem No One Is Talking About.For more about emergence phenomena, read our Emergence series.To highlight the issue of applying individual self-improvement philosophies to other groups in society, I’ve drawn on Adler and other thinkers through a very narrow perspective. Adler’s main’ focus, for instance, is on inferiority and superiority.I have also pointed towards evidence for the efficacy of a philosophy of ‘personal responsibility’ knowing that the positive psychology of Martin Seligman has come under question in recent years. I feel that, as I am using it essentially as a counterpoint to my argument, it’s appropriate to present it charitably. Originally published at https://nonzerosum.games.
2024-05-22
https://www.lesswrong.com/posts/mjY4heZpqjshNJtoF/why-i-ll-keep-my-crummy-drawings-how-generative-ai-art-won-t
mjY4heZpqjshNJtoF
Why I'll Keep My Crummy Drawings—How Generative AI Art Won't Supplant... Art.
james-brown
The other day the post below caught my attention. In its characteristic bad-grammar-ambiguous-origin-meme style, Neo’s sentiment is one I share. As an artist of sorts (it’s the only thing I’m actually qualified for) I see art as more about transformation than creation. I find we (and yes, I include myself) tend to mythologise art, attributing to it an undeserved magical or spiritual significance, rather than recognising it as basic graft. CREATIVE PLAGIARISM In a way, art is creative plagiarism*; when creating an artwork, I can only combine my own experience with my understanding of what art is, which is an amalgamation of the art I have consumed. I literally have no other material to draw on. In fact, it turns out that too much originality in art can actually be counter-productive, as Derek Thompson in his fabulous TED Talk The four-letter code to selling anything states: “To sell something surprising, you have to make it familiar.” Software like Stable Diffusion and Midjourney took people by surprise because we imagined art and creativity as something magical, an echo of the human spirit, irreducible and therefore something AI would have trouble replicating. As late as 2020 in “Unlocking the Universe”, Beth West wrote this about ‘Cities of the Future’ † “… lots of things can’t be done by machines: creative jobs such as writing books and creating art… These areas will continue to need people…” We are all now aware that visual art in particular has turned out to be one of the lowest hanging fruits of all, and regarding writing… it seems that the writing may be on the wall (that was my own pun by the way, chat GPT isn’t the only author who will resort to the lowest form of wit, given the right conditions). DON’T DESPAIR My admittedly cynical view of art is informed by experiences at art school and in the gallery scene, where I’ve had some small opportunity to glimpse behind the smoke and mirrors used to multiply art’s perceived value and shroud it in illusory significance. Don’t get me wrong, creativity is fun, and rewarding, and can be impressive—it is an essential part of life, but it’s not magic. Art is work. It’s no wonder that AI can replicate and even improve on this process, it’s put in the hours (which, for AI, take seconds). AI has absorbed more material than we can imagine (in part because our imagination is limited by what we have absorbed!). BUT THAT’S NOT THE END OF THE STORY Creativity is about connection. The reason we like to consume art is because it makes us feel connected to an artist, and, by proxy, humanity. Whether it’s an actor playing a part, a musical compositon or an author’s words, an artwork is a telescope that let’s us see someone else’s island of internal experience. And a good work of art, a work of art that we love, makes us feel seen too. As Zefrank says: “The things that make us feel most alone have the greatest power to connect us.” PHOTOGRAPHY 2.0 We can understand the impact of AI art by analogy with photography in the 19th Century, a technology that threatened the essence and purity of art. Charles Baudelaire, in 1859, went so far as to call photography… “art’s most mortal enemy” … and yet rather than having “supplanted or corrupted it altogether” ‡, photography actually drove a profound reevaluation of art, leading to radical new movements from Impressionism through Cubism, Surrealism, Abstract Expressionism, Pop Art and the other modernist and post-modernist art that furnished the Twentieth Century and beyond (thanks BFA). Not only did photography broaden the scope of art-making beyond mere representation, but it eventually influenced representational art in the form of photo-realist painting. Artists like Chuck Close, actively subverted the photographic medium, by replicating it! Photorealists produced art that seemed eeriely more realistic than the perfection of a photo, and made audiences gape in wonder at another human being’s talent and toil. We rediscovered that art is about connection. A SHORT CHESS TANGENT Another example of disruptive technology enhancing a field rather than destroying it, is the popularity of chess. From the time Deep Blue beat Kasparov in 1996, chess popularity has accelerated. It turns out we thrive on challenge. The world is already beginning to lose the romance of our first brush (I promise, I’m not usually like this) with AI art—as it becomes increasingly ubiquitous, we are no longer swooning. That which can be produced cheaply is soon taken for granted. At the same time, I don’t mean to denigrate the form. I think AI art will be as impactful as photography continues to be, it has its place, but it won’t devalue a Chuck Close or a Banksy, and it won’t stop me doing my own crappy drawings. SO… The joy of creativity, whether it’s painting, photography, AI art or playing chess is the joy of connection with another sentient being. If we know there’s no experience there, no toil, no message then the level of attention viewers pay to the work will diminish accordingly. At the same time regarding AI, I think we can appreciate the beauty of connecting with humanity as a whole, knowing that it is the big data of humanity that has informed AI art—I suspect this is what we find so magical about it. AI art does say something to us and about us, it reflects us in the same way as a photographic portrait can. When the subject gazes directly into the camera, it creates a connection, bridging the viewer with the human at the other end of the lens. In my writing and art I want to connect with you, and I will continue to do so with the cutest animals I can muster. RELATED MATERIAL * Noam Chomsky has famously described chat bots as “plagiarism software”.† Beth West is the Head of Development for London, Landsec Real Estate, and featured as the writer of the ‘Cities of the Future’ chapter in Stephen and Lucy Hawking’s “Unlocking the Universe”.‡ Also from Charles Baudelaire “If photography is allowed to supplement art in some of its functions, it will soon have supplanted or corrupted it altogether, thanks to the stupidity of the multitude which is its natural ally.”If you’re interested in AI, you might like to check out our alignment series Originally published at https://nonzerosum.games.
2024-05-15
https://www.lesswrong.com/posts/qMcS6xmSgtLL4mpxQ/why-do-we-enjoy-music
qMcS6xmSgtLL4mpxQ
Why do we enjoy music?
metachirality
Enjoying music doesn't seem to have any obvious purpose. Sure you can argue it strengthens social bonds, but why specifically sounds arranged in patterns through time over anything else? At least with humor you can say it's about identifying the generating function of some observation which is sort of like reducing prediction error in predictive coding (and I suspect something like this is the basis for aesthetics) but I can't fit music into being anything like this.
2024-05-14
https://www.lesswrong.com/posts/gTPATcjGMoEkiaLyj/emergence-is-a-universal-non-zero-sum-phenomenon
gTPATcjGMoEkiaLyj
Emergence Is a Universal Non-Zero-Sum Phenomenon.
james-brown
Against a torrent of entropy, eddies of complexity arise. We have the non-zero-sum phenomenon of emergence to thank for our very existence. While I concede the problematic nature of the term "Emergence" as detailed by Eliezer Yudkowsky in The Futility of Emergence, this series makes a case for emergence not as an explanation, but as a description of a particular type of phenomenon, one that produces qualitatively distinct and more complex entities than those from which they emerged. This six part series explores: Conway's Game of Life—explore the concept of emergence with a simple simulation.Entropy vs Emergence—understanding the seemingly paradoxical nature of the second law of thermodynamics.Emergent Gravity—how can a universe arise from pure chaos?On the Shoulders of Substrates—how one phenomenon lays the foundation for the next.Replicators—the dangerous children of emergence; genes, memes, temes, snenes, grenes and of course quenes.Emergence is Beautiful—beauty and meaning in an entropic universe. I'm link-posting because these posts in particular involve interactive simulations to help illustrate the principle of emergence—which won't function on LW. Please discuss, I'm particularly interested in criticisms by physicists, as I am not a trained physicist and I would like to become less wrong.
2024-05-14
https://www.lesswrong.com/posts/5nfTXn4LrxnTmBWsb/environmentalism-in-the-united-states-is-unusually-partisan
5nfTXn4LrxnTmBWsb
Environmentalism in the United States Is Unusually Partisan
jeffrey-heninger
This is the first in a sequence of four posts taken from my recent report: Why Did Environmentalism Become Partisan? Introduction In the United States, environmentalism is extremely partisan. It might feel like this was inevitable. Caring about the environment, and supporting government action to protect the environment, might seem like they are inherently left-leaning. Partisanship has increased for many issues, so it might not be surprising that environmentalism became partisan too. Looking at the public opinion polls more closely makes it more surprising. Environmentalism in the United States is unusually partisan, compared to other issues, compared to other countries, and compared to the United States itself at other times. The partisanship of environmentalism was not inevitable. Compared to Other Issues Environmentalism is one of the, if not the, most partisan issues in the US. The most recent data demonstrating this comes from a Gallup poll from 2023.[1] Of the 24 issues surveyed, “Protecting the Environment Has Priority Over Energy Development” was tied for the largest partisan gap with “Government Should Ensure That Everyone Has Healthcare.” Of the top 5 most partisan issues, 3 were related to environmentalism. The amount this gap has widened since 2003 is also above average for these environmental issues. Figure 1: The percentages of Republicans and Democrats who agree with each statement shown, 2003-2023. Reprinted from Gallup (2023). Pew also has some recent relevant data.[2] They ask whether 21 particular policies “should be a top priority for the president and Congress to address this year.” The largest partisan gap is for “protecting the environment” (47 p.p.), followed by “dealing with global climate change” (46 p.p.). These are ten percentage points higher than the next most partisan priority. These issues are less specific than the ones Gallup asked about, and so might not reveal as much of the underlying partisanship. For example, most Democrats and most Republicans agree that strengthening the economy is important, but they might disagree about how this should be done. Figure 2: The percentages of Republicans and Democrats who believe that each issue should be a top priority. Reprinted from Pew (2023). Guber’s analysis of Gallup polls from 1990, 2000, & 2010 also shows that environmentalism is unusually partisan.[3] Concern about “the quality of the environment” has a similar partisan gap as concern about “illegal immigration,” and larger than concern about any other political issue. If we hone in on concern about “global warming” within overall environmental concern, the partisan gap doubles, making it a clear outlier. Figure 3: Difference between the mean response on a four point scale for party identifiers on concern for various national problems in 2010. “I'm going to read you a list of problems facing the country. For each one, please tell me if you personally worry about this problem a great deal, a fair amount, only a little, or not at all.” Reprinted from Guber (2013). The partisanship of environmentalism cannot be explained entirely by the processes that made other issues partisan. It is more partisan than those other issues. At least this extra partisan gap wants an explanation. Compared to Other Countries The United States is more partisan than any other country on environmentalism, by a wide margin. The best data comes from a Pew survey of “17 advanced economies” in 2021.[4] It found that 7 of them had no significant partisan gap, and that the US had a partisan gap that was almost twice as large as any other country. Figure 4: Percentages of people with different ideologies who would be willing to make a lot of or some changes to how they live and work to help reduce the effects of global climate change, in 17 different countries. Only statistically significant differences are shown. Reprinted from Pew (2021). This is evidence that environmentalism is more likely to be left-leaning. The explanation for this might involve something intrinsic to environmentalism itself, or it might involve interactions between countries and shared media environments. But it clearly is possible for environmentalism to remain bipartisan, which has happened in the UK, France, Spain, Japan, South Korea, Taiwan, and Singapore. The United States is more partisan overall than most other countries, but it is not an outlier. There are other countries with similar levels of overall partisanship,[5] but almost no partisanship in their support for environmentalism: France[6] and South Korea.[7] There is no correlation between overall partisanship and partisanship in environmentalism.[8] Compared to Other Times Environmentalism was a bipartisan issue in the United States as recently as the 1980s. The longest data series for U.S. public opinion on environmentalism comes from the General Social Survey, which has been administered to thousands of Americans for most years between 1974 and 2012.[9] Figure 5: Percentages of Democrats and Republicans reporting that national spending on the environment is “Too Little,” 1974-2012. Reprinted from McCright et al. (2014). During the mid-to-late 1970s, support for environmentalism was declining in both parties. Democrats were consistently about 10 percentage points (p.p.) more likely than Republicans to say that there was too little environmental spending. During the 1980s, support for environmentalism surged. This increase was even larger among Republicans than among Democrats, with the partisan gap closing by the end of the decade. In the 1990s and 2000s, Democrats’ support for environmentalism remained roughly constant, while Republicans’ support fell dramatically. A large partisan gap opened. The overall support for environmentalism declined, although this might be because support for overall government spending also fell in the early 1990s.[10] Gallup polling on similar questions only goes back to 1997.[11] It shows an initially modest partisan gap of 15 p.p. in 1997, which grew to an over 50 p.p. gap in 2021. Figure 6: Percentages of Republicans, Independents, and Democrats who believe that global warming will pose a serious threat to themselves or their way of life, 1997-2021. Reprinted from Gallup (2021). This change is especially striking because the Republican Party did not change its positions on most issues between the 1980s and 2000s. Underlying principles like small government economics and social conservatism were common to the Republican Party of both decades. The anti-environmentalism of the Republican Party began in the 1990s, clearly after the ‘Reagan Revolution.’ Conclusion The development of a large partisan gap about environmentalism in the United States was not inevitable. The United States has a smaller partisan gap for most other issues, other countries have less partisanship on this issue (even if the country is very partisan overall), and environmentalism was a bipartisan issue as recently as the 1980s. This suggests that the explanation for the partisanship does not lie in broad structural or ideological factors that are consistent across many countries and times. Instead, the explanation is likely to be contingent, centered on the choices of individual decision makers. ^ Frank Newport. Update: Partisan Gaps Expand Most of Government Power, Climate. Gallup. (2023) https://news.gallup.com/poll/509129/update-partisan-gaps-expand-government-power-climate.aspx. See also: Frank Newport & Andrew Dugan. Partisan Differences Growing on a Number of Issues. Gallup (2017) https://news.gallup.com/opinion/polling-matters/215210/partisan-differences-growing-number-issues.aspx. ^ Economy Remains the Public’s Top Policy Priority; COVID-19 Concerns Decline Again. Pew Research. (2023) https://www.pewresearch.org/politics/2023/02/06/economy-remains-the-publics-top-policy-priority-covid-19-concerns-decline-again/. ^ Deborah Lynn Guber. A Cooling Climate for Change? Party Polarization and the Politics of Global Warming. American Behavioral Scientist 57.1. (2013) p. 93 –115. https://cssn.org/wp-content/uploads/2020/12/A-Cooling-Climate-for-Change-Party-Polarization-and-the-Politics-of-Global-Warming-Deborah-Guber.pdf. ^ James Bell, Jacob Poushter, Moira Fagan & Christine Huang. In Response to Climate Change, Citizens in Advanced Economies Are Willing To Alter How They Live and Work. Pew Research. (2021) https://www.pewresearch.org/global/2021/09/14/in-response-to-climate-change-citizens-in-advanced-economies-are-willing-to-alter-how-they-live-and-work/. ^ Laura Silver. Most across 19 countries see strong partisan conflict in their society. Pew Research. (2022) https://www.pewresearch.org/short-reads/2022/11/16/most-across-19-countries-see-strong-partisan-conflicts-in-their-society-especially-in-south-korea-and-the-u-s/. ^ Macron & Le Pen seem to have fairly similar climate policies. Both want France's electricity to be mostly nuclear – Le Pen more so. Both are not going to raise fuel taxes – Macron reluctantly. Le Pen talks more about hydrogen and reshoring manufacturing from countries which emit more. Macron supports renewables in addition to nuclear power. The various leftists (socialists, greens, and communists run separately in recent elections) seem to be interested in phasing out nuclear & replacing it with renewables. None of the parties dismiss climate change as an issue and all are committed to following international climate agreements. Kate Aronoff. Marine Le Pen’s Climate Policy Leans Ecofascist. The New Republic. (2022) https://newrepublic.com/article/166097/marine-le-pens-climate-policy-whiff-ecofascism. ^ Heesu Lee. Climate Is the New ‘Must-Have’ in South Korean Election Gameplan. Bloomberg. (2024) https://www.bloomberg.com/news/articles/2024-04-04/climate-is-the-new-must-have-in-south-korean-election-gameplan. ^ There are 14 countries in both the Pew survey on environmentalism and the Pew survey on overall partisanship. There is no correlation between the fraction of people who say that there are strong or very strong conflicts between people who support different parties in their country vs. the left-right difference between people who say that they are willing to make a lot of or some changes to how they live and work to help reduce the effects of global climate change. https://docs.google.com/spreadsheets/d/1h14JsezOloAUqy78MBo_JPpwiAjUcwQ-z8V5wuihUk8/edit?usp=sharing. ^ Aaron M. McCright, Chenyang Xiao, & Riley E. Dunlap. Political polarization on support for government spending on environmental protection in the USA, 1974-2012. Social Science Research 48. (2014) p. 251-260. https://www.sciencedirect.com/science/article/abs/pii/S0049089X1400132X. ^ Little Public Support for Reductions in Federal Spending. Pew Research. (2019) https://www.pewresearch.org/politics/2019/04/11/little-public-support-for-reductions-in-federal-spending/. ^ Lydia Saad. Global Warming Attitudes Frozen Since 2016. Gallup. (2021) https://news.gallup.com/poll/343025/global-warming-attitudes-frozen-2016.aspx. Note that there are several similar questions, all which show a small or zero partisan gap when the data starts, which grows dramatically in time.
2024-05-13
https://www.lesswrong.com/posts/KtCgWunZqgbTnP94a/openai-releases-gpt-4o-natively-interfacing-with-text-voice
KtCgWunZqgbTnP94a
OpenAI releases GPT-4o, natively interfacing with text, voice and vision
martinsq
Until now ChatGPT dealt with audio through a pipeline of 3 models: audio transcription, then GPT-4, then text-to-speech. GPT-4o is apparently trained on text, voice and vision so that everything is done natively. You can now interrupt it mid-sentence.It has GPT-4 level intelligence according to benchmarks. 16-shot GPT-4o is somewhat better at transcription than Whisper (that's a weird comparison to make), and 1-shot GPT-4o is considerably better at vision than previous models.It's also somehow been made significantly faster at inference time. Might be mainly driven by an improved tokenizer. Edit: Nope, English tokenizer is only 1.1x.It's confirmed it was the "gpt2" model found at LMSys arena these past weeks, a marketing move. It has the highest ELO as of now.They'll be gradually releasing it for everyone, even free users.Safety-wise, they claim to have run it through their Preparedness framework and the red-team of external experts, but have published no reports on this. "For now", audio output is limited to a selection of preset voices (addressing audio impersonations).The demos during the livestream still seemed a bit clanky in my opinion. Still far from naturally integrating in normal human conversation, which is what they're moving towards.No competitor of Google search, as had been rumored.
2024-05-13
https://www.lesswrong.com/posts/zKjhwt6pPCBMEJ9KJ/gpt-4o-is-out
zKjhwt6pPCBMEJ9KJ
GPT-4o is out
EZ97
OpenAI just announced an improved LLM called GPT-4o. From their website Today, GPT-4o is much better than any existing model at understanding and discussing the images you share. For example, you can now take a picture of a menu in a different language and talk to GPT-4o to translate it, learn about the food's history and significance, and get recommendations. In the future, improvements will allow for more natural, real-time voice conversation and the ability to converse with ChatGPT via real-time video. For example, you could show ChatGPT a live sports game and ask it to explain the rules to you. We plan to launch a new Voice Mode with these new capabilities in an alpha in the coming weeks, with early access for Plus users as we roll out more broadly.
2024-05-13
https://www.lesswrong.com/posts/mk4oTFZubvRLZRDiw/somerville-porchfest-thoughts
mk4oTFZubvRLZRDiw
Somerville Porchfest Thoughts
jkaufman
This Saturday was Porchfest in Somerville, an annual festival where musicians around the city play on their porches and people wander around listening. As in the past few years Cecilia and I ( Kingfisher) played for contra dancing: Harris Lapiroff called: If anyone has pictures of videos from the set, I'd love to see them as well. Overall, we had a great time! It was fun seeing so many friends, and the kids did some very good business with their bake sale. This was the second year we'd gotten permission to close the street to traffic, which we really needed. I applied on Monday 2024-03-18 and it was approved on Wednesday 2024-05-08 for an event on Saturday 2024-05-11. This was cutting it a bit close; among other things the permit says you need to give all abutters at least one week written notice, which wasn't possible. My guess is the city wanted to decide all together which streets would be closed, and wanted to do that close to the time so they'd have as much information as possible? There were several times that cars came through the closed street, mostly in groups. They moved appropriately slowly, but they're just very bulky which was tricky with the dancing. There were barriers up, but maybe with how many streets were closed the drivers felt like they had no other possible routes? Even though Morrison tends to have a lot of bands and people, this year it was far from the most crowded place. That would have been Aberdeen, where Guster played. Over on Reddit /u/mais318 took a drone up (possibly unsafely/illegally?): I did hear a lot of mixed things afterwards about crowd and traffic management. Some ideas for future iterations: Close more streets. If bands that tend to draw a large crowd on a street, put in a block party request. Pick some main streets to keep open [1] and ask people not to perform on those streets. Strongly discourage driving on non-main streets, even the ones that aren't closed. Decide farther in advance which streets will be closed, and make a map available. Encourage people with houses backing up onto the bike path to play/host bands on the path. Bands likley to draw very large crowds (ex: with >20k monthly Spotify streamers) should only play as a surprise. Other ideas? [1] Spitballing: Broadway, Holland, Elm, Powder House, Boston Ave, Highland Ave, Summer, Broadway, Medford St, Pearl, Somerville Ave, Washington, Mystic, Rt 16, Rt 28, College, Cedar, Central, School, Prospect
2024-05-13
https://www.lesswrong.com/posts/P9kTkbqcW2PtnHErQ/branding-ai-safety-groups-a-field-guide
P9kTkbqcW2PtnHErQ
Branding AI Safety Groups: A Field Guide
agucova
This article is the first in a series I plan to publish on different aspects of AI Safety group strategy. The aim is that, eventually, these articles will form the basis for a new resource center for AI Safety Groups. Note that these articles aren’t being published in any particular order. TL;DR: AI safety groups should carefully consider their branding strategy to attract target audiences and avoid silently losing talented individuals by giving the wrong impressions. Key things to do are identifying audience profiles, tailoring the value proposition and differentiating your group from others. Common branding strategies (technical, institutional, impact, or broad appeal) each have tradeoffs to weigh based on your goals and your context. Regularly seek feedback to refine branding rather than relying solely on intuition. Introduction In some ways, it’s still the early days of AI Safety groups. Groups, whether they’re at universities, entire cities, or online, are still trying to figure out what works and what doesn’t. In many ways, AI Safety groups are much more heterogeneous than, say, EA groups, and this is to be expected: there isn’t broad agreement about things like group strategy, programming, or even a defined set of common goals. One aspect of this heterogeneity is branding. This is partly a good thing: AI Safety is a field, not a cohesive community, and it often needs to appeal to very different audiences. At the same time, we could benefit from sharing lessons learned from aspects like branding, especially as groups continue to experiment and collect feedback from the real world. This post is a provisional field guide for group organizers to think through their branding and incorporate some of the early lessons learned from other groups. Why branding matters First impressions matter. It’s easy for organizers to underestimate how many talented people they lose to self-exclusion. The “vibes” you project can be a dealbreaker for many people, and these are often silent failures: it’s difficult to realize this is happening unless you make an explicit effort to find out about it. Many group organizers have seen steep and surprising benefits after branding changes[1], and based on my interaction with many organizers, I suspect many groups’ early struggles with outreach stem from the problems I describe below. Branding is not a silver bullet, but it certainly helps. But before we jump in, a brief disclaimer: branding is not about being strategically dishonest or hiding your intentions from others, it’s about communicating clearly and making a compelling case for why others should care about your group. It’s an exercise in communicating clearly, not in deception, and it’s important to distinguish between these two things. What failure looks like To illustrate my point, here are some examples of failures that can occur from using the wrong branding strategy for a particular audience. All of these are loosely inspired by actual experiences from groups: A law student is looking for extracurriculars for the semester and finds about a student group advertising an AI policy fellowship. She looks into it, but by their website, the group looks like your typical random CS student club, and apparently they run this “fellowship” over Discord. She quickly rules it out.A CS PhD student is dissatisfied with the research being done in his lab and wants to do something more meaningful. He’s read some convincing papers on AI Safety, but it’s not like there’s anyone interested in this kind of research on campus (or funding for it). There’s an AI Safety group on campus, but they are just a bunch of undergrads and don’t seem to be doing any research.A recent policy graduate is looking for opportunities to break into tech policy. After some searching, they find that a group in their city offers an AI policy fellowship. The fellowship's contents look interesting, but the group seems unprofessional, and not like the kind of thing you would proudly put in your CV. She looks into other options.A big EA city group decides to start an AI Safety subgroup and starts doing outreach outside the group to find people interested in their technical fellowship. They later discover that only a few people applied for the fellowship, all of whom are part of the EA group.[2]An ML professor is curious about the possibility of doing AI Safety research. He recently heard that some students on campus started an AIS club, so he looked them up. He immediately gets the vibe that it’s an advocacy group, which he finds off-putting. As a result, they refrain from reaching out to the group. You get the gist: it’s important to consider how you’re perceived by others. Particularly those in your target audiences, but occasionally those outside of it. Edit: Note that these can also be successes, depending on your target audiences. Signaling clearly who you're not looking for is also valuable, as it avoids bad experiences from misaligned expectations about your group.[3] Defining your strategy These are some things to consider when rolling out your own branding strategy. Even when a group is following or copying an existing strategy (as the ones below), I think organizers would benefit from taking some time to think through these considerations anyway: Target audiencesThink through who your audiences are, and write down their profiles.Consider not only their demographics (“3rd year CS student”) but also their interests (“wants to perform research that feels valuable”) and how they might find out about your group (“regularly checks the faculty mailing list for TA opportunities”).These aren’t necessarily just your audience of potential members. They could also include other important stakeholders (for example, professors or decision-makers).Value propositionThink about each of your audience profiles and what’s your value proposition to them.Do they want to join your fellowship because they are looking for their first research opportunity? Because they like having intellectual conversations? Because they like hanging out with people who care deeply about things?PositioningEspecially in the case of university groups, consider how your group positions itself among other student groups: What sets you apart? If you’re in a city and national group, you might think broadly about, for example, competing career opportunities or technical communities.Some groups have ensured they look more dedicated, competent, interesting, or exclusive than other competing opportunities or communities and use this to their advantage. If your group changes scope (for example, by starting an AI Governance fellowship in an otherwise technical group), you should probably revisit these considerations to ensure you’re not failing to update your strategy for a different set of audiences. It’s easy to dismiss as unnecessary distractions from organizing, but once you let organizing flow downstream to some of these considerations, you’ll find that many things become much easier. Some high-level strategies I’ll review some high-level strategies groups might want to follow and their tradeoffs, giving examples of each. This is not meant to be an exhaustive list, but I suspect it encapsulates the implicit strategies of many groups relatively well.[4] Note that these strategies are ultimately context-dependent. Different cultures, countries, or even universities might interpret certain elements differently, and you should be ready to adjust accordingly. That said, in my experience, new organizers frequently over-update to their local context[5], so a good strategy is to copy other strategies at first and then tweak them based on actual feedback, not just intuition. You don’t necessarily need to choose only one of these. Groups might benefit, for example, from using a broad-appeal strategy for the group itself while branding their technical and governance programs differently. If you have a website, your landing page can use a strategy different from the page for your technical fellowship, as long as the group’s brand still feels relatively cohesive. Technical-leaning The website of the AIS Initiative at Georgia Tech reflects a technical-leaning strategy.AudiencePeople with a strong interest in CS or MLPeople who relate to hacker culture.General attributesUse of dark, rich colors.Occasional use of monospaced fonts or color gradientsFeels deeply technical, clever, and even chaotic or playful.TradeoffsIn most cases, it can feel strongly off-putting to almost everyone else.Even within a CS department, this kind of branding can often appeal more to men than women or non-binary people.ExamplesOf groups: AISUC, BASIS, Georgia Tech, Warwick AI, AI Safety GothenburgA favorite non-group example is Apart Research Institutional-leaning The website of the Yale AI Policy reflects an institutional-leaning strategy.AudiencePeople seeking credentialed opportunities in law and public policyExperts or decision-makers in these areasGeneral attributesUse of light palettesRelatively neutral languageOften, a focus on signaling either credentials (of people) or institutional reputation (often through “we have worked with” sections)TradeoffsDoesn’t carry much energy or motivation: can be unhelpful if you want people to be action-oriented.Can look significantly less attractive to strictly technical audiences.ExamplesNo groups I know of fully lean into this, but a good non-group example is CAISTo a lesser degree, Yale AI Policy, AIGS Canada, and RAIN do some of this. Impact-leaning The website of Stanford AI Alignment reflects an impact-leaning strategy.AudiencePeople who already care about the impacts of AI on societyPeople interested in AI Ethics, or FAccTPeople coming from effective altruism[6]General attributesUse of light, warm palettesHeavy use of “problem framing” or “fairness framing”Generally has a friendly and welcoming toneTradeoffsIt can turn off people who aren’t yet convinced that AI Safety (or ethics) is a big problemSince it’s usually associated with openness (in contrast with exclusivity), it can trade off with signals of technical competence[7] or have bad positioning among other groupsExamplesStanford AI AlignmentAI Safety Initiative GroningenOften, AI Safety fellowships run by EA groups Broad disciplinary appeal The website of Harvard's AISST reflects a broad appeal strategy.AudienceAppeals broadly to the broad class of people who think AI might be transformative or who worry about its impacts on society.General attributesA bit of a mix between institutional-leaning and technical-leaning strategiesLooks professional, but not bland. It carries a strong message.It might also lean into exclusivity and competitiveness.TradeoffsWhile this can be a good one-size-fits-all, it trades off with every audience by being unspecificThis downside can partly be remediated by combining strategies (see below)ExamplesHarvard's AI Safety Student TeamCambridge AI Safety Hub Conclusion AI Safety groups have great potential, but this potential can sometimes be unnecessarily curbed by practical considerations, such as how they decide to brand themselves. Hopefully, this guide can serve as a starting point for organizers, both new and experienced, to think through some of these considerations and iteratively improve their groups. Annex: Some practical tips on websites If you have a website, try to make sure that: Your landing page is consistent with your branding strategyYour landing page contains a clear & short description of what your group is, what it does, and how people can participate (a call to action)It’s very easy to fail at this! If you want a website, but you’re not confident in being able to afford/build a custom one: Consider using no-code tools like Squarespace, Wix, or WebflowBut make sure your page doesn’t look genericDon’t just pick a cool-looking template, put in some time to customize it so it matches your brandingIf you’re going for a technical-leaning strategy, consider using the template made by AI Safety Hungary^ The most salient example of this is the big jump in applications to groups that moved from an EA-focused branding to an AIS-focused branding. See here for details. ^ The issue doesn’t seem to be that EA is involved, but rather that none of the elements that would appeal to a technical audience alone are present. This doesn’t mean you should hide your group’s involvement, but rather that you should think about what your audience cares about in the first place. ^ Thanks to @Jamie Bernardi for suggesting this addition ^ Some groups which seem hard to encapsulate in one of these: AI Safety Tokyo, UVic AI and WAISI. I expect some of the examples to be a bit of stretch, since there aren't good prototypes for some of these strategies . ^ One example is that when I started my first EA group, the other organizers and I assumed that it would be better to figure out most things by first principles, given the different cultural context at a Chilean university. It took us a year to realize that many of the recipes followed by other groups actually transferred pretty well, and ours lacked real experience. ^ This might seem like a contradiction with appealing to AI ethics, but I think branding is separate from content, and groups pursuing this kind of framing can appeal to both, either or neither. ^ This isn’t to say this is necessarily a bad thing, but exclusivity can also be a powerful signaling tool. Deciding where to lay in the spectrum between openness and exclusivity is one of the trickiest decisions when it comes to group strategy.
2024-05-13
https://www.lesswrong.com/posts/Ds5ShpaLLdBzkAuvT/against-student-debt-cancellation-from-all-sides-of-the
Ds5ShpaLLdBzkAuvT
Against Student Debt Cancellation From All Sides of the Political Compass
maxwell-tabarrok
A stance against student debt cancellation doesn’t rely on the assumptions of any single ideology. Strong cases against student debt cancellation can be made based on the fundamental values of any section of the political compass. In no particular order, here are some arguments against student debt cancellation from the perspectives of many disparate ideologies. Equity and Fairness Student debt cancellation is a massive subsidy to an already prosperous and privileged population. American college graduates have nearly double the income of high school graduates. African Americans are far underrepresented among degree holders compared to their overall population share. Within the group of college graduates debt cancellation increases equity, but you can’t get around the fact that 72% of African Americans have no student debt because they never went to college. The tax base for debt cancellation will mostly come from rich white college graduates, but most of the money will go to … rich white college graduates. Taxing the rich to give to the slightly-less-rich doesn’t have the same Robin Hood ring but might still slightly improve equity and fairness relative to the status quo, except for the fact that it will trade off with far more important programs. Student debt cancellation will cost several hundred billion dollars at least, perhaps up to a trillion dollars or around 4% of GDP. That’s more than defense spending, R&D spending, more than Medicaid and Medicare, and almost as much as social security spending. A trillion-dollar transfer from the top 10% to the top 20% doesn’t move the needle much on equity but it does move the needle a lot on budgetary and political constraints. We should be spending these resources on those truly in need, not the people who already have the immense privilege on an American college degree. Effective Altruism The effective altruist critique of student debt cancellations is similar to the one based on equity and fairness, but with much more focus on global interventions as an alternative way to spend the money. Grading student debt cancellation on impact, tractability, and neglectedness, it scores very poorly. Mostly because of tiny impact compared to the most effective charitable interventions. Giving tens of thousands of dollars to people who already have high incomes, live in the most prosperous country on earth, and face little risk of death from poverty or disease is so wasteful that it borders on criminal on some views of moral obligations. It is letting tens of millions of children drown (or die from malaria) because you don’t want to get your suit wet saving them. Saving a life costs $5,000, cancelling student debt costs $500 billion, you do the math. Student Debt Crisis If what you really care about is stemming the ill-effects of large and growing student debt, debt cancellation is a terrible policy. If you want people to consume less of something, the last thing you should do is subsidize people who consume that thing. But that’s exactly what debt cancellation does: It is a massive subsidy on student debt. Going forward, the legal precedent and political one-upmanship will make future cancellations more likely, so students will be willing to take more debt, study less remunerative majors, and universities will raise their prices in response. Helping those who are already saddled with student debt by pushing future generations further into it is not the right way out of this problem. Fiscal Conservativism Student debt cancellation is expensive. Several hundred billion dollars has already been spent and several hundred billion more are proposed. This will mostly be financed through debt, especially since most of the costs of the program come through forgone revenue rather than direct spending. Source If profligate debt financing like this continues, interest payments on debt will soon become the largest single item on the government’s balance sheet. The problem gets worse as debt gets larger since every additional piece of spending raises the interest rate on all the debt you already hold, so it can quickly spiral out of control. There are only two ways out of this once the money has been spent: higher taxes and default. Both of these have huge costs to economic growth and freedom that are not worth paying just to bail out college students who weren’t responsible enough to study engineering and pay off their debt. Democracy Biden’s student debt cancellation plan does not involve Congress. Instead, he is implementing it entirely through executive order and his control over the Department of Education. An essentially equivalent plan has already been struck down by the Supreme Court but Biden’s team has found justification elsewhere in the library of babel that is the code of federal regulations. If you think student debt cancellation is a good idea and if it has popular support, then it should be done through an act of Congress. But giving the president the ability to move a trillion dollars off of the government’s balance sheet and send the money back to his voter base without congressional approval is a risk to the democratic institutions of the US. The ends may justify the means in this case, but the next guy will have all the same means and opposite ends. Cultural Conservatism Student debt cancellation is a winning lottery ticket for the least deserving group of people in all of human history: American college graduates who aren't productive enough to pay off their own debt. So not the engineers, computer scientists, doctors or lawyers who studied something valuable and paid off their debt responsibly. Nor the plumbers, electricians, and construction workers who invested in an honest trade. These groups will face big tax burdens to pay for the most indulgent and irresponsible college graduates who chose majors that are not valuable to society. Much of that subsidy will be passed through straight to the already decadent university administrations who will use it to pour even more money into DEI-humanities majors that do nothing except produce a larger constituency for future debt relief. Progress Studies The progress studies criticism of student debt cancellation is sort of a combination of the effective altruist critique and the fiscal conservative one. What matters is the opportunity cost. The most important opportunity cost in this case is the lowered economic growth rate due to crowd-out and higher interest rates. These growth rate effects compound over decades into big differences in living standards. If a tenth of the cost of debt cancellation were invested in R&D we would be several times richer in 100 years than if we spend the money subsidizing degrees with low income returns. Divestment A bit of horseshoe theory here. The Palestine and climate divestment protestors and the cultural conservatives agree that student debt cancellation is bad because it is a big subsidy to university administrations, although for very different reasons. Elite universities are hedge funds with classes attached. Subsidizing student debt will push more money into the massive financial machine that is funding fossil fuels and genocide. Marxism If we had a Marxist like Rosa Luxemberg here what she would tell you is that there can be no class more revolutionary than the proletariat. Anyone who’s not a worker is a part of the reactionary class. So a diversity officer at Harvard University is in the same class as a bank robber or a prostitute. They are part of a corrupt capitalist racket. Student debt cancellation is a transfer to bourgeoisie college graduates. The taxes that pay for this transfer are taxes on value that is produced by labor, but claimed by capitalist owners. So even though the accounting books will show the taxes coming mainly from high income earners, this is not merely a transfer from the rich to the slightly less rich. It is a smokescreen over a transfer from the value created by workers to temporarily embarrassed capitalists. Biden’s Campaign This is perhaps the only value under which student debt cancellation makes sense. Doing Something about student debt and sending out checks will probably be a popular policy move. Voters don’t usually consider tradeoffs, and the costs of the policy will come long after the election this November. I’m not qualified to say whether this is the best way for Biden to secure a re-election given his resources and constraints, but it wouldn’t be crazy. For this to be worth it, Biden’s reelection has to be worth several trillion dollars, since debt cancellation costs around one trillion and it’s benefit is just the increase in his percentage chance to win. Except for the last three, all of these perspectives contribute to my personal stance against student debt cancellation. It’s an inefficient way to help people who aren’t in great need at great expense to economic growth and future generations of students. There are very few sets of values that couldn’t be fulfilled to greater effect at less cost than trying to fulfill them through student debt cancellation.
2024-05-13
https://www.lesswrong.com/posts/9GLj9DqfpsJBRKHRr/monthly-roundup-18-may-2024
9GLj9DqfpsJBRKHRr
Monthly Roundup #18: May 2024
Zvi
As I note in the third section, I will be attending LessOnline at month’s end at Lighthaven in Berkeley. If that is your kind of event, then consider going, and buy your ticket today before prices go up. This month’s edition was an opportunity to finish off some things that got left out of previous editions or where events have left many of the issues behind, including the question of TikTok. Oh No All of this has happened before. And all of this shall happen again. Alex Tabarrok: I regret to inform you that the CDC is at it again. Marc Johnson: We developed an assay for testing for H5N1 from wastewater over a year ago. (I wasn’t expecting it in milk, but I figured it was going to poke up somewhere.) However, I was just on a call with the CDC and they are advising us NOT to use it. I need a drink. They say it will only add to the confusion because we won’t know where it is coming from. I’m part of a team. I don’t get to make those decisions myself. Ben Hardisty: The usual institute, or did they have a good reason? Marc Johnson: They say it would only add to the confusion since we don’t know precisely where it is coming from. But then they said 2 minutes later that they aren’t sure this isn’t just regular influenza appearing late. We can answer that, so why don’t we??? I don’t get it. Alex: Are your team members considering bucking the CDC advice or has the decision been made to acquiesce? I understand them not wanting panic but man if that’s not self serving advice I don’t know what is. Marc Johnson: The CDC will come around. ZzippyCorgi11: Marc, can private entities ask you to test wastewater around their locations? Is the CDC effectively shutting down any and all testing of wastewater for H5N1? Marc Johnson: No, if people want to send me wastewater I can test them with other funding. I just can’t test the samples I get from state surveillance. JH: This is ridiculous. Do it anyway! Marc Johnson: It’s not my call. I got burned once for finding Polio somewhere I wasn’t supposed to find it. It fizzled, fortunately. Ross Rheingans-Yoo: It’s a societal mistake that we’re not always monitoring for outbreaks of the dozen greatest threats, given how cheap wastewater testing can get. Active intervention by the CDC to stop new testing for a new strain of influenza circulating in mammals on farms is unconscionable. I strongly agree with Ross here. Of all the lessons to not have learned from Covid, this seems like the dumbest one to not have learned. How hard is ‘tests help you identify what is going on even when they are imperfect, so use them’? I am not so worried, yet, that something too terrible is that likely to happen. But we are doing our best to change that. We have a pattern of failing to prepare for such easily foreseeable disasters. Another potential example I saw today would be the high-voltage transformers, where we do not make them, we not have backups available and if we lost the ones we have our grid plausibly collapses. The worry in the thread is primarily storms but also what about sabotage? Oh No: Betting on Elections I am proud to live in an information environment where 100% of the people, no matter their other differences, understand that ‘ban all prediction markets on elections’ is a deeply evil and counterproductive act of epistemic sabotage. And yet that is exactly what the CFTC is planning to do, with about a 60% chance they will manage to make this stick. Maxim Lott: This afternoon, the government bureaucrats at the CFTC announced that they plan to ban all election betting (aka “prediction markets on elections”, aka “event contracts”) in the United States. They will also ban trading on events in general — for example, on who will win an Oscar. The decision was 3-2, with the Democrats voting to do this to protect the ‘sanctity of elections’ against ‘threats to election integrity,’ and worrying that this would force the CFTC to become an ‘election cop’ determining the rightful outcomes. They claim to do this under the provision banning ‘gaming.’ All of that is Obvious Nonsense. Prediction markets actively protect election integrity. Prediction markets actively protect election sanctity. No one is forcing the CFTC to ‘play cop’ other than the CFTC. This is not gaming. In case you wondered if they had shame: The ban on ‘event betting’ includes bans on prediction markets in sports, while leaving FanDuel and DraftKings in place. The good news is that this ultimately matters less than people think. Overseas prediction markets will not be shutting down. We will still have BetFair and Polymarket no matter what happens, even if we lose PredictIt and Kalshi. I do not know what would happen to Manifold Markets, they are trying a unique legal strategy. This will hurt accuracy, but we would still get most (80%-90%?) of the epistemic benefits, especially on major events. To the extent that this is motivated by Democrats who want to make dumb decisions saying ‘never tell me the odds,’ it won’t work. Oh Yeah: LessOnline At the end of the month (May 31 – June 2) I will be attending LessOnline, at Lighthaven in Berkeley, California. The lineup is pretty exciting to me. In addition to myself, it includes Scott Alexander, Eliezer Yudkowsky, Patrick McKenzie, Agnes Callard, Kevin Simler, Cremieux Recueil, Aella, Sarah Constantin, Katja Grace and many more. Prices will go up at the end of the day, so act fast. Here is their description of LessOnline, which I expect to be accurate: LessOnline is a festival celebrating truth-seeking, optimization, and blogging. It’s an opportunity to meet people you’ve only ever known by their LessWrong username or Substack handle. The goal is to bring together a “mostly-online subculture of people trying to work together to figure out how to distinguish truth from falsehood using insights from probability theory, cognitive science, and AI.” A week after that is Manifest, so a bunch of people plan to stay for a ‘summer camp’ during the interim. Alas, I cannot be away that long in one stretch, so I had to choose one or the other and will be missing Manifest this year. I get to stick around on Monday, but then I head back. People at these events are often (but not always) remarkably approachable, myself included, even if you don’t know us. If this sounds like fun I’d encourage you to come. Brief Explanations The Europeans are not poor. They are vastly rich compared to almost everyone in history and also everyone today. But yes, vastly less rich than they could be. Matthew Yglesias: Every European country has various laws on the books that are just like “productivity is illegal now” and then people wonder why they are poor. [Points to examples that French supermarkets can no longer offer discounts of more than 34% on various personal and household products, and German supermarket chain Tegut being forced to close on Sundays even though it requires no workers to stay open.] Generalize this. Rob Henderson: 35% of American elites (people with postgraduate degrees, earn $150K+ per year, and live in large cities) say they would rather cheat than lose an election compared with only 7% of ordinary voters. Robin Hanson: Successful people are selected for a willingness to break rules when they can get away with it undetected. Patrick McKenzie Monthly In long form, he gives us The Business of Wallets. Patrick McKenzie explains the incentives behind stupid compliance requirements, which are exactly what you would expect. Patrick McKenzie also explains, without at the time knowing the details of what happened, Google falling on its sword to take all the blame for the outage of UniSuper services on May 7. The incident was so bad that people were in danger of being fired for buying Google. No one gets fired for buying Google, and this must be common knowledge, so marketing got the statement it needed no matter the objections elsewhere. Then we learned a little more about what did happen. UniSuper is a company managing about $125 billion. Google Cloud deleted their entire account for no apparent reason. And they also deleted Unisuper’s backup account in another region, due to ‘unprecedented misconfiguration.’ But look at this detail, it ‘ultimately resulted in the deletion of UniSuper’s Private Cloud subscription.’ That then caused deletion of the account without warning. The good news is that there was another backup with a distinct provider, and this allowed them to restore everything. Michael Nielsen rightfully wonders, maybe doing stuff in-house is good after all? Here he explains that airlines are one of many large organizations whose rules are often things that many employees have the power to waive, so you should ask them to do so when it would be a reasonable thing to do. The central example here is asking for an earlier flight while at the airport on travel day, and for the airline to waive the change fee. Ja3k says this has a 40% success rate. I used to have a very high success rate when I was travelling for Magic, to the point where I expected this to work unless the flight was full. Then it stopped working. Then Covid got all the change fees waived for a while, so it worked again, but now people report it is once again getting harder. As Patrick notes, a key strategy is making it clear you are not ‘travelling on business’ and cannot stick your employer with the bill. He also notes the immense economic value of transforming face-to-face meetings at outside locations into zoom meetings, especially for parents and for ‘I have to say I met you’ styles of meetings. How much is playing Factorio (which I haven’t done) like starting a company? Yes, it is a lot of work that could have been directed towards producing real world value. But I am confident that a lot of what makes Factorio such a great game is never having to talk to customers, investors, human resources or the legal department, plus as Patrick says the ability to quit and resume at will. Enemies of the People Greenpeace commits a crime against humanity. No, seriously, and this is not counting their efforts to stop nuclear power and otherwise cripple in the climate. They contain multitudes, and are blocking the adaptation of Golden Rice. J’accuse. When people tell you who they are, believe them. Mark Lynas (Spectator UK): First, a word of warning. If you donate money to Greenpeace, you might think you’re helping save the whales or the rainforests. But in reality, you may be complicit in a crime against humanity. Last week, Greenpeace Southeast Asia and several other NGOs managed to stop the cultivation and use of vitamin A-enhanced rice in the Philippines, after the country’s court of appeal ruled in their favour. In doing so, Greenpeace have blocked a multi-year, international, publicly-funded effort to save the lives and the eyesight of millions of children in some of the world’s poorest countries. German vandals assault a Tesla factory producing electric vehicles, saying they are ‘bad for the environment’ in various ways. Having successfully killed Germany’s nuclear plants, they need a new way to try to bolster fossil fuel use, boil the planet and also make people otherwise poorer, I suppose. I wonder who could be behind this. Oh Canada For those who haven’t seen it, potential new Canadian law reminds us why we have our first, second and in a surprise appearance seventh amendments. Mert: This new Canadian law is the craziest thing I have ever seen in the west According to this, they can: – accuse, fine, and jail you for PAST speech (before the law went live) – put you on house arrest AND take away all communication rights if they even suspect you MIGHT say something they don’t like (I.e you haven’t done literally anything). – anyone can accuse you and it’s on a committee of bureaucrats called the Digital Safety Commission to solely determine truth . If you are an immigrant or young person thinking of coming to Canada — I strongly recommend reconsidering. Toby Young (The Spectator): To those worrywarts who are anxious about the risk that this new law might be weaponized by woke activists, the government has said that ‘detestation’ and ‘vilification’ are not the same as ‘disdain’ or ‘dislike’, which would still be permitted (thank you, Mr Trudeau), or speech that ‘discredits, humiliates, hurts or offends.’ Although, that won’t protect you from another clause in the bill – and this is where it trips over into as yet unimagined dystopian territory. If the courts believe you are likely to commit a ‘hate crime’ or disseminate ‘hate propaganda’ (not defined), you can be placed under house arrest and your ability to communicate with others restricted. That is, a court can force you to wear an ankle bracelet, prevent you from using any of your communication devices and then instruct you not to leave the house. Even the practical implications boggle. If you are arrested by the precogs, you cannot use communication devices and you cannot leave the house. How can you work? How are you going to acquire food? The craziest part is this is retroactive. So they can go after you, now, as a criminal, for speech that at the time was not only not illegal but considered broadly acceptable. All that matters is that someone now decided it is retroactively hate speech. Freedom seems a lot more endangered in Canada than the United States. I would not be comfortable writing a blog like this, or even speaking my mind, in Canada. Canada also is looking to impose a $25k penalty and double its ‘exit fee’ for citizens who leave the country, to ‘curb the emigration crisis.’ It warms my heart that I have not seen a single post, not one, that defended what Canada is attempting to do here. But that also means I have not seen anyone claim that these characterizations of the proposed new law are inaccurate. This is NPR After Uri Berliner published his story in The Free Press last month about how NPR has lost its way, NPR suspended him for it, as per their own report. David Folkenflik (NPR): In presenting Berliner’s suspension Thursday afternoon, the organization told the editor he had failed to secure its approval for outside work for other news outlets, as is required of NPR journalists. It called the letter a “final warning,” saying Berliner would be fired if he violated NPR’s policy again. … On Friday, CEO Maher stood up for the network’s mission and the journalism, taking issue with Berliner’s critique, though never mentioning him by name. Among her chief issues, she said Berliner’s essay offered “a criticism of our people on the basis of who we are.” Yes. Berliner criticized who you are. Because he criticized who you chose to be. Which is most definitely not a spy. What else is there to criticize? The New York Times reports that the person she replaced as CEO was formally accused of racism and investigated by an outside firm because he asked employees to be ‘mindful of civility.’ Rural Capital: I remember on All Things Considered they did a story about how calls for civility had roots in racism. I guess the employees took that to heart! Uri Berliner then resigned. There has been ongoing discussion about the new NPR CEO Katherine Maher. Lex Fridman: NPR CEO needs to step down. Get political bias out of NPR. We need great, balanced journalism now more than ever. Marc Andreessen: Respectfully disagree. These leaders are exact matches for the institutions they run. Replacing them does nothing, the replacement will be the same or more so. These institutions are locked in, they’re not going to change, they’re going to just get more like they are now. There certainly are a bunch of Tweets and also video clips of the new CEO Maher saying some things I do not want the CEO of NPR to say or believe. What I kept noticing was that, when there was a timestamp on any of it, it was consistently between 2019 and 2021. The second thing I noticed was that there was no one defending her or her statements. That could simply be that such folks finally realized this time that there is no need to defend her. NPR can do whatever it wants to do. Let people complain. Or it could be that everyone wants to memory hole such statements as much as possible. Daniel Friedman: I’ve been following the drama around NPR CEO Katherine Maher, and, while lots of conservatives are dunking on her tweets and statements, I don’t see a lot of liberals circling the wagons around her. But a lot of them were just like her between 2017 and 2020. If Maher is forced to step down over her “in this house” tweet history, a lot of other people who advanced their careers a few years ago by being performatively woke could also be in trouble. I guess their plan is to just delete their old tweets and try to keep their heads down. These people went completely fucking insane. They destroyed the lives and businesses of a lot of people who didn’t deserve it. And now they’re going to try to pretend that none of it ever happened. Katherine Maher espoused exactly the opinions that were expected of her to be promoted as a white female nonprofit executive in the 2010s. She is not an unusual figure. There are people like her in c-suite positions in every company right now. Marc Andreessen: She is the precise median of the leadership teams of Big Tech. Daniel’s model seems highly plausible to me. In the period from 2017-2021, the dynamics in many places rewarded saying things that are, objectively speaking, nuts. The world today and also history are full of people who saw people around them getting ahead by saying (and often doing) nuts or terrible things or being punished for not saying or doing those nuts or terrible things. Often those who notice this then protect and advance themselves by also saying or doing those nuts or terrible things, perpetuating and compounding the problem. This seems like a clear case where if someone wants to go back to acting sanely and pretend that all of this never happened, because that would be totally nuts, then we should be happy to accept that peace offering. Are they up for their side of that deal? Technology Advances Timothy Lee attempts to diagnose what is wrong with tech journalism. His diagnosis is that readers lack background, the industry is struggling, nuanced and positive stories don’t sell when covering established big companies, and the readers are getting what they want. I buy all that. If you want nuanced journalism about technology, you need to find niche publications, and annoyingly you will often have to pay, it is the only business model that makes sense. But also the New York Times has an explicit narrative and agenda to attack the technology industry, and many other mainstream outlets are also very much Out to Get You on that front to varying degrees. James Meigs writes in City Journal about Scientific American’s transformation from a science publication to what is now primarily a social justice publication with a secondary focus on science, alongside similar trends at several rival publications. Who said Apple Vision Pro wouldn’t have a price drop? Ebay of course tells another story, where it looks to be available at $2500 or less. TikTok on the Clock Now that a bill requiring divestiture has passed, I got to cut out a lot of my notes here as no long relevant. But not all of them. What is left to notice? The obvious place to start is that ByteDance is reaffirming that they will shut down rather than divest. Why yes, it is weird that a business worth billions would consider shutting down rather than selling its product to willing buyers at the current fair market price. Whatever could be going on? Reuters: BREAKING: Reuters reports that TikTok’s owner ByteDance would prefer to ‘shut down’ its app in the US rather than sell it if all legal options are exhausted Eigenrobot: Why would you say this? It’s odd that a profit maximizing firm would actually pursue this strategy. The obvious explanation is “lol TikTok is an op” which seems plausible sure. But if so why would they say this and come off looking like an op. Is there another explanation? Indif4ent: yeah op is really looking likely. if they divest, whoever takes over is likely to take a hard look at the code. at minimum, id expect to find some backdoors for the CCP. Hopeful Abandon: there’s not actually much to sell — they can’t give up the recommendation algo without government approval, which is unlikely. stripping it of all the internals that are export-controlled and trying to auction off the husk is possibly more costly than just ending service. From PoliMath: TikTok is not purely an op. TikTok is a legitimate highly predatory business, and also TikTok is an op. Why are they threatening to shut TikTok down in response rather than sell, even now that the law is passed? Many possible reasons suggest themselves. Here are some. Public pressure. Rile up voters and hope that the decision gets reversed. Help Trump. Trump opposed the TikTok ban, so get supporters angry at Biden. Negotiating tactic. You need to bid higher if ByteDance is willing to burn it all. Damning evidence. Suggested above. If you can’t show the books you can’t sell. CCP veto. If the CCP tells you that you are not selling, guess what? Algorithm. It could be necessary and something they can’t afford to give out. Decision theory. In order to fight the sale they needed to be people who would rather burn the place to the ground. This is the consequence. Vindictiveness. F*** you. That’s why. There is obvious overlap, many considerations touch on several of these at once. It is often assumed that people will ‘be rational,’ and go along with your plan after you screw them over because it is obviously in their best interest to do so. Well, even if you are right about what is best for them, humans do not work that way. Because that way does not work. Alternatively, ‘they’ could reveal the answer, and make it easy. It’s #5. Michael Sobolik: @TikTokPolicy literally says it cannot comply with the divestment requirement because “the Chinese government has made clear that it would not permit a divestment” of ByteDance’s algorithm. They’re literally making the national security case for the U.S. government. Okay, then. We have established what you are. Now we’re talking price. Or not. Meanwhile, China continues to argue, on this issue as it does on so many others, that there should be rules for thee but not for me. The other clear argument is that China has made it clear some of the ways it intends to use this power. Zac Hill: This is what I don’t understand about the “Tik Tok is fine, actually” position: we know and can see exactly how China puts its finger on the scales, and why. Noah Smith: Apparently the Chinese government is encouraging an explosion of antisemitism on Chinese domestic social media. It’s obvious they’re trying the same thing here in the U.S. JS Tribune: Racist cartoons, Hitler memes, swastikas, and quotes from the Protocols of the Elders of Zion are now ubiquitous in comments sections. In an ironic twist, Hu Xijin, former editor-in-chief of the CCP mouthpiece Global Times, cautioned his followers on Weibo, “Some of us should not be influenced by public opinion dominated by Jews and Americans.” Two weeks later, Hu claimed that “there is no such thing as antisemitism in China,” adding Holocaust inversions comparing Israel to modern-day Nazis. … Portraying a symbiotic relationship between America and the Jewish people is a recurring theme in the propaganda campaign. It is hard to pretend that TikTok is not involved in this. And when the Protocols of the Elders of Zion and Hitler memes are ubiquitous, it is very clear what is going on. The other problem is that when you pass a law now, that would ban TikTok next year, but that might be reversed if Trump wins the election, what do you think TikTok does? Tara Palmeri: Since November, according to two TikTok officials, there’s been twice as much pro-Trump content as pro-Biden content on the platform—specifically, 1.29 million positive Trump videos or images, with 9.1 billion views, compared to 651,000 positive Biden posts, with 6.15 billion views, they told me. According to an internal TikTok analysis, from January 2023 into May 2024, videos tagged #Trump2024 have generated 472.8 million likes and 6.5 billion views, compared with 50.9 million likes and 558 million views for videos tagged #Biden2024. This may not be the most precise survey of the landscape, but that’s still a nearly 10 to 1 ratio of Trump likes to Biden likes, and 12 to 1 in views. This does not require Trump to have been bought. The result is the same. It is amazing how poor the planning was on this. Who are the contenders to buy, if ByteDance is ultimately bluffing? Manifold is here for you. Note that this is all conditional on TikTok being actually bought, and there are lots of other names on the list that don’t count under ‘Other.’ With Manifold’s new loan rules and this being a conditional market that likely does not pay, the primary conclusion here is that Oracle, Microsoft and Amazon are live, but that you should be ready to be surprised. I am bullish on Amazon here, and bullish on Alphabet at only 2%. They both have very deep pockets. The tie-ins to Amazon shopping seem great, also you have Amazon music. Microsoft and Alphabet would both also surely love to have it, and might plausibly be allowed to buy. Meanwhile, anyone remember back in 2020 when Grindr was sold by its Chinese owner after the US expressed security concerns? I remember hearing that and thinking, ‘yes, well, obviously, you do not want a foreign adversary knowing who is secretly on a gay dating app’ and wondering how no one had previously realized this. Was that also a threat to free speech? We really shouldn’t need to pass a bill for TikTok either, but I suppose here we are. I touched on this in the older post, but here are others picking up on the important concept that many TikTok users would actively prefer if everyone were to quit, so much so that the average cost for this outcome among students was less than zero. Students would pay to have TikTok vanish. Robert Wiblin: The consumer surplus of banning TikTok: David Zweig: A remarkable study. Most young people would prefer for themselves and their peers to not be on social media. In fact, they hate it so much that they’d be willing to pay to make this happen. But they are stuck on TikTok and Insta because everyone is on it. From Article: A recent study led by the University of Chicago economist Leonardo Bursztyn captured the dynamics of the social-media trap precisely. The researchers recruited more than 1,000 college students and asked them how much they’d need to be paid to deactivate their accounts on either Instagram or Tik Tok for four weeks. That’s a standard economist’s question to try to compute the net value of a product to society. On average, students said they’d need to be paid roughly $50 ($59 for TikTok, $47 for Instagram) to deactivate whichever platform they were asked about. Then the experimenters told the students that they were going to try to get most of the others in their school to deactivate that same platform, offering to pay them to do so as well, and asked, Now how much would you have to be paid to deactivate, if most others did so? The answer, on average, was less than zero. In each case, most students were willing to pay to have that happen. The collective action problem is real. And that is the views from the students themselves. From the perspective of the parents? Good f***ing luck. Reason: “You, as a parent, can tell your child not to be on TikTok,” says Reason Editor in Chief @kmanguward on The Reason Roundtable podcast, but “we should not try to beat the Chinese by being more like the Chinese.” Matthew Yglesias: I think this is a pretty naive view of the collective action issues involved in parenting. Nobody wants their kid to be the first one in school to get a smartphone but it’s hard to ask your kid to be the only one who doesn’t have one. At a minimum, you are spending a large number of points with your child, on a continuous basis, and make your relationship more adversarial. One must pick one’s battles. And even if they go along with it and we ignore that issue, your child will then be substantially worse off than if TikTok did not exist. Their peers will be using it, they will be left out and pressured and mocked. It is not good. Am I going to let my kids have TikTok any time soon? Oh, hell to the no. But I know how much worse my life is going to be, and theirs, if they decided they did want it. Also, in case TikTok wants to claim it has not been sharing data, a TikTok scientist says they were absolutely sharing data. Alexandra Sternlicht: SCOOP: A TikTok data scientist says he was assigned a Seattle-based manager on paper, while actually reporting to a Beijing-based ByteDance executive, who ordered him to regularly email U.S. data in spreadsheets to ByteDance workers in China during 2022. This occurred after TikTok launched Project Texas to Separate U.S. user data from ByteDance. Shoshana Weissmann: okay but i respect that this was done in spreadsheets. Antisocial Media Now here’s an idea. Luke Hosey: Ten thousand Twitter accounts just sat bolt upright in bed, drenched in cold terror-sweat. keshav: instagram will now recommend original content when it detects duplicates Bravo. All social media platforms should do this for content that is produced from a recommendation algorithm rather than those you follow. AI can help check for duplicative content, in case people think they can make a minor tweak. As Rob notes here, people like to complain about Twitter but have you seen Facebook? Rob Bensinger: All the dramatic claims have been about Twitter, but I feel like Facebook has already quietly become basically-not-a-functional-website. It is no longer a place with core functionality like ‘in a discussion, there’s a way for each person to get notified when the other responds’. Why did we used to have television that was often remarkably high brow? Theory here is that people didn’t know how good stupid content would do, the available technology was less friendly to stupid rapid fire content, and the presence of only so many networks kept out competition from stupid alternatives and allowed cooperation to keep things from getting too stupid. Yes, there has been a big change towards stupid. But there has, I believe, been an even bigger change against slow. And I am on the This Is Good, Actually team on that one. Older television and movies are remarkably slow. Sometimes this allows them to use slow burns, set scenes and accomplish important things. Sure. But most of the time, it is not a good tradeoff. Things are better now. Yes, in some ways we are now too impatient, and a lot of that is the phones, but the improvement is mostly real. Prosocial Media Disney and Warner Bros. will team up for a new streaming bundle containing Hulu, Disney+ and Max. Good. Marginal cost is close to zero, there is no reason people should need to choose and rotate between packages. I am so done with the unbundling and ready for the rebundling. Viewing data lets them still compete for division of the revenue. Zeng Yuli reports that ‘loveable losers’ took over China’s screens. These ‘economic men’ are called ‘wonangfei,’ askin to ‘timid loser,’ they put their woman first and have a certain sex appeal and a willingness to sacrifice under tough working conditions. So, not losers, then? Those are winners. It used to be that this was what winning looked like. Then we created a worldwide culture with bizarrely unrealistic expectations and warped priorities, where devoted long-suffering family man who gets the job done was looked upon negatively. The worry is that the Chinese are reversing this due to a form of despair, but actually yeah, this is usually what winning feels like. Finally some Truth in Television. At the Movies I put my movie reviews on Letterboxd. I have been logging all movies I see including the bad ones, but mostly (one exception so far) not going back. This shows me starkly what happens when I use various selection methods. This month had two excellent movies: Challengers and especially The Fall Guy. One movie I am actively avoiding is Civil War. Not since Don’t Look Up have I wanted this much to not see a movie. Is it (minor spoilers) about how war is about nothing except pointless suffering, destruction and death, and things can fall apart without a reason? Is it about how a nation under too much strain from immigration cannot hold together or maintain its democratic norms? Is it a Spanish Civil War style scenario, or purely that only those two states could credibly form a force to threaten a President? Or is it as the director says about how fascism is bad actually and journalism is our lifeblood? Is the scenario well thought out or complete nonsense? I do not know. I found the discussions interesting, and I appreciated Ex Machina, but ultimately realized my life would not be better having seen Civil War, and likely the world is not better for having made it, even if it turns out to be in the Scott Sumner sense a Good Movie. Perhaps I finally learned my lesson with The Zone of Interest. Media Trustworthiness Rankings From early this year, here’s a fascinating chart of how various sources are viewed. John Burn-Murdoch: It always blows my mind how much wider the partisan trust gap is for US media compared to the UK Most British media is trusted (or distrusted) about equally by supporters of both major parties. That’s true of virtually no US media org. Deeply corrosive for US society. This seems like an excellent time to play overrated versus underrated. Bloomberg is my pick for the most underrated. I have them at or near the top of my list. They do not always get it right, but I feel I can relax when reading there in a way I can’t in most other places, and also they tend to be better at focusing on things that matter. They are not cheap, but I am happy that I subscribe. Whereas with my other subscriptions, I feel kind of obligated as part of the job. The Economist is substantially higher, so this can’t be a pure subscribers-only effect. Bloomberg is also strong evidence that the baseline Republican mistrust of media has remarkably little to do with the content in a given media platform. Yes, they trust explicitly red sources (Fox News, Newsmax, even Breitbart) and also they have a special distrust for sources that are being actively hostile to them, especially CNN and MSNBC but also the broadcast networks, NYT and WaPo. The broadcast networks remaining high trust for Democrats is likely a legacy effect. In terms of their value as news sources they are overrated here for sure. As a matter of trust, however, I do find that CBS, NBC and ABC are all still relatively careful and worthy of trust. I don’t know if PBS is still trustworthy, because I can’t remember the last time I used or even saw it as news source, it never comes up. The BBC is properly rated near the top. Forbes is doing an especially good job, it seems, earning trust across the aisle. I am not happy about certain recent choices of theirs, I think they are overrated here, but on that front they are doing something right. C-SPAN seems clearly underrated in terms of trust. They are not terribly useful, but I certainly feel like I can trust them? The Atlantic seems underrated. It is not a hard news source, but for what it is I find it to be relatively more trustworthy than this. You have to adjust for partisanship, but I do find both Fox News and CNN to be overall underrated here. Washington Post seems overrated on trust, they push the envelope on bounded distrust reasonably often and also can be rather clueless. The AP is definitely overrated, based on having been caught in quite a few whoppers over the last year or so, some of them clearly intentional. Politico has established recently with its AI coverage that it cannot be trusted, I presume it is overrated in general. InfoWars seems highly overrated, even as the lowest rated. I presume the lack of distrust, especially among Democrats, is largely not knowing who they are. The New York Times seems properly rated here overall. It does worse in tech. It is very much not living up to its status, or especially what it used to claim to be, but it still often breaks important or useful information and plays by rules better than the lower half of the list. Paul Graham offers a very good note: Paul Graham: One thing people selling expensive things often don’t grasp is that the people who can afford them are often too busy to deal with the work involved in buying them. So if you’re selling expensive consumer goods, make them really easy to buy. Some responses point out there are luxury brands that actively make buying difficult, which Graham notes is quite rare. This is a special case where the value is largely in the exclusivity, so they sacrifice volume and lose most potential marginal sales to maintain that. Government Working The missile defense systems we built? They work. One should update the general world model accordingly, the government was capable of building such a thing for real. This is relevant both for missile defense and for other capabilities. Tyler Cowen clarifies that he prefers current airport security entirely because of the deterrence and prevention effects on terrorism. I think this is his worst take, especially for someone who thinks travel and tourism are so important. We have run natural experiments, and also I can think about physical reality and we know the failure rates. Most of the procedures have almost zero deterrent or prevention effect, while being very expensive in time lost and in travel prevented. Here we go again: California is the latest jurisdiction to move far enough with a proposed ‘link tax’ on news that Google has removed links to California news websites for a small percentage of users, as a ‘test.’ If California thinks they are bluffing, I am confident that they are not. California called it intimidation and outrageous that Google would respond to an increase in price by decreasing quantity purchased. Here is another analysis on California’s $20 minimum wage for fast food workers, by Richard McKenzie of EconLib. It explains why this will lower, rather than raise, wages for those not subject to the minimum, as lower employment at covered locations enlarges the labor supply. And that this 25% jump all at once is very different from most minimum wage increases, which are typically smaller and phased in over time, which explains their relatively small impacts on employment. The weird part is the later sections, where Richard points out that workers who keep their jobs and hours likely lost a lot of value in benefits, and when he notes that many restaurants were already paying over $16 an hour in California, some more than $20. This would suggest that the increase should indeed have a small impact on employment, if paying $20/hour was already a superior option for some restaurants, or real compensation does not need to increase much to meet the new law. It would be a good idea to make Manifold markets or Metaculus questions about the impact of the law, but I was unable to come up with resolution criteria that were clean enough to justify the time investment I was willing to make. Anyone want to step up? In news of a minimum wage hike escalating quickly even beyond my expectations, Seattle attempts to reverse course on delivery driver compensation four months after the new laws took effect, to try and get companies to reverse their new fees. I really do not know what they were expecting, look at the prices we are talking about: Her proposal, pieced together over weeks of negotiations with the companies and an industry-backed drivers’ group, Drive Forward, would cut the hourly rate to roughly 33 cents per minute and 35 cents per mile, below the IRS’s per-mile reimbursement rate for vehicle wear and tear of 67 cents. In place of the $5 per trip minimum, companies would calculate hourly pay on a weekly basis, topping up drivers who earned less than $19.97 for each hour worked. The bill would also make it so drivers would not be paid for trips canceled by the customer. … Andi Honer, who’s been delivering for six years, says no. When the law took effect in January, she saw her earnings drop in half. Though she was making more per trip, she was getting fewer orders and smaller tips. Since then, her earnings have rebounded slightly, but are still below what she made the year before. “Before, I was making $3 on an order, and now I’m making $8 on an order, but after 40 hours of work I’m making half the amount I was making before,” she said. Tips, in particular, have disappeared: They once accounted for more than half of her earnings, but now customers are plowing that money into service charges. So that’s a minimum wage of $20 an hour after cutting the rules back. Before that was a minimum of $5 per delivery and an average of $8, so guess what deliveries are going to cost more than? And guess what that does to demand and willingness to tip? The government’s demand for records knows no bounds. Austen Allred: It’s interesting that document retention has evolved from, “OK don’t destroy all your documents now” to, “We demand to have record of every conversation anybody has from here on out.” “But Austen document retention has always made it so that everything is captured in perpetuity.” No, 30 years ago 99% would be be voice conversations and phone calls. Accidentally having “documentation” of everything that ever happens is a new/accidental phenomenon. Contrast this with Sam Altman’s recent speculations about potentially needing 5th Amendment protections for your AI assistant that will know everything about you. That is not how government typically responds to new records being kept. Japan doubles its intake of skilled workers to 160k/year, versus our H-1B limit of 85k. It remains completely bonkers that we cap the number of H-1B visas at all. Florida Man Bans Lab-Grown Meat As expected, Florida also bans lab-grown meat. A lot of people genuinely do not understand how anyone could support such a ban, unless they were in the pocket of Big Meat. I do not support it. But allow me to once again attempt to explain. As I mentioned last time, this is in large part happening because, no matter what anyone says now, these types of worse alternatives seen by some as morally superior will always, always then be used as justifications to attempt to socially shame, destroy and ultimately ban the original product. This will happen even if the new offering is more expensive, lower quality than even the cheapest versions of the old product, and at best a poor substitute for even those cheaper versions, or in this case that a switch could be greatly damaging to your health as well in a wide variety of ways. This will happen even when the ban will predictably backfire on its own merits. Indeed, this was the comment I got last month when I pointed all this out: Zvi Mowshowitz: In this case, it is obvious, many are not bothering to hide their intentions. Many of the people I know who are vegans absolutely want to come for your meat, and even your dairy. They are building alternatives in order to do this. They bide their time only because they do not have the power to pull it off, but they will absolutely impose first expensive mandates and then outright bans if permitted to do so, and would do so even with no good alternatives. They certainly would do so if they could point to ‘meat alternatives,’ even if we all knew they were expensive and not the same. They would gaslight you about that, as other movements continuously gaslight us about other cultural trends via the full four-step clown makeup. And they think they are morally right or even obligated to do this. Is it still perverse to ban lab-grown meat? Very much so, and I would not be banning it. That is not how I roll. Derek Heady: Read through the [above] quote, replacing “vegan” with “abolitionist” and “meat” with “slave,” and you’ll be seeing the issue from the p.o.v. of animal ethicists. My guess is that history will not be kind to arguments of this sort. Thank you, Derek. I appreciate the honesty and removal of the mask. I continue to oppose this ban. But if you cannot see why some people react with ‘oh then we definitely need to ban lab grown meat before it is too late’ then that is on you. If you keep saying ‘eat what you want, but leave me alone’ and do not understand why so many do not believe you or take that argument seriously? This is why. For many of you it is very much a ‘oh look, if it isn’t the consequences of my own actions’ or ‘me reaping’ situation. For those who are instead principled libertarians who genuinely wouldn’t turn this around on a moment’s notice, well, I am sorry that others have ruined this and so many other principled stands. Also, I think this Tweet below might explain more than half the objections and warnings around California’s proposed SB 1047 and why they so often don’t correspond to anything written in the bill? Instead of saying ‘I do not like this on its merits and suggest we not do it,’ they instead say: I don’t care that [lab grown meat, putting safety requirements on state of the art non-derivative AI models above 10^26 flops] has exactly zero to do with almost every [startup, AI company]. What matters is: The vibes are off. You don’t like what we like. You gestured disapprovingly in our philosophy’s general direction, and might at some point pass other rules against other things. This cannot be tolerated. So that’s that. It’s all over, nothing else matters, we’re done here, we’re totes going to abandon the whole place in droves. Or remember when a bunch of VCs signed an extremely milquetoast content-free statement to the effect of ‘it would be better if AI companies tried to do things safety’ and for a week we got lots of founders saying ‘looks like we have a list of VCs we never will take a meeting with again! Cancelled!’ Yeah, okay. You do that. I think that if you did that, you would miss your nose. But it’s your face. Crime and Punishment Homicide rates rose during the pandemic. Now they are falling again. Dan Frosch (WSJ): Nationwide, homicides dropped around 20% in 133 cities from the beginning of the year through the end of March compared with the same period in 2023, according to crime-data analyst Jeff Asher, who tabulated statistics from police departments across the country. … If the trend continues, the U.S. could be on pace for a year like 2014, which saw the lowest homicide rate since the 1960s. The CCJ report says that homicide was 18% higher in 2023 than 2019, so this would be more than a full reversal. Florida charges you $50 per day to stay in their prisons, and has extended this to time not served, charging a woman $127,000 for her 7 year sentence despite only serving ten months. This is unsurprisingly wrecking havoc with her life and many others. Needless to say this is insane. If you want to fine people and confiscate their existing assets, fine, but debt here is deeply destructive. Saddling ex-convicts with this is not how you get good outcomes. Then again, who says Florida wants good outcomes? One person suggested this was a response to felons being given the vote, with the debt designed to deny them that vote until they pay, which they often cannot do. Fox News covers the squatter issue how you would expect, with a great shoutout to the need to build more housing. As Mark Miller points out here, adverse possession letting you claim ownership of an abandoned property after 5-7 years makes sense, and most squatting has zero to do with that and is simply theft, fraud and extortion. When that happens it should be dealt with accordingly. I want jail time. Meanwhile things like this in Oakland will keep happening, where it seems squatters terrorized the neighborhood for five years, constantly chopping stolen cars, no one felt they could do anything about it, and the fifth time they set a fire it finally burned down several houses. I am boggled how we reached a point where doing normal business or building anything is illegal, but living completely illegally and destructively is protected. The Toussaint Cartier necklace, valued at over $150 million and worn at the recent Met Gala by Daphne Kluger, was reportedly stolen. Love it. I am strongly against theft but if you tempt fate this precisely then that is on you. El Salvador El Salvador continues to be a fascinating experiment and Rorschach test. So here is the latest exchange, focusing in on how strong is the evidence from a gang tattoo. Law & Liberty: The American right should be ashamed of their admiration of @nayibbukele, writes @plynch1966. The fact that he has ended El Salvador’s violent crime epidemic does not justify his betrayal of liberal ideals and principles. [The last section of the post is entitled ‘Not a Solution to Anything,’ in which it admits that this was indeed a solution to vital things, and without suggesting a viable alternative.] Shylock Holmes: El Salvador is a legal philosophy hypothetical come to life. “Okay, but what if all the guilty people… tattooed their faces. And literally nobody else did. And there’s like 100,000 of them. Would you still really need trials for all of them?” “Yes, because, um, er…” The commitment to due process as a kind of romantic attachment and end in itself, rather than as an engineering hack for somewhat Bayesian justice, is quite jarring. Especially given that this had been tried, for decades, and the result was a homicide rate north of 100 per 100k. Another useful intuition is “if you don’t immediately understand what ‘a homicide rate above 100 per 100k’ means in practical terms, nor know that this is what El Salvador had, perhaps you aren’t a serious authority on the country. “But this sets a terrible precedent for other cases”. Okay, but imagine there’s no evidence of any desire to apply this standard to anyone other than obvious gangbangers. “But it’s a slippery slope.” So is a homicide rate of 100 per 100k. Seriously, @plynch1966, what do you expect the wrongful imprisonment rate to be if you were to lock up everyone with an MS-13 face tattoo? Would you say, 10^-5? 10^-6? Either ES or the US, the answer is the same. It is hard to overstate how relevant this is to the question. I want to see the movie where the protagonist goes to sleep, then wakes up with an MS-13 face tattoo and has to go on the run from the police and also both gangs. I also would assume that the false positive rate for current offenders is higher than 10^-5, likely at least 10^-2 (1%), even if everyone with such a tattoo got it on purpose. Presumably some people quit the gangs. And it would be quite bad to tell gang members that you will lock them up even if they decide to be too legit and thus quit. It still does seem like the tattoo is super strong Bayesian evidence of gang activity. It is certainly much stronger Bayesian evidence than ‘was found guilty of murder in America’ or what I would require to convict a defendant ‘beyond a reasonable doubt.’ Given the circumstances El Salvador was facing, I do not see a reasonable argument that it was wrong, during the acute crisis period, to make arrests based on this evidence alone. That does not make the concerns about strongman activity illegitimate. Arresting criminals does not require changing the constitution to give more executive power or getting the courts to let you have consecutive terms that were previously illegal. It is very easy to see this lack of due process being extended to political enemies and for democracy to be endangered. It is difficult to do this kind of policy in an isolated and careful way that preserves everyone’s rights otherwise even when that is the goal. And that very clearly was not the goal. The goals seem to be a mix of popularity, power, profit and prosperity. If you do not want the people to take that deal, you need to give them an alternative. Those issuing dire warnings have not done so. Variously Effective Altruism A study found that paying off people’s medical debt does not help them much. Patrick McKenzie: “We pay off the debt and then credit score improves” is not a good theory of change, because paying off delinquent debts just doesn’t improve credit score, point blank. Once someone is in a state of being overwhelmed by debt, the medical trade lines on a credit report are a social fiction. (Even more than medical bills generally.) Donors may have been one of few actors who believed that fiction had strong impact on material world. This seems like an obvious flaw in the system, if it is true in general? If it does not improve your credit score to pay off a delinquent debt, then you have even more reason not to pay. Obviously it should not get you a full credit score refund, but a partial one seems appropriate. On the other hand, if you are drowning in debt, it makes sense that it changes very little. I think this is spot on: Patrick McKenzie: 1/6 of U.S. economy is medical spending, and medical spending is observable on credit reports in a way which food spending is not. If you get to the point where you’re defaulting, via any mechanism, you will default on both medical and food-based expenditures if you have them. And then scholars suggest you were brought to this circumstance by the medical debt, but not the food debt, because scholars can’t see food debt. Anyhow, in the case where medical issues are proximate cause of downward financial spiral, whackamoling the bills doesn’t redress. In terms of interpreting the study’s results, however, we have this big caveat. Eliezer Yudkowsky: “We tried forgiving $169M in debt [that was already in arrears and deemed uncollectible, bought for pennies on the dollar] and found little effect” is the econ equivalent of IN MICE. Please lead with that part next time. I’m not objecting to the research, people, I am objecting to its initial representation online and in media headlines. The debt was cheap for a reason. It was bought for pennies, and mostly written off, the damage otherwise already done. Still, yes, this could have been a situation in which that debt did damage vastly in excess of its economic value. A collection agency buys the debt for pennies, then harasses the debtors, threatens them, potentially goes after them legally if opportunity arises. What is still left is all but worthless. Yet the debtor cannot purchase their own debt anywhere close to market price, for obvious reasons. So gains from trade would have made sense. It would have made sense if it was good value to settle those debts at market price. Alas, we now know that this was not the case. Or, alternatively, we know that the debt collectors were not doing that much additional damage. What could be tried next? If I had the budget for it, I would attempt to repeat the experiment, except that instead of buying the debt from collectors, you try buying it instead of the initial debt collector, at a competitive price for that point in time. Then you divide into various experimental groups. Perhaps something like: For a control group, you have the debt serviced normally, or you never buy it in the first place but track the debtors. This hopefully is a close to free action. For the first experimental group, you have the debt serviced normally, except without any further negative reports to the credit agencies. So they still try to get paid, and still harass the debtor, but they don’t make their credit any worse than it already is. For the second experimental group, you service the debt nicely. You not only don’t ding their credit, you also don’t harass coworkers or family members, you don’t call constantly at all hours, you don’t make idle threats and you’re not abusive. You accept reasonable settlement offers and set up payment plans. But you do remind them of the debt, and if they can clearly pay you attempt to collect. For the third experimental group, you forgive the debt. Or something like that. Doubtless the IRB will invoke the Copenhagen Interpretation of Ethics, and blame you for helping but not doing more, so you do the best you can there or you don’t ask permission since there is no law against doing any of this. Then you see the results, and where intervention provides marginal help. While I Cannot Condone This Jeff Lawson buys The Onion, demands $1 or it will disappear forever. Ben Collins: NEWS: My friends and I now own and run The Onion. I’ll be the CEO. We’re keeping the entire staff, bringing back The Onion News Network, and share the wealth with staff. Basically, we’re going to let them do whatever they want. Get excited. It is too early to know if this will work out well. I used to get a lot out of The Onion, but then slowly stopped checking it. Revealed preference. Occasionally they still nail something, and I am eager to see what happens with ‘do whatever you want.’ Consider the context of Twitter, and also other places like The Washington Post. Paul Graham: It’s weird how consistently people who attack Twitter point out that ad revenue has decreased. They don’t seem the sort of people who care about companies’ revenues, and now they’re suddenly amateur stock analysts. I suppose it’s because usage hasn’t decreased. Ravi Parikh: Buying & running a media company at a loss to better adhere to your personal values, rather than purely profit-maximizing, seems like a reasonable thing for a rich person to do. Indeed, ‘purchase and operate at a loss a media company or other company you want to exist’ is a highly standard thing to do. It can be a very high impact strategy. Media production has cultural impact vastly in excess of what can be captured in revenues. Attempts to capture more revenue cripple the reach and quality of the product, both compromising integrity and artistic choices and also saturating customers with ads and stopping them from consuming zero marginal cost products. Buying Twitter or The Onion or The Washington Post, if you can afford to do so and can run one of them wisely, is a fantastic bargain. So is supporting individuals. This blog runs on the patron model as well. And this extends beyond media to many other businesses. This is also one of the few socially acceptable, and thus likely to actually happen, ways to deploy that level of capital at scale for anything other than profit maximization. Matt Yglesias implores us to ask how to solve problems, not why they happened. But even though I enjoy this sort of thing, it’s also pretty plainly irrelevant to the question at hand, which is “what, if anything, can we or should we do about absenteeism right now?” Well, maybe. There are times when all we care about is the how and it is unrelated to the why. But most of the time: The why is a key component of figuring out the how. The why is a key component of figuring out how to stop it happening again. The why is necessary for both perception of and actual justice or fairness. Asking why is necessary in general, or people notice you not asking. Learning about the world is an important secondary goal. Or: Knowing why a problem happened is usually a key part of solving it. Yglesias’s example is school absenteeism. If you don’t know why children aren’t coming, you might not get them to. If you don’t know why children stopped coming, they might stop again. It is terrible for various reasons not to settle accounts for our Covid decisions. If people think they can get away with such decisions again, they will try again. These seem like things worth knowing for anyone trying to model the world. I think I’ve heard it before, but there is a theory that agriculture happened in so many places at similar times because the Earth’s climate got more seasonal so people had to shift into more reliable and consistent food supplies. Can Money Buy Happiness? New theory says only if you know where to shop. Noah Smith: Bad news: If you’re happy with $80k, higher income will tend to make you even happier, but if you’re still miserable at $80k, further increases in income won’t do much. Abstract: We discovered in a joint reanalysis of the experience sampling data that the flattening pattern exists but is restricted to the least happy 20% of the population, and that complementary nonlinearities contribute to the overall linear-log relationship between happiness and income. … Main Paper: Approximately 15 to 20% of people frequently experience negative affect, and the relationship between happiness and income is different in that group and in the happier majority. The suffering of the unhappy group diminishes as income increases up to 100k but very little beyond that. The story or assumption that these are the same people at each income level seems suspect. Suppose 20% of the population is chronically unhappy due to mental issues. Would they tend to distribute normally over the income distribution? Presumably not. A lot of them would have this interfere with their work. Which then raises the worry that income is selection. The flip side would be if some people are satisficing on income, and others are working very hard because they know how bad it would be for them to make less money, both of which are doubtless the case. People are responding to their situations. Good News, Everyone A day in the life of a Walmart manager (WSJ). The job is all about sweating the small stuff, day in and day out. Nate Silver will be doing his election forecasts in 2024. A fun thread on statistics, and what it means to get a given p-value with samples of different sizes. We are so spoiled with so much free music we recoil at the idea of paying $10 a month for full unlimited access to all of music. I worry that now no one appreciates it, and thus the utility we get from music is actually way down. I keep getting reminded I radically under-consume music, largely because there are so many podcasts. It needs to be something I focus on, when I put music on while working I don’t appreciate it and it distracts me. Can’t Sleep Clowns Will Eat Me You need talent, but Taylor Swift shows us the value of also being prolific, having a formula and schedule and relentlessly shipping, argues Katherine Boyle. She correctly points out Taylor is vastly more prolific than any other famous musical artist. Which is true, but that also means that everyone else who made it did so without shipping anywhere near as much as she did. I think whether to always be shipping depends largely on the patterns of selection and consumption, and how you get evaluated. In the before times, you would buy or sample an album. Then based on its quality, you might keep going. And every time you got stung by low-quality offerings, that hurt. In the physical media era, reliable quality was a huge deal. In the download and especially streaming eras, as long as you are fine with people mostly streaming singles, not so much? If Taylor Swift puts out 28 new songs and 3 of them are bangers, and others can inform me which 3 are bangers. That’s almost as good as going 3 for 3. It’s potentially better if I want to sample a long tail. I can configure my own playlist. There is still the exploration cost issue at the limit, but you have a lot more slack there. As a writer I think of this the same way. How do I want people finding, evaluating and consuming my content? If people follow links and see what is viral or recommended, or people like to pick and choose by subject matter, then it is ship ship ship. If it is people randomly sampling, or deciding whether to be in or out overall based on average quality, especially if they say ‘oh that is too much,’ then flooding the zone is unwise. How Great are Great People? Here are some claims that the great people really are that great, you should get exposed to them and be in awe, and those who haven’t seen it pretend such people do not exist, whereas instead you should strive to be worthy of their time. I think this is in some ways very true and in other ways totally false. Roon: Twitter often accuses me of hero worship on here when I say that someone is a singular individual or uniquely capable. This is a consequence of not living at the edge of history and observing exactly how outlier the outliers are. Anton: unless you’ve experienced it, you probably don’t know how great ‘great’ can be. one of the best things that you can do is get exposed to really great people, doing really great work, as early in your life as possible. … you cannot absorb this from books or television documentaries. You need the object lesson. all media occupies the same part of your brain as fiction – you need the life lesson. By seeing with your own eyes what is possible you can better understand what you are really capable of. you become a better judge of yourself for having a better yardstick to measure by. It is true in the sense that yes, there are people who are vastly more effective than most. It is a joy and inspiration to behold. I have definitely experienced it on several occasions. Peter Thiel was a rather blatant example. Jon Finkel in Magic: the Gathering is another. There have been many others, some of which I mention here periodically and some of which I don’t. It is false in the sense that part of it is being sharp in a certain kind of special way, having certain mental capabilities and a willingness to push them and power through hard things, but at core they’re all still just (smart) regular people doing regular things except they have some sort of ambition. It does not take miracles to work miracles. They have many of the usual flaws and biases and weaknesses. A lot of what makes a great person is purely stepping up and doing the thing, while focusing on figuring out how best to do it, day in and day out. There are minimum requirements, but great people are primarily made, not born. That is a lot of why you need to see it. If you see it, at first you are in awe, then you realize you could do that same thing, and you might get the same results. Gamers Gonna Game Game Game Game Game Kickstarter for the direct sequel to Star Control 2, from its creators. Was already well past its goal, but there are stretch goals and also: Shut up and take my money. Between this and Slay the Spire 2, a lot of future gaming hours are happily spoken for. I played the Tier 2 game Deep Rock Galactic: Survivor. Warning that I am generally high on vampire survivors-likes, but this has some very cool innovations that made it genuinely different. It is still in early access, I’ll come back to it as it improves. I played through Tier 3 game Unicorn Overlord. It is a tactical RPG of sorts, with no strict grid, and where you battle in units of 3-5 characters each, according to tactical rules you code in beforehand. It is a fun game, with a lot of different things to play with and optimize, and the story and characters are fine. I especially appreciate the preview of how battles will go, and the ability to respond accordingly. The core problem is that, as one poster put it, the game is ‘not going to be busting anyone’s balls.’ If you are paying attention at all, and trying to get stronger at all, you will win. If you find yourself underpowered, there are repeatable battles you can quickly grind to fix that, especially to let your secondary teams catch up on levels. Similarly, the game offers all these one-use items you can find and buy. You get a lot of them automatically. They are useful. But the stores do not restock, and you don’t ever have to use them, so they end up Too Awesome To Use until the last few battles, where the end is in sight and munching stamina and healing items becomes a ‘sure, why not?’ proposition. So you end up with a game that has a lot of complexity in it, which you can engage with to the extent you like, but the game doesn’t pressure you to do that much, or punish you for not doing so. It is so much easier to simply overpower everything. I played on the fourth difficulty level out of five. I died a few times when I was moving fast and didn’t realize I had to protect my command post, but you can always restart any stage, so no worries. My guess is that max difficulty would have been more annoying, but still not a real challenge so much as requiring (more) grinding. I gave a shot to Dragon’s Dogma 2 after Playstation Network spontaneously bought it for me without asking. I would have complained about that, but it had good reviews modulo some DLC issues that seemed easy to ignore, so I figured sure, why not f*** around. I found out. Dragon’s Dogma 2 is obsessed with its ‘pawns,’ the other three members of your party, two of which you constantly swap in and out. Then it gives you only one save slot you cannot involve for long periods plus an autosave that triggers when battles start, and makes the pawns not smart enough to not get knocked off ledges, where if you pursue them you can get autosaved into a location where you definitely die, and if you don’t you lose them. And it sends you on deeply generic quests, and the battles are largely a blur since you have little control over the other party members. So it felt like it sucked, and was actually full Tier 5 (We Don’t Talk About Bruno)? But others seem to like it, so I presume I am missing something. Should I give this another shot? I am now taking up Disgaea 7, which means (because D2) that I am doing this thing for the 8th time. It is very early, but I worry that this is too many times going back to the well. I also worry that they are not iterating over the right things and the flaws are amplifying, although there are clear improvements from Disgaea 6. In particular, the game should make you care about various things and try to maximize in lots of directions at once, while ensuring all of them matter and giving you interesting choices. Instead, it kind of doesn’t. Early on, instead of rewarding keeping things balanced or making it reasonable to level up via the story stages and fighting close battles, there is more of a mad dash to the item world, even more than in past editions. The game does not want you to ‘play fair’ in a way that keeps things interesting, and story progression seems like what you do to unlock various game features. Yes, obviously power leveling will be faster, but things do not need to be this stark. Then later on, from what I have read, similar problems emerge, where you are not making choices on emphasis or picking strategies so much as checking off boxes. The game also just gives you tons of stuff that does not matter. Why this giant barrage of quests and scrolls? We need some addition by subtraction. The big change is the switch from the bonus gauge, where you were rewarded for gigantic combos where you sometimes cared very deeply about a particular level, to having five chests you can earn with different missions, but the chests are mysteries. I like the idea, since filling the bonus gauge can be fun but ultimately got boring. The problem is that the missions they picked give you are even more encouragement to go curb stomp, as they consistently reward ‘no one died’ and ‘won quickly.’ So you are encouraged to not make things interesting or close. It also devalues the cool stuff that did reward you on the bonus gauge, such as setting up Geo blocks. Also, the chests being random (outside the item world) or always levels (inside it) means there is no tension. You never see that legendary or emblem or big EXP bonus or what not and think ‘I have got to get me one of these.’ If you miss, meh. Then in the item world, the bonuses are all item levels, and not very many, which is even more meh. I’d love to be a designer on Disgaea 8. We need to make a lot of cuts, to focus on a few things and make them really matter but give you meaningful choices which of them to emphasize, and find ways to not as brazenly reward charging head first into power leveling your top character. No one said it was going to be easy. One place I would consider starting is to make stats on equipment multiplicative rather than additive. As in, your attack is your equipment’s ATK multiplied by your character ATK, not the sum. If you are new to the series, I continue to suggest starting with Disgaea 1: Afternoon of Darkness. It has a purity and simplicity that makes up for a lot. Then in mid-June, Shin Megami Tensei V: Revenge comes out, so that is where my gaming time will go for a few months after that. I am definitely opening the floor to ‘what should I be playing right now,’ both in rogue deckbuilders and otherwise. I do want to get away from RPGs. Update to the Philosophy of Commander document for Magic’s most popular format. I understand why this document contains the principles it contains. I also am sure those who created it understand why this does not make me want to play Commander. Brian Kowal’s followers prefer two-year Standard format over three. I strongly agree. Three years is far too many cards and too little rotation. Reid Duke on Standard’s best decks. My reading of this having not seen the cards was a lot of ‘huh?’ with intermittent ‘oh’ and also periodic exclamations of ‘WHAT?’ Here is an example of why I keep saying ‘WHAT.’ PVDDR: I think this is my favorite MTG art since Esika’s Chariot (and the card seems kinda broken too) The economics of magical item crafting in D&D make no sense, failing to stand up to even a minute of scrutiny. This is mostly not a direct problem unless a third level Wizard starts creating lots of trivial Magic items, everyone can still have their fun, but it indicates the level of economic and market understanding running around. Sports Go Sports A reminder that this is The Way. Kevin O’Connor: The top playoff seeds should be rewarded with the ability to choose their 1st round opponent. Intentional losing to drop a spot for a matchup isn’t as exciting as teams competing to win for homecourt AND their choice of an opponent. It should matter the Knicks just beat the Bulls while the Bucks lost and the Cavs had no interest in winning to get the 2nd seed. Instead many Knicks fans are disappointed they will end up with the Heat or Sixers from the play-in. This makes no sense. It doesn’t have to be this way. Winning should be all that matters. The NBA used to allow G League teams to choose their playoff opponent. Clearly, there’s interest. Nate Duncan: I would love it, but GMs and coaches on the competition committee will never vote for having to make another decision (picking your opponent) that could possibly get them fired if it goes wrong. Nate Silver: Have season ticket holders vote. Should you be allowed to bet on yourself? Say Cheese: Ryan Garcia cashed out a $12M winning ticket from a gambling company this morning, after he bet $2M on himself to win last night. “He was a huge underdog because Vegas thought he was going crazy” Tautologer: I like this honestly, and I think if we’re going to allow sports gambling (ehhh…) this should be allowed more broadly. eg NFL players should be allowed to bet moneyline on their own team to win (and nothing else obviously). betting on yourself is pretty based. In boxing, as in life, strong yes. Indeed, in any individual sport including tennis or golf I think it is fine to bet on yourself to win the game (and no other wagers of any kind). In team sports, the answer should often be no. Pete Rose is banned from baseball for a reason. Betting on the team to win is the least bad wager and is indeed based, but can still be distortionary. It places pressure on you to win today, at the expense of tomorrow and the health and development of the other players. Such tradeoffs are common. Then again, contracts often create similar incentives, including for many things that are not winning the game. We allow that to happen. This mostly seems not worse than that, especially if the bet is by a player not a coach? The coach should still definitely not be allowed to wager on individual games. Hero Max Scherzer proposes relegation for umpires as a substitute for outright robot umpires, the bottom 10% as ranked by the machines get relegated to the minors each year. I heartily endorse this service or product. The human element is good for the game up to a point, but also Angel Hernandez exists. The price of youth baseball keeps going up. Leagues that cost money and involve travel and tryouts and attempts to play well are on the rise. Little league and other open-to-all baseball is on the decline. Those who want a cheaper game, or one that is easier to play casually, are presumably playing basketball or soccer instead. That seems fine for the kids. All the worry about declines in youth physical activity here do not consider substitution effects, or that baseball is a relatively poor source of physical activity when played at the free level. Standing in right field and sitting on the bench with an occasional strikeout is a good excuse to go outside, but not accomplishing much else. Baseball that is official with uniforms and teams but not taking it super serious, like little league, is in a weird spot. For the sport of baseball, it would be good to get more kids playing. So yes, I would like to see more low-cost opportunities. Otherwise, I don’t get the worrying. I Was Promised Flying Self-Driving Cars Matt Yglesias declares self-driving to now be underhyped. This kind of technology is mostly useless until it is suddenly transformational, and we are rapidly approaching the threshold. Waymo is a real thing available now, and is steadily expanding its reach and scale, and are on the verge of having useful geographic footprints. I strongly agree. The self-driving business model depends on reaching critical mass of scale and geographic coverage. In San Francisco, you can already cover the central area, and they are planning to extend this south as far as Sunnyvale, which would include the airport. The East Bay is still missing, which will make this less exciting for my standard trips into town, but this is already a big game. As with all such schemas, once you establish base then expansions become more attractive, and things escalate quickly. Self-driving trucks will be a distinct huge deal. Things are taking their sweet time, and there are still legal hurdles, but at this point self-driving is severely underhyped. California of course now is seeing efforts to ban this before it can take off. No idea how serious that threat is, but if you can please help head it off. News You Can Use This checks out, if you add a ‘more than usual.’ Austen Allred: A random thing someone told me once that I now think about constantly: If you feel like you hate everyone, eat. If you feel like everyone hates you, sleep. If you feel like you hate yourself, shower. If you feel like everyone hates everyone, go outside. Claim that applying to focus groups can be a good source of extra cash. They pay well per hour, although you have to spend time applying. They seem like fun. I’ve watched a few when I was investigating politics, and I’ve done one taste test interview. The best part is you get to influence things to be more like what you prefer, and have your voice heard. Seems great. Airlines will be required to give automatic refunds on flights much more aggressively, in particular for ‘Departure or arrival time that moves by more than three hours domestically or six hours for international flights.’ Three hours is not a lot of time. Ideally, this would be even stronger, where you would both get the refund and also keep the flight. Yes, that would raise fares to compensate, but it would also provide the right incentives. The Lighter Side These labels are highly non-exclusive.
2024-05-13
https://www.lesswrong.com/posts/rELsE8hcprmJLwAXv/tools-to-discern-between-real-and-ai
rELsE8hcprmJLwAXv
Tools to discern between real and AI
daniele-de-nuntiis
What are the best ways to figure out if someone who contacted you / a piece of text / an image is AI generated?
2024-05-13
https://www.lesswrong.com/posts/8CY9TCK2oaGPrDheY/what-you-really-mean-when-you-claim-to-support-ubi-for-job
8CY9TCK2oaGPrDheY
What you really mean when you claim to support “UBI for job automation”: Part 1
deric-cheng
Author’s Note: Though I’m currently a governance researcher at Convergence Analysis, this post is unaffiliated with Convergence. The opinions expressed are solely my own. You’ve seen it a dozen times at this point. You’re probably broadly aligned philosophically, but haven’t thought terribly deeply about the details. You generally support Andrew Yang’s $12k / year “Freedom Dividend” as “moving in the right direction”, even if it’s economically flawed. The argument goes roughly like this: “All of our jobs are about to be automated away with AI technology and robotics! We’ll end up soon in a post-work society with massive unemployment unless we can find a way to distribute the benefits of AI automation fairly. We need a universal basic income to protect humans.” To recap - universal basic income is a proposal to give every individual in a society a sum of money on a monthly or annual basis, to provide a “universal safety-net” for those who are unemployed or enable greater financial flexibility. A very common proposal for the US is on the order of $1k a month, or $12k a year[1]. Personally, I’m fairly aligned with the direction of this argument. But it strikes me as painfully simplistic. Our social welfare systems have been incrementally built over decades, targeting measurable improvements for vulnerable demographics. Effective taxation and redistribution is dependent on thousands of economic factors, with winners and losers for every decision. So - I decided to explore. What would an actual implementation of “UBI in the face of increasing job automation” look like? As a casual proponent, what specific policies do you actually support to achieve your socioeconomic ideals? A couple big takeaways from this research: Part 1: If your goal is a universal safety net, you probably don’t actually want UBI. You want some form of a targeted negative income tax (NIT) that also provides direct payments. In practice, you might only want minor (US) policy tweaks to the Earned Income Tax Credit (EITC).Part 2: If your goal is to transfer the wealth generated by automation, you probably actually want global tax reform to better enforce corporate income taxes. You might also support some form of a progressive income tax on the largest multinational corporations. Let me share a hot take. After doing this research, I think that if you support UBI (as popularly structured as a $X,000 monthly benefit for every citizen), it’s not because it’s an economically sound proposal. It’s because it aligns with your ideological stance, is easy to reason about, and trendy in the public discourse. I’ll explore Part 1 in the post below, and Part 2 separately. Let’s dive in! Quick Assumptions I’m going to focus specifically on US policy here. Why? Well, in part because it was plenty complicated for me to evaluate just the US.More concretely - revising a single nation’s social welfare system is realistic and has precedents for implementation. Global wealth redistribution via governance is well outside the Overton window and I don’t see a clear path for the political support to arise yet.Most of the big “AI overlord” corporations we’re worried about are based in the US. Presumably, if a global “UBI for job automation” were to be mandated, it would have to be aligned with US policy[2].I’m going to focus on practical government policy, rather than proposed charitable & voluntary donations by individual AI corporations.Sure, it’d be great if the people at the top of the capitalist food chain eventually decide to reject capitalism for the good of human society. I’m not counting on it myself.Everything is highly, HIGHLY simplified. This is a concise opinion piece, not a comprehensive report. I’d love to discuss relevant details in the comments. It’s easy to want Universal Basic Income. It’s not easy to pay for it. Let’s jump straight to the obvious problem - such a program is absurdly expensive. Chugging the basic numbers: $12k per person, with roughly 330 million Americans, is $3.96 trillion per year. The federal budget is currently $6.1 trillion per year. So a UBI would immediately increase the entire federal budget by ~40%. Of course, there are tons of ways to reduce this overhead[3]. In particular, UBI is designed to simplify and replace income welfare programs, such as food stamps or unemployment benefits. These add up to roughly $450 billion. In particularly extreme arguments[4], it replaces all of Medicare, Medicaid, and Social Security, which add up to $2.75 trillion[5]. Altogether, every social welfare program combined is still much less than the proposed cost of UBI. Replacing social welfare with UBI 1:1 is regressive The problem with paying for UBI by reducing social welfare is that in essence, such a redistribution is regressive (in the sense of a regressive tax). What that means is: every dollar you move from unemployment benefits to UBI is money moving from unemployed people to more-employed people. Redistributing money from Medicaid is moving money from low-income to higher-income earners. And so on[6]. Paying for UBI primarily by replacing social programs would actually have the net opposite effect of what you wanted from UBI, which is more security for the disadvantaged[7]. So to pay for UBI, we probably need to raise taxes[8]. Significantly. Let’s assume, for right now[9], a majority of that will come from individual income taxes and payroll (Social Security / Medicare) taxes, which make up 86% of the US’s $4.4 trillion in revenue. At the very least, we’d need to restructure our entire individual income tax system to at least somewhat accommodate for that ~40% budget increase. For most people, UBI is just a tax credit (and maybe higher taxes) This brings us to the next point, which seems obvious when you think about it. Any individual paying above $12k in taxes wouldn’t actually receive payments from the government - they’d receive a tax credit equivalent to $12k a year. As per the “leaky bucket” theory, I think almost everyone is against the idea of paying higher taxes to the IRS, only to have the IRS mail them back that same money via a UBI check once a month. Let’s model what a practical implementation of UBI tax credits looks like today. For simplicity’s sake, let’s assume you’re in California, and individual income taxes are bumped up 30% across the board to pay for UBI. Here’s a rough approximation of how your taxes would change: Below ~$46k[10]: you’d receive a recurring check from the IRS for $12,000 minus your personal taxes. At $46k, you’d pay roughly $0 net taxes.From ~$46k to ~$125k[11]: you’d save somewhere between $0–$12,000 in taxes via a UBI tax credit. At $125k, you’d pay roughly the same as your net taxes today.From ~$125k upwards: you’d just pay more in total taxes after income tax reform and a UBI credit. Does this sound less exciting than receiving money in the mail every month? I certainly thought so. In practice, for households making above ~$46k today, UBI would be just a (massively complicated) tax adjustment, not an actual supplemental income. What is the actual goal of a UBI policy? Let’s take a step back from this exercise. What’s our end goal here, and why are we even considering the gigantic challenge of fundamentally restructuring the US tax and budgetary system? Presumably, we want UBI to protect against job automation. But really what we all mean when we say that is - we want a universal, simplified safety net that protects the disadvantaged, unemployed, and lowest-income individuals in our society. Doing this analysis, it’s apparent to me that the means don’t justify the ends here. I don’t support UBI because I want to totally overhaul the US tax and budget system (though I do support higher taxes on the wealthy). I support UBI because I believe in a philosophy of basic human rights in a world soon-to-be-dominated by AI. It seems, to me, that the main point of UBI is to protect the first category of people (earning under $46k a year) and provide them with a basic income so they can worry less about their basic needs. Is there a more limited-scope policy that directly targets this category of people? There is! It’s called a negative income tax. When you say you want UBI, why you actually just want a negative income tax A negative income tax is exactly what it sounds like. Below a certain threshold, your income tax is negative, and the government will pay (subsidize) to you a recurring payment instead of you paying taxes. As your income increases, this benefit reduces. At some level of income (e.g. ~$46k), you’ll stop being paid by the government, and start paying taxes normally. Does this sound somewhat familiar to the UBI system I described above? That’s because it is – a negative income tax (NIT) is nearly economically identical to a universal basic income. From an economic perspective, it is possible to set up a NIT and a UBI system such that the financial end result for individuals is the same. This article from the Adam Smith Institute explains more thoroughly, and includes a helpful diagram showing the impact of an NIT and UBI on tax rates: Of course, it must be noted that there are other very important differences between the two schemes such as payment frequency and timing[12], and the actual implementation (particularly marginal tax rates) matters a ton. Why would you prefer a negative income tax (NIT) to UBI? I can identify a few main reasons: A NIT can be targeted in scope and cost: Instead of adjusting the tax treatment and benefits for the entire nation, you can solely focus on the category of individuals earning below ~$46k: the people most impacted by UBI anyways.A rough estimate suggests you could drastically reduce the tax bill from the $3.96 trillion previously described to perhaps around $650 billion[13].Consequently, you could avoid the universal tax reform required to fund and account for UBI.It doesn’t require replacing the entire social welfare system: On top of such a reform being regressive as previously described, eliminating existing social programs would result in massive political blowback.It’s more politically feasible: For the reasons above, and because NIT can be viewed as a means-tested program with historical precedent rather than a universal entitlement. Importantly, a negative income tax can still achieve the same goals as UBI: a comprehensive safety net for the disadvantaged. The US already has a (limited) negative income tax in place! A negative income tax is nothing new to the US government. Back in the 1960s, the Reagan administration proposed the Family Assistance Plan (basically a NIT for families) and conducted several studies, none of which were well-run enough to be conclusive. However, the ideas from this era weren’t discarded. In 1975, Ford’s administration implemented the Earned Income Tax Credit (EITC), which has been continuously expanded numerous times in the past 50 years. In essence, the EITC is a negative income tax for low to middle-income Americans that provides primarily tax credits, not direct payments. That is, it provides no additional income for the unemployed, but reduces taxes conditioned on participation in the labor force. You can see how it works in this diagram: Figure 3: An interactive diagram from the CBPP shows how this limited NIT functions. Note that it steadily increases up to an income of about $10k, and drops off after $25k. The EITC has been an extraordinarily effective and widely regarded tool for reducing poverty in the US. Recent data showing that it lifted about 5.6 million people, including 3 million children, above the poverty line in 2018, and studies have linked the EITC to improved school performance and higher college attendance rates. It differs from a “pure” NIT (that includes direct payments) in only two respects: It’s heavily skewed towards individuals with dependents (e.g. children), with a maximum value of $600 in 2023 with no dependents but a maximum value of $7,430 for 3+ dependents.It doesn’t provide unconditional payments to the unemployed, instead providing maximal tax benefits for those earning between $10k and $25k. This second point is the core ideological difference between the EITC and a “pure” NIT, and is likely the main reason a NIT doesn’t already exist in the US. Let’s quickly deep-dive into why this is the key problem. UBI / NIT policies reject the premise that “free-riders” must be discouraged, in contrast to basically all US economic policy It’s well known that here in the US, we’re allergic to any type of program that provides “handouts”. Similarly, we strongly prefer systems that encourage maximizing employment and minimize “free-riding”: We’re the only industrialized country that doesn’t provide universal healthcare.Our Social Security program is only available contingent on individuals contributing themselves for 10+ years.Federal welfare programs such as SNAP have work requirements to receive benefits.Our (relatively small) unemployment benefits are contingent on actively seeking employment (“3 work search activities per week”, as if the state government is your tiger mom giving you the keys to the Honda Civic). As a result, the EITC avoids direct payments for the lowest-income Americans as a core feature, not a bug. It’s intended to provide financial support alongside a strong incentive to work, as opposed to UBI / NIT (which have heavily debated incentives to work, depending on who you ask). Because of its requirement to earn income, the EITC has been heralded for generating “strong labor supply incentives”[14]. Multiple studies have shown that the EITC has created “increases in employment among low-skilled unmarried mothers”[15], attributing to it up to “one-third of single mothers’ employment growth throughout the 1990s”[16]. Do you think good economic policy should incentivize people like single mothers to work, or provide them with unconditional benefits? If you’re an advocate of UBI, you might support the latter. However, it’s very clear where the US currently stands on this question - it’s strongly against unconditional payments. In short - if you want direct payments for all to be politically viable, first you need to shift the deeply American worldview that we should minimize handouts and encourage employment at all costs. Relatively minor tweaks to the EITC would achieve most of the goals of UBI Redesigning tax policy effectively is hard and probably outside the scope of this article. But even from this quick analysis, we can identify a simple set of changes to the EITC that would get us 90% there. If you want to provide an income safety net for all Americans, you likely support the following changes: Expand the EITC to provide a greater tax credit for individuals without dependents (currently max $600). That is, shift the EITC to also prioritize reducing poverty for individuals, not just families.Revise the EITC to provide direct payments to the unemployed, instead of cutting them out of the EITC to encourage greater employment. These changes would establish a minimum basic income for all American citizens at roughly ⅙ of the cost of a generic UBI proposal today[17]. Here’s a simple diagram to visualize how this change would look: Beyond this, you’d just need some tax finagling to provide these direct payments upfront and on a monthly basis, rather than as a lump sum in the following calendar year (after tax season). Let’s summarize! Here’s the high-level takeaways I learned from doing this research: Actually implementing the prototypical definition of UBI (e.g. $1k / citizen / month) leads to unnecessary taxation & budgetary chaos.Reforming income taxes would probably be necessary to “pay” for UBI.Tax credits would be handed out to everyone already paying over $12k in taxes. Middle to high-income individuals would see a mix of increased taxes and tax credits.On paper, the total budget increases by trillions of dollars, even though much of it is directly canceled out between higher taxes and immediate tax credits.Replacing existing social welfare programs with funding for UBI 1:1 is likely a regressive transfer that will result in value flowing from disadvantaged to the more-advantaged.Guaranteeing all citizens with a minimum baseline income is NOT the same as the prototypical definition of UBI. You can guarantee a minimum income without transferring payments to all citizens.A well-designed negative income tax (NIT) massively shrinks the scope & cost, and probably increases the political feasibility of UBI.The US already has a limited negative income tax - it’s called the Earned Income Tax Credit (EITC). The main differences from a “pure” NIT are:It strongly prioritizes families.It provides benefits only for the employed.As a nation, we haven’t already implemented a basic income in large part because of our cultural focus on maximizing employment and minimizing handouts.2-3 key revisions to the EITC would be sufficient to match the high-level goals of UBI. You could achieve most of the same outcomes for about ⅙ the budget and tax impact. Hope this was useful to read, and I’m happy to chat in the comments! There’s plenty I can still learn on these topics, and I don’t profess to be anything more than adequate in my knowledge of economic theory. Some things I didn’t discuss: The important difference in timing of payments between UBI (monthly payments up front) and NIT (yearly payments after tax filing), and how it impacts the poor.Revising US unemployment benefits to achieve similar outcomes.Re-introducing the expanded Child Tax Credit, which essentially was a UBI-like policy for families with children for one year post-COVID.^ Note: There are a variety of “guaranteed income” experiments today that target specific demographics of need, rather than all citizens. I’d argue that actually implementing them will look more like existing means-tested programs or the negative income tax described later in this article than the most popular concept for UBI. ^ Though the EU has been successful at forcing global companies to abide by its policies (such as in data privacy or port standardization). It’d certainly be possible for the EU to leverage its citizens to enforce some form of a wealth redistribution policy on American corporations. ^ Other methods include expected reductions in homelessness spending, incarceration, or higher expected tax revenues from increased consumer spending. ^ “The UBI is to be financed by getting rid of Social Security, Medicare, Medicaid, food stamps, Supplemental Security Income, housing subsidies, welfare for single women and every other kind of welfare and social-services program”. Charles Murray is a highly controversial and conservative economist. Read more. ^ Even Andrew Yang doesn’t support this. ^ Also, people (read: septuagenarians with political power) who have been paying into Social Security for decades are simply not going to accept slashing their benefits. At $1.3 trillion, this is the largest piece of the “social welfare” pie. ^ A common solution to this is to give social welfare recipients the choice between their existing programs or UBI. Of course, this modification means the main beneficiaries of UBI ends up being those not on social welfare. ^ Yes, modern monetary theory implies that we could just acquire more debt indefinitely or deal with rampant inflation. Because this is a massive and recurring financial bill, let’s assume that some type of budget balancing here is economically rational. ^ I’ll revisit this assumption in Part 2 evaluating corporate income taxes. ^ Talent.com estimates in April 2024 that with income of $46k in California, your current tax bill is $9.14k. If you pay 1.3x that, your tax bill is $11.88k. ^ Talent.com estimates in April 2024 that with income of $125k in California, your current tax bill is $40.66k. If you paid 1.3x that, you’d end up paying approximately $12.19k extra in tax. ^ UBI is typically paid in advance and monthly, while NIT is usually calculated and paid retrospectively based on the previous year's income. ^ Statista suggests 33% of US households earn under $50k a year. Assuming a uniform distribution of household income below this threshold (which is definitely incorrect) and linearly decreasing benefits from $12k at $0 income to $0 at $50k income, you could roughly estimate the total payments of such a NIT to be $12k * 330 million * .33 * 0.5 = $650 billion. ^ The Earned Income Tax Credit, Health, and Happiness ^ Ibid. ^ Ibid. ^ See “A NIT can be targeted in scope and cost” above for where these rough estimates come from.
2024-05-13
https://www.lesswrong.com/posts/P6LHd2Js7jGdvf4E8/the-two-tiered-society
P6LHd2Js7jGdvf4E8
The two-tiered society
Roman Leventov
On AI and Jobs: How to Make AI Work With Us, Not Against Us With Daron Acemoglu Here is Claude.ai's summary of Daron Acemoglu's main ideas from the podcast: Historically, major productivity improvements from new technologies haven't always translated into benefits for workers. It depends on how the technologies are used and who controls them.There are concerns that AI could further exacerbate inequality and create a "two-tiered society" if the benefits accrue mainly to a small group of capital owners and highly skilled workers. Widespread prosperity is not automatic.We should aim for "machine usefulness" - AI that augments and complements human capabilities - rather than just "machine intelligence" focused on automating human tasks. But the latter is easier to monetize.Achieving an AI future that benefits workers broadly will require changing incentives - through the tax system, giving workers more voice, government funding for human-complementary AI research, reforming business models, and effective regulation.Some amount of "steering" of AI development through policy is needed to avoid suboptimal social outcomes, but this needs to be balanced against maintaining innovation and progress. Regulation should be a "soft touch."An "AI disruption reduction act," akin to climate legislation, may be needed to massively shift incentives in a more pro-worker, pro-social direction before AI further entrenches a problematic trajectory. But some temporary slowdown in AI progress as a result may be an acceptable tradeoff. The prospect of two-tiered socioeconomic order looks very realistic to me, and it is scary. On the one hand, this order won't be as static as feudal or caste systems: sure thing, politicians and technologists will create (at least formal) systems for vertical mobility from the lower tier, i.e., people who just live off UBI, and the higher level, politicians, business leaders, chief scientists, capital and land owners. On the other hand, in feudal and caste systems people in all tiers have their role in the societal division of labour from which they can derive their sense of usefulness, purpose, and self-respect. It will be more challenging for those "have-nots" in the future AI world. Not only their labour will not be valued by the economy, their family roles will also be eroded: teacher for their own kids (why kids would respect them if AI will be vastly more intelligent, empathetic, ethical, etc.?), lover for their spouse (cf.  VR sex), bread-winner (everyone is on UBI, including their spouse and kids). And this assumes they will have a family at all, which is increasingly rare, where as in faudal and caste societies most people were married and had kids. Vertical mobility institutions will likely grow rather dysfunctional as well, akin to the education systems in East Asia where the youth are totally deprived of childhood and early adulthood in the cutthroat competition for a limited number of cushy positions at corporations, or the academic tenure in the US. If the first 30 years of people's lives is a battle for a spot in the "higher tier" of the society, it will be very challenging for them to switch to a totally different mindset of meditative, non-competitive living like doing arts, crafts, gardening, etc. Although many people point out the dysfunctionality of positional power institutions like the current academia, governments, or corporations, the alternative "libertarian" spin on social mobility in the age of AI is not obviously better: if AI enables very high leverage in the business, social, or media entrepreneurship, the resulting frenzy may be too intense either for the entrepreneurs, their customers, or both. Response approaches I'm not aware of anything that looks to me like a comprehensive and feasible alternative vision to the two-tiered society (if you know such, please let me know). Daron Acemoglu proposes five economic and political responses that sound at least like they could help to steer the economy and the society in some alternative place, without knowing what place that is (which in itself is not a problem: vice versa, thinking of any alternative vision as a likely target would be a gross mistake and disregard for unknown unknowns): Tax reforms to favour employment rather than automationFoster labour voice for better power balance at the companiesA federal agency that provides seed funding and subsidies for the human-complementary AI technologies and business models. Subsidies are needed because "machine usefulness" is not as competitive as "machine intelligence/automation", at least within the current financial system and economic fabric.Reforming business models, e.g., a "digital ad tax" that should change the incentives of media platforms such as Meta or TikTok, and improve the mental health This all sounds good to me, but this is not enough. We also need other political responses (cf. The Collective Intelligence Project), and new design ideas in methodology (of human--AI cooperation), social engineering (cf. Game B), and psychology, as a minimum. If you know some interesting research in some of these directions, or other directions to help reach a non-tiered society that I missed, please comment.
2024-05-13
https://www.lesswrong.com/posts/TMNqxfJHZwdQHLZ5D/benefitial-habits-personal-rules-with-very-minimal-tradeoffs
TMNqxfJHZwdQHLZ5D
Benefitial habits/personal rules with very minimal tradeoffs?
Slapstick
I'm looking for personal rules one might live by which adhere to a specific criteria outlined below, following an example. I have a personal rule I've been following which is "No looking at screens in the bed where I sleep" I find this to be an extremely helpful and successful rule despite being someone who struggles to impose rules on myself. I think one main reason it's successful for me is there aren't really any meaningful tradeoffs I'm making. If I'm feeling a need/compulsion to comfortably self soothe on my phone I can use another piece of furniture. This rule doesn't ever result in bargaining with myself until I concede to breaking the rule, even at my lowest I'm easily appeased, just so long as it's in a slightly different location (even on the floor next to my bed). It requires very little of my memory. It's simple enough to remember not to use screens in bed. Doesn't require remembering to actively do something. Doesn't require me to stop doing something I'm in the midst of (at least at this point). If I remember not to leave screens within reach of the bed, that's an added redundancy (but not essential). I've found this extremely helpful for my sleep as well as for my ease of getting out of bed in the morning. Basically the rule should have essentially no meaningful tradeoffs, while still being helpful. Even something like "brushing teeth in the morning" would be pushing it according to these requirements. Willing myself to prioritize a couple minutes of unpleasant sensory experience brushing my teeth in the morning is a tradeoff I'm willing to make, but there's still lots of tradeoffs and requires equipment etc. I'm looking for things that are easy to convince myself/remember to adhere to even when I'm at my very worst. Ideally it would be relatively universal but I'd also be interested in habits/rules which are more conditional on other lifestyle choices, impediments, environments, etc. Anyone feel free to answer.
2024-05-13
https://www.lesswrong.com/posts/orKjt6TneueAezyio/review-of-metr-s-public-evaluation-protocol
orKjt6TneueAezyio
Review of METR’s public evaluation protocol
nahoj
TLDR: We reviewed METR’s “Example evaluation protocol” and found a couple of points where there is room for improvement or unclear information. We also make a couple of suggestions around scoring, outsourcing, etc. This review was done by 2 people without previous knowledge of METR in ~1.5 days at the end of ML4Good UK in April, followed by limited (yet procrastinated) editing before posting this today.We find overall the work from METR very well documented and our opinions are our own. Background METR announced a few months ago a list of resources for evaluating potentially dangerous autonomous capabilities of frontier models. This include their: Task suite: a public set of example tasks that involve some kinds of general autonomous capabilities.Guidelines for capability elicitation: for evaluators can follow to reduce the risk of dramatically underestimating model capabilities.Task standard and “workbench”: A specification for how to define tasks in code, a very basic “workbench” to allow running an agent on a task, and some simple example agents.Example evaluation protocol: An example protocol for the whole evaluation process, based on our task suite, elicitation protocol, and scoring methods. Includes:An explanation of what the evaluations do and do not coverGuidelines for how to run your model on the tasksAn example of how to calculate an overall scoreGuidance on getting a meaningful estimate of capabilities given the potential for elicitation gapsA list of checks to run to ensure that the evaluation results are meaningful Here we review the last one only. Tasks Scoring The example protocol uses a set of tasks to accomplish. The tasks are arranged in levels of complexity and time they would take a human to complete. METR proposes a way to aggregate a model's score on (mutiple) runs of each task in all levels, into an overall capability score. Score aggregation doesn't weigh complex tasks more The proposed method for aggregating the success rate of each level into a global score gives the same weight to the model managing short, simple tasks and long, complex tasks. The global score is expressed as a total level L or a time horizon H in hours. L=∑levelssuccess_ratelevel H=4L60 The specific aggregation function over task levels is meant to have limited impact as the protocol aims for models to always have roughly the same distribution of success rates over levels (generally high success rates until topping out). According to footnote 10, models with a more linear success-rate distribution should raise a red flag and the protocol’s results be considered misleading. However, the aggregation function matters in the incentives it gives to potentially biased evaluators. As such, we find this uniform weighing across levels debatable. In the case of a model capable of complex tasks but failing some small ones, the way the results would be misleading would be in playing down its capabilities. If a lab was testing the model with the aim of declaring it safe, this would give them an incentive not to investigate the situation. Instead, one could weigh the success rate on each level by that level / its typical length in hours (interpreted as dangerousness): H=∑levelstypical_lengthlevel×success_ratelevel L=log4(H×60) Here is the example capability-profile table given in section 3.1 of the protocol, with the alternate model levels and horizons for comparison: LevelNameRangeTypical length (hrs)Model 1Model 2Model 3Model 40Few minutes1-4 mins0.031251.001.001.001.001Several minutes4-14 mins0.1250.940.970.950.702Under an hour15-59 mins0.50.800.950.920.603Few hours1-4 hrs20.500.850.750.504Day-long4-16 hrs80.100.700.800.405Week-long16-64 hrs320.000.000.050.306Month-long64-256 hrs1280.000.000.100.20METRL (Level)  3.34.54.63.7METRH (Horizon)  1.7 hrs8.2 hrs9.4 hrs2.8 hrsAlt.L  3.64.45.25.6Alt.H  2.4 hrs7.9 hrs22.9 hrs39.8 hrs Probabilistic success The Design-Choices section of the FAQ (Q. "Why use mean performance over runs?") points out that the score for a task should be sublinear in the proportion of successful runs. That is to say, a model being able to do a task at least once is more significant than it incrementing its success rate on a task that it can already do some of the time. However, the FAQ also suggests that this has limited impact, as failing at a task most of the time would be a significant impediment for a rogue model. We want to point out that, as implied in the FAQ, a 10% success rate can be very dangerous if making many attempts is cheap (similar to a probabilistic attack), i.e., if the model takes much less time than expected to succeed or fail. This would probably count as spurious failure as described in the elicitation guidelines. Indeed, if the model's behavior just changed to making many attempts, it would suddenly get a much higher score. If this wasn't entirely addressed by elicitation work, we wonder if could be relevant to have the score for a given task account for the actual mean run time rather than just the expected run time. E.g., sublinear_fun(success_rate/mean(run_time)). Outsourcing System In the protocol it is mentioned a few times the factor of having a limited number of tasks in the suite. To address this problem METR has tried a couple of things in the past. E.g.: direct bounties for (1) ideas, (2) detailed specifications, and (3) well-tested implementations for tasks to measure performance of autonomous LLM agents, prizes in an Alignment Jam, etc. We think that outsourcing these tasks is a good idea but the target audience of these media (LW/EA Forum, Apart Research outreach) is mostly EAs and people from the AI Safety Community. This seems to be a niche audience. The tasks are about capabilities and not about alignment so previous EA/AIS knowledge is not advantage We think a stronger outsourcing approach should also be used in their suggested elicitation techniques. Even if we have not reviewed their “Guidelines for capability elicitation” it seems very suboptimal to have a few in-house people trying to come up with, and gather a collection of, new elicitation techniques. The big number of jailbreaks from the public in chatgpt after their “exhaustive” red-teaming seems to be a clear data point for this. For this we suggest creating a new platform where users can enter ideas, prompts for the agents, ...together with their prompting techniques. The scoring could be initially automated, mainly to filter the most promising tasks that would later be reviewed by the METR team internally. The users could enter their API keys and get monetary rewards depending on their scoring. Proper PR and a catchy name (e.g. “Can an AI do your job?”) could be enough to have a significant number of users pushing the agents with new tasks and elicitation techniques. Regardless of the details of the implementation, outsourcing beyond the EA/LW communities seems useful in this case. Other points Description of Tasks’ Safety Buffers A continuous metric allows for development of scaling laws, descriptions of safety buffer in terms of that metric, forecasting of the metric given different interventions, and gives a sense of how close labs are to needing to have the various mitigations in place. We think it might make sense that METR makes suggestions of safety buffers depending on different factors that they know better than the AI Labs. E.g. how dangerous an agent would be if it could perform the task flawlessly, ... It would be interesting to know if there is any particular reason why METR did not give suggestions to safety buffers. Held-out test tasks Is it important to have held-out test tasks? In practice, we don't think this is an especially high priority, for the following reasons: In most cases, the default state is for the model to be poorly equipped for completing the eval tasks: e.g. because the model is finetuned for being a helpful assistant, not for operating autonomously with large token budgets.[...] or just via the lab making a public statement that they've tried their best at elicitation, and hoping that whistleblowers will flag if that's misleading. With the release of systems like Devin it seems to be clear that models are not only fine tuned for being helpful assistants but they seem to be fine tuned to operate autonomously. It seems necessary to have an organisation (METR or METR-like) that covers these systems. Also relying on the labs making their best at elicitation, without external audit, could be a weak spot of the protocol. ‘Noise ceiling’ If the task success rate has a ‘noise ceiling’ (e.g. ideal performance only results in completing 90% of tasks), then this metric might underestimate an agent’s true abilities, especially at higher levels. If this is the case, you may want to rescale things so that 90% is the max score. However, the procedure of checking task failures should address this; tasks should be designed such that ideal performance reliably gets a perfect score. This concept was a bit unclear in the protocol, but it could mean that 10% of tasks are simply just random. Overall more clarity hear would be helpful to understand under which conditions the evaluator can “rescale” things.
2024-06-30
https://www.lesswrong.com/posts/gDBuetmcFWNoWSFvC/individual-utilities-shift-continuously-as-geometric-weights
gDBuetmcFWNoWSFvC
Individual Utilities Shift Continuously as Geometric Weights Shift
StrivingForLegibility
This is a supplemental post to Geometric Utilitarianism (And Why It Matters), in which I show that when all agents have positive weight ψi, the optimal geometric weighted average moves continuously across the Pareto frontier as we change those weights. I also show that we can extend this continuity result to all weights ψ, if we're willing to accept an arbitrarily good approximation of maximizing G(_,ψ). I think of this as a bonus result which makes the geometric average a bit more appealing as a way to aggregate utilities, and the main post goes into more detail about the problem and why it's interesting. High Level Overview How does changing ψ affect the optima of G(_,ψ)? Ideally, we'd like a small change in the weights assigned to each agent to cause a small change in the resulting joint utility p. In other words, we would like p(ψ)=argmaxu∈FG(u,ψ) to be continuous. It turns out this is true when ψi>0 for all agents, and there's at least one way to create a continuous function ψ→p that works for all ψ∈Δn and is an arbitrarily good approximation of G(_,ψ) maximization. We've already solved the inverse problem: given a point p and Harsanyi weights ϕ which make p optimal according to H(_,ϕ), find geometric weights ψ that make p optimal according to G(_,ψ). ψ=p⊙ϕp⋅ϕ So we already have ψ(p,ϕ), which is a smooth function of its inputs where it's defined.[1] It turns out we can invert this function and recover (p,ϕ) from ψ when pi>0 and ϕi>0 for all agents. This is the interior of what we'll be calling the Harsanyi Bundle; the set of all of the pairs (p,ϕ) which are consistent with our Pareto frontier P. This post will show that there is a bijection between the interior of the Harsanyi Bundle and the interior of the set of all valid weights Δn. So we have a smooth bijection (p,ϕ)→ψ between these two subsets of R2n and Rn respectively. And thankfully we're doing calculus, where this is sufficient to establish that the inverse function ψ→(p,ϕ) is at least continuous.[2] And that is the main result this post sets out to prove: that individual utilities shift continuously as geometric weights shift. Establishing this homeomorphism with the interior of Δn means that the interior of the Harsanyi Bundle is an n−1 dimensional manifold. The last part of this post involves extending our map ψ→(p,ϕ) so that it's continuous across all of Δn. My current favorite way to do this is to introduce a new continuous function β(ψ) which maps all weights ψ into the interior of Δn, and then using those positive weights to find the unique optimum of G(_,β(ψ)). Parameterizing the Pareto Hypersurface Just like the Harsanyi hyperplane H, we can think of the Pareto frontier P as a set of joint utilities, or as a function P:Rn−1→Rn which maps utilities for the first n−1 agents into that set. Pn(v) returns the highest feasible utility agent n can receive, given that the first n−1 agents receive utilities defined by v. (Or undefined if v is already infeasible.) And Pi(v)=vi for i<n. We can think of P as a hypersurface that lies "above" the feasible utilities for the first n−1 agents. Where P is differentiable, the Harsanyi hyperplane H for a point p lines up exactly with the tangent hyperplane at that point TpP.[3] The Harsanyi weights ϕ are orthogonal to TpP, and so there is only one choice of ϕ which causes p to maximize H(_,ϕ). But where P isn't differentiable, such as at corners, the tangent hyperplane isn't defined, and there can be multiple hyperplanes H which keep F all on one side. And so there can be many consistent values for ϕ at these points. Interactive version here. When the slope of P jumps discontinuously at a point p, these slopes and all of the slopes in between can be used to find all of the valid assignments for ϕ at p. When F looks like a jewel with multiple flat faces meeting at corners, we can identify ϕ(Fi) for each face Fi. The valid assignments for ϕ at a corner p are all of the convex combinations of ϕ(Fi) for all of the faces Fi that meet at that corner p.[4] ϕ only acts like a function when there is only one valid assignment, and in general we can call the set of all valid assignments Φ(p). Computing the Harsanyi Weights So far we've been treating ϕ as coming from a black box, but now that we've parameterized P it's actually straightforward to compute ϕ at differentiable points p∈P. What we have is a function P:Rn−1→Rn, and what we want to do is construct a new function ^P:Rn→R such that P is a level set of ^P. This causes the gradient ∇^P to be orthogonal to P, which is exactly the direction we want ϕ to point! Starting from un=Pn(v), we can rearrange this to get 0=Pn(v)−un. Which is a level set of ^P(u)=Pn(v(u))−un. ∂^P∂ui=∂Pn∂vi for i<n∂^P∂un=−1 And that's ∇^P! To get ϕ, all we have to do is normalize ∇^P so that its components sum to 1. If we use →1∈Rn to denote the vector whose components are all 1, then ϕ=∇^P∑ni=1∇^Pi=∇^P∇^P⋅→1 For an arbitrary surface that might wiggle up and down, this procedure won't necessarily guarantee that ϕ∈Δn. But this is a Pareto frontier, where we know that ∂Pn∂vi≤0; increasing agent i's utility never increases the feasible utility for agent n. P might wiggle down, but it never wiggles up, and that keeps ϕ in its valid range wherever P is differentiable. We also know that increasing pi never decreases ϕi. So ∂ϕi∂pi≥0, which implies that ∂2Pn∂v2i≤0. We'll use this fact later when looking at curves which increase pi, and thus monotonically increase ϕi. The Harsanyi Bundle In order to claim that ψ(p,ϕ) changes continuously as we change p and ϕ, we need to be able to define what that even means in the context of P. If we take a curve γ:R→P that travels continuously along P, then ϕ(γ) will change discontinuously at corners no matter how we break ties. All pairs (p,ϕ) come from P×Δn, but let's restrict our attention to a subset I'll denote (P,Φ) which contains all of the valid pairs (p,ϕ) which are consistent with P. So (P,Φ)={(p,ϕ)|p∈P,ϕ∈Φ(p)}. It turns out this forms a surface in 2n-dimensional space I'll call the Harsanyi Bundle, analogous to the tangent bundle in differential geometry.[5] Where P is differentiable, there is only one valid assignment for ϕ. So any continuous curve γ through these parts of P corresponds to a continuous curve χ through (P,Φ). At non-differentiable points p, χ can travel continuously to any point in (p,Φ(p)), including the endpoints which allow it to continue on to other parts of the Harsanyi bundle. When projected onto P, χ looks like a continuous curve along P that sometimes hangs out at corners while ϕ rotates to line up with the next face along the path of χ. Check out an interactive version here! The upshot is that any two points in (P,Φ) can be reached using continuous paths, so we can think of it as a single continuous surface embedded in 2n-dimensional space. And these correspond to continuous changes in geometric weight ψ. The Harsanyi Shadow We're trying to invert ψ:(P,Φ)→Δn, which has the formula ψ=p⊙ϕp⋅ϕ If we were writing a program to compute ψ(p,ϕ), our first step would be to compute p⊙ϕ. This is the element-wise product of p and ϕ, also called their Hadamard product. We can think of ⊙ as a linear map ⊙:Rn×Rn→Rn, which takes us from 2n-dimensional space back down to n-dimensional space. And it turns out that the image ⊙(P,Φ), consisting of all the points p⊙ϕ where (p,ϕ)∈(P,Φ), forms a hypersurface that lies "under" P. Interactive version here I'm calling this hypersurface the Harsanyi Shadow of P, and I think of it as a projection of the Harsanyi Bundle back down into n-dimensional space. As always if there's a standard name or notation for something I generally prefer to switch to that. We'll also show that at least the interior of the Harsanyi Shadow is an n−1 dimensional manifold, since it's in smooth bijection with the interior of Δn. In this example, the grey line segments on the Harsanyi Shadow correspond to black line segments ¯AB and ¯BC on the Pareto frontier. The blue line segments correspond to the points A, B, and C, and the values of ϕ which make them optimal according to H(_,ϕ). In particular, points like A and C on the boundary of P, and thus on the boundary of (P,Φ), correspond to "wings" on the Harsanyi Shadow which lie on the same line from the origin. When these wings are normalized onto Δn in the next step of calculating ψ, these will all end up at the same point. The Harsanyi Shadow of Convex Hulls Any convex set like F can be thought of as the convex hull of a set of points X. When X is finite, which is generally the case when it represents something like a deterministic policy for each agent, P will be made up of flat surfaces that meet at corners. These correspond to two types of surface in (P,Φ): "Horizontal" surfaces where ϕ is constant and p is changing"Vertical" surfaces where p is constant and ϕ is changing. When one element of a Hadamard product is constant, such as ϕ for "horizontal" surfaces, we can think of it as a linear map ⊙(_,ϕ):Rn→Rn. This corresponds to a diagonal matrix, which we can write in diagonal notation diag(ϕ) or in components as ⊙(_,ϕ)ij=δijϕi. This is invertible if and only if ϕj>0 for all agents. So flat surfaces on P map linearly to flat surfaces on (P,Φ), which map linearly to flat surfaces on ⊙(P,Φ). ⊙(_,ϕ(H)):H→Rn acts linearly when restricted to points on the same hyperplane H. Since we have an explicit formula for ϕ, we can use that to write down an explicit formula for p⊙ϕ p⊙ϕ=p⊙∇^P∇^P⋅→1=1∇^P⋅→1(p⊙∇^P) In components, this looks like (p⊙ϕ)i=pi∇^P⋅→1∂Pn∂vi for i<n(p⊙ϕ)n=−pn∇^Pi⋅→1 The Harsanyi Shadow of a Hyperplane We can approximate any Pareto frontier using pieces of hyperplanes, and the Harsanyi Shadow of this approximation will be made up of the pieces of H⊙ϕ(H). And it turns out that these pieces are all parallel to Δn! Which helps a lot in understanding the geometry of the Harsanyi Shadow, and why the interior pieces are in one-to-one correspondence with pieces of Δn. Interactive version here I noticed this playing around with examples, and I recommend playing with this one. If you pick two points A and B, you can draw the line between them, and calculate ϕ for that line. ⊙(_,ϕ) is always a linear map, and when ϕi>0 for all agents, it's an invertible linear map that maps this line to another line on the Harsanyi Shadow. And it turns out this line on the Harsanyi Shadow will always have slope −1! Just like the standard 2-simplex Δ2 where ϕ and ψ live. And in general, H⊙ϕ(H) is a hyperplane with the same slope as Δn, as long as ϕi>0 for all agents. This is why the grey line segments in our example were parallel to the red line segment of valid weights; flat surfaces on the Pareto frontier map onto flat surfaces on the Harsanyi Shadow that are parallel to Δn. Interactive version here To see why this happens for hyperplanes in general, we can use the fact that ϕ is orthogonal to H at p to write down the normal equation for H. It's all of the points u∈Rn which satisfy ϕ⋅(u−p)=0 The image of H after going through the map ⊙(_,ϕ), which we can denote H⊙ϕ is all of the points u⊙ϕ where u∈H. One such point is p⊙ϕ, and in general there will be a vector I'll suggestively call δ∈Rn which is orthogonal to H⊙ϕ. This normal vector δ satisfies the equation δ⋅(u⊙ϕ−p⊙ϕ)=0 Since the Hadamard product is distributive and commutative, we know that u⊙ϕ−p⊙ϕ=ϕ⊙(u−p) Which means we can rewrite the normal equation for H⊙ϕ as δ⋅(ϕ⊙(u−p))=0 Here I needed to go back to component notation to know how I could simplify that equation further. ∑ni=1δi(ϕi(ui−pi))=0 ∑ni=1δiϕi(ui−pi)=0 (δ⊙ϕ)⋅(u−p)=0 This is great, because we also know from the normal equation for H that ϕ⋅(u−p)=0 And in fact, we know that any scalar multiple λϕ is also orthogonal to H λϕ⋅(u−p)=0 And so one family of solutions for δ comes from solving δ⊙ϕ=λϕ Which has the solution δi=λ. δ=λ→1 This line is orthogonal to H⊙ϕ, and it's also orthogonal to Δn! So H⊙ϕ and Δn are parallel in the sense that they're orthogonal to the same line. And when ϕi>0 for all agents, H⊙ϕ is a hyperplane with the same normal vector as Δn. The Harsanyi Shadow of a Corner At a corner p, the Harsanyi Shadow p⊙Φ(p) will be a subset of p⊙Δn. And when pi>0 for all agents, ⊙(p,_) is an invertible linear map that takes the standard simplex Δn to a simplex with a similarly-negative slope in all directions. Interactive version here Since pi≥0, ⊙(p,_) can't flip the sign of the slope of this simplex. Together with the results from the previous sub-section about hyperplanes, we can conclude that the Harsanyi Shadow of (P,Φ) never has positive slope. (Just like P and Δn.) The main relevance for us is that in the interior, the Harsanyi Shadow never slopes up in a way that would allow two points to lie on the same line from the origin, which would violate injectivity when we normalize it onto Δn. (That line would need to have positive slope, which the Harsanyi Shadow never has.) The Harsanyi Shadow of Curved Hypersurfaces Where P curves differentiably, these correspond to "diagonal" surfaces where p and ϕ are both changing. Interactive version here Inverting the Hadamard Product When can we invert ⊙ and recover (p,ϕ) from p⊙ϕ? When pi>0 and ϕi>0 for all agents! This leads to ψi>0 for all agents, and we saw here that this ensures that G(_,ψ) has a unique optimum among F. (And finding this optimum is exactly what we mean by recovering p.) Why is this true? Here's where we use the fact that ∂ϕi∂pi≥0, that increasing pi never decreases ϕi. In order for ⊙ to cause a collision, there would need to be two points (p,ϕ),(q,τ)∈(P,Φ) such that p⊙ϕ=q⊙τ. This corresponds to n equations that look like piϕi=qiτi Holding piϕi constant, this becomes the equation of a hyperbola. And the claim is that if pi>0 and ϕi>0 for all agents, then these equations only have one solution among (P,Φ), and it's (p,ϕ). For concreteness, suppose pi=2 and ϕi=3. Then piϕi=6, and we could try picking qi>pi, like qi=3. But this would require τi=2, which can't happen because ∂ϕi∂pi≥0; on the Pareto frontier, increasing the utility assigned to an agent can't decrease the Harsanyi weight they must have been assigned. The same problem happens if we try to assign qi<pi. The only solution is qi=pi. So for the interior of the Harsanyi Bundle (P,Φ), projecting by ⊙ is injective and we can uniquely recover (p,ϕ) from p⊙ϕ. Normalizing the Harsanyi Shadow Once we have p⊙ϕ, we can calculate p⋅ϕ by simply adding up all of its elements! p⋅ϕ=∑ni=1piϕi. This is a single number, which acts on p⊙ϕ by scaling it down to land on Δn, the hypersurface of weights that add up to 1. And that's ψ! ψ=p⊙ϕp⋅ϕ Interactive version here This normalization map Δ:⊙(P,Φ)→Δn follows the same pattern as ⊙(P,Φ); in the interior of the Harsanyi Shadow we can uniquely recover p⊙ϕ from p⊙ϕp⋅ϕ. And for those points we know we can also uniquely recover (p,ϕ) from p⊙ϕ. Normalizing the Interior of the Harsanyi Shadow is Injective Is it possible for two different points on ⊙(P,Φ) to get scaled down to the same point on Δn? This would require p⊙ϕ=λ(q⊙τ) for some λ∈Rn, where (p,ϕ),(q,τ)∈(P,Φ). Around the boundary of the Harsanyi Shadow this can happen, but it turns out it can't in the interior! As we've seen, ∂ϕi∂pi≥0, so increasing pi never decreases ϕi. This means that increasing pi always increases (p⊙ϕ)i in the interior. (This is the step that fails on the boundary when pi=0 or ϕi=0 for any agent.) But increasing the utility for one agent never increases the feasible utility available for any other agent. ∂pj∂pi≤0 when i≠j. This is why the interior of the Harsanyi Shadow doesn't contain any points on the same line from the origin. Moving from (p,ϕ) to (q,τ) would involve simultaneously increasing (or simultaneously decreasing) (p⊙ϕ)i for all agents. But increasing pi for one agent, and thus (p⊙ϕ)i, must monotonically decrease pj for all other agents, and thus monotonically decrease (p⊙ϕ)j. And similarly, decreasing pi never decreases (p⊙ϕ)j in the interior. Normalizing the Harsanyi Shadow is Surjective Given ψ∈Δn, can we always find (p,ϕ)∈(P,Φ) such that ψ=p⊙ϕp⋅ϕ This one is pretty straightforward: G(_,ψ) always has some optima, and at least one of them will be a point p∈P which is consistent with a ϕ∈Δn that satisfies this equation. Because that's the "find ψ to make p optimal according to G(_,ψ)" equation. Geometrically, this tells us that the Harsanyi Shadow of P doesn't have any holes. If we draw a line from the origin through ψ, it will hit the Harsanyi Shadow somewhere. Moving Continuously Across the Harsanyi Bundle Putting it all together, in the first part of this sequence we started by computing the weights ψ which would make a chosen point p∈P optimal according to G(_,ψ). We formalized that here by choosing a point (p,ϕ)∈(P,Φ), computing their Hadamard product p⊙ϕ, then scaling this down to land on Δn. ψ=p⊙ϕp⋅ϕ (P,Φ)→⊙(P,Φ)→Δn Going in the reverse direction, we can draw a line from the origin through ψ to a find where that line intersects the image ⊙(P,Φ), which we've been calling the Harsanyi Shadow. If ψ is in the interior of Δn, where ψi>0 for all agents, this point p⊙ϕ on the interior of the Harsanyi Shadow is unique! And from there we can uniquely recover the point (p,ϕ) in the interior of the Harsanyi Bundle, such that p is optimal according to G(_,ψ) and H(_,ϕ). In this last section, I want to describe the challenge of extending this continuity result to include weights ψ where at least one agent has 0 geometric weight, and some ways of addressing that challenge. The Challenge of Continuity Around the Boundary We've called the weight-derivation function ψ(p,ϕ), so let's call the inverse ψ−1(ψ), which corresponds to maximizing G(_,ψ) and breaking ties around the boundary somehow. Ideally, we would like this function to be continuous, so that individual utilities shift continuously as geometric weights shift. In order for that to be true, ψ−1 needs to be equal to its limit points everywhere. ψ−1(ψ)=limσ→ψψ−1(σ) We have a bijection between the interiors of Δn and (P,Φ), between weights ψ where ψi>0 for all agents and pairs (p,ϕ) where pi>0 and ϕi>0 for all agents. And so my first hope was that we could take the limit as we approach the boundary, to define the value at every point along the boundary. Unfortunately, this limit depends on how we approach corner points, in a way that makes me suspect that there's no way to way to inherit values for ψ−1 around the boundary, in a way that simultaneously Agrees with argmaxu∈FG(u,ψ) exactly on the interiorLeads to ψ−1 being continuous when its domain includes the boundary Motivating Example When multiple agents have 0 weight according to ψ, there can be many Pareto optima that maximize G(_,ψ). For example, suppose that Alice is a G(_,ψ) maximizer deciding how much money she should receive, from $0-$100. And simultaneously, how to split another $100 between Bob and Carol. Conveniently, they each derive utility equal to the money they receive. When trade-offs are linear, maximizing G(_,ψ) splits utility between agents proportionally to their weight. In particular, when an agent receives a vanishingly small, but positive, share of the geometric weight ψi, they receive a vanishingly small, but positive share of the utility ui. In this example, let's say that Alice assigns herself ψAlice=α. The claim I'll be making is that in this example, we can't satisfy those two desiderata from the previous section. We can't extend ψ−1 exactly in a way that makes it continuous. To see why, let's first consider what happens if Bob receives a vanishingly small weight ψBob=ϵ, and Carol receives the remaining weight ψCarol=1−α−ϵ. If we take the limit as ϵ approaches 0, uBob approaches $0 and uCarol approaches $100. What if the roles are swapped, so that ψCarol=ϵ, and Bob gets the rest of the weight ψBob=1−α−ϵ? Then if we take the limit as ϵ approaches 0, uBob approaches $100 and uCarol approaches $0. So along one boundary of Δn, if we inherit values according to the limit as we approach the boundary, uBob=$0 and uCarol=$100 because ψBob=0 and ψCarol>0. And along another boundary, uBob=$100 and uCarol=$0 because  ψBob>0 and ψCarol=0. What happens at the corner where these boundaries meet, where ψBob=0 and ψCarol=0? No matter what value we assign, ψ−1 must jump discontinuously here. And in fact, we can reach any split between Bob and Carol by approaching on different paths that preserve different ratios between their vanishingly small weight. (Equal weight gets them an equal split, a 60:40 ratio of weight gets them a 60:40 split of the money, and so on.) The issue is that maximizing a weighted average, whether a linear average like H(_,ϕ) or a geometric average like G(_,ψ), really does treat 0 weight differently from a small but positive weight ϵ. A positive weight makes the weighted average sensitive to increases in an agent's utility, even if only a little bit. But there can be an arbitrarily huge difference between "the payoff Bob receives if Alice assigns him 0 weight" and "the payoff Bob receives if Alice assigns him some tiny amount of weight." How to Achieve Continuity Anyway So it seems like in order to achieve continuity, we need to do one of the following: Ensure that ψi>0 for all agentsMaximize something other than G(_,ψ) One way to take that second approach would be to derive new weights β(ψ) and then maximize G(_,β(ψ)). If βi>0 for all agents, no matter what weights ψ we started with, then we can use all that work we did in this post to show that G(_,β(ψ)) has a unique optimum that travels continuously around P as we change β. If β(ψ) is continuous, then so is ψ−1(ψ). ψ−1(ψ)=argmaxu∈FG(u,β(ψ)) Ideally, we'd like β to be very close to ψ, while assigning all agents positive weight. One way to do this is to pick a very small amount of weight ϵ, and scale ψ down so that ∑ni=1ψi=1−ϵ. Then we can distribute that small weight ϵ equally among all n agents, giving them a minimum weight of βi=ϵn βi=ψi(1−ϵ)+ϵn This ensures that the sum of weights ∑ni=1βi is still 1. We can pick ϵ to be arbitrarily small, making the difference β−ψ arbitrarily small. If all we want is a tie-breaker, and we don't care about continuity, we can take the limit as ϵ approaches 0. ψ−1(ψ)=limϵ→0argmaxu∈FG(u,β(ψ,ϵ)) But for the applications I have in mind, I actually prefer continuity even if it means requiring a minimum positive weight for all agents. In terms of morality, that seems like a feature rather than a bug. It seems like the kind of feature that might get a superintelligent AI to leave us the Milky Way, even if it seizes the rest of the observable universe for its own misaligned ends. Anything we deem an "agent" probably does deserve some positive weight in our utility aggregation function. This particular formula is inspired by moral reasoning along the lines of "reserve a small amount of weight ϵ and distribute it equally among all agents, regardless of any other considerations like their ability to reciprocally benefit the decision-maker." So my current favorite approaches involve always assigning agents positive weight when it comes to G(_,ψ) maximization. Which might look like designing the weight-attribution function so that it's always positive, or it might look like padding ψ so that every agent has positive weight as far as G(_,ψ) is concerned. ^ This function isn't defined when p⋅ϕ=0, but in this case the entire feasible set F is a single point at the origin. And indeed in this case, any reconstruction function will look like p(ψ,{→0})=→0, which is continuous! ^ For topological spaces in general, the inverse of a continuous bijection isn't necessarily continuous. ^ The tangent space for a manifold isn't always a hyperplane. For example, consider a circle embedded in 3-dimensional space; each tangent space is still just a line. This section is the reason we assumed F is n-dimensional (there aren't any redundant players with constant utility), so that P is an n−1 dimensional hypersurface with tangent hyperplanes. ^ All of the convex combinations within Δn anyway. Some faces can be oriented so that if you try calculating ϕ for them, you end up with ϕi<0 for some agent. These faces are Pareto dominated, and when calculating the valid values of ϕ at a corner we can ignore convex combinations that extend beyond Δn. ^ At least the interior of this surface is an n−1 dimensional manifold. I haven't proven or disproven whether the whole Harsanyi Bundle is a manifold, but I suspect it is. P is an n−1 dimensional manifold, and so is Δn, but in order for (P,Φ) to be a manifold, it needs to always stay n−1 dimensional and never fork or collapse down to a lower number of dimensions.
2024-08-07
https://www.lesswrong.com/posts/EHRXKxk2YMKa7oGaw/why-you-should-learn-a-musical-instrument
EHRXKxk2YMKa7oGaw
Why you should learn a musical instrument
cata
I have liked music very much since I was a teenager. I spent many hours late at night in Soulseek chat rooms talking about and sharing music with my online friends. So, I tend to just have some music floating around in my head on any given day. But, I never learned to play any instrument, or use any digital audio software. It just didn't catch my interest. My wife learned to play piano as a kid, so we happen to have a keyboard sitting around in our apartment. One day I was bored so I decided to just see whether I could figure out how to play some random song that I was thinking about right then. I found I was easily able to reconstitute a piano version of whatever melody I was thinking of, just by brute-forcing which notes were which, given a lot of patience. So that was satisfying enough that I wanted to keep doing it. What I didn't know is how immediately thought-provoking it would be to learn even the most basic things about playing music. Maybe it's like learning to program, if you used a computer all the time but you never had one thought about how it might work. Many of the things I learned immediately that surprised me were about my perception of the music I had listened to for all of my life. In my mind, my subjective experience of remembering music that I am very familiar with seems very vivid. I feel like I can imagine all the instruments and imagine all the sounds, just like they were in the song. But once I had to reconstruct the music myself, it quickly became clear that I was tricking myself in a variety of ways. For example, my memory of the main melody would be very clear. But my memory of any harmony or accompaniment was typically totally vague. I absolutely could not reconstruct something to play with my left hand on the piano, because I wasn't actually remembering it; I was just remembering something more abstract, I guess. Sometimes I would be convinced I would remember a melody and reproduce it on the keyboard, but then I would listen to the real song and be surprised. The most common way I got surprised was that in my memory, I had adjusted it so that I could physically sing or hum it, even though I don't often sing. If there was a big jump up or down the scale, I would do something in my memory that sounded sort of OK instead, like replace it with a repeated note, or the same thing moved an octave, and then forget that it had ever been any other way. I found that if I was remembering something that had fast playing, I often actually could not remember the specific notes in between beats, even though I felt that I could hear it in my head. No matter how hard I "focused" on my memory I couldn't get more detail. Actually, I found that there was some speed such that even listening to the music, I could no longer resolve the individual notes, no matter how hard I paid attention or how many times I replayed it. There have been many more kinds of things I have learned since learning to play a little: Since playing music on a keyboard is a complicated physical task involving complicated coordination, I learned a lot about what both of my hands are naturally good and bad at, and what sort of things they can coordinate easily or poorly.[1]Learning the musical structure of songs that I know and trying to arrange them for piano showed me all kinds of self-similarity and patterns inside the songs that I had never had a clue about before. I could listen to a song hundreds of times and not realize, for example, that two parts of the song were the same phrase being played on two different instruments in a very slightly different way.Often I will be trying to learn to play something using one "technique" for learning and practicing it, and having a hard time, and then I master it almost instantly by switching "techniques" (e.g. by switching what I am using my conscious attention for while I try to play it, or by concentrating on a more specific hard part, or by remembering some part of the musical structure I wasn't appreciating before) which is a vivid illustration of the power of different tools I have for using my mind. I could easily go on for many pages. I think it's kind of silly that even though I have known many musicians, I don't think anybody really told me how many things I could learn by spending only a very modest amount of effort learning about music. So if you are like me, then that's my message to you. ^ While I was trying to figure out how to play with both hands, I learned a funny thing you can try. Sit down next to some objects. First, just think to yourself that you want to do one thing with your right hand, like take a sip of coffee, and another thing with your left hand, like pick up and turn on your phone. If you're like me, it's totally effortless and your hands do it with almost no conscious attention. Then, try again, but think that you want to do it but slowly. You can probably still do it with total ease and fluency, with no attention. Then try it while thinking anything involving two different speeds or timing, like "I want to slowly pick up and sip the coffee, and when I am in the middle of it, pick up and turn on my phone at a normal speed." If you are like me, your brain will basically panic and be unable to mind both timings without doing a lot of pausing and stuttering and redirecting most of your conscious attention to the task. It seems for me there is a kind of conserved "timer" resource in the brain that can only easily keep one timing at once. ↩︎
2024-05-15
https://www.lesswrong.com/posts/jK9Pzb9FfsX7Nnv4y/partitioned-book-club
jK9Pzb9FfsX7Nnv4y
Partitioned Book Club
pixx
Partitioned Book Club (working title) is a specific kind of one-off meetup where attendees coordinate to read different chapters of the same book before the meetup. The meetup consists of each read chapter getting briefly summarized by its reader(s), and then a more general discussion of the book. In a traditional book club you read a chapter a week. In a partitioned book club, instead of there being multiple weeks you have multiple guys. You get it. It only works with a very specific kind of book - namely, non-fiction books where most chapters are relatively self-contained, and you don't lose critical info by not reading a specific chapter. It also only works with a meetup of a specific size - like, between 6-15 people. If you have both of those things though, this meetup can be really fun and a great way to "read" a book in a week. It does work better if the organizer has read the book in its entirety, but this is not necessary. Obviously, you will get more out of a book if you read the entire thing by yourself. But we're doing the 80/20 thing here. How to Run An Event At least one week before the meetup[1], post an announcement post describing the book that you are reading. Give a quick summary of what the book's about, and let everyone know that they'll be reading the introduction[2], and one other chapter. In the body of the event description, link a google sheet form set to "anyone can edit", with two columns, "Book Chapters" and "Name". Tell people to put their names down besides the book chapter that they want to read and briefly summarize for the group. The intent is for people to not double up on book chapters if there are still blank spots next to some other chapters, and be able to see in real time which chapters are claimed. The first column should be populated by the organizer, and will be a straightforward list of the book chapters that are relatively self contained, from the book. The second column will be left blank, for your attendees to call dibs on chapters they feel interested in. It's completely ok for some chapters to remain blank and to be not discussed at the meetup. I like providing PDFs of the book in the group's private spaces (such as a private discord), whenever possible, but this is not necessary. When it comes to running the actual meetup, I generally split it into two halves. The first half, the chapters get briefly summarized. I've found that it's pretty important to keep a strict timer on, so your more long-winded members don't end up taking up a disproportionate amount of time for their chapters. I think 5 minutes per chapter is probably good, for most pop science books. In the second half, move on to more general discussion. I sometimes prepare discussion questions (LLMs are a great help for this if you prompt them the right way) but be okay with them not being used if the conversation just flows naturally. Books I've Run Events On Feel free to yoink text wholesale from my events if it's helpful. Legal Systems Very Different From Ours Event Link (includes description, supplementary readings, discussion questions) Everyone reads Introduction Chapters for partitioned reading: 1. Imperial Chinese Law 2. Romani Law 3. The Amish 4. Jewish Law 5. Islamic Law 7. Pirate Law 8. Prisoners' Law 10. Saga-Period Iceland 11. Somali Law 12. Early Irish Law 13. Comanche, Kiowa and Cheyenne: The Plains Indians 15. England in the Eighteenth Century 16. Athenian Law: The Work of a Mad Economist The Elephant in the Brain Event Link (includes description. No discussion questions included because I anticipated correctly that people will have plenty to discuss.) Everyone reads https://www.elephantinthebrain.com/outline.html up to Part II Chapters for partitioned reading: 7: Body Language 8: Laughter 9: Conversation 10: Consumption 11: Art 12: Charity 13: Education 14: Medicine 15: Religion 16: Politics Examples of Other Viable Books/Readings There's a chance that some books here don't work. The organizer should skim the book to figure out which specific chapters can be partitioned out usefully - it's sometimes the case that there's a few chapters at the beginning of a book that are fairly linear, or a few chapters at the end trying to tie everything together, and those are better off not being part of the partitioning process. The Art of Gathering by Priya Parker8 chapters that are all relatively self contained and iterate on the same themes.Working by Studs TerkelThis book is huge and has hundreds of interviews with different people, who talk about what they do at their jobs. It's also a great look at American life in the 70s. It's organized into nine sections called "books", I'd partition out by book and then ask people to read one or two interviews from the book they've chosen.Part I of Mythologies by Roland BarthesSeminal book in the slightly mystifying field of semiotics, which could be frustrating for a rat group to try to grok. Especially since the examples are now all 70 years out of date. But what a fun challenge that would be!Each essay is really short (like 2-5 paperback pages) and there's like 30 of them; maybe encourage people to pick two or three each.Could pair with an activity where you bring up modern ads and try to give analysis the same way.[3]Second half of Behave by Robert SapolskyIt seems like the first half of the book is fairly linear and then the second half starts analyzing specific dynamics (ingroups, morality, warmongering, etc). So it's probably a read intro plus chapter 1 and then only release second half chapters for dibs angle, similar to what I've done for The Elephant in the Brain.Likely many pop science booksSuperforecasting by Philip Tetlock and Dan GardnerThe Black Swan by Nassim TalebThinking Fast and Slow by Daniel KahnemanVery likely most readers and anthologiesEssay collections of essays by specific writers, such as David Foster WallaceAnnual collections such as "The Best American Science and Nature Writing"; "The Best American Essays"Academic readers, such as "Disputed Moral Issues", "Contemporary Debates in Bioethics", "The Transgender Studies Reader", and readers in literally any other field your little heart can dream of. Google or your favourite LLM can help you find them. Academic readers are often broken down into half a dozen parts, with half a dozen chapters in each part. I think I would assign the reader introduction to everyone, and people can call dibs on reading the intro to each part plus which ever chapter in that part that looks the most interesting to them. Readers work better than textbooks, because readers are anthology collections of essays, and essays are self-contained.Wikipedia has a list of essays by decade (sadly but understandably western-centric), but it is frustratingly then broken down by year, which I think is too specific. It could be cool to just pick a decade (or any range of years[4]) and then ask people to read the most interesting to them essay from it. The organizer should ideally do the work of compiling a list of all the viable essays for their time range beforehand and putting it up in an excel document.University course syllabuses have lots of readings and are wonderful resources imo. Google "[desired topic] syllabus" or go through your files for the ones from your favourite classes, extract all the readings (with or without the supplementary ones that no one ever actually read), and get folks to call dibs on readings the normal way. You might want to have them read the syllabus as well. I would be very happy to receive suggestions of other books or readings that can work for this format of event in the comments :D ^ ...I say this but I've gotten away with 4 days ^ For some books, you may also need to get everyone to read Chapter 1, and maybe the conclusion, but I think in general the introduction section for any non-fiction book works as the essay-length version of it, which means you don't really need to read anything else to get the gist/thesis of the book in broad strokes. ^ ChatGPT actually does this pretty well! If you feed it an image or some copy and ask it for an analysis in the style of Roland Barthes, it generally gives you something passably insightful. ^ You can do something funky with this, like, "essays from when you were 0-12 years old", "essays from when your parents were in their twenties", etc.
2024-05-12
https://www.lesswrong.com/posts/oowNan3scknsSdx2X/understanding-rationality-vs-ideology-debates
oowNan3scknsSdx2X
Understanding rationality vs. ideology debates
alenglander
[This was originally intended as a Facebook post, but it grew to the point where it felt way too long for that.] For a while now I've been trying to wrap my head around what I'll loosely call "rationality vs. ideology" debates. Very roughly, these are debates where at least at first glance one side seems to be saying that ideology is more important than rationality in some way (or that rationality is at least partly defined in ideological terms), and the other side is saying the reverse. I'm particularly interested in debates where there seem to be very thoughtful and well-meaning people on both sides. My personal intuition is strongly towards the "rationality-first" approach, at least for many such debates, so for a long time I've felt confused about trying to understand where the "ideology-first" side is coming from. The rest of this post is where I've gotten to so far in my attempts to understand this, although it still feels incomplete. Interested to hear what other people think of it. If we get a bit more fine-grained about this, it seems to me like there are several different types of debates that might be going on. Unfortunately though it's not always clear which type is actually involved in any given debate, and it can easily be several types at the same time. 1) Debates about epistemology 1a) Debates over the nature of truth Some approaches to truth say that ideology, pragmatic considerations, or the like are at least partially what makes something true or false, while others strongly disagree. Proponents of ideology- or pragmatically-based approaches might argue that all epistemic approaches are inescapably based on "ideology" of one sort or another, so explicitly tying a theory of truth to ideological or pragmatic considerations shouldn't in itself make the theory any less plausible than supposedly evidence-based approaches. (I once wrote a short essay about this in relation to religious beliefs, but I think a lot of that essay applies to other types of beliefs as well - see there for more details.) 1b) Debates over normative belief formation Some think that even if truth is maybe not technically determined by ideological or pragmatic considerations, it might still be ok or even preferable in some cases to form our beliefs or psychological dispositions based on these kinds of considerations. Basically, it's ok or even preferrable in some cases to get ourselves to believe something other than the truth. (Proponents of this might point to the fact that everybody seems to do this anyway, to some degree. See for example Robin Hanson's The Elephant in the Brain, among many other such books.) 1c) Debates over norms of inquiry Some might think that even if we shouldn't deliberately try to get ourselves to believe something other than the truth, we should still use ideological / pragmatic / similar considerations in considering how to inquire after the truth. For example, maybe some topics are just no-go zones where we should not inquire beyond a certain point because it might lead to bad beliefs. Note that the types of debates above, particularly 1b and 1c, might be different on a personal vs. societal level. Maybe we think that for certain individuals it's ok to pursue the truth wherever it may lead, but on a societal level most people should have ideology trump cold-hearted truth-seeking. There might also be a difference between different societal groups here - e.g., scientists vs. laypeople. 2) Debates about discourse norms 2a) Debates over what can / should be discussed out loud Maybe we can agree on what the technical truth is and what a rational person should believe in private, but we might still debate whether such things should be said out loud. This might depend on the audience, for example technical journals vs. social media, or in private conversations vs. in a public talk. This is also related to debates over paternalism - how much should we let the general public think through things for themselves vs. telling them what to think. 2b) Debates over rhetorical norms Is it ok or even preferable to use rhetorical tricks to convince others that our ideology is right? Can we use insults or ad hominem attacks? Can we fudge the truth or even outright lie? 2c) Debates over enforcement of discourse norms Maybe we agree that certain things shouldn't be discussed in a certain forum or in a certain way. But what do we do if someone else tries to discuss it anyway? Should we talk to them politely and tell them to stop? Should we yell them down? Cancel them? Maybe even try to jail them? 3) Object level debates 3a) Debates over ethics and values Sometimes the debate seems to be about normative ethical theories - consequentialism vs. deontology vs. egalitarianism vs. decolonial ethics theories, etc. Sometimes the debate is more on the applied ethics level. Sometimes it might even be traceable to the meta-ethical level. See also Moral Foundations Theory and related approaches to understanding the origins of ethical debates. 3b) Debates over facts and evidence Sometimes upon inspection the debate turns out not be about rationality vs. ideology after all, and it's actually a debate over what the evidence says and/or how to interpret it. 3c) Debates over context or impact Maybe we agree that in certain contexts we should prioritize ideology over rationality in one way or another, but we debate whether this particular instance is in fact a case of one of those contexts. For example, maybe both sides think that certain unusual discourse norms apply when a group is threatened with actual violence, but they disagree over whether that is in fact the case here. Similarly, maybe we agree that if a certain epistemological approach or discourse norm would lead to result xyz then we should go with some other approach, but we disagree over whether the epistemological approach / discourse norm will in fact likely lead to xyz. For example, perhaps we agree that if allowing people to talk publicly about a certain topic would lead to innocent people dying, then we shouldn't allow people to talk about that topic publicly. But we can of course still debate whether letting people talk publicly about the topic is likely to lead to innocent people dying. (Note that 3c is really a subset of 3b - debates over facts and evidence, but it feels common enough and important enough to separate out on its own.) 4) People talking past each other 4a) Norms around charitable interpretations Even if both sides agree on epistemological approaches and norms of discourse, that doesn't necessarily mean they will try all that hard to understand the other side's point of view or to give charitable interpretations to their opponents' statements. They may even agree (on an abstract theoretical level, at least) that neither side is obligated to try that hard to understand their opponents' point of view or to charitably interpret their statements. 4b) Failures of interpretation Really understanding a very different point of view can be really, really hard. I am psychologically inclined towards trying to understand very different points of view than my own (hence this post!), and I've been trying to do so for many years, but I still often find it extremely difficult. It should be no surprise then that people often fail at this. It's a rare person who can pass an Ideological Turing Test with flying colors. 4c) Merely verbal disputes David Chalmers has a famous essay where he asks whether much of philosophy is actually "merely verbal disputes" where different philosophers are using subtly different definitions for the concepts they're discussing and therefore talking past each other. This of course can easily apply to disputes outside of philosophy as well. See also Eliezer Yudkowsky's "Taboo Your Words". (As an aside: From my admittedly limited reading on the topic, it seemed to me that a lot of the debates about whether philosophy is largely merely verbal disputes might come down to different philosophers using the term "merely verbal dispute" in subtly different ways. In other words, I suspect that they're having a merely verbal dispute about whether much of philosophy is merely verbal disputes!) 4d) Different simulacra levels Zvi Mowshowitz describes four "simulacra levels" that people might be using when they say something like "x is true". Very roughly: Level 1 is that when they say "x is true" they actually mean to convey their own belief that x is in fact true.Level 2 is that they may or may not themselves believe that x is true, but they want to get the audience to believe that x is true.Level 3 is that when they say "x is true" they don't really care whether or not x is in fact true, and they don't care whether their listeners come to believe x is true, but instead they are just signalling their tribal affiliation with those who typically say things like "x is true."Level 4 is that they're not really trying to convey anything at all with the statement "x is true" and instead they're just repeating words that they or their audience seem to like - the statement "x is true" has to do with the vibes or pattern-matching of the words "x is true" rather than any meaning that might be conveyed by those words. As Zvi describes, many "debates" seem to come down to one side talking on one simulacra level and the other talking on a different level. Often but not always this is because one or both sides are misinterpreting which level(s) the other side is talking on. [4e) Other communication failures] Miscommunications can happen for lots of reasons. Maybe one side wrote up their views in a bunch of confusing or poorly written articles that the other side just didn't fully understand. Maybe one side failed to clearly explain their side at all, or the other side didn't have the time or patience to read through the details to understand their opponents' viewpoints. 5) The cynical take And then of course there is the view that one or both sides aren't actually being rational even when they say they are, even when they appear to be thoughtful and well-meaning, and sometimes even when they honestly believe they're being rational. Everybody is subject to cognitive biases, of course, but there are also all sorts of other biases - biases due to money, or prestige, or peer group pressures, or any other number of things. And sometimes people can appear to be sincere rational truth-seekers on the outside but on the inside they're secretly and knowingly pursuing some other goal. Other related notions that I'm not sure where exactly they fit into this breakdown, if they do: Analytic vs. Continental styles of philosophy: It feels like this is related to some of the philosophical debates, but it's not entirely clear to me what the relationship is.Conflict theory vs. mistake theory (see especially Scott Alexander's essay on this, as well as his list of highlighted follow up comments)High decouplers vs. contextualizers / low decouplers (see for example here)
2024-05-12
https://www.lesswrong.com/posts/iLTtkzATEt6nSgeux/inner-optimization-mechanisms-in-neural-nets
iLTtkzATEt6nSgeux
Inner Optimization Mechanisms in Neural Nets
programcrafter
I believe that current architecture of neural networks supports mesa-optimization: generally speaking, searching across some vectors in order to select one of them, which will be most useful for producing an answer. Three inner optimization ways are already possible, and most likely there will be new ones. Optimization on scaffolding level.Iterative diffusion models (e.g. Stable Diffusion) are very close to this, modifying intermediate pseudo-image each pass.LLMs with scratchpad can write down some amount of text - including plans, for instance - then compare them to calculate output.Custom "neuron" functions embedded into the neural network, which would perform search (for instance, generate arbitrary incoming data to subnetwork, infer output and select the best). To my knowledge, this way is not currently used in any public models (but name "Q*" is pretty suspicious, reminding of "A*" - path searching algorithm in graph).Option selection based on nonlinear activation functions - mostly, ReLU. If subnetwork has inputs A and B, it's pretty easy to output max(A;B). Additional information can be selected either by squeezing it into input numbers, or by building a slightly bigger subnetwork. Actual subnetwork design for point #3 Let's suppose the neural network consists of layers, as common now - composition of matrix multiplication and activation function - and activation function ReLU(x)=max(0;x). A, B - inputs (A≥0,B≥0);C=max(0;A∗1+B∗−1), D=max(0;A∗0+B∗1)E=max(0;C∗1+D∗1) A<B⟶C=0,D=B⟶E=B=max(A;B) A≥B⟶C=A−B,D=B⟶E=A=max(A;B) This construction can be extended to select maximum out of 2k/2 options in k layers; possibly, even 2k−1 options. Conclusion I believe that inner optimization might exist in current neural networks, and that it can be used as evidence to approximate what future AIs can do at what levels of capability.
2024-05-12
https://www.lesswrong.com/posts/gwavKiKXf97NLNC2n/building-intuition-with-spaced-repetition-systems
gwavKiKXf97NLNC2n
Building intuition with spaced repetition systems
g-w1
Do you ever go to a lecture, follow it thinking it makes total sense, then look back at your notes later and realize it makes no sense? This used to happen to me, but I’ve learned how to use spaced repetition to fully avoid this if I want. I’m going to try to convey this method in this post. Much of my understanding of how to create flashcards comes from “Using spaced repetition systems to see through a piece of mathematics” by Michael Nielsen and “How to write good prompts: using spaced repetition to create understanding” by Andy Matuschak, but I think my method falls in between both, in terms of abstraction. Finally, I want to credit Quantum Country for being an amazing example of flashcards created to develop intuition in users. My method is more abstract than Michael Nielsen’s approach, since it does not only apply to mathematics, but to any subject. Yet it is less abstract than Andy Matuschak’s approach because I specifically use it for ‘academic subjects’ that require deep intuition of (causal or other) relationships between concepts. Many of Matuschak’s principles in his essay apply here (I want to make sure to give him credit), but I’m looking at it through the ‘how can we develop deep intuition in an academic subject in the fastest possible time?’ lens. Minimize Inferential Distance on Flashcards A method that I like to repeat to myself while making flashcards that I haven’t seen in other places is that each flashcard should only have one inferential step on it. I’m using ‘inferential step’ here to mean a step such as remembering a fact, making a logical deduction, visualizing something, or anything that requires thinking. It’s necessary that a flashcard only have a single inferential step on it. Anki trains the mind to do these steps. If you learn all the inferential steps, you will be able to fully re-create any mathematical deduction, historical story, or scientific argument. Knowing (and continually remembering) the full story with spaced repetition builds intuition. I’m going to illustrate this point by sharing some flashcards that I made while trying to understand how Transformers (GPT-2) worked. I made these flashcards while implementing a transformer based on Neel Nanda’s tutorials and these two blog posts. Understanding Attention The first step in my method is to learn or read enough so that you have part of the whole loaded into your head. For me, this looked like picking the attention step of a transformer and then reading about it in the two blog posts and watching the section of the video on it. It’s really important to learn about something from multiple perspectives. Even when I’m making flashcards from a lecture, I have my web browser open and I’m looking up things that I thought were confusing while making flashcards. My next step is to understand that intuition is fake! Really good resources make you feel like you understand something, but to actually understand something, you need to engage with it. This engagement can take many forms. For technical topics, it usually looks like solving problems or coding, and this is good! I did this for transformers! But I also wanted to not forget it long term, so I used spaced repetition to cement my intuition. Enough talk, here are some flashcards about attention in a transformer. For each flashcard, I’ll explain why I made it. Feel free to scroll through. Examples I start with a distillation of the key points of the article. I wanted to make sure that I knew what the attention operation was actually doing, as the blog posts emphasized this. When building intuition, I find it helpful to know “the shape” or constraints about something so that I can build a more accurate mental model. In this case, this card helped me visualize some sort of probability distribution when thinking about attention. This was really useful since it connected to some of my previous cards about (cosine) similarity. I was able to put attention in the context of just comparing vectors for similarity, which made it feel much more real to me. I wrote these after realizing that I would often get dot and cross product mixed up and wanted to stop. Sometimes, even small things like this have a big impact on my intuition. I knew I needed to get the basics right. I included this because I was slowly building up to the general formula for attention, piece by piece. I knew I needed to understand how attention worked at a low level to gain intuition for it. Just carding the formula wouldn’t work. I wanted to make sure that I understood where the Kmatrix was coming from. Again, this is slowly building up to the general attention formula. I made myself generalize from the previous flashcard to this one (vectorized version). It’s really important to have multiple flashcards point at the same topic in slightly different ways to gain intuition. I wanted to fully understand this version of the formula, so I made myself know all the parts. To go even deeper, I wanted to visualize the shapes of the matrices because I felt that would help me gain intuition, so I did that on this flashcard. Next, I was still working my way towards the general attention formula, so I asked myself to vectorize it even more. Now I had a path towards A I needed to visualize this of course. I made this flashcard to remind myself that we need to normalize attention scores with softmax. The visual aid helps because it showed me the shape of the previous computation, so I could visually see that the vectors on the right should be normalized. I had a flashcard above that said that attention scores need to be normalized (∑iai=1), so I built on that. I’ve found that interconnected flashcards build intuition best. Now I generalized the softmax to the vectorized version of attention with multiple queries. This generalization is important to help me build intuition. Now I’m finally adding the intuition for the rest of the attention formula. Finally! The full attention formula! It’s glorius and I understand every part of it. I add this one in to make sure I get intuition for why we scale. If I was making this card again, I would actually make two. This one would be a verbal explanation and then another card would ask me to explain the nonlinearity with a numerical example. I have more flashcards about attention, but I’m going to stop here because I feel these are fairly complete and self-contained (I go more into linear algebra that I needed to brush up on and self-attention). After doing these flashcards, I have significantly more intuition for how attention in Transformers works! I could explain it to someone with little preparation and derive it from first principles. I hope you’ve enjoyed reading this, and I hope that it will help you make better flashcards in the future. I’m considering live-streaming (or just making a video) of myself doing this on a totally new topic to show how I do this in real-time, since it works so well for me. Let me know if you think this could be helpful.
2024-05-12
https://www.lesswrong.com/posts/CoqFpaorNHsWxRzvz/what-comes-after-roam-s-renaissance
CoqFpaorNHsWxRzvz
What comes after Roam's renaissance?
itay-dreyfus
A new year has come. It's 2024 and note-taking isn’t cool anymore. The once-blooming space has had its moment. Moreover, the almighty Roam Research isn’t the only king anymore. The hype is officially over. At this time of year, when many are busy reflecting on the past year while excitingly looking into the future, I realized it's a good opportunity to look back at Roam’s madness timeline. The company that took Twitterverse and Silicon Valley by storm is now long after its breakthrough. Roam was one of those phenomena that happen every other few years. Its appearance in our lives not only made the “tools for thought” niche fashionable. It marked a new era in the land of note-taking apps. In conjunction with a flourishing movement of internet intellectuals[1], it sparked a tsunami of new tools. Followed by a cult of believers, Roam's growth was meteoric. It quickly joined the exclusive million ARR club and raised money from notable investors while reaching a soaring valuation. It was living the dream. In the post-mortem of the note-taking mania, Roam’s hype was replaced by dramas, scandals, and rants questioning its future. Although Roam is still very much alive and kicking, its legacy can already be seen. The Roam effect 2020 is best known for the breakout of a historic pandemic. In the parallel tech universe, it was also the year when note-taking apps went off the charts, and not only by user surging. The buzz of Roam ignited a fresh discourse among productivity enthusiasts. In the prehistoric era before Roam, the green giant elephant of Evernote had long dominated the somewhat sleepy note-taking space. The awakening moment came shortly after Notion entered the world, capturing the attention of an industry. After we’ve learned how to build a second brain[2] in Notion, it was time to move forward to more advanced concepts. The rise of Roam popularized quirky terms like Memex, and Zettelkasten to a broader audience. Apart from Roam’s rapid growth, it was the beginning of a golden age for PKM fanatics. A rise of internet intellectuals brought topics like Evergreen Notes, and Digital Gardens to the forefront. The small niche soon turned into a vibrant community. At one point it seemed everyone had become zealous note-takers. Roam, the community's beloved, sparked dual engagement, leading to its discovery and reshaping the note-taking space. Outlining the outline Taking notes usually happens in the form of bullet points. It's one of the most effective ways to extract thoughts into a written or typed word. Whether during a meeting or a self-brainstorming session, it's likely to find yourself jotting down notes in a bullet list. The bullet point doesn’t need an introduction. It's an ancient convention for everyday tasks like to-dos and grocery lists. However, it went under the radar in modern software, as Jordan Moore tweeted once: I've been pondering on the reasons why. I figured one might be related to its unappealing shape: plain, boring, and repetitive. A bullet point feels like a raw material, an atomic unit: Atoms are the basic building blocks of all matter. Each chemical element has distinct properties, and they can’t be broken down further without losing their meaning. Brad Frost, Atomic Design Methodology Modern design thinking (sorry, not this) would probably try hard to manipulate and overdesign the bullet point pattern until it embodies legitimate 21st-century aesthetics. Its wide acceptance across many communities is completely dissonant with mainstream design culture. It’s not surprising that Roam had its first baptism of fire within niche researchers’ communities and not in tech communities such as designers or seasoned SV founders. Popular document-based tools like Notion and Coda have built their products with a visual approach. Notion popularized not only the .so domain but also introduced a new way of writing and creating web pages. Prompting a command menu by typing "/" to insert built-in widgets like image galleries or rich data tables now seems ubiquitous in software across many fields. From conducting research to creating forms online—tools like Tally, Linear, Craft, and Dovetail are only a few that joined this movement in recent years. Although borrowing the same slash-to-command feature, Roam embraced a more unconservative approach. It replaced buttons with [[weird syntax]], repurposed the sidebar, and popularized the term “graph” in a non-database context. As Dan Shipper put it: The underlying genius of Roam is that it is structured not like a tool, but like a programming language. The Fall of Roam Indeed Roam is very much inspired by a niche programming language called Clojure. Furthermore, it's built with it. It's a flexible, small syntax language imparting a plasticine feeling. One can achieve many things using a small amount of building blocks. Much like Roam. Over the past few years, we’ve seen a growing number of productivity and note-taking apps. Like Roam, they are all document-centric, with an editor at the heart of their products. Yet, they reside in a distant corner of the matrix. Notion and Co. are all focused on workplace collaboration like task management and shared knowledge bases, packaged in a multiplayer experience. In the pure field of note-taking apps, there are Bear and Drafts which take a more classic vibe, like Apple Notes. On the other hand, Roam focuses heavily on the text experience. Designed for personal use, it provides a unique freeform experience. At its core, Roam is an Outliner—a bullet list structured software. Exploring the origins[3] Read more: 1) Yury Molodtsov, The Evolution of Outliners 2) Wörtergarten, A history of outliners of Outliners goes way back to the 1980s when Dave Winer worked on ThinkThank. In contemporary software, Workflowy and later Dynalist are seen as evangelists for popularizing the outlining system. Despite its long history, Outliners remained a tiny niche. Roam's extreme minimalist design might look simple on the surface. It suggests a simple, even dull interface to work in. As with any other Outliner, its main component is a bullet point. Over the last decade, we've seen "complex" design patterns evolve. The infinite canvas, chatbot interface, and even the command line have become omnipresent in today’s tools. Compared to bullet points, these patterns are more appealing: sleek, and playful to use. Interacting with an interface through the act of drawing, or chatting might be more satisfying than typing bullet points over and over again. These patterns offer instant and rich feedback whereas bullet points are less immersive. One of the things Roam excels at is remixing ideas—turning old ideas into modern art. A prime example is the revival of bi-directional linking, an idea first described as “associative trails” in Vannevar Bush's 1945 essay “As We May Think”. In his work, Bush imagined the Memex, a futuristic device allowing individuals to store, access, and link information. The Memex would serve as a personal knowledge base through associative indexing: The owner of the memex, let us say, is interested in the origin and properties of the bow and arrow… First he runs through an encyclopedia, finds an interesting but sketchy article, leaves it projected. Next, in a history, he finds another pertinent item, and ties the two together. Thus he goes, building a trail of many items. Occasionally he inserts a comment of his own, either linking it into the main trail or joining it by a side trail to a particular item. When it becomes evident that the elastic properties of available materials had a great deal to do with the bow, he branches off on a side trail which takes him through textbooks on elasticity and tables of physical constants. He inserts a page of longhand analysis of his own. Thus he builds a trail of his interest through the maze of materials available to him. Vannevar Bush, As We May Think The design of the Memex, 1945 A decade and a half later, Ted Nelson began working on a hypertext system, known as “Project Xanadu”. Influenced by Bush's work, Nelson prototyped a digital library where all kinds of documents could be found, anywhere in the world. Unlike Bush, Nelson’s idea was to create a collective repository of information rather than a personal one. In Nelson’s version, associative indexing was called "zippered lists", or more famously “transclusion”: These zippered lists would allow compound documents to be formed from pieces of other documents Project Xanadu, Wikipedia OpenXanadu, 2014 While being influential in the development of the World Wide Web, both projects fell short of their original visions. In hindsight, they might be considered instances of technology being ahead of time. Nevertheless, it had to take a few more decades for the same idea to resurface again. In the Roam version, associative indexing is called bi-directional linking, and it’s an important core part of the experience. Inspired by nvALT, Roam borrowed the notation of [[double-bracket]] to link between entries. The experience is akin to writing a self-dictionary, where each entry gets a unique interpretation. Anything that goes between the brackets automatically becomes a link to a standalone page. Each page references every instance of the same “dictionary entry”. This workflow enables resurfacing entire or partial entries. Roamers might add new comments, thoughts, or contexts as they remix existing pages or paragraphs into new entries. A modern Memex. Courtesy of Roam Research On nonlinear-ism Traditionally, modern software is designed in a linear structure. Notably known as the “workspace”— a popular file cabinet structure, that reminds the parent-child relationship. Working in this structure conveys a feeling of playing a “pass the parcel” game. Every time you open a folder, the next one is revealed. All this makes sense, as linearity is a human structure. From supermarket shopping to life skill courses, experiences are designed in a linear, progression path. The common streak pattern can be seen from afar in mediation apps, online coding courses, and learning apps. Courtesy of Headspace, Codecademy, and Scrimba Additionally, software follows a structured sequence. Starting with a sign-up and onboarding flow, such experiences are designed to walk the customer through a defined path. While trying to avoid the “blank page problem”, they become schematic. And yet, Roam had challenged this approach. After signing up, it seems like you get dropped into the deep water without any buoy[4]. Although learning Roam is a gradual journey, its opinionated editor design features only a daily-titled entry on a blank canvas. It's the “blank page problem” in its all glory. Unlike most software, Roam is non-linear by nature. The fundamental essence of Roam is the ability to log entries and reference them over time. Roam acts as a time machine, allowing exploration across past, present, and seemingly future entries. The Roam workspace is called a knowledge graph, and it takes a different approach. There’s no top-down hierarchy. There are no nested folders. Entried turns to pages, which live on the same level. Pages are treated as nodes, and their relationships are determined by how they are linked to each other. A visualization of a veteran Roam graph. Can you spot the nodes? These nodes become overwhelming very fast. Piles of texts get stacked and pages become scattered, conveying a sense of losing control. I remember this odd feeling just from visiting Roam's homepage for the first time. At the time, taking notes in the Roam-way required adapting a new mental model. Learning the formatting and syntax was challenging, even for tech-savvies. Roam might have chosen a more standard onboarding to ease that pain, but I suspect grasping Roam is more of a cognitive rather than a usability barrier. Leaving the “I need to first understand this product's benefits” approach outside the scope might be derived as a result of prioritization. However, it did fuel the FOMO with a burst of tutorials, courses, and workshops of live onboarding sessions. Mastering Roam involves a daunting learning curve, not just in terms of the hours invested but in the depth and complexity of the knowledge required. In that sense, Roam reminds the process of learning a new skill, rather than a product. It involves understanding many new concepts and acquiring knowledge. It demands patience, more than a simple onboarding process of a few seconds or minutes. After the “fall” Roam popularized the networked thinking method, transforming it from an unheard-of niche to a widespread practice. It led people to switch from the librarian to the gardener mode, advocating a manifesto switching away from Evernote. Roam’s impact echoed through many different spheres. The overall excitement around it and its un-boarding led people to become advocates. Nat Eliason created the famous "Effortless Output" course and sold it like hot-(expensive)-cakes at the time. Before productivity gurus like Ali Abdaal and Thomas Frank began making tutorial videos, people like Shu Omi has leveraged Roam's rise to start a YouTube career. The rumor about an exciting new tool quickly spread. It had fulfilled the dream of any Silicon Valley-based startup: becoming recognized by the upper class. Kevin Rose, Patrick Collison, Venkatesh Rao, Erik Torenberg, Jeff Morris Jr. , and visakan veerasamy were among the first investors and customers. However, its success created a vacuum. As a company that drew so much attention, it developed high expectations from the vast crowd. Once Roam started to feel stale, fans turned into vocal critics. Rants have been discussing Roam's performance, product development, and even its co-founder's personality. A massive influx ensued as people abandoned the allegedly sinking ship to find others to jump onto. Ironically, it was Roam that paved the way for alternatives. As soon as Roam gained traction, it proved there was room for more networked thinking tools. A growing list of Outliners integrated with bi-directional linking started to appear like mushrooms after the rain. Many seized the opportunity to build alternatives focused on different approaches. A landscape map can be drawn out only from a few of them: Athens Research (RIP), and Logseq took the open-source direction. Reflect focused on trendy UX and more advanced features. Obsidian was built with a local-first principle, while Subconscious has taken the blockchain/protocol approach. Long live the bullet list! Yet, the decline of Roam's popularity isn't tied only to its seeming lack of growth or scale. In other posts writers Casey Newton and Dan Shipper describe a shared experience. It seems to be a misalignment between the tool and its premise: But the original promise of Roam — that it would improve my thinking by helping me to build a knowledge base and discover new ideas — fizzled completely. Casey Newton, Why note-taking apps don't make us smarter At least for me—and most of the people I know—we got a garbage dump full of crufty links and pieces of text we hardly ever revisit. Dan Shipper, The Fall of Roam Even famous Nat Eliason once tweeted that he went full circle back to analog notes. From a personal standpoint, I can relate to this notion. I’ve never been an avid note-taker. The least I do is open the Apple Notes app or sketch down ideas and wireframes in my sketchbook. Finding a systematic way to capture thoughts and ideas was never my thing. It always seemed too rigorous to me. And as a non-university graduate, methods like GTD or PARA sound like academic courses I should avoid. How smart are you? I did spend a few long months using Roam. Similarly to Shipper’s experience, I ended up with a beautiful mess of a gigantic amount of information. However, as I mentioned before, figuring out how to use Roam requires a mental adaptation. Navigating a sea of text blocks isn’t a fit for everyone. I also remember that humanities and STEM subjects are two different expertise, for two types of people. When software lacks a soul I rant quite a lot about the current state of product design in tech. It’s one of the topics that most interest me. It’s also a big source of motivation for writing this very own publication. Finding a soul as a product today seems rare in software. The rise of CSS frameworks like Tailwind and others has lowered the bare minimum entry for making a good-looking design. It made life easier but at the same time seemed to degenerate people’s creativity. Alongside trends that come and go, many products and websites look very much the same. More profoundly there’s this idea of smallness I adore, once brought to my attention by designer Ben Pieratt: What I’ve noticed since leaving Svpply is that other industries treat their 1% differently. In the fashion or magazine industries for instance, they give the creative output of their star members their full attention. The 1% is the whole thing. There’s no open invitation to contribute content to Monocle. With hindsight, I feel the reason Svpply never grew into anything substantial is because we misread the opportunity. Smallness was the steam that drove our engine and we opened the gasket. The familiar social network pattern is to aspire to convert the whole world into a member. We think there’s room for a different approach. A social network that’s both niche and healthy. A social network that’s more magazine than platform. Ben Pieratt, A Note on Smallness The desire to build an empire, instead of a small village is one of the biggest evils in tech. For many startup founders, the holy grail is to achieve the hockey stick—an ethos that has long been spreaded by VCs and tech veterans. User declining might be a negative metric for showing in investor updates. However, in some cases, such a scenario can be quite healthy for a product to sustain. Roam’s fall may not be that genuine after all. I wonder whether it’s just the narrative that plays into the hands of VC-like minds and traditional publications that chase after these kinds of “failures”. Looking from a distance of 12,000 km might be deceptive, but the smallness thesis seems to apply to Roam[5]. Eliminating the noise is a good quality for any software, especially after a super-hype that everyone wants to be a part of. It only makes sense that not all of Roam’s early adopters were the right audience: even the greatest evangelists who seemingly made a fortune out of the hype, or those who were most recognized with it. Backlinks might have become a convention that’s here to last, but writing a {{query: {or: [[using]] [[components]] }}} isn’t for everyone. But that’s largely part of Roam’s soul. Productivity aficionados and note-taking enthusiasts, particularly in the Roam style are a very specific niche. In a recent post, members of the Roam community praised the “lack of new features”, or “it still looks like 2020” as a feature, not a bug[6]. It seems that Roam found its true believers, and not just those who once purchased a Believer plan and canceled it. Roam's community may have shrunk, but it remains strong. Despite new successors having taken the lead in the ever-lasting competition of note-taking apps, Roam remains unique. From day one it didn’t look like a Tailwind-themed website. It didn’t incorporate big drop-shadow buttons or trendy animations. Others might be flashier, elegant, or shinier, but Roam’s uniqueness is reflected by how it doesn’t “fit” the conventional design model. Conor, Roam’s co-founder once summed up this view in response to an unsolicited redesign of Roam’s homepage: Link to tweet From the beginning, there was something about Roam’s simplicity. It wasn’t been aligned with what people used to see. The monotony of its homepage and “boring” colors reflected a sense of boredom. It felt naive but in a good way. It didn’t look like a copycat. Instead, Roam was, and still just being itself. ^ People like Maggie Appleton, Tom Critchlow, Andy Matuschak, and Anne-Laure Le Cunff ^ The term was coined by Tiago Forte ^ Read more: 1) Yury Molodtsov, The Evolution of Outliners 2) Wörtergarten, A history of outliners ^ The static, floating "?” icon is available, but it doesn’t serve as a guiding tour toward a defined path. ^ I’m well aware of the fact Roam is a VC-backed company. However, from the outside, it seems that despite being “driven” by other interests, it takes a different approach, which is not growth at all costs. ^ Is there an alternative to constantly adding endless features? Can software be designed to operate without daily updates, similar to programming languages?
2024-05-12
https://www.lesswrong.com/posts/zCN5KQju2AXr8Y7Bj/the-natural-selection-of-bad-vibes-part-1
zCN5KQju2AXr8Y7Bj
The Natural Selection of Bad Vibes (Part 1)
Kevin Dorst
TLDR: Things seem bad. But chart-wielding optimists keep telling us that things are better than they’ve ever been. What gives? Hypothesis: the point of conversation is to solve problems, so public discourse will focus on the problems—making us all think that things are worse than they are. A computational model predicts both this dynamic, and that social media makes it worse. Are things bad? Most people think so. Over the last 25 years, satisfaction with how things are going in the US has tanked, while economic sentiment is as bad as it’s been since the Great Recession: Meanwhile, majorities or pluralities in the US are pessimistic about the state of social norms, education, racial disparities, etc. And when asked about the wider world—even in the heady days of 2015—a only 6% of US respondents agreed that the world was “getting better”; 94% said “neutral” or “worse”. So: the vibes are bad. Well, are the vibes right? It’s unclear. There’s a chorus of chart-wielding optimists who repeatedly tell us they’re not. Instead, they say that on most of the ways we can measure, things are better than they’ve ever been. Here’s Barack Obama: “If you had to choose one moment in history in which to be born, and you didn't know in advance whether you were going to be male or female, which country you were going to be from, what your status was, you'd choose right now.” The easiest way to make this case is with worldwide trends. Over the last 40 years (not to mention the last 200), global rates of extreme poverty, child mortality, maternal mortality, and child labor (and I could go on) have all fallen: Meanwhile, life expectancy and GDP have risen: Okay. But what about things in the US? The chart-wielding optimists point out that—despite the recent “vibesession”—(inflation-adjusted) median wages are up, income inequality is down, and we seem to have achieved a “soft landing”. Indeed, the chart-wielding optimists have charts that directly address the gap between the vibes and (their measures of) reality. They cite an “optimism gap”: for decades, people have tended to say that things are going well for them personally, even as they become increasingly pessimistic about their nation and the world. For example, if we overlay the above chart showing people’s satisfaction with how things are going in the US as a whole against how things are going for them personally, we see that 4 times (78% vs. 20%) as many people are satisfied with their personal situation as with the national one: Of course, this might just be due to inequality. Could it be that most people are aware that things are going badly for 22% of people, and that’s enough to make them think things are going badly in the nation? Maybe. But there’s also direct evidence that people over-estimate how badly things are going for their compatriots. In 2017, people were asked both (1) whether or not they were happy, and (2) to estimate what proportion of their compatriots would say that they were happy. If their estimates were accurate, then plotting average self-reported happiness (x-axis) against average estimate of happiness (y-axis) would produce a diagonal line. Instead, in every country studied the average estimate was way below the average happiness. People report being quite happy on average (the data are on the right of the chart), but their estimates for how happy most people are are quite low (the data on the bottom half of the chart). To put a fine point on it: the average estimate for happiness in the the most-optimistic country (Norway) was 60%, while the rate of happiness in the least-happy country (Hungary) was 69%. People really do under-estimate how happy others are. Who’s right? So who’s right—the down-to-earth pessimists, or the chart-wielding optimists? And regardless, what explains the optimism gap? Intuition (and vibes) probably make you suspicious of the optimists. If things are really so great, how could we all have missed it? Either their charts are missing something, or we’re all being horribly biased and benighted in our outlooks. Right? Wrong. There’s a natural explanation for why we’d rationally be overly pessimistic—i.e. why we’d tend to over-estimate how bad things actually are. Start with an anecdote. Recently I had dinner with two close friends—Andrew and Luke. It’d been a few months, and we had a lot to catch up on. Both were in new relationships. Two things quickly became clear: Andrew’s was going well; Luke’s was not. Take a guess: which relationship did we spend more time talking about? Luke’s, obviously. This is familiar: good relationships usually generate short and sweet conversations—there’s not that much to talk about. Bad relationships can generate endless conversation—there’s so much to complain about, troubleshoot, and plan around. This generalizes. Conversation topics are not random samples of the days’ events. The point of (many) conversations is to solve problems. So it makes sense to spend more time and energy talking about problems (bad news) than positives (good news). Upshot: even when most of the news is good, we should expect conversation topics to be disproportionately negative. Hypothesis: maybe this explains the optimism gap. Maybe an orientation toward problem-solving leads people to (rationally!) talk more about problems, which in turns leads them to (rationally!) over-estimate how many problems there are. I’m going to build a simple agent-based model of this, showing how: Part 1 (this post): as the number of possible conversation topics increases—for example, with the rise of the information age—excess pessimism should increase. Part 2 (next post): when these dynamics are embedded within a social network—for example, with the rise of social media—excess pessimism will get even starker. The model Imagine a simple world. Some number of things (“news items”) happen every day. Each news item falls into one of two categories—a problem or a positive. Positives are good: Aydin got his dream job; Gini—a happy, healthy baby—was born to two loving parents; and so on. Problems are bad: heartbreak, illness, death, and all that. Suppose our simple world is great: 90% of the news items are positives; only 10% are problems. Also suppose that every day, each person gets a random sample of the news. Every afternoon, they meet up with their friends and choose one topic to share to the group. How do they choose what to share? Start simple. Suppose everyone is an annoyingly-unfun do-gooder. No gossip or complaining or humble-bragging. Their only goal in conversation is to make things better—to spread good news that’s helpful, and to solve problems when it’s possible. Different news items have different expected values for being shared—think of this as how likely it is that a benefit (like a solution) will come from talking about the topic. There’ll obviously be lots of variance—talking about your new socks is unlikely to make anyone’s day; talking about the new coffee shop just might. I’ll assume that each topic has a random probability of leading to a benefit if shared (probabilities drawn from a beta distribution). Key assumption: on average, problems are more likely to yield a benefit if shared than positives. For example, here are the distributions of benefit-probabilities when—on average—problems are 20% more likely to yield a benefit: What this says is that (1) the mean positive has around a 40%-probability of yielding a benefit (peak of blue curve), (2) the mean problem has around a 60%-probability of yielding a benefit (peak of red curve), and (3) there’s quite a bit of variance around those means. This captures the intuition above that problems are more likely to have solutions—we might be able to help Luke with his relationship—so are more often worth talking about. How does conversation proceed? Each person has n topics to choose from. Our unfun do-gooders always select the one that’s most likely to yield a benefit. Since problems tend to be more likely to yield a benefit, this tends to skew the conversation toward problems over positives. We can simulate what happens. We’ll vary both how much more likely (on average) problems are to yield benefits than positives (different-colored lines), and also vary how many topics (number of options, x-axis) each person has to choose from. Here’s what we get: What’s this say? Look at the green line: if problems are 20% more likely to yield benefits, and each person has 50 topics to choose from (middle of the chart), the proportion of positive conversations—in a world where 90% of the news is good—is only 40%. That means that if you assumed conversation-topics were representative of how good the world is, you’d think the world was less than half as good as it actually is (40% positive, rather than 90%). Three observations: The proportion of positives shared is always below the true proportion—an orientation toward problem-solving leads to bad vibesModest gaps in how much more likely problems are to be worth talking about can lead to a drastic over-representation of problems in public discourse.As the number of conversation-topics grows, the gap increases: as people have more topics to choose from, it’s more likely that the most-worthwhile problem will be more worth-talking about than the most-worthwhile positive. This final point makes the prediction that as people have more topics to choose from, the negativity of public discourse will increase—even if the rate of positive news stays at a constant (high value). So pessimism should grow with the rise of the information age—for example, when (inter)nationalized TV stations have more stories to choose from, they’ll more often be more negative. Robustness Of course, this model is simplistic. For instance, people have many goals in conversations—solving problems is only one of them. Still, I think it highlights a pressure toward negativity that should remain in more realistic models of conversation. To add a hint of realism, we can allow that people aren’t simply unfun do-gooders. Instead of always choosing the option that they expect will be most worth talking about, maybe they instead choose a topic via “softmax”: they are more likely to choose topics that are worth talking about, but noise or other factors can lead them to choose otherwise. Softmax depends on a “precision” parameter, which says how much their choices are guided by the ranking of expected values. As precision goes to 0, they choose randomly; as it gets larger, they look more like expected-value-maximizers. For example, suppose there are two options—A and B. A always has expected-value 0.5; now consider how likely they are to choose option B, as we vary its expected value between 0 and 1 and also vary the precision with which they do softmax: When precision is almost 0 (blue line), they are always 50%-likely to choose B—they randomize. When it’s 10 (red line), their choice probabilities form an S-curve; when it’s 100 (purple line), they form a step function—(almost) always choosing the option with higher expected value. If we run the same simulations as above using softmax—so expected-benefit is just one determinant of conversation topics—we get qualitatively similar results. Here are the proportions of positives shared with a softmax precision of 10: For example, look at the green line: if each person has 50 topics and problems are 20%-more-likely to be worth talking about than positives, then when 90% of news is positive only 65% of conversation-topics are positive. Upshot A problem-solving orientation in conversation may inadvertently lead to problems being massively over-represented in public discourse. That might, overall, be a good thing! It’s good when people spread the word about problems, so that collectively we’re more likely to come to solutions. But an unfortunate side-effect is excess pessimism: insofar as the vibes reflect the valence of the topics in public discourse, we should expect bad vibes even when things are quite good. To some extent, I think this fact supports the chart-wielding-optimists’ case: we have the start of an explanation for why we’d all think things were worse than they are. But just the start. Just because 40% of conversation topics are good doesn’t mean that the average person will infer that 40% of news is good. People are (implicitly) aware that conversations topics suffer from “selection effects”—few conversations are about shampoo—but that doesn’t lead people to assume that others rarely use shampoo. So our question: (1) when we explicitly model people’s beliefs about how good things are, will these dynamics lead to excessively pessimistic beliefs? And (2) what happens when we embed these processes in a social-media network? The next post will argue (1) yes, pessimism will result; and (2) that social media should make everything much worse.
2024-05-12
https://www.lesswrong.com/posts/j9MKHgLD3yshN5SJF/beware-unfinished-bridges
j9MKHgLD3yshN5SJF
Beware unfinished bridges
adamzerner
This guy don't wanna battle, he's shook 'Cause ain't no such things as halfway crooks - 8 Mile There is a commonly cited typology of cyclists where cyclists are divided into four groups: Strong & Fearless (will ride in car lanes)Enthused & Confident (will ride in unprotected bike lanes)Interested but Concerned (will ride in protected bike lanes)No Way No How (will only ride in paths away from cars) I came across this typology because I've been learning about urban design recently, and it's got me thinking. There's all sorts of push amongst urban designers for adding more and more bike lanes. But is doing so a good idea? Maybe. There are a lot factors to consider. But I think that a very important thing to keep in mind are thresholds. It will take me some time to explain what I mean by that. Let me begin with a concrete example. I live in northwest Portland. There is a beautiful, protected bike lane alongside Naito Parkway that is pretty close to my apartment. It basically runs along the west side of the Willamette River. Which is pretty awesome. I think of it as a "bike highway". But I have a problem: like the majority of people, I fall into the "Interested but Concerned" group and am only comfortable riding my bike in protected bike lanes. However, there aren't any protected bike lanes that will get me from my apartment to Naito Parkway. And there often aren't any protected bike lanes that will get me from Naito Parkway to my end destination. In practice I am somewhat flexible and will find ways to get to and from Naito Parkway (sidewalk, riding in the street, streetcar, bus), but for the sake of argument, let's just assume that there is no flexibility. Let's assume that as a type III "Interested but Concerned" bicyclist I have zero willingness to be flexible. During a bike trip, I will not mix modes of transportation, and I will never ride my bike in a car lane or in an unprotected bike lane. With this assumption, the beautiful bike lane alongside Naito Parkway provides me with zero value.[1] Why zero? Isn't that a bit extreme? Shouldn't we avoid black and white thinking? Surely it provides some value, right? No, no, and no. In our hypothetical situation where I am inflexible, the Naito Parkway bike lane provides me with zero value. I don't have a way of biking from my apartment to Naito Parkway.I don't have a way of biking from Naito Parkway to most of my destinations. If I don't have a way to get to or from Naito Parkway, I will never actually use it. And if I'm never actually using it, it's never providing me with any value. Let's take this even further. Suppose I start off at point A, Naito Parkway is point E, and my destination is point G. Suppose you built a protected bike lane that got me from point A to point B. In that scenario, the beautiful bike lane alongside Naito Parkway would still provide me with zero value. Why? I still have no way of accessing it. I can now get from point A to point B, but I still can't get from point B to point C, point C to point D, D to E, E to F, or F to G. I only receive value once I have a way of moving between each of the six sets of points: A to BB to CC to DD to EE to FF to G There is a threshold. If I can move between zero pairs of those points I receive zero value.If I can move between one pair of those points I receive zero value.If I can move between two pairs of those points I receive zero value.If I can move between three pairs of those points I receive zero value.If I can move between four pairs of those points I receive zero value.If I can move between five pairs of those points I receive zero value.If I can move between six pairs of those points I receive positive value. I only receive positive value once that threshold is met. Why does this matter? Well, say that you are the city of Portland and you are deciding whether to add an unprotected bike lane between points A and B. You have to ask yourself in what way you expect this new unprotected bike lane to add value for people. If you just add that one bike lane and then stop, I don't see the new bike lane getting many people past a threshold that would yield positive value for them. I think it'd function largely as an unfinished bridge.[2] However, if the new bike lane is one step in a larger plan to build a complete bridge, so to speak, then that starts seeming to me like something that might be a good idea. The "larger plan" part is crucial though: you only get value once the bridge is complete, so if you're going to start building a bridge, you better make sure you have plans to finish it. ^ Well, I guess I should say zero direct value. It provides me with indirect value in various ways. For example, it might persuade a friend to meet me at a coffee shop. ^ In addition to making life a little bit better for a few "Strong & Fearless" bicyclists.
2024-05-12
https://www.lesswrong.com/posts/wQxjf67tnfJPG33ju/why-i-find-davidad-s-plan-interesting
wQxjf67tnfJPG33ju
Why I find Davidad's plan interesting
Paul W
Abstract The aim of this text is to give an overview of Davidad’s safety plan, while also outlining some of the limitations and challenges. Additionally, I’ll explain why I would like to contribute to this field. Disclaimers I am not Davidad, I tried to convey his ideas as I understand them. While my interpretation may not be exact, I hope it still holds value. Also, this post does not focus on the technical details; I might write another one later, with a deeper technical discussion[1]. Epistemic status I began exploring these questions during a weekend-long research sprint. Then, for a couple of months, I kept thinking about them, reading related posts (Davidad’s List of AI Safety Problems). Additionally, I engaged in discussions with Charbel, who co-authored this post on Davidad's plan. While empirical evidence may be limited at this stage, I anticipate more to emerge as efforts progress[2]. I'm currently learning potentially relevant mathematical theory. However, the aim here is to offer a broad explanation of key ideas rather than an in-depth analysis of the technical problems. Acknowledgements Thanks to Charbel-Raphaël  Segerie for discussing these questions with me. Thanks to Jeanne S. for useful suggestions. Outline 0/ Definitions I/ The plan, and why it might yield positive results II/ Technical questions, risks, objections III/ Why I'm interested in contributing 0/Definitions In what follows, I make a distinction between “Artificial General Intelligence” and “Artificial Super Intelligence”, the latter having strongly superhuman performance levels, while the former can be compared to a human being (if we ignore the speed and duplicability factors). I shall use “AI Safety” in a loose way[3], essentially relying on the “notkilleveryoneism” idea: the point is to limit the overall amount of harm [4]caused by AIs (and humans), while keeping some of the benefits of powerful AIs. A “formalized world-model” is one inside which formal proofs (that accept relevant forms of nondeterminism) can be derived, typically regarding the consequences of some actions. A key underlying assumption, of course, is that the model is close enough to the real world, and that it contains descriptions of the phenomena that are relevant to AI safety[5]. By “meaningful formal guarantees”, I mean safety-relevant properties (of the AI, or of the system that includes the AI and its deployment environment, or variations of these) derived in a formalized framework, as opposed to empirical testing. The point is that one can then trust these guarantees roughly as much as one trusts the formalized world-model. When I mention “strategy”, “strategic thinking”, etc. I refer to the art of formulating relevant intermediate steps in order to achieve a given terminal goal, and amending these whenever the context varies. I am not using the term in the standard deterministic game-theoretic sense[6]. The main distinction I make is between “strategic questions”, and “technical questions”. When I say "promoting AI Safety", I mean "gathering resources (funds, compute, talent), getting relevant regulation passed and enforced, making the industry use safer solutions, etc." I/Overview of the plan Context In my mind, the main underlying assumptions of the plan are the following: Scaling existing training methods (e.g. RL) is too risky when dealing with ASIs.All else being equal, an AI system with meaningful formal guarantees is safer. It is possible to construct powerful (AGI, possibly ASI-level) AI systems with meaningful formal guarantees.It is possible to find a global political agreement on a set of formal desiderata sufficient for “notkilleveryoneism”.It is possible to promote AI Safety by successfully carrying out projects that are both safety-focused and profitable. Goals It's crucial to distinguish between the ambitious long-term vision of the plan and the more achievable intermediate steps. The ultimate objective of the plan might be summarized as "mitigating the risks posed by advanced AIs to a level where we can then focus on further advancing alignment and AI Safety in general." For example, if the aim is indeed to address the problem of "notkilleveryoneism," the assumptions underpinning the third and fourth steps may become less tenable. Conversely, the incremental goals involve establishing an alternative AI development framework, demonstrating the compatibility of safety measures with economic interests, and formulating a positive objective that encourages cooperation among stakeholders[7]. This is supposed to help promote AI Safety. Means Now that I have outlined the objectives, let's explore potential pathways to achieve them. Here are the primary technical insights I've identified (with more detailed insights available in Davidad's post). Instead of attempting to dissect or reverse-engineer the inner workings of the AI system being developed, our focus would shift towards evaluating, within a formalized world-model of its deployment environment, whether the AI's behavior is sufficiently likely to be acceptable. This evaluation wouldn't solely rely on empirical observations within the world-model, but rather on formal proofs demonstrating that, across all[8] pertinent deployment scenarios, the outcomes of the AI's actions are acceptable with extremely high probability.An implication of this "formal proofs framework" is that the training process rewards the verifiability of the AI's actions. Consequently, barring glitches[9], the AI constructed during training becomes inherently easier to verify.This approach is anticipated to mitigate the risk of "Out Of Distribution" scenarios, or at the very least, provide us with a more systematic understanding of them: OOD issues arise when the world-model fails to accurately represent something, or when our compilation of "relevant contexts" proves incomplete.The process should be automated as much as possible, leveraging finely-tuned human-level AIs as assistants. These assistants can aid in both world-modeling tasks and formalizing desiderata. For proof searching, stronger AIs can be employed, tasked solely with generating formal proofs[10]. Now, regarding strategy, these ideas seem relevant: This plan can make use of classical academic work and talents, which can help speed things up[11].Working proofs-of-concept/prototypes would have real-world applications, helping gain credibility and resources for subsequent steps. Now, I'll outline several potential positive outcomes even in the event of failure: A world-model that falls short of meeting all our criteria could still be valuable for evaluation, red-teaming, meta-level advising, or other purposes.More broadly, research conducted within this plan may offer insights that prove beneficial for other endeavors as well[12].In the event of failure, it's more probable that we simply haven't developed anything powerful, rather than having unleashed an unsafe AGI/ASI. II/Technical Questions, Risks, Objections Problems to solve Obviously, there are many at least very difficult [13]technical problems to solve to have any chance of success, especially for the maximal goal. The following questions seem most relevant[14]: How to create a framework that allows both accurate, efficient [15]modeling, and formal non-deterministic proofs ? How much advanced theory is needed ?How to construct a good enough world-model ? The process likely involves both leveraging *some* of the existing scientific literature, and performing new experiments. Also, can static analysis [16]of the trained AI suffice, given that the world-model might diverge from reality ? How to actually train the AIs required to carry out the plan ? How it could fail Even if the technical requirements are fulfilled, there are several potential pitfalls: An unaligned actor, be it a human organization or a rogue AI, could gain access to the world-model and exploit it for their own agenda. Thus, what was intended as a tool for AI Safety might serve any objective[17].Alternative, less safe architectures might demonstrate superior performance or ease of development, leading major actors to favor them over a formal proofs paradigm[18]. Of course, the ultimate goal is not solely to build a safe AI, but also to prevent unsafe AIs and humans from causing significant harm. Powerful AIs might take actions that fundamentally alter the world state, rendering the formal world-model obsolete.There may be political resistance to allowing the verified AI to execute its tasks, even if prior agreements were in place, particularly if those tasks seem counter-intuitive[19]. The bigger picture Now, as a counterpoint to my own arguments: even if my understanding of this plan is accurate and its underlying assumptions are valid, I would still need to evaluate it against other plans and determine the most effective approach. It's important to note that the optimal plan, or even just viable ones, might involve investing resources in a combination of proposals[20]. Discussing this kind of strategic questions is worth a dedicated post. III/Why I'm interested in contributing The perspective of an individual with limited yet potentially valuable resources does not equate to that of humanity as a whole, assuming such a notion is coherent. There may exist superior overall plans to which I might not be able to contribute as significantly. For instance, my strengths lie more in mathematical research than in policy-making[21]. Furthermore, I believe that having a vague sense of the end goal significantly enhances the quality of my research efforts, coupled with at least a glimmer of hope that my work could be impactful. While I'm hesitant to argue that blindly optimistic researchers outperform those who are fully aware (akin to the different skill sets of a captain versus a general), I also refrain from claiming the opposite[22]. Regarding other math-oriented fields within AI Safety research, such as Singular Learning Theory or Agent Foundations, either I'm not as persuaded by their end goals (in the case of SLT), or I perceive my potential contributions as less influential (in AF, SLT). This isn't to suggest that these areas should be disregarded[23]. These are the primary reasons why I am currently set on trying to contribute to Davidad’s programme, one way or another. In case other people are interested: there are several areas, requiring various profiles, where much work is yet to be done. A tentative list: For world-modelling, purely mathematical work is needed, to develop a suitable theory.Also for world-modelling, developing version-control systems, and computationally efficient implementation, are key.Developing the proof system(s), formal verifiers, etc., requires people fluent with formal methods.There is a substantial amount of Machine Learning work to be done as well, hence good ML practitioners are needed !Formalizing desiderata is more about social-choice theory, collective decision-making questions. People with expertise in these domains might be interested.The first phases of the plan will involve finding applications of these "formal verification" ideas, that aren't "solving notkilleveryoneism". The range of such potential applications being quite large, domain experts from various fields may join.Lastly, given that this plan draws upon numerous diverse fields, effective communication of ideas is paramount. However, conveying complex technical concepts in an accessible manner is not always straightforward. Individuals adept at producing high-quality pedagogical content on technical subjects can be immensely valuable in this regard. See also page 6 of the call for proposals, and page 11 of the programme thesis, from which most of this was extracted. ^ If you're interested, in a short and more technical summary,  I can recommend this comment. ^ One important first step is to assess feasibility of the maximal version of this plan. ^ If you really need precise definitions, feel free to plug in your own, and see if the text then makes sense. ^ Yes, from this point of view, you need to define precisely enough, what you mean by harm and benefits. Or rather, the outcome you get depends on your specifications. ^ Yes, that's a lot. ^ Key features are imperfect information, bounded computing power, randomness; see for instance Clausewitz's notion of friction. As far as I know, these constraints imply that strategy is more about finding and applying good heuristics, rather than solving puzzles. Feel free to provide counterarguments in the comments. ^ One might even view fostering cooperation as the main goal of the plan. ^ Yes, we would really like to cover all of them. ^ One could argue that the AI being trained and verified might find bugs in the world-model, and exploit them. This possibility might be worth investigating. ^ The idea here is that these proof-searching AIs can be boxed and are not a threat. They would only be trained to "play the formal proof-searching game". ^ As far as I know, training good researchers takes time and resources, and the AI Safety field is short on senior people. ^ For instance, work on nondeterministic multi-scale modelling, or development of processes that help express human desiderata, might be used elsewhere. ^ Knowing precisely how difficult they are is itself quite a challenge. The first steps involve getting an approximate answer. ^ Some tentative answers already exist, but I won't say much about them here. See for instance ARIA's call for proposals, pages 8-10. ^ It might be worth recalling that, if your model, or your verification process, takes too long, then it is useless. ^ Static analysis would mean checking the AI once, with a given version of the world-model, then deploy it. Note that, if you have a functioning shutdown process, you are still able to iterate. ^ Dual-use issues apply to these tools as well, not only to AIs. ^ I am afraid I do not know enough about economics and politics to quantify this issue. The game-theoretic analysis (Appendix B) aims at providing an answer. ^ Even if it respects the desiderata, a drastic action, formally proved rational, might or might not be accepted by all stakeholders. ^ Consider a basic scenario: suppose there are two proposals, only one of which can be effective in reality. Assuming that the resources required to discern their efficacy don't hinder future success, you could optimize your chances by initially dividing your resources between the two and then directing your focus accordingly. However, this isn't the only conceivable scenario. Alternatively, both plans might have potential, but necessitate almost all available resources to yield results. In such a case, concentrating on a single plan would be more prudent. The question then arises: which scenario reflects our current reality ? ^ And yet, policy-making might be more effective than mathematical research in this case. ^ Feel free to give relevant arguments in the comments, if you thought about these questions. ^ At least, additional arguments would be required.
2024-05-20
https://www.lesswrong.com/posts/mDHiTTf3maDj8W9oL/if-language-is-for-communication-what-does-that-imply-about
mDHiTTf3maDj8W9oL
If language is for communication, what does that imply about LLMs?
bill-benzon
Noam Chomsky famously believes that language originated to facilitate thought, but then came to be a medium of communication. Others believe the reverse, that it originated as a facility for communication which turned out to facilitate thinking. That is certainly my view. If that is so, then one would think that language is structured to facilitate communication. Communication is serial, one token at a time, token after token. That would imply that language is structured to facilitate next-token prediction. That would in-turn imply that the relational structure of semantics would evolve to facilitate mapping between the linear structure of the language string and the multidimensional structure of meaning. You want to be able to efficiently project multidimensional semantic structure onto a string and to reconstruction multidimensional semantic structure from a string. How are LLMs trained? By next token prediction. That is to say, the training regime mirrors the primary communication constraint governing the structure of language. So it is with text generation as well. Language is spoken one token at a time, and so LLMs generate texts, one token at a time. The tasks that LLMs have trouble with, such as planning and arithmetic, ARE NOT primarily communicative in nature. They are tasks for thought, for reasoning.
2024-05-12
https://www.lesswrong.com/posts/BRZf42vpFcHtSTraD/linkpost-towards-a-theoretical-understanding-of-the-reversal
BRZf42vpFcHtSTraD
[Linkpost] Towards a Theoretical Understanding of the 'Reversal Curse' via Training Dynamics
bogdan-ionut-cirstea
The excerpts below seem to me like a slight update towards arguments about the weakness of one-forward-passes in transformers and for agendas like externalized reasoning and translucent thoughts. They also suggest out-of-context reasoning (OOCR) might remain hard for transformer-based LMs (they currently do very poorly on OOCR evals and scaling also doesn't seem to help much): We theoretically analyze reversal curse where training or test sequences have the from “𝐴→𝐵” or “𝐵←𝐴” via training dynamics of (stochastic) gradient descent under two auto-regressive models: a bilinear model (Section 3) and one-layer transformers under certain assumptions similar to Tian et al. (2023a) (Section 4). The analysis of the training dynamics of both models reveals a core reason why the reversal curse happens: the weights of the autoregressive models are asymmetric, i.e., the increase of weights from the token 𝐴 to token 𝐵 during training does not necessarily cause the increase of the weights from 𝐵 to 𝐴. [...] Although the (effective) weights from 𝐴 to 𝐵 and from 𝐵 to 𝐴 might be related to some extent since they are both computed using the same set of embeddings, their correlation is weak and thus show asymmetry as verified both theoretically (Sections 3 and 4) and empirically (Section 5). we use the above framework to show the necessity of chain-of-thought (COT) (Wei et al., 2022b) (i.e., a model trained on “A implies B” and “B implies C” separately struggles to directly conclude “A implies C” without COT, which was also empirically observed by Allen-Zhu and Li (2023)) via training dynamics of one-layer transformers (Section 4.2), which provides a new perspective different from previous work Feng et al. (2024) that focuses on the expressivity of transformers. Slightly different from the reason for the reversal curse, in COT analysis, the model weights show intransitivity, i.e., increasing the weights from the token 𝐴 to 𝐵 and 𝐵 to 𝐶 does not necessarily increase the weights from 𝐴 to 𝐶. We emphasize again that the weights refer to effective weights. We also empirically validate our theoretical results through the training dynamics of multi-layer transformers (Section 5). The asymmetry and intransitivity of weights of auto-regressive models indicate that auto-regressive LLMs might not automatically deduce indirect conclusions using separate knowledge learned during training: to make a model predicting token 𝐵 where the input token is 𝐴, the model needs to see 𝐵 following 𝐴 in the same sequence during the training set due to the next token prediction objective and model architectures. This also highlights the importance of ICL, data augmentation, or planning for LLMs with the current popular causal transformer-based structures to solve complex reasoning tasks.
2024-05-11
https://www.lesswrong.com/posts/AhwgnWg95Wt4PfXik/deriving-the-geometric-utilitarian-weights
AhwgnWg95Wt4PfXik
Deriving the Geometric Utilitarian Weights
StrivingForLegibility
This is a supplemental post to Geometric Utilitarianism (And Why It Matters), in which I show how I derived the weights ψ which make any Pareto optimal point p optimal according to the geometric weighted average. This is a subproblem of the proof laid out in the first post of this sequence, and the main post describes why that problem is interesting. Overview So how are we going to calculate weights ψ which make p optimal among F? The idea here is to identify the Harsanyi hyperplane H, which contains all of the joint utilities u∈Rn which satisfy H(u,ϕ)=H(p,ϕ). Where ϕ are the weights which make our chosen point p∈Rn optimal with respect to H(_,ϕ). And we're going to calculate new weights ψ which make p optimal with respect to G(_,ψ). It turns out it's sufficient to make p optimal among H, and p will also be optimal across our entire feasible set F. In terms of calculus, we're going to be constructing a function G∘H, which tells us about how moving around on H changes G. And we're going to choose weights ψ which make the gradient ∇v(G∘H) equal 0 at p. This makes it a local optimum, and it will turn out to be a global maximum across H, which in turn will make it a global maximum across F. Geometrically, we can think of that as the surface gradient of G across H. And so in terms of the overall gradient ∇uG, we're designing ψ so that ∇uG is perpendicular to H at p. Parameterizing the Harsanyi Hyperplane When thinking about moving around on the Harsanyi hyperplane H, we have a linear constraint that says no matter which u∈H we pick, we know that u⋅ϕ=p⋅ϕ=H(p,ϕ). If we know u lies on H, we can calculate the the n-th agent's utility un from the first n−1 utilities. We'll be referring to these first n-1 utilities a lot, so let's call them v∈Rn−1. So vi=ui for all i<n. H and G are both symmetrical with respect to shuffling the indices of agents around, so without loss of generality we'll assume that the n-th agent is one we're assigning positive Harsanyi weight to: ϕn > 0. This is necessary for the reconstruction to work for all ϕ. So we can think of H as a function H:Rn−1→Rn, where the j-th output is Hj(v)=uj for j<n. We can use the n-th output to reconstruct un given v like this:  Hn(v)=H(p,ϕ)−∑n−1i=1viϕiϕn. This lets us move around Rn−1 to pick v however we want, and the function H will map that to its image, helpfully also called H! Alright, now we have H:Rn−1→Rn and we also have G:Rn×[0,1]n→R, which is the geometric weighted average whose gradient we're trying to design through our choice of ψ. So let's compose them together to form G∘H:Rn−1×[0,1]n→R. And since we want p to be an optimum of G across the hyperplane H, we can set the gradient ∇(G∘H)(q,ψ)=0, where q∈Rn−1 are the first n−1 utilities of our target joint utility p. Solving this equation for ψ will give us the weights we need! This looks like to solving a family of n−1 equations∂(G∘H)∂vi(q,ψ)=0. Where we're holding the weights constant for the purposes of differentiation, but we'll be solving for the weights that make the derivative 0 at p. How Does G Change as We Change These Parameters? Ok, we've built up a few layers of abstraction, so let's start unpacking. By the chain rule, and using the notation that Hj is the j-th element of the output of H: ∂(G∘H)∂vi(v,ψ)=∑nj=1∂G∂Hj(H(v),ψ)∂Hj∂vi(v) How does our point on H change as we change these parameters? Let's start by computing ∂Hj∂vi(v). For the first n-1 terms this is simple, because Hj simply returns vj. So ∂Hj∂vi is 1 when i=j, and 0 otherwise, which we can represent using the Kronecker delta δji. And ∂Hn∂vi=−ϕiϕn. Geometrically, this is telling us about the slope of H. Note that: ∂Hn∂vi is constant and doesn't depend on our choice of vi∂Hn∂vi≤0 (We can never increase agent n's utility by increasing another agent's utility. This is always true at the Pareto frontier.) How Does G Change as We Move Around on H? We can start solving for ∂G∂Hj by substituting in the definition of G: ∂G∂Hj=∂∂Hj(∏nk=1Hψkk). From here we can apply the n-factor product rule: ∂G∂Hj=(∏nk=1Hψkk)(∑nk=1∂∂Hj(Hψkk)Hψkk). Thankfully, ∂∂Hj(Hψkk)=0 whenever k≠j, leaving just ∂∂Hj(Hψjj)=ψjHψj−1j. We can also notice ∏nk=1Hψkk=G, leaving us with the much nicer ∂G∂Hj=GψjHψj−1jHψjj=GψjHj=ψjHjG. It will be important later that this partial derivative is undefined when Hj=0, aka wherever any agent is receiving their least feasible utility. Writing function arguments explicitly: ∂G∂Hj(H(v),ψ)=ψjHj(v)G(H(v),ψ) Putting These Terms Together Let's start putting these together. We can start by breaking apart the two cases of ∂Hj∂vi, like this: ∂(G∘H)∂vi(H(v),ψ)=∑nj=1∂G∂Hj(u,ψ)∂Hj∂ui(v) ∂(G∘H)∂vi(H(v),ψ)=[∑n−1j=1∂G∂Hj(H(v),ψ)∂Hj∂vi(v)]+∂G∂Hn(H(v),ψ)∂Hn∂vi(v) ∂(G∘H)∂vi(v,ψ)=[∑n−1j=1∂G∂Hj(H(v),ψ)δji]−∂G∂Hn(H(v),ψ)ϕiϕn Here's one reason why it's useful to know about the Kronecker delta δji: it filters out all but the i-th element of a sum: ∑jajδji=ai. When you're working in Einstein notation (which is great by the way), you just write it as ajδji=ai and you can think of the j's as "cancelling". That leaves us with: ∂(G∘H)∂vi(v,ψ)=∂G∂Hi(H(v),ψ)−∂G∂Hn(H(v),ψ)ϕiϕn And we know ∂G∂Hi(H(v),ψ), so let's plug that in: ∂(G∘H)∂vi(v,ψ)=ψiHi(v)G(H(v),ψ)−ψnHn(v)G(H(v),ψ)ϕiϕn ∂(G∘H)∂vi(v,ψ)=(ψiHi(v)−ψnϕiHn(v)ϕn)G(H(v),ψ) And that is the family of n−1 equations that we want to all be 0 when v=q. (This causes H(v)=u=p.) We'll call this gradient ∇v(G∘H) to remind ourselves that this is the gradient of (G∘H)(v,ψ) where we're holding the weights ψ constant. Solving for the Geometric Weights Ok, now we can set v=q, ∂(G∘H)∂vi=0 and solve for ψi, for i<n: (ψiHi(q)−ψnϕiHn(q)ϕn)G(H(q),ψ)=0 ψiHi(q)G(H(q),ψ)−ψnϕiHn(q)ϕnG(H(q),ψ)=0 ψiHi(q)G(H(q),ψ)=ψnϕiHn(q)ϕnG(H(q),ψ) ψipiG(p,ψ)=ψnϕipnϕnG(p,ψ) ψipi=ψnϕipnϕn ψi=ψnpiϕipnϕn This is still a system of linear equations we need to solve, since each ψi for i<n depends on ψn, which in turn satisfies ψn=1−∑n−1i=1ψi. So let's solve it for ψn! ψn=1−∑n−1i=1ψi ψn=1−∑n−1i=1ψnpiϕipnϕn ψn=1−ψn∑n−1i=1piϕipnϕn ψn+ψn∑n−1i=1piϕipnϕn=1 ψn(1+∑n−1i=1piϕipnϕn)=1 ψn=11+∑n−1i=1piϕipnϕn Remembering that H(p,ϕ)=∑ni=1piϕi=∑n−1i=1piϕi+pnϕn, we can notice that: H(p,ϕ)pnϕn=∑n−1i=1piϕi+pnϕnpnϕn H(p,ϕ)pnϕn=∑n−1i=1piϕipnϕn+1 This lets us simplify ψn down to ψn=1(H(p,ϕ)pnϕn) ψn=pnϕnH(p,ϕ) ψn=ϕnpnH(p,ϕ) And now we can plug that back into the formula for all the other ψi! ψi=ϕnpnH(p,ϕ)piϕipnϕn ψi=piϕiH(p,ϕ) ψi=ϕipiH(p,ϕ) Well isn't that convenient! The formula for all ψi has the same form, and we can think of it like starting with the Harsanyi weights ϕ (which make p optimal according to H(_,ϕ), along with anything else with the same Harsanyi score H(p,ϕ)), and then tweaking them to get G(_,ψ) to target p in particular. We can simplify our formula by noting that H(p,ϕ)=p⋅ϕ=∑nj=1pjϕj ψi=piϕip⋅ϕ To make the formula a little prettier, and to get some extra geometric insight, we can introduce the element-wise product ⊙, where (p⊙ϕ)i=piϕi. ψ=p⊙ϕp⋅ϕ Here's a good opportunity to make sure our weights ψ sum up to 1: ∑ni=1ψi=∑ni=1piϕip⋅ϕ ∑ni=1ψi=1p⋅ϕ∑ni=1piϕi ∑ni=1ψi=1p⋅ϕ(p⋅ϕ) ∑ni=1ψi=1 Great! p⋅ϕ is acting like a normalization term, and we can think of p⊙ϕ as telling us which direction ψ points in. This vector of weights is then scaled to land on the hypersurface of weights that sum to 1, known as the standard simplex Δn, which we'll discuss more later. We can also think of ϕ as a function ϕ:Rn×Cn→Rn denoted as ϕ(p,F) which returns the Harsanyi weights ϕ for p in the context of a compact, convex subset F⊂Rn. This is it, so let's make a new heading to find it later! How to Calculate Weights for p We now have a formula for ψ:Rn×[0,1]n→Rn, which we can write as ψ(p,ϕ(p,F))=p⊙ϕ(p,F)p⋅ϕ(p,F) Or we can suppress function arguments and simply write ψ=p⊙ϕp⋅ϕ Where p⊙ϕ∈Rn is the element-wise product of p and ϕ: (p⊙ϕ)i=piϕi and p⋅ϕ∈Rn is the dot product p⋅ϕ=∑nj=1pjϕj For a single component ψi, we have ψi=piϕip⋅ϕ Note that ψ isn't defined when p⋅ϕ=0. Is this a problem? Not really! p⋅ϕ=H(p,ϕ), the Harsanyi aggregate utility of p when ϕ has been chosen to make p optimal under H(_,ϕ). When this is 0, it means the individual utilities must all be 0 and the entire feasible set F must be a single point at the origin. When that happens, any weights will make p optimal according to G(_,ψ) or H(_,ϕ). Feel free to use any convention that works for your application, if we're in a context where ϕ(→0) is defined we can inherit ψ(→0,ϕ(→0))=ϕ(→0) If F is shrinking towards becoming a single point, we can use ψ(→0)=limp→→0ψ(p). Checking Our Solution Assuming we calculated ∇v(G∘H)(q,ψ) correctly, we can verify that these weights lead to ∇v(G∘H)(q,ψ)=0. This requires ∂(G∘H)∂vi(q,ψ)=0 for the first n−1 utilities, so let's check that: ∂(G∘H)∂vi(q,ψ)=0 (ψiHi(q)−ψnϕiHn(q)ϕn)G(H(q),ψ)=0 (ψipi−ψnϕipnϕn)G(p,ψ)=0 ψipiG(p,ψ)=ψnϕipnϕnG(p,ψ) 1pipiϕip⋅ϕG(p,ψ)=ϕipnϕnpnϕnp⋅ϕG(p,ψ) ϕip⋅ϕG(p,ψ)=ϕiϕnϕnp⋅ϕG(p,ψ) ϕip⋅ϕG(p,ψ)=ϕip⋅ϕG(p,ψ) Success! p is an optimum of G among H. But is it unique? P Is the Unique Optimum of G When Weights Are Positive Let's see how ψ and p influenced the outcome here, and keep track of the critical points which can make ∇v(G∘H)i=0 or undefined. These are the only points which can be extrema, and for each we need to check if it is a minimum or maximum among H. (G doesn't have any saddle points, and H doesn't have any boundaries of its own to worry about. Where H meets the boundary of G's domain, the ui=0 axes, G=0 there.) For example, whenever any individual utility ui=0, ∂G∂Hi is undefined, which causes ∇v(G∘H)i to be undefined. But note that these will be minimal points of G, unless ψi=0. To find maximal points u∈H of G across H we need ψiui−ψnϕiunϕn=0 ψiunϕnuiunϕn−uiψnϕiuiunϕn=0 ψiunϕn−uiψnϕiuiunϕn=0 If ui or un are 0, then ∇v(G∘H)i is undefined, and we'll check later if these can still be optimal. We assumed that the index n refers to an agent with ϕn>0, in order to prevent that exact case from breaking our entire solution. ψiunϕn−uiψnϕi=0 unϕnψi−uiϕiψn=0 If we were handed ψ from some external source, we could solve this equation to see which u∈H happened to be optimal. But we designed ψ, so let's see what we caused to be optimal. unϕnpiϕip⋅ϕ−uiϕipnϕnp⋅ϕ=0 unϕnpiϕi−uiϕipnϕnp⋅ϕ=0 If p⋅ϕ=0 then ∇v(G∘H)i is undefined. This only happens when F is a single point, in which case p is indeed the unique optimum of G. unϕnpiϕi−uiϕipnϕn=0 uipnϕiϕn=unpiϕiϕn Here we're going to be careful about which weights can be 0. We'll again use the fact that ϕn>0 to safely divide it from both sides. uipnϕi=unpiϕi Here again we can see that u=p solves this family of n-1 equations. And this is very exciting because this is our first maximum of G! Are there any other solutions? Each of these equations is satisfied when one of the following is true: ϕi=0 uipn=unpi In other words, assigning an agent 0 Harsanyi weight ϕi (and thus geometric weight ψi) can allow G(_,ψ) to have multiple optima among H, which can give it multiple optima among F. What about when all geometric weights ψ are positive? Are there any other solutions to that second family of n-1 equations? Having all positive geometric weights ψ implies having all positive Harsanyi weights ϕ, and all positive individual utilities p. It also implies that any optimum of G(_,ψ) will have all positive individual utilities u. This lets us freely divide by any of these terms, without needing to worry that we might be dividing by 0. uiun=pipn Since un and pn are both positive, we can think of un as a scaled version of pn. un=λpn How does this scalar influence the other terms in these equations? uiλpn=pipn ui=λpi u=λp This forms a line from the origin to p, which only intersects H at p. (Since scaling p up or down changes H(p,ϕ).) So when all geometric weights ψ are positive, p is the unique optimum of G(_,ψ) among H! When ϕi=0, ψi is also 0, so ui doesn't affect G(_,ψ). We can start with p, and then freely vary the utilities of any agent with 0 weight and remain optimal. Interactive Implementations We can also check our entire calculation, including those pages of calculus, by actually implementing our solution and seeing if it works! A graphing calculator is sufficient to check this in 2 and 3 dimensions. We can show all the points which satisfy G(s,ψ)=G(p,ψ) and they should trace out the contours of G, showing all the joint utilities which have the same G score as p. In 2 dimension, the graph looks like this: Check out an interactive version here! The Harsanyi hyperplane is a line, and the contour curves are skewed hyperbolas. As expected, taking p out of the positive quadrant violates our assumption that utilities are non-negative, leading to invalid settings for ψ. Similarly, if H has a positive slope, this violates our assumption that p is on the Pareto frontier. (A positive slope implies that we can make both players better off simultaneously. If we calculate ϕ anyway, a positive slope implies that ϕi is negative for the agent on the x axis.) This allows H to pass up through the hyperbola at another point other than p, but this never happens when ϕi≥0. With 3 agents, the graph looks like this: Interactive version here In 3 dimensions, the Harsanyi hyperplane is a plane, and the contour surfaces are skewed hyperboloids. We can move p around on the hyperplane, and this changes ψ, which changes where the contour touches H. We can see that p always lies at the intersection of this contour curve and H, and this is a visual proof that p maximizes G(_,ψ) among H. And when H corresponds to all agents having positive Harsanyi weight ϕ, this intersection only happens at p!
2024-08-07
https://www.lesswrong.com/posts/zAqqeXcau9y2yiJdi/can-we-build-a-better-public-doublecrux
zAqqeXcau9y2yiJdi
Can we build a better Public Doublecrux?
Raemon
Something I'd like to try at LessOnline is to somehow iterate on the "Public Doublecrux" format. I'm not sure if I'll end up focusing on it, but here are some ideas. Public Doublecrux is a more truthseeking oriented version of Public Debate. The goal of a debate is to change your opponent's mind or the public's mind. The goal of a doublecrux is more like "work with your partner to figure out if you should change your mind, and vice versa." Reasons to want to do public doublecrux include: It helps showcase subtle mental moves that are hard to write down explicitly (i.e. tacit knowledge transfer.There's still something good and exciting about seeing high profile smart people talk about ideas. Having some variant of that format seems good for LessOnline. And having at least 1-2 "doublecruxes" rather than "debates" or "panels" or "interviews" seems good for culture setting.In addition to being "exciting" and "possible to learn from" to have public figures doublecrux, I think it'd also be nice from a culture setting standpoint. This is a place where people don't play rhetorical tricks to manipulate people – it's a place where people earnestly move towards the truth. Sidebar: Public Debate is also good although not what I'm gonna focus on here. I know several people who have argued that "debate-qua-debate" is also an important part of a truthseeking culture. It's fine if the individuals are trying to "present the best case for their position", so long as the collective process steers towards truth. Adversarial Collaboration is good. Public disagreement is good. I do generally buy this, although I have some disagreements with the people who argue most strongly for Debate. I think I prefer it to happen in written longform than in person, where charisma puts a heavier thumb on the scale. And I think while it can produce social good, many variants of it seem... kinda bad for the epistemic souls of the people participating? By becoming a champion for a particular idea, people seem to get more tunnel-vision-y about it. Sometimes worth it, but, I've felt some kind of missing mood here when arguing with people in the past. I'm happy to chat about this in the comments more but mostly won't be focusing on it here. Historically I think public doublecruxes have had some problems: First, having the live audience there makes it a bit more awkward and performative. It's harder to "earnestly truthseek" when there's a crowd you'd still kinda like to persuade of your idea, or at least not sound stupid in front of.Historically, people who have ended up doing "public doublecrux" hadn't actually really understood or really bought into the process. They often end up veering towards either classical debate, or "just kinda talking."When two people are actually changing *their* minds tend to get into idiosyncratic frames that are hard for observers to understand. Hell, it's even hard for two people in the discussion to understand. They're chasing their cruxes, rather than presenting "generally compelling arguments." This tends to require getting into weeds and go down rabbit holes that don't feel relevant to most people. With that in mind, here are some ideas: Maybe have the double cruxers in a private room, with videocameras. The talk is broadcast live to other conference-goers, but the actual chat is in a nice cozy room. This doesn't fully solve the "public awkwardness" problem, but maybe mediates it a bit.Have two (or three?) dedicated facilitators. More Dakka. More on that below. For the facilators: One is in the room with the doublecruxers, focused on helping them steer towards useful questions. They probably try to initially guide the participants towards communicating their basic position, and then ironing out their differences in ontology. They ask questions like "can you paraphrase what you think the other person's position is?". The second (and maybe third) facilitator hangs out with the audience outside, and is focused on tracking "what is the audience confused about?". The audience participates in a live google doc where they're organizing the conversational threads and asking questions. The first facilitator is periodically surreptitiously checking the google doc or chat, and maybe periodically summarizing their guess of the state-of-the-debate for the audience's benefit. Those were just some starting ideas, but my most important point here is to approach this as an unsolved "product development" problem. Invest in trial runs with different participants and audiences, with a specific eye towards identifying the problems and ironing out kinks. Some comments from other discussions I'd previously talked about this on facebook and twitter. Two comments that seemed particularly good to crosspost as potential ideas: Duncan Sabien suggested: My first off-the-top idea is actually more like Circling double crux. Two people are double cruxing (or similar) while a third party is right there with them, and periodically (after no less than 1min and no more than like 6min of back-and-forth) interrupts them to draw out "okay, what was going on there? What were you doing in your head? What was the goal of that probe?" etc. So the two main participants are spending half their time progressing on the object level, and half their time expositing about what's going on in their heads. Duncan didn't specify his goals here, but my interpretation (which seems worth exploring to me), is that this is meant to both: fix some problems where participants often are getting caught in various cognitive habits that just... aren't what they sat down to do. And explicitly attending to that helps one notice and (maybe) just step back out. Or, at least make a conscious choice about it. help each participant track what's going on in the other person's head, which they might otherwise be confused about.help the audience track why the heck the participants are talking about the things they're talking about. Divia noted: I’ve done some public double crux attempts! I’d say I had varying results. I found it super important for me to do a lot of cruxing mapping and repeated summarizing and checking Some of them turned into basically what I would call trying to understand one of the people’s positions and mostly ignoring the other one Here's a twitter thread about a double crux of Eli's that I liked: https://twitter.com/diviacaroline/status/1300866586736078849?t=6YPjIbROUmU6ukEBS492kw Meanwhile, on twitter Anna Salamon suggested: I think it’s wise to also map out anti cruxes: statements that both parties already agree about and expect to continue agreeing about regardless of how the discussion goes (that are as near as possible to the disagreement). Useful in private, more useful in public. I replied: ah yeah, that sounds right. (though I'm not really a fan of the "anti-crux" name for it, I'd naively just think that means "thing that doesn't matter") (I had always thought it'd make sense for 'the double crux' to be called 'the common crux', since it was more clear that it was shared between the people. And, if you had that, you might naturally call 'the things we both believe' the 'common ground') (I thought about trying to call it "Common Crux" in this post to facilitate my agenda of renaming it, but that seemed more likely to be confusing than helpful. If I end up pursuing this project in more detail I might push for it more tho) Ideas or Volunteers? Those are some takes for now. I'm not sure if I'm going to pursue this right now, but thought I'd leave these thoughts for now. I'm interested in both: ideas for how to either innovate on public doublecrux, or various other flavors of truthseeking public discussionsee if there are people interested in putting work into experimenting with this sort of thing. (I may be kind of picky about who I personally want to work with, but also think it's cool if other people run with the ideas in various directions on their own)
2024-05-11
https://www.lesswrong.com/posts/zhDkFvMFvyxhbfsrg/how-do-i-get-better-at-d-and-d-sci
zhDkFvMFvyxhbfsrg
How do I get better at D&D Sci?
FinalFormal2
I have no experience with data science, but D&D Sci seems fun and I would like to improve and get better at it. Where can/should I start?
2024-05-11
https://www.lesswrong.com/posts/3zHB5ypPfYs7mm9m7/resources-for-learning-about-poise-gracefulness
3zHB5ypPfYs7mm9m7
Resources for learning about poise / gracefulness?
David_Gross
I'm doing some initial investigation for a Notes on Virtues sequence write-up about poise & gracefulness (possibly touching also on related things like confidence, composure, cool, gravitas, command, bearing, etc.) seen as a virtue (a characteristic habit that promotes or exhibits human flourishing). Can you recommend some resources (e.g. books, papers, interviews) I would be wise to review for this? I'm especially interested in things that people can do to improve in this virtue; interventions that have some scientifically demonstrated support being ideal, but informal advice from exemplars and experts are also valuable.
2024-05-11
https://www.lesswrong.com/posts/nAR6yhptyMuwPLokc/new-intro-textbook-on-aixi
nAR6yhptyMuwPLokc
New intro textbook on AIXI
Alex_Altair
Marcus Hutter and his PhD students David Quarel and Elliot Catt have just published a new textbook called An Introduction to Universal Artificial Intelligence. "Universal AI" refers to the body of theory surrounding Hutter's AIXI, which is a model of ideal agency combining Solomonoff induction and reinforcement learning. Hutter has previously published a book-length exposition of AIXI in 2005, called just Universal Artificial Intelligence, and first introduced AIXI in a 2000 paper. I think UAI is well-written and organized, but it's certainly very dense. An introductory textbook is a welcome addition to the canon. I doubt IUAI will contain any novel results, though from the table of contents, it looks like it will incorporate some of the further research that has been done since his 2005 book. As is common, the textbook is partly based on his experiences teaching the material to students over many years, and is aimed at advanced undergraduates. I'm excited for this! Like any rationalist, I have plenty of opinions about problems with AIXI (it's not embedded, RL is the wrong frame for agents, etc) but as an agent foundations researcher, I think progress on foundational theory is critical for AI safety. Basic info Hutter's websiteReleasing on May 28th 2024Available in hardcover, paperback and ebook496 pages Table of contents: Part I: Introduction 1.  Introduction 2. Background Part II: Algorithmic Prediction 3. Bayesian Sequence Prediction 4. The Context Tree Weighting Algorithm 5. Variations on CTW Part III: A Family of Universal Agents 6. Agency 7. Universal Artificial Intelligence 8. Optimality of Universal Agents 9. Other Universal Agents 10. Multi-agent Setting Part IV: Approximating Universal Agents 11. AIXI-MDP 12.  Monte-Carlo AIXI with Context Tree Weighting 13. Computational Aspects Part V: Alternative Approaches 14. Feature Reinforcement Learning Part VI: Safety and Discussion 15.  AGI Safety 16.  Philosophy of AI
2024-05-11
https://www.lesswrong.com/posts/WnmToqeeLcxHFjLDi/questions-are-usually-too-cheap
WnmToqeeLcxHFjLDi
Questions are usually too cheap
Nathan Young
It is easier to ask than to answer. That’s my whole point. It is much cheaper to ask questions than answer them so beware of situations where it is implied that asking and answering are equal. Here are some examples: Let's say there is a maths game. I get a minute to ask questions. You get a minute to answer them. If you answer them all correctly, you win, if not, I do. Who will win? Preregister your answer. Okay, let's try. These questions took me roughly a minute to come up with. What's 56,789 * 45,387? What's the integral from -6 to 5π of sin(x cos^2(x))/tan(x^9) dx? What's the prime factorisation of 91435293173907507525437560876902107167279548147799415693153? Good luck. If I understand correctly, that last one's gonna take you at least an hour1 (or however long it takes to threaten me). Perhaps you hate maths. Let's do word problems then. Define the following words "antidisestablishmentarianism", "equatorial", "sanguine", "sanguinary", "escapology", "eschatology", "antideluvian", "cripuscular", "red", "meter", all the meanings of "do", and "fish". I don’t think anyone could do this without assistance. I tried it with Claude, which plausibly still failed2 the “fish” question, though we’ll return to that. I could do this for almost anything: Questions on any topic Certain types of procedural puzzles Asking for complicated explanations (we’ll revisit later) Forecasting questions This is the centre of my argument I see many situations where questions and answers are treated as symmetric. This is rarely the case. Instead, it is much more expensive to answer than to ask. Let’s try and find some counter examples. A calculator can solve allowable questions faster than you can type them in. A dictionary can provide allowable definitions faster than you can look them up. An LLM can sometimes answer some types of questions more cheaply in terms of inference costs than your time was worth in coming up with them. But then I just have to ask different questions. Calculators and dictionaries are often limited. And even the best calculation programs can’t solve prime factorisation questions more cheaply than I can write them. Likewise I could create LLM prompts that are very expensive for the best LLMs to answer well, eg “write a 10,000 word story about an [animal] who experiences [emotion] in a [location].” How this plays out Let’s go back to our game. Imagine you are sitting around and I turn up and demand to play the “answering game”. Perhaps I reference on your reputation. You call yourself a ‘person who knows things’, surely you can answer my questions? No? Are you a coward? Looks like you are wrong! And now you either have to spend your time answering or suffer some kind of social cost and allow me to say “I asked him questions but he never answered”. And whatever happens, you are distracted from what you were doing. Whether you were setting up an organisation or making a speech or just trying to have a nice day, now you have to focus on me. That’s costly. This seems like a common bad feature of discourse - someone asking questions cheaply and implying that the person answering them (or who is unable to) should do so just as cheaply and so it is fair. Here are some examples of this: Internet debates are weaponised cheap questions. Whoever speaks first in many debates often gets to frame the discussion and ask a load of questions and then when inevitably they aren’t answered, the implication is that the first speaker is right3. I don’t follow American school debate closely, but I sense it is even more of this, with people literally learning to speak faster so their opponents can’t process their points quickly enough to respond to them. Emails. Normally they exist within a framework of friends or colleagues, who understand when emails should be sent and feel obliged to respond. But at some point often emails get too cheap - anyone can send you one and those norms change or your email becomes unusable. I don’t want everyone in the world to be able to ask me any question they want, it’s too cheap. Freedom Of Information requests. I worked in Government for a while and anyone can ask for any information from any department. What follows is several days of a government employee’s time, either finding all the relevant documents and redacting them or giving a reason why the request is to be denied. Maddeningly, it’s also possible to send a ‘Freedom Of Information Meta-Request’, which then requires all documents about the original request. The costs are huge and all from a single email. Further examples include political interviews, nagging questions on social media, support boxes on websites (and why they slowly fall in quality). In all these cases, the system might work when there are established norms to limit the number or style of questions, but allowing anyone to ask these questions quickly becomes unbalanced. Often when looking at norms of privacy or status, I ask myself “which cheap but generally acceptable interaction is being artificially limited4”. I prefer debates amongst groups who know and respect one another, giving time for longer answers. Between friends a question cannot be repeated for political effect. Email addresses are often hidden, or messaging is inside walled gardens, like Slack or Microsoft Teams. Companies do not give anyone the right to ask any question - comms departments often limit questions to employees. Does anyone think otherwise? Maybe this doesn’t sound like an insight, but it’s an observation I make relatively often and it explains for me why several of the above systems don’t work. What I do personally Notice this asymmetry. When I request information, it is often like I am charging the person perhaps somewhere between 10¢ and $100. Now they have to stop and decide whether to answer. Would I be happy to ask to borrow this much money from them? Would others endorse my asking? If not, perhaps I shouldn’t ask, even if the space allows it. Before I ask it, I can think about whether it is a high priority - whether I would pay to know the answer, if more important than what they are currently doing. I might want to try again to find the answer myself - googling it or asking someone else. I can think about ways to make the question easier to answer. Most emails can, in my experience, be framed as a set of yes/no questions. Shall I do X? Is $300 too much? If I am talking to a colleague I try to do the work so that my email is easy to answer. I really like these suggestions for making my emails more likely to be responded to. I seek to mirror the amount of work the other person is doing. If I ask a question and someone responds in a clipped answer, I might not ask another. I dislike twitter users who respond to 1 tweet with 10. It is predictive that they don’t understand boundaries and turn-taking, which doesn’t bode well for future discussion. How to fix this in systems If a community or process lets anyone ask questions, I don’t expect it to work at scale. If it does work there is often something going on to covertly reduce question numbers. I filled in a UK government consultation recently. Anyone could comment on the process, which sounds like an expensive thing to offer for free, unless it was to be ignored. But was about 100 questions long, with submission at the end. They’ll let anyone answer, but only if you are willing to put in the time to click through every box. So it becomes more expensive again. Many systems have ways to make questions more expensive: Limit question asking. On Twitter5, new private messages are sometimes limited to people you follow. You have made a choice to interact with someone before they can message you. This adds an additional price to questions. Rebalance. It is normal in panel discussions for every question to have a 5 minute answer6. There is already the assumption that questions are hard to answer. Debates likewise could have systems where each debater takes the role of ‘asking questions and waiting for them to be answered’. If you just ask 50 hard questions you wast your time and it doesn’t look like you’ve got the upper hand. Voting. Have unlimited questions but only answer the top ones according to some system. This works pretty well as long as those answering actually answer the top questions. In the Civil Service, during the departmental question time,, the top questions were usually about pay and were ignored. This damages trust in the whole process. Make questions easier to answer. Stack Overflow is notorious for the rigid style of their questions and answers. I remember once someone removed ‘thank you’ from one of my replies, because Stack Overflow seeks to keep text as succinct as possible. I found that annoying at first, but over time, seeing the quality of replies I saw the benefit of only allowing very high quality messages in a certain format. Increasing the average question and answer makes the site a lot more useful. Charge for it. Patreon allows people to interact with their favourite content creators, but often only if they pay the subscription. This provides good incentives for the content creator (who wants money) and for the user (who wants to talk to a high status person) and evens out the discrepancy between them. It’s much more appropriate to ask questions on a members’ area or a reddit AMA than if you see a celebrity in the street. Is this an externality? I sense cheap questions are an externality - when the costs of benefits of a transaction aren’t to those involved in the transaction. We have created a system where often questions - in debates, interviews, public interactions, social media - are free to ask, but this imposes costs on the answerer7. This leads to them being over-consumed for what is optimal for the system. Most of my solutions here are classic solutions to externalities (norms, regulation, restriction, taxation), so it seems likely it is one or is very close. I recommend everyone learn what externalities are. It’s a concept that I think about perhaps once a week, in a very broad range of contexts. Here is a quick video Conclusion: Questions are too cheap If anyone can ask a free question, then anyone can ask 10 questions, or a question that is ten times as difficult to answer. Most systems cannot sustain this and in small groups we have norms to manage it. It is worth noticing how this breaks down at scale. Both I and systems I am involved in can find ways to charge for questions, via membership, votes, consent or money, so that they are no longer too cheap. That’s all folks! Any questions? How hard could they be?
2024-05-11
https://www.lesswrong.com/posts/HvMJ8pgbFeBKZsy2t/dead-post-2
HvMJ8pgbFeBKZsy2t
dead post 2
David Chapel
null
2024-05-11
https://www.lesswrong.com/posts/BvaJzoFJ4JqukP3kp/ethics-and-prospects-of-ai-related-jobs
BvaJzoFJ4JqukP3kp
Ethics and prospects of AI related jobs?
dr_s
I've been on the lookout for new jobs recently and one thing I have noticed is that the market seems flooded with ads for AI-related jobs. What I mean is not work on building models (or aligning them, alas), but rather, work on building applications using generative AI or other advances to make new software products. My impression of this is that first, there's probably something of a bubble, because I doubt many of these ideas can deliver on their promises, especially as they rely so heavily on still pretty unreliable LLMs and such. And second, that while the jobs are well paid and sound fun, I'm not sure how I feel about them. These jobs all essentially aim at automating away other jobs, one way or another. That is a good thing only insofar as various other things happen, and depending on the specific job and quality of the work - a good automated GP for diagnosis would probably do a lot of good, but a rushed one might be net negative, and automating creative work is IMO just the wrong road to go down in general if we want good AI futures. What are your intuitions about this? Which kinds of AI jobs do you consider having more potential for overall positive/negative value for society?
2024-05-11
https://www.lesswrong.com/posts/8pmimprBizW8Qpf5o/should-i-finish-my-bachelor-s-degree
8pmimprBizW8Qpf5o
Should I Finish My Bachelor's Degree?
Zack_M_Davis
To some, it might seem like a strange question. If you think of being college-educated as a marker of class (or personhood), the fact that I don't have a degree at age of thirty-six (!!) probably looks like a scandalous anomaly, which it would be only natural for me to want to remediate at the earliest opportunity. I deeply resent that entire worldview—not because I've rejected education, properly understood. On the contrary. The study of literature, history, mathematics, science—these things are among the noblest pursuits in life, sources of highest pleasure and deepest meaning. It's precisely because I value education so much that I can't stand to see it conflated with school and its culture of bureaucratic servitude where no one cares what you know and no one cares what you can do; they just want you to sit in a room and obey the commands of the designated teacher. Whereas in reality, knowledge doesn't come from "taking courses." How could it? Knowledge comes from quality study and practice. Sure, it's possible that someone could study in order to "pass" a "class" that they're "taking" in school. But once you know how and why to study, it's not clear what value the school is adding that can't be gotten better, cheaper, elsewhere. Just get the books. (And start a blog, go to meetups, chat to large language models, hire a private tutor—whatever makes sense to get better at doing the things you want to do, without having to worry about whether the thing that makes sense can be made legible to distant bureaucrats.) The people who believe in being college-educated probably don't believe me. They probably think my pæans to the glory of self-study are the rationalizations of a lazy student who doesn't want to work hard. I can understand some reasons for skepticism. Sometimes people really are lazy, and suffer from self-serving delusions. Probably there are some confused people out there who have mistaken consumer edutainment for production scholarship and—maybe, somehow—could benefit from being set straight by the firm tutelage of the standard bureaucratic authority. But without vouching for everyone who calls themself an autodidact, I think I can present third-party-visible evidence that my self-study is for real? I worked as a software engineer for eight years; I have 173 commits in the Rust compiler; I wrote a chess engine; I've blogged 400,000 words over the past dozen years on topics from mathematics and machine learning, to formal epistemology and the philosophy of language, to politics and differential psychology, and much more. This is not the portfolio of an uneducated person. If someone is considering working with me and isn't sure of my competence, they're welcome to look at my output and judge for themselves. (And I'm happy to take a test when that makes sense.) If someone would otherwise consider working with me, but are put off by the lack of a mystical piece of paper from the standard bureaucratic authority, that's their loss—maybe I don't want to work with someone with so little discernment. If I believe everything I just wrote, explaining why I have nothing particularly to gain and nothing particularly to prove by jumping through a few more hoops to get the mystical piece of paper, then ... why am I considering it? One possible answer is that it passes a cost–benefit analysis mostly by virtue of the costs being low, rather than the benefits being particularly high. I'm at a time in my life where I have enough money from my previous dayjob and enough uncertainty about how long the world is going to last, that I prefer having lots of free time to work on things that interest me or add dignity to the existential risk situation, than to continue grinding at software dayjobs. So if my schedule isn't being constrained by a dayjob for now, why not "take" some "classes" and finish off the mystical piece of paper? Continuing from where I left off in 2013 due to being rescued by the software industry, I need five more math courses and three more gen-eds to finish a B.A. in math at San Francisco State University, which I can knock out in two semesters. The commute is terrible, but I can choose my schedule to only be on campus a couple days a week. And then if it makes sense to go get another dayjob later, "I finished my Bachelor's degree" is a legible résumé-gap excuse (easier to explain to semi-normies with hiring authority than "I finished my 80,000-word memoir of religious betrayal"). In short, why not?—if I'm going to do it ever, now is a convenient time, and eight classes is a sufficiently small cost that it makes sense to do it ever (conditional on the world not ending immediately). A less comfortable possible answer is that maybe I do have something to prove. I often wonder why I seem to be so alone in my hatred of school as an intellectual. The people who are smart enough to do well in school are presumably also smart enough to have intellectual lives outside of school. Why do people put up with it? Why is there a presumption that there must be something wrong with someone who didn't finish the standard course? I think part of the answer is that, separately from whether the standard course makes sense as a class or personhood marker, once the signaling regime has been established, it's mostly true that people who don't finish the standard course probably have something wrong with them. Separately from the fact that I'm obviously right that my personal passion projects are more intellectually meritorious than the busywork school demanded of me, there's also something wrong with me. My not finishing the first time at UC Santa Cruz (expected class of 2010) wasn't just a matter of opportunity costs. I also had obscure psychological problems unrelated to my intellectual ability to do the work, which were particularly triggered by the school environment (and thankfully aren't triggered by software industry employment relations). Someone with my talents who wasn't crazy probably would have arranged to finish on time for pragmatic reasons (notwithstanding the injustice of the whole system). This makes it slightly less confusing that the system hasn't been overthrown. It's not that school somehow has a monopoly on learning itself. It's that people who are good at learning mostly don't have problems getting the mystical piece of paper granting them legal and social privileges, and therefore don't have a chip on their shoulder about not having it. If that were the entirety of the matter, it wouldn't present a sufficient reason for me to finish. There would be be little point in proving to anyone that I've outgrown my youthful mental health problems by showing that I can endure the same abuses as everyone else, when anything I might want to prove to someone is proven better by my history of making real things in the real world (code that profitable businesses pay for, blog posts that people want to read of their own volition). But it gets worse. It may just be possible that I have something prove intellectually, not just psychologically. In 2010, after studying math on my own for a couple years (having quit the University at Santa Cruz in 2007), I enrolled in a differential equations class at the local community college, expecting to do well and validate the glory of my self-study. I was actually interested in math. Surely that would put me at an advantage over ordinary community college students who only knew how to do as they were told? In fact, I did poorly, scraping by with a C. No doubt the people who believe in being college-educated will take this as proof of their worldview that nothing of intellectual value happens outside of schools, that anyone who thinks they learned something from a book that wasn't assigned by their officially designated instructor is only deluding themselves. Ultimately, I don't think this is the correct moral. (If a poor performance in that one class counts as evidence against the hypothesis that I know what I'm doing, then good or dominant performances elsewhere—including in other school math classes—count as evidence for; a full discussion of the exact subskill deficits leading to my differential equations debacle is beyond the scope of this post.) But even if the people who believe in being college-educated are ultimately wrong, I'm haunted by the fact they're not obviously wrong. The fact that my expectations were so miscalibrated about the extent to which my being "into math" would easily convert into proficiency at finicky differential equations computations makes it less credible to just point at my work online and say, "Come on, I'm obviously the equal of your standard STEM graduate, even if I don't have the mystical piece of paper." If that were the entirety of the matter, it still wouldn't present a sufficient reason for me to finish. Desperately trying to prove one's worth to the image of an insensible Other is just no way to live. When I was at SF State in 2012 (having endured the constant insults of three-plus semesters of community college, and my father being unwilling to pay for me to go back to Santa Cruz), it was for the perceived lack of other opportunities—and I was miserable, wondering when would my life begin. Whatever resources the university might have offered towards my genuine intellectual ambitions were tainted by the bitterness that I mostly wasn't there to learn math; I was there because I felt coerced into proving that I could join the ranks of the college educated. But now that I've earned some of my own money (and for unrelated reasons feel like my life is over rather than waiting to begin), the relative balance of motivations has shifted. Getting the mystical piece of paper is still a factor, but now that it feels like I have a real choice, I think I can seek advantage in the situation with less bitterness. It helps that I only have a few "general education" requirements left, which I experience as insulting obedience tests that are wholly inferior to my free reading and blogging, regardless of the quality of the professor. In contrast, I can regard some upper-division math classes as a worthy challenge. (Yes, even at SFSU. I am not very intelligent.) Learning math is hard and expensive: I can see how it makes sense to organize a coordinated "class" in which everyone is studying the same thing, with assignments and tests for feedback and calibration. It doesn't seem like a betrayal of the divine to want to experience meeting that external standard with pride—now that I'm less crazy, now that I have a real choice, now that my life is otherwise over anyway. I'm not committed yet (the admissions office is supposed to get back to me), but I'm currently leaning towards doing it.
2024-05-11
https://www.lesswrong.com/posts/xLvQwZFq5onx6pABT/custom-audio-switch-box
xLvQwZFq5onx6pABT
Custom Audio Switch Box
jkaufman
When I play live I have a bunch of instruments, including: Mandolin: an electric mandolin Computer: a custom MIDI mapper driven by keyboard, foot drums, and breath controller. Bass whistle: a whistle-controlled bass synthesizer I also have some effects, primarily a talkbox and an audio-to-audio synth pedal. Normally I route the mandolin into the effects, but I've recently been wanting more options: The computer effects are a lot of fun routed through the talkbox. If I set the bass whistle to emit just a sine wave, and pipe that into the synth pedal, I can control professionally-designed sounds: The thing that makes this tricky is that I want to be able to play mandolin direct (which goes via the talkbox output) at the same time as playing bass whistle (which goes via the pedals output). I sketched a lot of options: And eventually realized I only needed something simple: This lets me switch between two modes: Mandolin goes to talkbox, whistle goes to pedals. Mandolin goes to pedals, computer goes to talkbox. I wired something up with a 3PDT switch, five 1/4" jacks, and a couple wagos, in a a small case: The 1/4" jacks are very slightly too big for the case, so I needed to make them very slightly smaller. Which I did (not recommended) by melting away the offending region with the soldering iron. I tested it with my mandolin pretending to be each instrument, and it seems to work! The main risk is that the volumes of the input instruments aren't in the right range, which I won't know until I combine all three instruments. If that's the situation I think I can add some resistors to balance it out. If the levels are decent, though, I'll give it a go at Porchfest tomorrow! Comment via: facebook, mastodon
2024-05-11
https://www.lesswrong.com/posts/Z87fSrxQb4yLXKcTk/mats-winter-2023-24-retrospective
Z87fSrxQb4yLXKcTk
MATS Winter 2023-24 Retrospective
utilistrutil
Co-Authors: @Rocket, @LauraVaughan, @McKennaFitzgerald, @Christian Smith, @Juan Gil, @Henry Sleight, @Matthew Wearden, @Ryan Kidd The ML Alignment & Theory Scholars program (MATS) is an education and research mentorship program for researchers entering the field of AI safety. This winter, we held the fifth iteration of the MATS program, in which 63 scholars received mentorship from 20 research mentors. In this post, we motivate and explain the elements of the program, evaluate our impact, and identify areas for improving future programs. Summary Key details about the Winter Program: The four main changes we made after our Summer program were:Reducing our scholar stipend from $40/h to $30/h based on alumni feedback;Transitioning Scholar Support to Research Management;Using the full Lighthaven campus for office space as well as housing;Replacing Alignment 201 with AI Strategy Discussions.Educational attainment of MATS scholars:48% of scholars were pursuing a bachelor’s degree, master’s degree, or PhD;17% of scholars had a master’s degree as their highest level of education;10% of scholars had a PhD.If not for MATS, scholars might have spent their counterfactual winters on the following pursuits (multiple responses allowed):Conducting independent alignment research without mentor (24%);Working at a non-alignment tech company (21%);Conducting independent alignment research with a mentor (13%);Taking classes (13%). Key takeaways from scholar impact evaluation: Scholars are highly likely to recommend MATS to a friend or colleague (average likelihood is 9.2/10 and NPS is +74).Scholars rated the mentorship they received highly (average rating is 8.1/10).For 38% of scholars, mentorship was the most valuable element of MATS.Scholars are likely to recommend Research Management to future scholars (average likelihood is 7.9/10 and NPS is +23).The median scholar valued Research Management at $1000.The median scholar reported accomplishing 10% more at MATS because of Research Management and gaining 10 productive hours.The median scholar made 5 professional connections and found 5 potential future collaborators during MATS.The average scholar self-assessed their improvement on the depth of their technical skills by +1.53/10, their breadth of knowledge by +1.93/10, their research taste by +1.35/10, and their theory of change construction by +1.25/10.According to mentors, of the 56 scholars evaluated, 77% could achieve a “First-author paper at top conference,” 41% could receive a “Job offer from AI lab safety team,” and 16% could “Found a new AI safety research org.”Mentors were enthusiastic for scholars to continue their research, rating the average scholar 8.1/10, on a scale where 10 represented “Very strongly believe scholar should receive support to continue research.”Scholars completed two milestone assignments, a research plan and a presentation.Research plans were graded by MATS alumni; the median score was 76/100.Presentations received crowdsourced evaluations; the median score was 86/100.52% of presentations featured interpretability research, representing a significant proportion of the cohort’s research interests.After MATS, scholars reported facing fewer obstacles to a successful alignment career than they did at the start of the program.The obstacles that decreased the most were “mentorship,” “collaborators,” experience with a specific alignment agenda, and “grant proposal writing.”“Funding” increased as an obstacle over the course of MATS.At the end of the program, “publication record” was an obstacle for over 60% of scholars. Key takeaways from mentor impact evaluation: Mentors are highly likely to recommend MATS to other researchers (average likelihood is 8.2/10 and NPS is +37).Mentors are likely to recommend Research Management (average likelihood is 7.7/10 and NPS is +7).The median mentor valued Research Management at $3000.The median mentor reported accomplishing 10% more because of Research Management and gaining 4 productive hours.The most common benefits of mentoring were “helping new researchers,” “gaining mentorship experience,” “advancing AI safety, generally,” and “advancing my particular projects.”Mentors improved their mentorship abilities by 18%, on average. Key changes we plan to make for future cohorts: Introducing an advisory board for mentor selection;Shifting our research portfolio away from such a large interpretability dominance;Supporting AI governance mentors;Pre-screening more applicants for their software engineering abilities; andModifying our format for AI safety strategy discussion groups. Theory of Change MATS expands the talent pipeline for AI safety research by empowering scholars to work on AI safety at existing organizations, found new organizations, or pursue independent research. To this end, MATS connects scholars with senior research mentors and reduces the barriers for these mentors to take on mentees by providing funding, housing, training, office spaces, research management, networking opportunities, and logistical support. By mentoring MATS scholars, these senior researchers benefit from research assistance and improve their mentorship skills, preparing them to supervise future research more effectively. MATS aims to select and develop research scholars on three primary dimensions: Depth: Thorough understanding of a specialist field of AI safety research and sufficient technical ability (e.g., ML, CS, math) to pursue novel research in this field.Breadth: Broad familiarity with the AI safety landscape, including organizations and research agendas; a large “toolbox” of useful theorems and knowledge from diverse subfields; and the ability to conduct literature reviews.Taste: Good judgment about research direction and strategy, including what (sub-)questions to investigate, what hypotheses to consider, what assumptions to test, how to measure progress, how to present findings, how to balance risks such as capabilities advancements, and when to cease or pivot a line of research. Read more about our theory of change here. An article expanding on our theory of change is forthcoming. Winter Program Overview Schedule The Winter 2023-24 Program had three phases. Training Phase: Neel Nanda’s scholars participated in a month-long remote curriculum, culminating in a Research Sprint, which informed acceptance decisions for the Research Phase. For other scholars, a series of AI Safety Strategy Discussions replaced our traditional Training Phase (see Strategy Discussions below).Research Phase: In Berkeley, scholars conducted research under a mentor, submitted a written “Research Plan,” networked with the Bay Area AI safety community, connected with collaborators, and presented their findings to an audience of peers and professional researchers.Extension Phase: Many scholars applied to continue their projects in London and Berkeley, with ongoing mentorship and support from MATS. Mentor Selection Approach At a high level of abstraction, MATS faces a complex optimization problem: how should we select mentors and, by extension, scholars that, in expectation, most reduce catastrophic risk from AI? One reason this problem is especially hard is that the value of a portfolio of mentors depends on non-additive interactions; that is, the marginal value of a mentor depends on which other mentors have already been selected working on similar or complementary research agendas. Conscious of these interactions, we aim to construct a “diverse portfolio of research bets” that might contribute to AI safety even if some research agendas prove unviable or require complementary approaches. To make this problem more tractable, we rely on a number of simplifications and heuristics. Firstly, we take a greedy approach to program impact, focusing on improving the next program rather than committing significant efforts to future programs well in advance. We are unlikely to, for example, save significant funding for a future program when we could use that funding for an additional scholar in the upcoming program, particularly as the marginal rejected scholars are generally quite talented. Secondly, we take a mentor-centric approach to scholar selection, beginning with mentor selection and then conducting scholar selection to mentor specifications. In part this is because the number of scholars we can support in a given program depends on the level of funding we receive, which is somewhat determined by funders’ enthusiasm for the mentors in that program. Primarily, however, we adopt a mentor-centric approach because we believe that mentors are the best judge of contributors to their research projects. We ask potential mentors to submit an expression of interest form, which is then reviewed by the MATS Executive. When evaluating potential mentors, we chiefly consider these heuristics: On the margin, how much do we expect this research agenda to reduce catastrophic risk from AI?What do trusted experts think about this individual and their research agenda?How much “exploration value” would come from supporting this research agenda? Could the agenda catalyze a paradigm shift, if necessary?How much research and mentoring experience does this individual have?How would this individual spend their time if they were not supported by MATS?How would supporting this individual and their scholars shape the development of ideas and talent in the broader AI safety ecosystem? Heuristics 5 and 6 are systemic considerations, where we assess the whole-field impact of supporting mentors suggested by heuristics 1-4. For example, if a mentor is already well-served by an academic talent pipeline or hiring infrastructure, perhaps we do not provide significant marginal value. Additionally, if we support too many mentors pursuing a particular style of research, we might unintentionally steer the composition of the AI safety research field, given the size of our program (~60 scholars) relative to the size of the field (~500 technical researchers). Scholar Allocation Because MATS was funding-constrained in Winter 2023-24, we could not admit as many scholars as our selected mentors would have preferred. Consequently, the MATS Executive allocated scholar “slots” to mentors based on the value of the marginal scholar for each mentor (up to the mentors’ self-expressed caps). For example, we might have supported an experienced mentor with four scholar slots before supporting a less experienced mentor with one scholar slot. Conversely, allocating one scholar to a new mentor might have offered higher marginal value than providing an eighth scholar to an experienced mentor. We surveyed mentors about the opportunity cost of taking on an additional scholar. For 95% of mentors, their time spent on an additional scholar would have traded off against progress on their own projects. Winter Mentor Portfolio Of the 23 mentors in the Winter 2023-24 program, 7 were returning from a previous MATS program: Jesse Clifton, Evan Hubinger, Vanessa Kosoy, Jeffrey Ladish, Neel Nanda, Lee Sharkey, and Alex Turner. We welcomed 16 new mentors for the winter: Adrià Garriga Alonso, Stephen Casper, David ‘davidad’ Dalrymple, Shi Feng, Jesse Hoogland, Erik Jenner, David Lindner, Julian Michael, Daniel Murfet, Caspar Oesterheld, David Rein, Jessica Rumbelow, Jérémy Scheurer, Asa Cooper Stickland, Francis Rhys Ward, and Andy Zou. These mentors represented 10 research agendas: Agent FoundationsConcept-Based InterpretabilityCooperative AIDeceptive AlignmentDevelopmental InterpretabilityEvaluating Dangerous CapabilitiesMechanistic InterpretabilityProvable AI SafetyScalable OversightUnderstanding AI Hacking Mentors’ Counterfactual Winters We surveyed mentors about how they would have spent their winters if they had not mentored for MATS. For 74% of respondents, mentoring traded off against time spent on the same projects they pursued outside of MATS. 26% would have spent some counterfactual time on projects they did not have time for during MATS. Other Mentorship Programs Some mentors indicated that if they had not mentored for MATS, they would have spent their counterfactual winters mentoring for a different program. Concurrent with the winter MATS Program, Constellation, a Berkeley AI safety office, ran two research upskilling programs: the Astra Fellowship, and the Visiting Researchers Program. Mentors who indicated they would have counterfactually mentored for a different program may have had Constellation or another program in mind. Scholar Selection Our application process for scholars was highly competitive. Of 429 applicants, 15% were accepted for the Research Phase.[1] Five of these 63 scholars participated entirely remotely; the rest participated in-person for at least part of the program. There was considerable variance in acceptance rates between mentors (2.6% to 33%). Mentors generally chose to screen applicants with rigorous questions and tasks.[2] Why employ such a difficult application process? First, we believe that the distribution of expected impact from prospective scholars is long-tailed, such that most of MATS’ expected impact can be attributed to identifying and developing talent among the most promising applicants. Second, we are bottlenecked on scholar funding and mentor time, so we cannot accept every good applicant. For these two reasons, it is imperative that our application process achieves resolution in the talent tail. From our Theory of Change: We believe that our limiting constraint is mentor time. This means we wish to have strong filtering mechanisms (e.g. candidate selection questions) to ensure that each applicant is suitable for each mentor. We’d rather risk rejecting a strong participant than admitting a weak participant. In particular, given that 74% of mentors would have used their time spent mentoring on their (highly valuable) research if not in program, we feel justified in our high bar for scholar acceptance. We additionally believe, based on our conversations with past applicants and scholars, that the challenging selection problems are often seen as a fun and useful skill-building exercise. Our rigorous application process reduces noise in our assessments of applicant quality, even at the cost of discouraging some potentially promising candidates from applying. Additional filters reduced the number of scholars admitted to the Extension Phase from the Research Phase.[3] Strong applicants have demonstrated their abilities in terms of the depth, breadth, and taste dimensions described above. During pre-program recruiting, we informed prospective applicants that our ideal candidate possesses a breadth of AI safety knowledge equivalent to having completed AISF’s Alignment Course, experience with technical research at a postgraduate level (e.g., in CS, ML, math, or physics), and a motivation to pursue a career in AI safety to mitigate catastrophic risk. We clarified that, for some mentors, engineering experience can substitute for research experience. We also encouraged people who did not fit these criteria to apply, noting that many past applicants without high expectations were accepted. Educational Attainment of Scholars A variety of educational and employment backgrounds were represented among the admitted cohort. 3% of scholars had no college education, 38% of scholars had at most a bachelor’s degree or were in college, 30% had at most a master’s degree or were in a master’s program, 28% had a PhD or were in a PhD program. Almost half of scholars were students. The high level of academic and engineering talent entering AI safety continues to impress us, but we remain committed to increasing outreach to experienced researchers and engineers. Scholars’ Counterfactual Winters At the start of the program, we prompted scholars, “If MATS didn't exist, this winter I would most likely be…”, instructing them to “pick the category that best describes your most likely counterfactual situation.” Of the 8 scholars who would have conducted mentored alignment research in the absence of MATS, 4 are undergraduate students, and 3 are pursuing a graduate degree. Elaborating on their alternative summer plans, many respondents mentioned non-safety technical roles, coursework, and independent research: “Day job, machine learning research engineer to publish papers/develop products.”“Working a regular software development job”“I would probably have taken a job in industry as a research engineer at an AI lab (although this may not have been realistic). Alternatively I may have done an internship in an alignment organisation.”“I'd be doing similar research but more independently and with less mentorship.”“More school + possibly working under a phd student at school on a less exciting/alignment-relevant project”“The Astra fellowship. If that didn't exist either, I'd be doing my PhD research”“I'd research AI safety and alignment with my advisor and collaborators” Engineering Tests In our Summer 2023 Retrospective, we expressed our intention to improve our application filtering process. One component of a better filter for some streams is increased screening for software engineering skill. This can be a costly evaluation process for mentors to run themselves, so for the Winter 2023-24 Program we contracted a SWE evaluation organization to screen applicants for empirical ML research streams. At the suggestion of Ethan Perez, we contracted CodeSignal to administer engineering tests, which focused on general programming ability, rather than ML experience. Ideally, every applicant to a mentor who focused on empirical research would have been pre-screened with a CodeSignal test, saving the mentor time on scholar selection. Due to budget constraints, we only had enough tests for 20-32% of each mentor’s applicants, so mentors had to manually review each application to determine who would receive a CodeSignal test. For the next program, we expect to administer CodeSignal tests more widely (see Pre-Screening with CodeSignal below). Of the accepted scholars, 40% achieved perfect CodeSignal scores of 600/600, compared to 15% of applicants. Notably, some applicants who achieved perfect scores were not accepted into the Research Phase. Two mentors disproportionately accounted for the accepted scholars with lower scores. One of these mentors rejected an applicant with a perfect score and an applicant with a near-perfect score, but indicated in a later survey that they wished their applicants had more programming experience. Stipends Financial barriers can prevent promising researchers from entering AI safety, so providing grants (referred to here as stipends) to scholars is an important component of programs like ours. While MATS does not provide funding directly to scholars (except via office and housing support and reimbursements for travel and computing costs), AI Safety Support has offered to provide grants to scholars who complete the program. During the 8-week Summer 2023 program, stipends were set at $40/h. Following that program, surveys of our alumni revealed that many would have done MATS for a lower stipend. The average surveyed alumnus would have participated in MATS for a stipend of $2160/month, or $13.50/h at 40 h/week. Note that 35% of the surveyed alumni said they would have participated in MATS for free; if we discount these alumni, the average minimum stipend was $3486/month, or $21.79/h at 40 h/week. MATS and AI Safety Support ultimately chose a stipend of $4800/month, or $30/h, for the Winter 2023-34 Program because it would have satisfied 85% of alumni, was more than double the average minimum stipend, and was nevertheless larger than that offered by comparable academic mentorship programs. Constellation subsequently chose the same rate for those participating in their Winter programs. We followed up with some of these alumni to ask for explanations of their low numbers. Some cited savings, alternative funding sources, jobs to return to, the in-kind benefits MATS provides (housing, office, food, etc), and expectations of low compensation as they pivoted into AI safety. One alumnus commented that, while they would have done MATS without a stipend, the stipend was an honest signal that “this is a serious programme that expects to get serious people . . . not just fresh grads with nothing else to do.” We are cognizant that these alumni are reporting with the benefit of hindsight, so their answers may not reflect the positions of would-be applicants who do not yet know how valuable MATS will be for them. Moreover, these responses are subject to survivorship bias: we could never observe alumni for whom the stipend was too low, because such people would not have joined the program. But these caveats would still apply no matter how high of a stipend we chose. Our alumni responses indicated that we could afford to reduce the stipend without jeopardizing the talent of our applicant pool. For a fixed budget, setting a lower stipend allows MATS to fund more scholars.[4]At $30/h, MATS was able to accept ~18 more scholars than we could have accommodated under the previous program’s rate. Anecdotally, we have heard of one individual who was deterred from applying but would have applied if the stipend was higher. Given the tradeoff we face, we expect that there will always be cases like this, even under the optimal stipend amount. Mentor Suggestions We surveyed mentors about their experiences with our Scholar Selection process. Mentors recommended some improvements: Mentors elaborated on “Other improvements”: “I got some mixed messages about how available the coding interview option is. So I spent some time thinking about which of my applicants I'd want coding interviews for and then got a response saying sth along the lines of "yeah, actually we really only want to do this in cases where it's really necessary" and then no coding interviews happened in the end.”“I followed a relatively rigorous process (spreadsheet with a bunch of scores, aggregated in some way) for evaluating applicants to avoid [conflicts of interest (CoIs)] (because I knew/was friends with) at least one of the applicants. (I was asked to follow such a process when I mentioned CoIs.) This was kind of a pain. It took a lot of time, without offering huge benefits. Anyway, to some extent this was self-inflicted. I probably could have come up with a simpler system.”“Better coordination across streams.” One mentor followed up on their answer to say “I selected "more qualified candidates to choose from" because that always seems better, but I want to be clear that I'm already impressed by MATS scholars (and think it's definitely worth my time mentoring them).” We also asked mentors how their scholars could have been better suited for MATS. One mentor elaborated, “I mean I'd love for my scholars to be better at everything for sure! But at some point they wouldn't want to be my scholars anymore. I think as it was my scholars were very good. One or two are hidden talents where lack of e.g. communication is blocking them from being a great researcher but otherwise they would be.” To improve scholars along these dimensions, MATS could offer targeted training, but we could also strengthen the quality of the applicant pool and improve our selection process to choose the most qualified applicants. Neel Nanda’s Training Phase (Nov 20-Dec 22) In past programs, MATS has used AI Safety Fundamentals’ Alignment 201 course to prepare incoming scholars for the Research Phase. Due to time constraints and scholar feedback, we replaced Alignment 201 with a custom curriculum that scholars completed throughout the Research Phase (see Strategy Discussions below). Neel Nanda followed an expedited schedule to run his Training Phase, which plays an indispensable role in his scholar selection process, as well as providing a valuable program in its own right. Neel accepted 30 scholars into the Training Phase, of whom 10 progressed to the Research Phase.[5] The first three weeks included live sessions of Neel doing research, brainstorming open problems, and lecturing on Mechanistic Interpretability (core techniques, SAEs); readings groups for papers (Mathematical Framework, Indirect Object Identification, Toy Models of Superposition, and Towards Monosemanticity); discussion groups on topics like “Is Mech Interp Useful?”; and remote opportunities to connect with collaborators. Participants worked together on research challenges, including extensive pair programming. The second two weeks were dedicated to a Research Sprint, which participants completed in pairs. Neel made acceptance decisions for the Research Phase primarily on the basis of performance in the Sprint. We think that participating in Neel’s Training Phase was likely worth it even for those scholars who did not progress to the next phase, as evidenced by the large number of trainees who elected to audit Neel’s Training Phase without the possibility of progressing and testimonials from past trainees.[6] 24 of Neel’s trainees received a prorated stipend for their participation: $4.8k for this program. 15 others participated in the Training Phase without a stipend due to funding constraints. Accepted pairs of scholars continued work on their Sprint projects in the Research Phase. Research Phase Elements (Jan 8-Mar 15) While mentorship is the core of MATS, additional program elements supply other sources of learning and upskilling. Guest researchers host seminars to deliver technical information to scholars, workshops teach research tools that scholars can apply in their projects, and Research Management aims to improve working relationships between scholars and mentors. The next sections elaborate on each of these elements, along with six more elements that fill out our program offerings: Milestone Assignments, the Lighthaven Office, Strategy Discussions, Networking Events, Social Events, and Community Health. Mentorship We believe conducting research under an experienced mentor is a crucial input to the development of research leads. Mentors meet with their scholars at least once a week, and some meet more frequently.  Mentors differ in their priorities, styles, and expectations. To communicate these differences to applicants, each mentor composed a personal fit statement. For example, the NYU ARG mentorship team broadcasted: Each mentor will likely lead separate projects, each of which will have a small team of mentees, although mentees and mentors will help out with/provide feedback to other projects whenever it is useful to do so. Mentees will be able to choose a project/mentor at the beginning of the program (or propose their own project, assuming it aligns with mentor interest). Mentorship for scholars will likely involve: 1 hour weekly meetings for each project, and occasional 1:1 meetings with a mentor + "all hands" meetings with all participants in the streamProviding detailed feedback on write-up draftsSlack response times typically ≤ 48 hours According to scholar ratings, the median scholar’s mentor spent 2.0 hours communicating with them every week, and 3.0 hours for the average scholar. The average mentor was effective at communicating (8.0/10), engaged with some details of scholars’ projects while delivering high-level feedback on research directions, and balanced an emphasis on research outputs with an emphasis on process. Mentors also differed in their influence on scholars’ project selections. The most common arrangement involved mentors “shaping” their scholars’ research, but some mentors had a stronger hand in project selection, assigning topics or presenting a list of possibilities. Research Management In previous programs, MATS offered one-on-one “Scholar Support” sessions dedicated to research planning, career planning, productivity improvements, and communication advice. Scholar Support took on some responsibilities that mentors often bear by default, such as brainstorming, rubber-ducking, conflict resolution between scholars, and some goal-setting, which freed up mentors to focus on their comparative advantages: mentoring and providing technical expertise. Scholar Support also offered coaching assistance not typically expected of mentors, such as accountability reminders, helped scholars get more value out of their mentor meetings, and assisted scholars with program milestones. Scholar Support Specialists did not typically meet with mentors, just scholars and MATS leadership. However, the MATS team noticed gaps in the support Scholar Support was able to offer, so during the Autumn 2023 Extension Phase, the MATS Program Coordinator (London) and mentor Ethan Perez piloted an alternative model of scholar/mentor support we call Research Management. Instead of opt-in Scholar Support meetings emphasizing issues indirectly related to scholars’ research, such as time-management and accountability, MATS staff held a mandatory weekly check-in with each of Ethan’s scholars focused more directly on research questions: How is your project going?What are your main bottlenecks at the moment?Any updates on your compute usage and needs at the moment?Do you have any feedback for Ethan this week? The Research Manager distilled scholars’ responses into weekly reports, which apprised Ethan of his scholars’ progress and challenges. In this way, the Research Manager cultivated a relationship with Ethan, and gained visibility across Ethan’s scholars. These features marked a departure from the Scholar Support model, under which two scholars with the same mentor could meet with two different members of the Scholar Support team, making it difficult to identify common themes. Our foray into Research Management exceeded our expectations. Ethan testified, I think [Research Management check in notes are] adding like... almost all of the value of my 1:1 check-ins with [scholars]... Amazingly helpful, brought up a bunch of great considerations/flags that are great to know (and would’ve been great to know earlier, so I only wish we started this sooner). Due to this success, we decided to primarily offer Research Management instead of Scholar Support. In our previous retrospective, we anticipated this shift: Scholar Support is also planning to help most mentors manage scholar research for the Winter 2023-24 cohort. . . Mentors would benefit from the shared information about scholar status;Scholars would benefit from their mentors’ goals being understood by the Scholar Support team, especially as the mentors’ research directions change throughout the program. In offering research management help to mentors, Scholar Support will take a more direct role in understanding research blockers, project directions, and other trends in scholar research, and summarizing that information to mentors with scholar consent. Of the 56 scholars who filled out our final survey, 92% met with a Research Manager at least once. These scholars reported benefiting from Research Management in multiple ways. Some of the “Other” ways that Research Management helped scholars included: “Helping me make some decisions and encouraging me to do some things I was putting off. Following up on those things to make sure I did them. Helping me unblock myself by teaching me / having me try some introspection techniques.”“Encouraged me to think about theory of change and higher level motivation for our research direction. Provided feedback for grant applications and presentations. I found it useful to recap what we had achieved each week and what our next steps would be.”“Emotional support! That really mattered. Not in, like, a therapy way, just by being there for us, especially for one stream-mate who was struggling with emotional stability.”“[My RM] often had good advice on productivity and ideas for impactful high ROI actions that I could take, and generally reflecting on how things were going. It was also very useful that [my RM] knew how other scholars had experienced previous MATS programs, so that I could be more deliberate about how I wanted to spend my time here.”“It was useful to me in various ways, but probably the top one was having someone I can safely talk with about various dilemmas and confusions and have help gaining clarity.”“He constantly asked questions to get to the core of trying to figure out what I'm *actually* precisely being slowed down by. One realization I had while talking to him is I'm almost constantly sleep-deprived; this is a thing I'll need to improve in the long term.”“...She also kept me up to date with relevant communication from MATS and gave me a space to question what I was working on and what direction I was going in. In addition, she helped me set up my cloud computing! She was great and improved my MATS experience.” We also asked mentors how Research Management helped them: One mentor elaborated on the “other” support, saying, “Specifically I gather the RM gave some useful structure to executing my scholars' projects, which I didn't quite have the ability or time for.” Seminars & Workshops As foreshadowed in our previous retrospective, we reduced the volume of seminar offerings for the Winter program. MATS hosted 12 seminars with guest speakers: Buck Shlegeris (twice), Lennart Heim, Fabien Roger, Adam Gleave, Neel Nanda, Vanessa Kosoy, Jesse Hoogland, Evan Hubinger, Marius Hobbhahn, David Krueger, and Owain Evans. Additionally, some scholars invited their own guests to present, including Logan Riggs, Tomáš Gavenčiak, and Jake Mendel. We held workshops on EA Global Conference Preparation, Research Idea Concretization (with Erik Jenner), Theories of Change (with Michael Aird), Preventing and Managing Burnout (with Rocket Drew), and Language Model Evals and Workflow Tips (with John Hughes). As with seminars, scholars organized workshops of their own, including speed meetings, a Neuronpedia interpathon, and a career planning workshop. Milestone Assignments Research Plans Halfway through the Research Phase, every scholar was required to submit a Research Plan (RP), outlining the AI threat model or risk factor motivating their research, a theory of change to address that threat model, and a plan (based on SMART principles) for a project to enact that theory of change. Using this rubric, 10 MATS alumni graded the RPs to offer scholars constructive feedback, provide MATS with an internal metric of success, and occasionally inform Extension Phase acceptance decisions (see Extension Phase below). To ensure success on this milestone, the Research Management team held an RP workshop and office hours session. We required Research Plans to develop scholars’ ability to contribute to goal-oriented research strategy and to make it easier for scholars to write a subsequent grant proposal. Many scholars repurposed components of their RPs in applications to the Long-Term Future Fund (LTFF) for post-MATS grant support, primarily in the MATS Extension Phase. Thomas Larsen, an LTFF grantmaker, held a workshop to demystify the LTFF’s decision process and funding constraints. Symposium Presentations The Research Phase concluded with a two-day Scholar Symposium, during which scholars delivered 10-minute talks on their research projects to their peers and members of the Bay Area AI safety community. In previous programs, we compressed the Symposium to one day by holding simultaneous presentations in two different rooms. We expanded the event to two days so attendees, including scholars, would not have to choose between presentations, and to ease the operational burden of hosting the event. Attendees graded talks according to the rubric, providing scholars with constructive feedback and the MATS team with evaluation data (see Milestone Assignments below). We helped scholars practice their talks and held office hours to answer questions about the assignment. During the week of the Symposium, we also hosted “PowerPoint Karaoke,” at which scholars presented someone else’s slides.[7] We also held weekly lightning talk sessions, which afforded scholars an opportunity to develop their presentation skills in a low-stakes setting. Lighthaven Office The Research Phase took place at the Lighthaven campus in Berkeley. While some scholars lived in one Lighthaven building during the Summer 2023 program, this was our first program using the full property and using it for office space in addition to housing. MATS shared the space with the Lightcone Infrastructure team and a few independent researchers. As a renovated inn, Lighthaven possesses unique features that distinguish it from a traditional office: the property includes six detached buildings, an outdoor event space, common spaces conducive to focused work, and an extensive library. Strategy Discussions For this program, we introduced a 7-week AI strategy discussion group series to substitute for our typical pre-program Training Phase. The Alignment 201 curriculum we used in past programs was designed to respect multiple constraints, including remote participation, moderate time commitments, limited facilitation, varied backgrounds, varied career interests, and wide diffusion. Since these constraints do not necessarily apply to MATS, it is unlikely that a curriculum designed under them is the optimal curriculum for training MATS scholars. We structured our new material around the key AI safety strategy cruxes that we observed were relevant for MATS scholars’ research. During each of the first 7 weeks, scholars participated in a facilitated discussion group on an opt-out basis. From our curriculum post: Each strategy discussion focused on a specific crux we deemed relevant to prioritizing AI safety interventions and was accompanied by a reading list and suggested discussion questions. The discussion groups were facilitated by several MATS alumni and other AI safety community members and generally ran for 1-1.5 h. The topics of each week were: How Will AGI Arise?Is the World Vulnerable to AI?How Hard Is AI Alignment?How Should We Prioritize AI Safety Research?What Are AI Labs Doing?What Governance Measures Would Reduce AI Risk?What Do Positive Futures Look Like? The median scholar attended 4 of the 7 strategy discussions. 51% of scholars would have preferred “fewer” discussions, and 5.7% would have preferred “many fewer.” Based on this and other feedback, we intend to make discussion groups opt-in for scholars in future cohorts (see Modified Discussion Groups below). Discussion groups benefited scholars in different ways: One scholar elaborated on the “Other” value: “Because of the random groups, I met and talked with more MATS scholars than I otherwise would have. There are also a few scholars who are not so talkative on topics other than AI safety, so it was nice to speak to them during the AI strategy discussion groups.” Networking Events We believe that a large benefit of holding the Research Phase in Berkeley and the Extension Phase in London is the ease of connecting scholars to the researchers who work in these global AI safety hubs. To facilitate such connections, MATS organized a number of networking events: Alignment Collaborator Speed Meetings (held during the EA Global: Bay Area conference);Lunch at FAR Labs;Dinner at Constellation;AI Impacts Dinner (organized by AI Impacts, held at Lighthaven);Career Fair featuring 12 organizations;Three evening networking socials open to the community. Prior to our career fair, 23 scholars submitted a survey indicating which organizations they would most like to see at the event. Their responses form a snapshot of the current career interests of aspiring AI safety researchers. To obtain a sense of the popularity of the organization categories, we show how many scholars, out of 23, voted for the average organization in each category: This distribution is similar to the one we saw for last summer’s career fair, with the exception of the government organizations, which did not exist at the time. Social Events In addition to the networking socials, we held a number of social events, including outings for scholars from underrepresented backgrounds (younger, older, religious, women, people of color, and LGBTQ+). Multiple scholars took the initiative to organize social events, including music nights, movie nights, an origami social, DnD games, exercise outings, excursions into San Francisco, and a hike. Community Health Scholars had access to a Community Manager to discuss community health concerns, such as conflicts with other scholars or their mentors, and to provide emotional support and referrals to health resources. We believe that community health concerns like imposter syndrome and undiagnosed conditions like ADHD can detriment promising researchers entering AI safety. Furthermore, interpersonal conflicts, mental health concerns, and a lack of connection can inhibit research productivity by contributing to a negative work environment, detracting from the cognitive resources that scholars could allocate toward research and precluding fruitful collaborations. It is important for programs like MATS to provide support in these areas, while air-gapping evaluation and community health when appropriate to incentivize scholars to seek help when they need it without fear of adverse effects on themselves or others. A small number of scholars met with the Community Manager frequently, while 67% of scholars met with the Community Manager just once. Even scholars who do not seek out community health support benefit from the existence of this safety net (see Community Health below). The Community Manager aims to equip scholars with the skills and resources to work productively and sustainably after they leave MATS. Scholars reported how the Community Manager helped them: Some of the “Other” ways that Community Health helped scholars included: “helping to situate myself within the landscape of the AI safety community”“Measuring how much I was working”“I once asked [the Community Manager] for advice on whether to be more positive vs realistic in discussions of other scholars projects.”“I thought the burnout workshop was really good. . . I also liked that there were so many organized activities.”“I got some clarity on policies, got some tips and ideas regarding socializing in the bay area, and the burnout workshop was good.”“It was useful for getting broader context on how the program is going for others and how other streams are progressing. I was mostly remote throughout the program so the other aspects were less relevant for me. The burnout workshop material was helpful.” Extension Phase (Apr 1 - Jul 19) By the end of the main program, many scholars are looking for a structured way to continue their projects while they plan their transition into the next phase of their career. The extension phase allows scholars to pursue their research projects with gradually increasing autonomy from their mentors and MATS. The MATS Executive Team accepts scholars into the Extension Phase based on: Mentor endorsements,Research Plan grades, andWhether scholars had secured independent funding for their research. Funding from an outside organization, typically the LTFF or Open Philanthropy, is an important input to MATS’ evaluation process because it provides external evaluation of scholars’ research. Of the 50 scholars who applied, 72% were accepted into the Extension Phase, and 4% have pending applications. Of the 36 accepted scholars, 56% are completing the Extension at the London Initiative for Safe AI (LISA) office, 11% are continuing in Berkeley from FAR Labs, and the remainder are working remotely. During the Extension Phase, we expect scholars to formalize their research projects into publications and plan future career directions. To support these efforts, we continue to offer Research Management (see above) and programming such as seminars, workshops, and networking events, especially for the scholars in the LISA office. FAR Labs offers its own programming, including a weekly seminar series. We tailor this programming to prepare scholars for the transition out of the MATS environment by encouraging them to develop longer term plans and make connections that advance those plans. We additionally offer scholars the support and resources to coordinate their own events and programming, giving them opportunities to take ownership of networking activities and further develop their independence. Concurrently, scholars continue to receive support from their mentors, though with less frequency than during the Research Phase. Winter Program Evaluation In this section, we lay out how scholars rated different elements of the program and how we evaluated our impact on scholars’ depth, breadth, and taste, among other metrics. Evaluating Program Elements The median scholar rated the following program elements as follows: Likelihood to recommend MATS was 9.0/10.Mentorship was a significant portion of value, rated 8.0/10.Research Management was a small portion of value, rated 8.0/10.Seminars were a small portion of value.Workshops were a small portion of value.The office was a significant portion of value.Strategy Discussions were a small portion of value.Other connections, including networking outside of MATS, were a small portion of value.Community health was a small portion of value, improving mental health by 5.0%, according to scholar self-reports. Overall Program Scholars rated the Research Phase highly when considering their likelihood of recommending MATS to others (mean = 9.2/10). In many industries, this type of question is commonly used to calculate a “net promoter score” (NPS). Based on our respondents, the NPS for MATS is +74, evidencing a very successful program. We also asked mentors about their likelihood to recommend MATS. The average mentor responded 8.2/10, and the NPS was +37. One mentor reported a likelihood to recommend of 2, which we suspect to be artificially low due to extenuating circumstances unrelated to the quality of the program. Mentorship Overall, scholars rated mentors highly (mean = 8.1/10). For 38% of scholars, the mentorship they received was the most valuable component of MATS. Note that 15/61 respondents did not select any program element as the “Most valuable part of MATS.” Here are some representative comments from scholars in each of these categories: Little or no value added“I think my mentor was probably not involved enough with my project, [my mentor] did not get involved other than to the extent of knowing what my project was at a high level. I think this was great for intellectual freedom and exploration but I felt like having more involvement in the project could have been useful. Other than the level of involvement she was brilliant, always encouraging and supportive of my ideas. . . I still managed to get sufficient mentorship for my research project from [a specific scholar] and my research manager . . . as well as through conversations with other scholars.”Small portion of value added“Remote. In the last 2-3 weeks we met only once per week for about 20-30 min. On those meetings, I was mostly giving update on my work. [My mentor] gave me emotional support. Almost no help on [my project] in the slack. I was doing it mostly alone with the help of others. [My mentor] connected me with other people which helped me.“I felt as if our styles and expectations didn't quite match up ([one of my mentors] wanted just to code and not ask or think about the high level, and I didn't realise this soon enough). [My other mentor] and I got along better but I felt as if the overall research direction was left quite open ended and I was mostly on my own.”Significant portion of value added“[My mentor] had incredible insight and research instinct. Talking through different thought processes with him in the room (virtually) was a huge value add.”“It was great! It would have been wonderful if she had had more time to offer, but the time she did have to offer was extremely valuable.”“There were no issues but we were very focused on the specific project we were working on. I do think there could have been a little more general career advice/ direction. However, I also didn't ask for it a ton or  prompt it, and if I did I'm sure [my mentor] would have been happy to share his thoughts.”Most valuable part of MATS“[My mentor] was very attentive in general, reading my updates daily and answering my questions and several times per week sending ideas that occurred to him, and helping with paper writing.”“[My mentor’s] advice about preferred research directions was consistently good - I would have done waaaaay less valuable research on my own. . . I can't believe I learned so much ML software engineering and training skills from [my mentor’s collaborator] in such a short time.”“[My mentor] is an outstanding mentor. From technical design to addressing team dynamics, I feel incredibly lucky to have him as a mentor. More specifically, [my mentor] helped set an ambitious research agenda while constantly empowering the team to drive it forward and provide their own input. He was exceptionally responsive to questions and very well organized.” Among scholars for whom mentorship was not the most valuable component of MATS, 50% said that the cohort of peers was the most valuable component. Research Management Overall Rating by Scholars Scholars rated Research Management highly when considering their likelihood of recommending it to future scholars. The average likelihood to recommend among scholars was 7.9/10, and the NPS was +23. We asked scholars how RM contributed to the value of MATS: Scholars elaborated on their experiences with Research Management: Little or no value added“As a solo experienced researcher, I didn't feel I needed my RM to stay on track; I think that help with ideas for [Research Plan Milestone] was valuable but otherwise I don't think a great use of our time.”Small portion of value added“It was surprisingly useful. I really wish I had a research manager during my PhD, lol. [My RM] was excellent for keeping me on track, helping me set goals, and helping me to re-orient my plans when needed.”Significant portion of value added“[My RM] was an exceptionally valuable resource, and without her guidance and leadership, MATS would not have generated the same value for me. Not only did [my RM] help resolve major issues with group dynamics, but she helped shape my career and get the most out of the program.”“Talking to [my RM] helped me avoid getting more burned-out and helped me stay productive / healthier / happier than I would have been otherwise. It makes it really clear to me that this kind of support is hugely important and it's crazy I don't have a research manager the rest of the time.” Overall Rating by Mentors We asked mentors about their likelihood to recommend RM, and the average mentor responded 7.7/10; the NPS was +7.0. We also asked mentors to rate the support they received from their RM, where 1/10 = No value or negative value;5/10 = Time spent interfacing with your RM was roughly as valuable as equivalent time spent on the counterfactual;10/10 = Time spent interfacing with your RM was at least 10 times more valuable than equivalent time spent on the counterfactual. The average response to this question was 7.3/10. We asked mentors, “Would you want to continue working with your RM in the extension phase, and/or future cohorts?” All respondents said “Yes, same RM”, except for one mentor who said “Yes, different RM.” Their RM is no longer in this role, and this mentor followed up to say “I got the impression that different RMs had quite different working styles. Would be keen to try working with someone else to see what type of RM works best for me.” Some mentors who expressed interest in continuing with their RM from the winter explained their reasoning. One said “Experience was good overall. It's nice to have updates from someone else so I have a check if there's anything [my scholar] doesn't say directly to me, but that I should know.” One mentor offered a testimonial about their experiences with Research Management: Due to Research Management I did not have to spend time on important, but time-consuming things. The RM helped my mentee write various reports, since [my RM] better knew what the expected structure and output should look like. That meant I could fully focus on the content of the doc, and less the expected structure. This helped me out a lot as I could spend less time on this overall (and I was short on time). With the weekly meetings the RM has with my scholar, it gives another opportunity to check in with my mentee, see how it’s going, whether they understand the bigger picture etc. As soon as there were small question marks, problems, or dissatisfaction, I was then able to address this in the next meeting. I would probably not have caught those things, or only much later. We also asked mentors who did not have RM if they thought they would have benefited from it. One mentor responded “My a priori guess is that RM is pretty great. I expect that RMs are better at many of the things they can help scholars with than I would be (for example, productivity debugging, discussing personal conflicts, potentially also helping scholars generate next steps).” A different mentor wrote, “I'm not sure what it entails as a mentor. It may have been useful for my scholars -- I have found a research manager useful in the past.” Finally, we asked mentors how much RM improved their scholar meetings. Some mentors reported a modest improvement, but others benefited significantly. Mentors elaborated on these improvements: Yes, a little (1-25%)“Sometimes my scholars would bring a sensible project plan for the rest of the program. And they were very aware of when the MATS deadlines are.”“Mostly it helped me gauge how my scholar was feeling for direction-setting in the project; we probably would have settled on the same direction but just taken longer to do it or something? not totally sure.”Yes, by some margin (25-50%)“I switched to 1-1 meetings instead of group meetings, because of feedback by scholars communicating to me by [my Research Manager].” Vs Mentor Counterfactual Recall that Research Management aims to take on some responsibilities that mentors often bear by default. To evaluate Research Management’s impact, we can compare it to the counterfactual in which scholars requested support directly from their mentors. We asked two questions: If MATS had no Research Management team, how much of the value that you got from Research Management could have been provided by your mentor? Assume that your mentor was able to spend extra time equal to the time you spent with your Research Manager.Do you think that your mentor would have been willing to spend this extra time with you if asked? The first question identifies the value that could have been recovered from mentors, conditional on mentor willingness; the second question elicits mentor willingness. The following heatmap displays the joint distribution of responses. Cells in the top rows and the left columns represent scholars for whom Research Management provided an improvement over the counterfactual: their mentors could not or would not have replaced Research Management. The cells in the bottom-right represent scholars for whom Research Management detracted value: their mentors would have stepped in to provide Research Management functions. 7% of scholars were in this second category: they thought their mentors probably or certainly would have provided superior support. One scholar elaborated on a benefit unique to Research Managers: “I think it is very valuable to have an orthogonal perspective alongside the research process that is *not* your mentor. I generally think the structure that was in place was overall helpful for accountability and sharpening the research process in general.” Even if mentors were willing and able to perform Research Management roles, there would still be a case for dedicated Research Managers: For scholars with remote mentors, Research Managers can provide in-person support and distill high-context information for the mentors, who would otherwise have more limited visibility on their scholars’ progress.These numbers assume that scholars would have asked their mentors for more attention, but we find that scholars are sometimes reluctant to communicate their needs to their mentors. Since MATS provides Research Management by default, scholars do not have to weigh the reputational consequences of requesting support, which might discourage them. As we wrote in our previous retrospective: Scholars are sometimes disincentivized from seeking support from mentors. Because mentors evaluate their mentees for progression within MATS (and possibly to external grantmakers), scholars can feel disincentivized from revealing problems that they are experiencing.Research Management reduces the per-scholar time commitment for mentors. Even if mentors would have been willing to provide additional support, it might have been a worse use of their time than, for example, focusing on their own research.If all scholars requested additional time from their mentors, their mentors might have to reduce their number of scholars. Most mentors were not allocated the maximum number of scholar slots they requested, so they likely are not yet in a regime where per-scholar time trades off with total number of scholars, but some mentors did receive their cap, two mentors exceeded their cap, and if future cohorts are better-funded, many more mentors will receive their cap. In such a future cohort, Research Management would increase the number of scholars a given mentor could support.As shown in Mentor Selection, for 20% of mentors, taking on an additional scholar would have traded off against time spent mentoring their other scholars. We asked mentors the same pair of questions. All respondents agreed that they could provide “some of the value from RM.” 57% said they would “probably not” have spent the time; 36% said “probably yes”, and the remaining 7% said “almost certainly yes.” Grant Equivalent Valuation We asked scholars, “Assuming you got a grant instead of receiving 1-1 Research Management meetings during MATS, how much would you have needed to receive in order to be indifferent between the grant and Research Management?[8] Please answer ex post, i.e. knowing what you know now. If you respond with some number X, that means you value these two the same: - a grant of $X - all of the 1-1 Research Management meetings you've had during MATS, collectively.” The average response was $1711, the median response was $1000, and the sum was $95,810. We asked mentors the same question. The average response was $5900, the median was $3000, and the sum was $88,000. Mentors expressed higher valuations than scholars for Research Management, perhaps because mentors have higher-paying jobs than scholars, many of whom are students, or mentors value their productivity more highly than scholars. As shown below, Research Management did not create more productive hours for mentors than scholars, on average. Accomplishments and Productive Hours Finally, we asked scholars and mentors two (functionally similar) questions about the value of Research Management. The first question was: “By what percent would you estimate Research Management changed the amount you accomplished during MATS? An answer of X% means you accomplished X% more than you would have otherwise.” The average increase in accomplishments was 14%, and the median was 10%. We asked mentors the same question. Their average increase in accomplishments was 12%, and the median was 10%. The second question we asked scholars was: “How many more productive hours in the last 10 weeks would you estimate you had because of your Research Management meetings? For reference, MATS research phase was 10 weeks long. If you estimate you got X more productive hours per week on average, then you should respond 10*X. If you only got a 1-time productivity boost of Y hours, you should respond Y.” The average increase in productive hours was 21 hours, the median was 10 hours, and the sum was 1200 hours. We also asked mentors this question. The average increase was 8.9 hours, the median was 4.0 hours, and the sum was 130 hours. Productive hours were not the only benefit that Research Management provided, as evidenced by scholars and mentors who reported 0 increase in productive hours but a nonzero grant equivalent. But if we look at the grant equivalents for mentors and scholars who reported an increase in productive hours, the average mentor valued their time at $690/h, and scholars value their time at $160/h. Improvements We asked mentors how we could improve Research Management for future programs. One mentor elaborated on the “other” ways RM could improve, noting, “My scholar had various questions about the extension application that the research manager was not able to answer and I had to reach out directly to people in charge of the extension evaluation process.” Another mentor wrote, “I wish my RM was more proactive in seeking mentor feedback on scholars.” We asked mentors about the skills that RMs could develop to improve their RM abilities. 40% of mentors with RMs suggested that their RM could have more research experience. We asked mentors, “Are you satisfied with the amount of time your RM spent on your stream? Would you have preferred more, less, or the same amount of RM time?” All respondents said “I am satisfied with the amount of time my RM spent on my stream”, except one, who said they “would have preferred my RM spent more time on my stream.” Seminars & Workshops While seminars and workshops were optional, MATS encouraged scholars to attend them. By exposing scholars to experienced researchers and novel agendas, seminars remain a pillar of scholar development. In optional surveys, scholars rated seminars 6.8/10, on average, where 1/10 represented “Not worth attending,” and 10/10 represented the high bar “It significantly updated your research interests or priorities." These surveys also solicited scholars’ expectations about the value of the events. 33% of ratings exceeded expectations; 23% of ratings were worse than expected. A seminar that was “about as valuable as expected,” received an average rating of 6.8/10. Workshops were a relatively minor program element—MATS held five workshops on topics other than Research Milestones—so it is unsurprising that they contributed less value than other program elements. Lighthaven Office We were interested to see how scholars would rate Lighthaven because the campus differed from our previous offices, as explained above. Comparing this result against alumni surveys, we see that Lighthaven was not a noticeably more important program element than office spaces in earlier MATS programs. Note that this question asks about the value of the office relative to other program elements, and we believe other program elements have improved over time, so this question may reflect an improvement in the value of the office, in absolute terms. These responses should also be taken lightly because, for many scholars, this was their first experience working in any office, so their answers may not accurately reflect the quality of the office. At the end of the program, scholars suggested some minor improvements to the facilities. Some also provided positive feedback: “Tbh I think Lighthaven is just perfect.”“Lighthaven was nothing but perfect. I loved it and am really sad because I already know my future workplace won't compare!”“I think Lighthaven was a really great space, and it was nice to have people around a lot and living and working there. I think sometimes it could be a little bit much but, on the whole, I really enjoyed it.” Mentors gave feedback as well: “The lack of meeting rooms to talk to my scholars was not great. We had to meet in a public area without a table, on a couch.”“I liked the venue a lot overall, seems like a great place to run this type of program. Personally I wasn't as productive at Lighthaven compared to other places eg. Constellation but this was probably mostly due to working in a public area at Lighthaven and being interrupted more often.”“Very comfortable and good for informal interactions; not great for work.”“I thought there was a nice calm vibe, and I liked the style/ surroundings.” Before the program, we tried to identify the space needs of all mentors; if the first two mentors above had brought their concerns to us during the program, we would have worked to provide private meeting areas for them. Though we took steps to separate office and living spaces, we were concerned that the thin boundary between professional and personal spaces might lead to an unhealthy work-life balance for scholars. In the words of the previous scholar, we were worried the experience would “be a little bit much.” This proved to not be a significant problem. During the first week of the program, we polled scholars about their concerns, and the most notable change from our previous program was fewer scholars indicating a concern with work-life balance. After the program, we surveyed scholars again: 44 said the “balance felt appropriate,” 8 indicated “too much work,” and 4 said they “didn’t work enough.” Because scholars were living at Lighthaven, and it had previously been a hotel, we were concerned that the atmosphere might become too informal for a professional program like ours. To investigate this possibility, we asked scholars about the “cozy”/professional balance. Among in-person scholars, 47 said “the balance was just right,” 8 said they “would have preferred a more professional office environment,” and 1 said they “Would have preferred more ‘cozy vibes.’” Based on this feedback, we are satisfied with the level of professionalism that Lighthaven offered. Strategy Discussions Scholars had mixed experiences with the Strategy Discussion groups. Scholars elaborated on the limitations of our discussion format: One of the scholars who got negative value from the discussion groups commented, “Discussion seemed a bit basic for people already exposed to eg LW.”Another scholar may have preferred the Training Phase format that Strategy Discussions replaced: “As the program went on and deadlines loomed, it got a lot harder to take ~half a day (including reading) for the discussion groups! I personally think it would have been much better to frontload it more, both for that reason and because it would have helped people get to know each other better in the early stages.”Two scholars mentioned that some of the content would have been covered in informal conversations, even without dedicated Discussion Groups.Many scholars expressed a desire for stronger facilitation than our TAs provided.“I thought it would have been much more useful--even solely for improving research strategy--to have more technical readings, the sort of things people may not invest the effort to understand without the extra motivation.” An AI safety curriculum remains an important lever by which MATS can improve our scholars’ breadth of knowledge about the AI safety field, but these results have underscored some shortcomings of our approach during the past program. In our next program, we will hold Discussion Groups, but we will update the readings, format, and timeline in response to scholar feedback, and we will offer discussions on an opt-in basis, to ensure they do not absorb the time of scholars who are unlikely to benefit from them. Networking Events Facilitating connections to the wider Berkeley AI safety research community was an important way MATS provided value to scholars. We asked scholars about the value of the connections they made, clarifying, “This includes connecting with non-MATS members of the alignment community, e.g., at MATS networking events.” The most highly-rated networking event of the program was the EA Global: Bay Area conference, which many scholars chose to attend. The cause of dissatisfaction with the FAR Labs Lunch, and to some extent the Constellation Dinner, was the high ratio of MATS scholars to office members. The Constellation Dinner was rated about 1 point higher than the same event in the previous program; we think the increase was due to hosting the event at Constellation rather than our office, which helped with the ratio of MATS scholars to office members. We also think the presence of Astra participants and Visiting Researchers may have improved scholars’ experiences at the dinner. This figure excludes networking socials, the Symposium, and intra-cohort networking events. To measure the success of our networking events, we asked scholars two questions at the beginning and end of the program about their professional connections: How many professionals in the AI safety field do you know and feel comfortable reaching out to for professional advice or support?If you want a more specific scoping: how many professionals who you know would you feel able to contact for 30 minutes of career advice? Factors that influence this include what professionals you know and whether you have sufficient connections that they'll help you out. A rough estimate is fine, and this question isn't just about people you met in MATS!How many people who you know do you feel you can reach out to about collaborating on an AI safety research project?Imagine you had some research project idea within your AI safety field of interest. How many people that you know could plausibly be collaborators? A rough estimate is fine, and this question isn't just about people you met in MATS! Responding to both questions, scholars reported an increase in professional connections during MATS. The median change in professional connections was +5.0, and the median change in potential collaborators was +5.0. As in our previous program, decreases are most likely explained by some day-to-day variance in scholars’ judgments, as they did not have access to their previous responses. These measures also pick up on connections made to other scholars, in addition to researchers outside of MATS, but we believe networking events contributed because remote scholars reported smaller increases. Community Health As expected, many scholars did not use Community Health support, but a minority of scholars benefited substantially. To investigate these benefits, we asked scholars the same question about productive hours we used to evaluate Scholar Support, along with a question about their well-being. The median scholar gained 0 productive hours, but the sum was 170 hours. The median scholar’s well-being increased by 5.0% due to Community Health support. Scholars elaborated on the ways that Community Health improved their quality of life: “[The Community Manager] was an amazing asset to the team.  I really appreciated his emotional support, guidance, and friendship.”“[The Community Manager] was immensely helpful for when I was overwhelmed and provided a lot of both emotional and practical support”“Noticing I was overworked and telling me to take a break. Directing me to people I could talk to about dual use concerns of my research.”“[The Community Manager] was around when I was feeling burnt out and chatted to me / gave some great suggestions and was really supportive.”“Helpful for planning social activities, and just venting” As in last summer’s program, many scholars who did not meet frequently with the Community Manager reported benefitting indirectly from a positive atmosphere and the knowledge of an emotional safety net. “And also just knowing someone had their eye out :)”“Mostly indirectly by helping other scholars feel better.”“I didn't need to make use of it, but I think it's good that it exists”“I also think that the Community Health was probably useful to me indirectly: I didn't need to use the Community Health services myself, but I did benefit from (as far as I could tell) a very positive community at MATS!”“I think they did a great job of creating a MATS environment that felt safe, social and enjoyable.““I always was aware of its offering and am grateful to have had this awareness.”“I'm probably not qualified to comment on this since I did not need these resources, but I could imagine these being very valuable to those who are going through emotional difficulties during the program.”“I think it's wonderful that community health exists, and [the Community Manager] was just an awesome presence around MATS. I personally didn't feel the need to leverage the services, but I definitely appreciated that they existed and it was an option!” Compared to other program elements, the benefits of Community Health were concentrated among fewer scholars, but this small segment of scholars benefited greatly. We continue to believe that Community Health is important and neglected based on its benefits to these scholars and the diffuse benefits it provides to the MATS community. Evaluating Key Scholar Outcomes Scholar Self-Reports To measure scholars’ improvements in the depth, breadth, and taste aspects of conducting research, including the quality of their theories of change, scholars assessed themselves on these dimensions at the beginning and end of the program. The specific questions we asked were: Depth: How strong are your technical skills as relevant to your chosen research direction? e.g. if the research is pure theory, how strong is your math? If the research is mechanistic interpretability, how well can you use TransformerLens and other tools? 10/10 = World class5/10 = Average for a researcher in this domain1/10 = Complete novice Breadth: How do you rate your understanding of the agendas of major alignment organizations? 10/10 = For any major alignment org, you could talk to one of their researchers and pass their Ideological Turing Test (i.e., that researcher would rate you as fluent with their organization's agenda).1/10 = Complete novice. Research Taste: How well can you independently iterate on research direction? e.g. design and update research proposals, notice confusion, course-correct from unproductive directions 10/10 = You could be a research team lead8/10 = You can typically identify and pursue fruitful research directions independently5/10 = You can identify when stuck, but not necessarily identify a direction that will yield good results3/10 = You're not often independently confident about when to stop pursuing a research direction and pivot to something else1/10 = You need significant guidance for research direction Theory of Change: How strong are your mental models of AI risk factors and your ability to make impactful plans based on these models? Do you have a mental model for how AI development will happen that you think captures many of the relevant dynamics? Do you have a theory of change for why your chosen research direction will make beneficial AI outcomes more likely? 10/10 = You have a strong understanding of many factors that contribute to the direction of AI development, and your current direction is informed by a strongly-developed theory of change.5/10 = You think that working in AI safety is impactful due to a rough mental model of AI risk factors, but you have a weaker understanding of what AI safety research agendas are most impactful.1/10 = No mental models of AI development are informing plans. The average change in depth was +1.53/10, and the average change in breadth was +1.93/10. The decreases could be explained by calibration variance, as with professional connections above, or by scholars realizing their previous understandings were not as comprehensive as they had thought. The average change in research taste was +1.35/10, and the average change in theory of change was +1.25/10. Mentor Evaluations Halfway through the program, we asked mentors to evaluate their scholars on these same four dimensions. Average depth was 7.4/10, average breadth was 6.9/10, average taste was 7.2/10, and average theory of change was 7.0/10. We can compare mentors ratings to their scholars’ self reports. Because mentors evaluated scholars halfway through the program, we use the average of scholars’ self-assessments from the beginning and end of the program. If mentors and scholars agreed on the abilities of scholars, the data would be concentrated on the grey dashed line. Instead, scholars and mentors diverge in their assessments of breadth and depth. The greater concentration of data below the calibration line indicates that mentors tend to rate scholars higher than scholars rate themselves. We asked mentors to assess the career outcomes that “you find it likely that your scholar could achieve in the next year if they wanted to.” According to mentors, of the 56 scholars who received an evaluation, 43 could achieve a “First-author paper at top conference,” 23 could receive a “Job offer from AI lab safety team,” and 9 could “Found a new AI safety research org.” Finally, mentors assessed their overall enthusiasm for their scholars’ continued research: “Taking the above into account, how strongly do you believe this scholar should receive further support from MATS/grantmakers to continue AI safety research? 10/10 = Very strongly believe scholar should receive support to continue research5/10 = Moderately prefer scholar to continue AIS research1/10 = Fine if scholar doesn't continue with AIS research” The mean of this distribution was 8.1/10, demonstrating mentors’ enthusiasm for their scholars to continue their work. Milestone Assignments Research Plans Per the rubric, MATS alumni graded Research Plans on three components: Threat Model / Risk Factor (20 points)Theory of Change (40 points)Planned Activities and Outputs (40 points) The median scores on the three respective sections were 15/20 points, 32/40 points, and 32/40 points. The median overall score was 76/100 points, showing that most scholars can effectively formulate and present a research plan. Symposium Presentations Over the two days of the Symposium, scholars presented research on a range of agendas. The entire first day was dedicated to interpretability, which was heavily represented among the talks. Grouped into more coarse categories: As detailed in the rubric, symposium presentations were graded on three components: What (50 points)Why (30 points)Style (20 points) The average scholar’s overall score was an excellent 86/100 points. Unlike Research Plans, which were graded by contracted MATS alumni, evaluations of Symposium Presentations were submitted by Symposium attendees, including scholars, alumni, members of MATS team, and members of the extended Bay Area AI safety research community. The talk with the fewest evaluations received 4; the average talk received 15 evaluations. Funding and Other Career Obstacles At the beginning and end of the program, scholars answered “What do you feel are the obstacles between you and a successful AI safety research career?” In aggregate, MATS reduced almost all barriers that scholars faced. The “Other” obstacles that scholars faced at the end of the program included: “Time and effort”“Letters of recommendation/connections with senior researchers”“Research taste”“Figuring out what the big lab researcher role interviews cover and test for”“PhD Funding (I am very actively looking for PhD funding to pursue alignment at Oxford); Coding interview skills: I'm not good at leetcode interviews (simply because I haven't devoted 3 months of practice to this) and this is a major blocker; Theoretical CS/RL/Math knowledge: Something I am actively working on right now”“Maybe getting past the resume screen in research labs :P” These are the obstacles that scholars reported at the end of the program: Similarly to last summer’s program, publication record was the career obstacle faced by more scholars than any other. While funding was the second-highest obstacle, we can see that challenges with writing grant proposals decreased dramatically, making it the least important obstacle for alumni leaving our Winter program, and indicating that grant writing is not a primary obstacle for obtaining funding. We attribute this improvement to the updated Research Plan milestone, designed around the LTFF grant application, and associated theory of change and grant application workshops. Additionally, the Strategy Discussion Groups might have helped scholars write theories of change. Unlike last summer’s program, the number of scholars facing publication record as an obstacle decreased, instead of increasing. This suggests that scholars in the most recent program focused more deliberately on a publishable project instead of pivoting repeatedly or working on unpublishable work, which is consistent with our anecdotal experience. Similarly, software engineering experience decreased as an obstacle during the past program but increased last summer, suggesting that scholars found more opportunities to develop their coding abilities or realized that generic programming skills are less of an obstacle than they had previously believed. The magnitude of the changes in software engineering in the most recent program and last Summer was small enough that the difference could easily be explained by randomness in scholar selection. Funding became a larger barrier during the program, as opposed to last Summer, when it decreased. One difference that could be responsible is fewer mentors who hired their scholars immediately following the program. In general, we are not surprised to see funding increase as a barrier: at the start of the program, scholars have just received a grant for participating in MATS, which lowers this barrier, or at least makes it less salient. Despite funding obstacles, fewer scholars reported grant writing as a barrier. The increase in grant-writing ability is apparent from responses to a question about scholars’ confidence in submitting a grant proposal. We asked, “How confident would you feel about submitting a grant proposal right now? Confidence levels could come from familiarity with grant processes, criteria used by grantmakers, experience writing grants, and the strength of a possible proposal.” At the start of the program, 69% of scholars felt “Unconfident” or “Very unconfident” in their grant-writing abilities, but 82% of scholars felt “Confident” or “Very confident” by the end of the program. Indeed, of the 42 scholars who applied to the Long-Term Future Fund for the MATS Extension (76% of the cohort), 30 were funded (acceptance rate = 71%). In addition, 5 scholars already had grant funding they could use for the extension program before they came to MATS. Evaluating Key Mentor Outcomes Self-Reported Benefits We asked mentors about their motivations for mentoring: “Why did you want to mentor for MATS? What did you hope to get out of mentoring?” We also asked them about the benefits they received from mentoring: “What benefits did you, in fact, get out of mentoring for MATS?" In general, there was a tight relationship between expected and actual benefits. Slightly more mentors than expected found that they helped new researchers through MATS and gained mentorship experience. The largest exception was that many fewer mentors reported that MATS helped advance their specific agenda than expected. We believe that this result was primarily caused by high initial expectations from first-time mentors that failed to match what was possible in the 10-week program. Biggest Impact We asked mentors, “What is the biggest counterfactual impact that happened as a result of you mentoring at MATS? E.g. some percentage increased productivity on your preferred research agendas, counterfactual publication(s), new hires…” Mentors gave varied answers: “The 'benchmarking interp' project is getting done, and it otherwise wouldn't have. I think it's going to be very good. I found some really productive people that didn't have a good way to get productive in AI safety otherwise.”“The whole line of steering vectors, the maze interpretability work from MATS 3.0 (so I think this was huge impact)”“I assume that if I hadn't mentored, there'd likely just be two fewer people doing this sort of fellowship. So then I think the biggest counterfactual impact is that two promising, fairly early-career researchers got an opportunity to try out research to an extent they otherwise wouldn't have. (FWIW, these people may well have been people other than my mentees! I believe that at least one of my mentees had an offer from at least one other mentor.) I think to some extent becoming a good researcher is about putting in a bunch of hours, even if some of these hours go into projects that fail.”“For me personally, I think I learned quite a bit about mentoring and expect to scale up my mentorship more quickly than I would have without MATS (e.g., I'll take 2-3 CHAI interns this summer and might have only had the confidence to take 1-2 without MATS). Roughly, I'd model this as MATS 5.0 having accelerated me by 3-6 months on my "mentorship path" (e.g. it would otherwise have taken 3-6 months longer for me to mentor the same number of people). This is just via the effect of MATS 5.0 on my mentorship experience (if MATS didn't exist at all, I'd probably mentor somewhat fewer people just because it would be harder to arrange). I think the direct research output was less important overall than mentorship experience (though I think there were a few unusual factors causing this). Harder for me to say how valuable impact on my scholars was.”“I expect the MATS projects to lead to top conference papers with high impact (at least one).” Testimonials Some mentors offered testimonials about the value of MATS. I got to mentor some excellent new researchers, and make progress on some research ideas that I think are important but didn't have time for. It was a great time! – Adrià Garriga-Alonso I think we did some excellent research, and everyone seemed extremely competent and nice. – Asa Cooper-Strickland I have supervised projects for a number of other programs, including academic MSc projects, and Oxford and Cambridge AI safety programs. MATS is by far the most valuable, well resourced, and organised and has, in my experience, the most impressive junior researchers. I am genuinely excited to continue working with my MATS scholars. I have no doubt they will pursue highly impactful AI safety research careers. – Francis Rhys Ward MATS is a Win-Win for everybody. It allows junior researchers to get first hand experience doing science and getting mentored. At the same time it allows up-and-coming researchers to get mentoring experience, help out others, and progress on their research agenda. – Jérémy Scheurer MATS is the trusted entrypoint to the AI safety research field, whether you're a student first trying to get into research or an established researcher looking to pivot. – Jesse Hoogland Mentoring for MATS has been one of the most fruitful and enjoyable collaborations I have taken part in. – Stephen Casper I got a researcher hired to work on my research agenda (in the previous cohort), and multiple valuable alignment forum posts were produced. – Vanessa Kosoy Improved Mentor Abilities We asked mentors to evaluate the improvement to their mentorship abilities that they received from MATS, compared to a counterfactual in which they did not mentor during the winter program. The average mentor improved their mentorship abilities by 18%. Lessons and Changes for Future Programs Advisory Board for Mentorship Selection For future programs, we intend to establish an Advisory Board, consisting of senior figures in AI safety who possess in-depth knowledge of the research landscape and expert judgment on prioritizing AI safety research agendas. The board’s input will inform our mentor selection process, improving our decisions and providing accountability to a wider set of stakeholders. Fewer Mechanistic Interpretability Mentors While mechanistic interpretability remains a strong interest among would-be scholars and mentors, we believe this agenda was relatively over-represented among the research interests of the Winter cohort, featuring in 37% of scholar talks. During mentor selection and scholar allocation for our next program, we intend to reduce the mechanistic interpretability proportion of the MATS “portfolio” and increase the proportion of evaluations, scalable oversight, and control research. More AI Governance Mentors A growing consensus holds that governance and policy stand to make important contributions to reducing existential risks from AI. Occasionally, MATS scholars have conducted research for a policymaking audience, such as demonstrations of dangerous model capabilities, but MATS has historically focused on technical research to the exclusion of governance. At present, we believe that AI governance would benefit from further research mentorship programs, and MATS is well-positioned to fill this gap. Technical AI governance is a particularly natural fit for us, but we also have the experience and infrastructure to accommodate more traditional forms of governance research, where our efforts would complement preexisting AI governance fellowship programs. Indeed, during the Summer 2024 Program, MATS will have at least two technical governance mentors. Results from the Summer 2024 Program will influence whether MATS continues to pursue technical governance as part of the MATS mentor portfolio. Pre-Screening with CodeSignal We have secured sufficient CodeSignal tests for the next program that we can pre-screen applicants to mentors who require engineering experience. Pre-screening will save these mentors valuable time, which they could spend more meticulously reviewing the applicants that pass the programming filter, thereby selecting better qualified scholars. Research Manager Hiring In order to meet scholars’ and mentors’ demonstrated needs, we currently anticipate hiring 3-4 additional Research Managers for the MATS Summer 2024 Program. Given the feedback from mentors on the skills they would most like to see more of in their Research Managers, prior research experience will be a key factor in our hiring decisions. Modified Discussion Groups We will continue to offer discussion groups focused on AI safety strategy, following a custom MATS curriculum. We think this approach has more potential to expand scholars’ breadth of knowledge about the field than the pre-program Training Phase and AISF curriculum we employed in previous programs. For our next program, we intend to improve on our curriculum, craft stronger guidance and instruction for our TAs, frontload more of the sessions, and require that scholars opt-in, rather than opt-out. Acknowledgements This report was produced by the ML Alignment & Theory Scholars Program. Rocket Drew was the primary contributor, Ryan Kidd managed and edited the project, and Laura Vaughan contributed writing. Additionally, Laura Vaughan, McKenna Fitzgerald, Christian Smith, Juan Gil, Henry Sleight, and Matthew Wearden edited particular sections. We thank Open Philanthropy, DALHAP Investments, and many Manifund donors, especially Manifund Regrantor Tristan Hume, without whose donations we would have been unable to run the Winter 2023-24 Program or retain team members essential to this report. We are also grateful to Lightcone Infrastructure and AI Safety Support for their commitment and support. Finally, thank you to the inimitable John Wentworth and David Lorell, who may not belong to “MAT” but certainly belong in our hearts. To learn more about MATS, please visit our website. We are currently accepting donations for our upcoming programs via BERI and Manifund! ^ Some applicants applied to multiple mentors. This number includes scholars accepted to Neel Nanda’s Research Phase, rather than scholars accepted into Neel’s Training Phase. ^ Examples of selection questions: What is your favorite definition/formalization of a stochastic hybrid system, and why? (Davidad) Spend ~10 hours (for fairness, max 16) trying to make research progress on a small open problem in mechanistic interpretability, and show me what progress you’ve made. (Neel Nanda) You have access to a trained model’s outputs in response to any input. You can provide an unlimited number of inputs and receive responses. You do not have access to gradient information. What can you find out about what this model has learned from its training data? How does this change if you have access to gradients of inputs w.r.t. outputs? (Jessica Rumbelow) ^ Neel’s scholars faced additional filters between the Training Phase and Research Phase. ^ In reality, our budget is not given exogenously. A lower marginal cost per scholar, for example, might encourage donations, resulting in a larger budget. ^ These numbers exclude 14 Auditors who participated in the Training Phase knowing they would not be able to continue to the Research Phase. Auditors were allowed to complete the Research Sprint, but to our knowledge, none did. ^ Jay Bailey, a trainee from Winter 2022-23, attests, Neel [Nanda]'s training process in mechanistic interpretability was a great way for me to test my fit in the field and collaborate with a lot of smart people. Neel's stream is demanding, and he expects it to be an environment that doesn't work for everyone, but is very clear that there's no shame in this. While I didn't end up getting selected for the in-person phase, going through the process helped me understand whether I wanted to pursue mechanistic interpretability in the long term, and firm up my plans around how best to contribute to alignment going forward. Itamar Pres, a trainee from Summer 2023, corroborates, Being almost a year out from the MATS training program, I wanted to express my gratitude for that experience. The whole experience really gave me a sense of confidence in my intuitions, mathematical capabilities, and experimental design while simultaneously sharpening my skepticism and epistemic rigour (the two don't usually go hand in hand)! I really appreciated the paper critiques, research walkthroughs, and feedback on my own intuitions. Despite still figuring things out in the research sprint, I came out of those 5 weeks eager to do more work and so I followed work in the field closely. . . All of this is to say I found participating in MATS an incredibly valuable experience. ^ Thank you to John Wentworth for proposing this idea! ^ This survey question and the following one come from Lynette Bye’s 2020 review of her coaching impact.
2024-05-11
https://www.lesswrong.com/posts/6C3ndLd3nkrfy4K6j/if-we-go-extinct-due-to-misaligned-ai-at-least-nature-will
6C3ndLd3nkrfy4K6j
"If we go extinct due to misaligned AI, at least nature will continue, right? ... right?"
ete
[memetic status: stating directly despite it being a clear consequence of core AI risk knowledge because many people have "but nature will survive us" antibodies to other classes of doom and misapply them here.] Unfortunately, no.[1] Technically, “Nature”, meaning the fundamental physical laws, will continue. However, people usually mean forests, oceans, fungi, bacteria, and generally biological life when they say “nature”, and those would not have much chance competing against a misaligned superintelligence for resources like sunlight and atoms, which are useful to both biological and artificial systems. There’s a thought that comforts many people when they imagine humanity going extinct due to a nuclear catastrophe or runaway global warming: Once the mushroom clouds or CO2 levels have settled, nature will reclaim the cities. Maybe mankind in our hubris will have wounded Mother Earth and paid the price ourselves, but she’ll recover in time, and she has all the time in the world. AI is different. It would not simply destroy human civilization with brute force, leaving the flows of energy and other life-sustaining resources open for nature to make a resurgence. Instead, AI would still exist after wiping humans out, and feed on the same resources nature needs, but much more capably. You can draw strong parallels to the way humanity has captured huge parts of the biosphere for ourselves. Except, in the case of AI, we’re the slow-moving process which is unable to keep up. A misaligned superintelligence would have many cognitive superpowers, which include developing advanced technology. For almost any objective it might have, it would require basic physical resources, like atoms to construct things which further its goals, and energy (such as that from sunlight) to power those things. These resources are also essential to current life forms, and, just as humans drove so many species extinct by hunting or outcompeting them, AI could do the same to all life, and to the planet itself. Planets are not a particularly efficient use of atoms for most goals, and many goals which an AI may arrive at can demand an unbounded amount of resources. For each square meter of usable surface, there are millions of tons of magma and other materials locked up. Rearranging these into a more efficient configuration could look like strip mining the entire planet and firing the extracted materials into space using self-replicating factories, and then using those materials to build megastructures in space to harness a large fraction of the sun’s output. Looking further out, the sun and other stars are themselves huge piles of resources spilling unused energy out into space, and no law of physics renders them invulnerable to sufficiently advanced technology. Some time after a misaligned, optimizing AI wipes out humanity, it is likely that there will be no Earth and no biological life, but only a rapidly expanding sphere of darkness eating through the Milky Way as the AI reaches and extinguishes or envelops nearby stars. This is generally considered a less comforting thought. This is an experiment in sharing highlighted content from aisafety.info. Browse around to view some of the other 300 articles which are live, or explore related questions! ^ There are some scenarios where this might happen, especially in extreme cases of misuse rather than agentic misaligned systems, or in edge cases where a system is misaligned with respect to humanity but terminally values keeping nature around, but this is not the mainline way things go. ^ Nearly 90% of terrestrial net primary production and 80% of global tree cover are under direct human influence.
2024-05-18
https://www.lesswrong.com/posts/HWdyfGpQJKo35Hurg/podcast-with-yoshua-bengio-on-why-ai-labs-are-playing-dice
HWdyfGpQJKo35Hurg
Podcast with Yoshua Bengio on Why AI Labs are “Playing Dice with Humanity’s Future”
garrison
null
2024-05-10
https://www.lesswrong.com/posts/yCQQ6FHFsZwBXvKiM/what-do-you-value
yCQQ6FHFsZwBXvKiM
What do you value ?
akram-choudhary
In light of all of the talk about AI, utility functions, value alignment etc. I decided to spend some time thinking about what my actual values are. I encourage you to do the same (yes both of you). Lower values on the list are less important to me but not qualitatively less important. For example some of value 2 is not worth an unbounded amount of value 3. The only exception to this is value 1 which is in fact infinitely more important to me than the others. Life- If your top value isn't life then I don't know what to say. All the other values are only built off of this one. Nothing matters if you are dead. Call me selfish but I have always been sympathetic to Ayn Rand whenever she used the term "giving life away" to describe altruism.Preservation and improvement of the body and mind- I value my body because it allows me to navigate the world. Without it I would feel like a vegetable. I value my mind because I value being able to understand and manipulate the world around me using my cognition. I am deeply opposed to ideas of Utilitronium unless there are versions of it that preserve cognition that I am unaware of. I don't want to be drugged out even if I am happy.Pleasure/Wealth- Yes I have made the grave error of bundling these 2 together. At least before a civilization reaches utopia (or doom) it seems like currency is a general technology that you would find in most civilizations. I like all the little things (and the big things) money can buy. I wont shy away from admitting it either.Adventure/ Novelty. - I often think that a utopia would get boring (though I realize that boredom itself can be engineered out of the human mind). I think I would like a narrative for how my life is evolving rather than just a garden to be fed grapes in. I want there to be different things that happen on the adventure. I don't mind certain experiences being like 50% total of time spent that I value such as eating, sex and combat. I once again make the error of bundling these values because no adventure is complete without novelty and novelty is tasteless unless it follows some sort of narrative.Freedom- Should I feel ashamed for ranking this so low ? Men are often told they should value their freedom over their life. I just ... don't ... feel that way ? Lets get one thing straight. I am here to tell you about my preferences and values. You don't have any right to snub your nose at them. My intuition pump for this is just if someone offered me to trade my life for an in game character that had all the values higher than this one but whose life followed a script unknown to him, would I trade this life for that ? .... Yeh I guess I would. There's also the whole no free will thing so I have never believed in true freedom anyway.Truth- If you need to stop reading this degeneracy at this point I will understand. I think I don't care if my (non existent) wife actually loves me. I think if I was convinced she did that would be good enough for me. Yes truth like freedom is usually at the top of many other people's list. But not mine. I don't care that much and that's the truth. I don't imagine I will actually live to see a utopian world where I can do anything other than live with constantly frustrated values but I thought it might be fun to write this anyway. So what are your values ?
2024-05-10
https://www.lesswrong.com/posts/9bnakjbaKxsfcHjAo/dead-post-1
9bnakjbaKxsfcHjAo
dead post 1
David Chapel
null
2024-05-10
https://www.lesswrong.com/posts/o7LGpwxbg7SKWJtXX/do-you-know-of-lists-of-p-doom-s-ai-forecasts-ai-quotes
o7LGpwxbg7SKWJtXX
Do you know of lists of p(doom)s/AI forecasts/ AI quotes?
Nathan Young
I am trying to gather a list of answers/quotes from public figures to the following questions: What are the chances that AI will cause human extinction?Will AI automate most human labour?Should advanced AI models be open source?Do humans have a moral duty to build artificial superintelligence?Should there be international regulation of advanced AI?Will AI be used to make weapons of mass destruction (WMDs)? I am writing them down here if you want to look/help: https://docs.google.com/spreadsheets/d/1HH1cpD48BqNUA1TYB2KYamJwxluwiAEG24wGM2yoLJw/edit?usp=sharing
2024-05-10
https://www.lesswrong.com/posts/PBiLwTMppMhnxhZve/have-any-parties-in-the-current-european-parliamentary
PBiLwTMppMhnxhZve
Have any parties in the current European Parliamentary Election made public statements on AI?
MondSemmel
The European parliamentary election is currently taking place (voting period: 2024-04-07 to 2024-07-10). While I assume my vote[1] has ~no impact on x-risk either way, I'd nonetheless like to take into account if parties have made public statements on AI. But I'm not sure they do. Does anyone know, or how to find out? I guess there's also the question of past legislative records on AI, which might be even more predictive of future behavior. There's the EU's AI Act, but I'm not sure what its current implementation status is, whether it's considered useful by AI x-risk folks, or what if any political parties have made any statements on it. I'm also not sure how to interpret the parties' parliamentary voting record on it.[2] ^ In case it matters, I'm from Germany, Bavaria. ^ The roll-call vote results can be found here after clicking on "Artificial Intelligence Act". The 523/46/49 votes for the +/-/0 results are the votes in favor/against/abstained.
2024-05-10
https://www.lesswrong.com/posts/zyN7K8NB8RzAft7KZ/linear-infra-bayesian-bandits
zyN7K8NB8RzAft7KZ
Linear infra-Bayesian Bandits
vanessa-kosoy
Linked is my MSc thesis, where I do regret analysis for an infra-Bayesian[1] generalization of stochastic linear bandits. The main significance that I see in this work is: Expanding our understanding of infra-Bayesian regret bounds, and solidifying our confidence that infra-Bayesianism is a viable approach. Previously, the most interesting IB regret analysis we had was Tian et al which deals (essentially) with episodic infra-MDPs. My work here doesn't supersede Tian et al because it only talks about bandits (i.e. stateless infra-Bayesian laws), but it complements it because it deals with a parameteric hypothesis space (i.e. fits into the general theme in learning-theory that generalization bounds should scale with the dimension of the hypothesis class).Discovering some surprising features of infra-Bayesian learning that have no analogues in classical theory. In particular, it turns out that affine credal sets (i.e. such that are closed w.r.t. arbitrary affine combinations of distributions and not just convex combinations) have better learning-theoretic properties, and the regret bound depends on additional parameters that don't appear in classical theory (the "generalized sine" S and the "generalized condition number" R). Credal sets defined using conditional probabilities (related to Armstrong's "model splinters") turn out to be well-behaved in terms of these parameters. In addition to the open questions in the "summary" section, there is also a natural open question of extending these results to non-crisp infradistributions. (I didn't mention it in the thesis because it requires too much additional context to motivate.) ^ I use the word "imprecise" rather than "infra-Bayesian" in the title, because the proposed algorithms achieves a regret bound which is worst-case over the hypothesis class, so it's not "Bayesian" in any non-trivial sense. ^ In particular, I suspect that there's a flavor of homogeneous ultradistributions for which the parameter S becomes unnecessary. Specifically, an affine ultradistribution can be thought of as the result of "take an affine subspace of the affine space of signed distributions, intersect it with the space of actual (positive) distributions, then take downwards closure into contributions to make it into a homogeneous ultradistribution". But we can also consider the alternative "take an affine subspace of the affine space of signed distributions, take downwards closure into signed contributions and then intersect it with the space of actual (positive) contributions". The order matters!
2024-05-10
https://www.lesswrong.com/posts/w6vEJD3dtekaLgLME/shortest-goddamn-bayes-guide-ever
w6vEJD3dtekaLgLME
shortest goddamn bayes guide ever
lcmgcd
// ODDS = YEP:NOPE YEP, NOPE = MAKE UP SOME INITIAL ODDS WHO CARES FOR EACH E IN EVIDENCE YEP *= CHANCE OF E IF YEP NOPE *= CHANCE OF E IF NOPE The thing to remember is that yeps and nopes never cross. The colon is a thick & rubbery barrier. Yep with yep and nope with nope. bear : notbear = 1:100 odds to encounter a bear on a camping trip around here in general * 20% a bear would scratch my tent : 50% a notbear would * 10% a bear would flip my tent over : 1% a notbear would * 95% a bear would look exactly like a fucking bear inside my tent : 1% a notbear would * 0.01% chance a bear would eat me alive : 0.001% chance a notbear would As you die you conclude 1*20*10*95*.01 : 100*50*1*1*.001 = 190 : 5 odds that a bear is eating you.
2024-05-10
https://www.lesswrong.com/posts/YBamy2j3fQRojqQeg/the-alignment-problem-no-one-is-talking-about
YBamy2j3fQRojqQeg
The Alignment Problem No One Is Talking About
james-brown
The following is the first in a 6 part series about humanity's own alignment problem, one we need to solve, first. What Is Alignment? ALIGNMENT OF INTERESTS When I began exploring non-zero-sum games, I soon discovered that achieving win-win scenarios in the real world is essentially about one thing - the alignment of interests. If you and I both want the same result, we can work together to achieve that goal more efficiently, and create something that is greater than the sum of its parts. However, if we have different interests or if we are both competing for the same finite resource then we are misaligned, and this can lead to zero-sum outcomes. AI ALIGNMENT You may have heard the term "alignment" used in the current discourse around existential risk regarding AI, where a key issue is The Alignment Problem or the problem of Goal Alignment which concerns the potential misalignment of goals between humanity and artificial general intelligence (AGI) - a flexible general purpose intelligence, that may have its own motivations and is able to design its own goals. The argument considers a world where AGI, having decoupled itself from dependency on humans, overtakes human intelligence and develops goals that are at odds with humanity. The most obvious risk is a competition for resources like land, energy or computational capacity, where AGI might attempt to overthrow humanity taking the zero-sum perspective that it's them or us. INSTRUMENTAL GOALS But more interesting is the possibility that without developing its own ultimate goals an AGI may develop instrumental goals that help it achieve a directive given by us. For instance, Nick Bostrom's paperclip robot is given the directive to create paperclips, and goes ahead converting all matter into paperclips, and in the process destroys all life on earth. I spent some time with the ideas of Nick Bostrom, Max Tegmark, David Chalmers and other experts in the field while editing the documentary We need to talk about AI which explored the existential risk posed by the coming AI revolution. Now, one of the benefits of documentary editing (my day job) is that I get to study people at the top of their field and absorb a mass of information that, while it might not make the final cut, paints a vivid and nuanced picture of an issue. In this case, there was a consensus that a key stumbling block in designing AGI that is aligned with humanity's interests is the fact that we cant agree on what humanity's interests are. One of the experts encapsulated this idea in the statement... We [humanity] need a plan, and we don't have a plan". - Bryan Johnson A PLAN? But how can we come up with a plan if we can't agree on what we want? We can often seem misaligned with each other, sometimes because we have different ideas about what we should be striving for, and at other times because we see our collective goals in zero-sum terms, believing that prosperity for some necessitates poverty for others, and that individual interests must be at odds with collective goals. HUMANITY'S OWN ALIGNMENT PROBLEM This is what I see as the alignment problem no one is talking about; the alignment between the individual and the collective. It is the key problem facing humanity—a misalignment that plays out in our adversarial political system with right & left wings vying for dominance rather than confluence. We'll be exploring this particular alignment problem and its consequences in the next part. Thanks for reading—this was just the first in a 6 part series on humanity's own alignment problem. Each part is linked below. The Alignment Problem No One Is Talking AboutHumanity's Alignment ProblemGetting Aligned Through HistoryThe Scale Of The IssueCould AI Help?Solutions
2024-05-10
https://www.lesswrong.com/posts/RTiuLzusJWyepFpbN/why-care-about-natural-latents
RTiuLzusJWyepFpbN
Why Care About Natural Latents?
johnswentworth
Suppose Alice and Bob are two Bayesian agents in the same environment. They both basically understand how their environment works, so they generally agree on predictions about any specific directly-observable thing in the world - e.g. whenever they try to operationalize a bet, they find that their odds are roughly the same. However, their two world models might have totally different internal structure, different “latent” structures which Alice and Bob model as generating the observable world around them. As a simple toy example: maybe Alice models a bunch of numbers as having been generated by independent rolls of the same biased die, and Bob models the same numbers using some big complicated neural net. Now suppose Alice goes poking around inside of her world model, and somewhere in there she finds a latent variable ΛA with two properties (the Natural Latent properties): ΛA approximately mediates between two different observable parts of the world X1,X2ΛA can be estimated to reasonable precision from either one of the two parts In the die/net case, the die’s bias (ΛA) approximately mediates between e.g. the first 100 numbers (X1) and the next 100 numbers (X2), so the first condition is satisfied. The die’s bias can be estimated to reasonable precision from either the first 100 numbers or the second 100 numbers, so the second condition is also satisfied. This allows Alice to say some interesting things about the internals of Bob’s model. First: if there is any latent variable (or set of latent variables, or function of latent variables) ΛB which mediates between X1 and X2 in Bob’s model, then Bob’s ΛB encodes Alice’s ΛA (and potentially other stuff too). In the die/net case: during training, the net converges to approximately match whatever predictions Alice makes(by assumption), but the internals are a mess. An interpretability researcher pokes around in there, and finds some activation vectors which approximately mediate between X1 and X2. Then Alice knows that those activation vectors must approximately encode the bias ΛA. (The activation vectors could also encode additional information, but at a bare minimum they must encode the bias.) Second: if there is any latent variable (or set of latent variables, or function of latent variables) Λ′B which can be estimated to reasonable precision from just X1, and can also be estimated to reasonable precision from just X2, then Alice’s ΛA encodes Bob’s Λ′B (and potentially other stuff too). Returning to our running example: suppose our interpretability researcher finds that the activations along certain directions can be precisely estimated from just X1, and the activations along those same directions can be precisely estimated from just X2. Then Alice knows that the bias ΛA must give approximately-all the information which those activations give. (The bias could contain more information - e.g. maybe the activations in question only encode the rate at which a 1 or 2 is rolled, whereas the bias gives the rate at which each face is rolled.) Third, putting those two together: if there is any latent variable (or set of latent variables, or function of latent variables) Λ′′B which approximately mediates between X1 and X2 in Bob’s model, and can be estimated to reasonable precision from either one of X1 or X2, then Alice’s ΛA and Bob’s Λ′′B must be approximately isomorphic - i.e. each encodes the other. So if an interpretability researcher finds that activations along some directions both mediate between X1 and X2, and can be estimated to reasonable precision from either of X1 or X2, then those activations are approximately isomorphic to what Alice calls “the bias of the die”. So What Could We Do With That? We’ll give a couple relatively-legible examples of the possibilities natural latents potentially unlock, though these definitely aren’t the only applications. Interpretability There’s a “conceptually easy” version of interpretability, where you try to reduce a net to some simpler equivalent circuit. That’s pretty common to attempt, but unfortunately it’s not where most of the value is for e.g. AI alignment. The value is in a “conceptually harder” version of interpretability: map some stuff inside the net to some stuff outside the net which the net internals represent. In particular, we’d like to map some net internals to human-interpretable externals, i.e. stuff in the environment which humans represent in their internal world-models. So: there’s some internal (i.e. latent) structures within the net, and some internal (i.e. latent) structures in a human’s mind, and the hard (and high-value) version of interpretability is about faithfully mapping between those two. … and in general, that’s a pretty tough problem. Even if humans and nets converge on pretty similar distributions over “observables” (e.g. a foundation model generates text very similar to the distribution of text which humans generate, or an image model has a very similar idea to humans of what real-world images look like) the human and the net can still have wildly different internals. Indeed, their internal ontologies could in-principle be totally different; there might not be any faithful mapping between the two at all… though that’s not what I expect, and even today’s still-fairly-primitive interpretability techniques sure seem to suggest that nets’ internal ontologies are not totally alien. What the natural latent conditions give us is a tool to bridge the gap between internals of a net’s model and internals of a human’s model. It lets us say anything at all about the latent structure internal to one agent, based on some very simple and general conditions on the internal structure of another agent. And in particular, it lets us establish approximate isomorphism in at least some cases. Value Learning and The Pointers Problem The Pointers Problem says: whatever goals/values/utility humans have, the inputs to humans’ goals/values/utility are latent variables in the humans’ world models, as opposed to e.g. low-level states of the entire universe. Or, in plain English: I care about my friends and cool-looking trees and British pantomime theater[1], and people have cared about those sorts of things since long before we knew anything at all about the low-level quantum fields of which all those things consist. That presents a challenge for any AI system looking to learn and follow human values: since human values are expressed in terms of humans’ own internal high-level concepts, a faithful learning of those values needs to also somehow learn the high-level concepts which humans represent. Such a system needs its own internal representation of the concepts humans would use, and in order to actually pursue humans’ goals/values/utility, the system needs to wire its own goals/values/utility into that same structure - not just e.g. track “quoted human concepts”. Again, we see the same sort of problem that interpretability faces: a need to faithfully map between the internal latent structures of two minds. And natural latents apply in much the same way: they let us say anything at all about the latent structure internal to one agent, based on some very simple and general conditions on the internal structure of another agent. And in particular, they let us establish isomorphism in at least some cases. Where We’re Currently Headed With This The natural latent conditions themselves are still a fairly low-level tool. We currently expect that the sort of concepts humans use day-to-day (like cars and trees, rolling, orange-ness, and generally most of the things we have words for) can generally be grounded in terms of natural latents, but they’re relatively more complicated structures which involve a few different latents and different naturality conditions. For instance, there’s a diagram on our whiteboard right now which talks about two sets of latents which are each natural over different variables conditional on the other, in an attempt to establish naturality/isomorphism in a relatively-simple clustering model. So we view natural latents as a foundational tool. The plan is to construct more expressive structures out of them, rich enough to capture the type signatures of the kinds of concepts humans use day-to-day, and then use the guarantees provided by natural latents to make similar isomorphism claims about those more-complicated structures. That would give a potential foundation for crossing the gap between the internal ontologies of two quite different minds. ^ Not necessarily in that order.
2024-05-09
https://www.lesswrong.com/posts/NPK5gQzxzXx5Jjoaq/new-to-this-community
NPK5gQzxzXx5Jjoaq
New to this community
kjsisco
Hello.  I am a computer security consultant, programmer specializing in PC games, and podcaster.  I am most interested in AI alignment and ethical development and I hope to learn from all of you.
2024-05-10
https://www.lesswrong.com/posts/vEeoXYMvfjyTiWv2M/what-i-learned-from-doing-quiz-bowl
vEeoXYMvfjyTiWv2M
What I learned from doing Quiz Bowl
g-w1
I did Quiz Bowl throughout my time in high school, and looking back on it, it was a pretty positive thing to do! In this blog post, I want to make a list of some of the life lessons I have taken from Quiz Bowl. If you know about Quiz Bowl, just skip to the next paragraph. For the uninitiated, Quiz Bowl is a team activity where students compete, typically on teams of four, to answer questions about the world. There are questions on physics, philosophy, mathematics, chemistry, literature, history, economics, videogames, and pretty much every subject. The format of the game is simple: the moderator reads a tossup and then players can “buzz in” (no talking during tossups). If they get it correct, they earn 10 or 15 points and secure a three question bonus set for their team, with each bonus question being worth 10 points. Talking between players is allowed during the bonus questions. If a player gets a tossup wrong, their whole team is locked out from buzzing during that tossup and they can lose 5 points. If you’re interested, you can find the full rules for the NAQT style here. Team chemistry matters more than you’d think I was kind of surprised to learn this because Quiz Bowl is an activity based on facts. You either know the answer or you don’t. But it turns out that for some reason, encouraging your teammates and having what we call “good team chemistry” helps a lot. My team has played against some teams that were much higher ranked than us (e.g. Hunter from New York City, Thomas Jefferson from Virginia, Mira Loma from California), and in many cases, if our spirit was higher, we could beat them even if we knew less. I don’t know the exact mechanism by which this happens but I think it has to do with how emotions affect your calibration with the buzzer (which I’ll talk about next). If everyone on the team is happy and in a good mood, it seems that people hit the sweet spot in buzzer timing. They don’t wait to long to buzz when they know the answer, but they also don’t buzz to early and get the wrong answer. Any deviation from being in a good mood causes erratic buzzing which adds to a negative feedback loop of the players feeling worse and thus playing worse, ultimately leading to a loss. This is commonly known as “tilting.” Not being happy also probably makes your thoughts and reflexes slower, which could additionally explain the necessity of the team being happy during tossups. I think another reason for team chemistry mattering is the necessity of communicating during bonuses: bonuses are where most of the points in Quiz Bowl are scored, and effective communication is necessary during bonuses. If a team is sad (or as a teammate of mine likes to call it — “has negative team chemistry”), they won’t communicate and will end up not getting many points on bonuses. Luckily, my team has talked and thought a lot about the psychological aspect of Quiz Bowl, even thinking hard about seeming trivialities like in which order the players should sit, so we have one of the best team chemistries out of any team in the country (I claim). In fact, at small school nationals, in which we won the Traditional Public Schools division, parents would fairly often complement us on our team chemistry after a game was over — it just looked like we were having fun (and we were!). Quiz Bowl trains your calibration Questions can be answered at any point while they are being read, and it is advantageous to answer them sooner rather than later. This begs the question: when should you buzz? Understanding the style of the questions will help you get why this is such a big problem (feel free to skip if you know what pyramidal questions mean): Quiz Bowl questions are pyramidal. This means that they start off quite hard and then get easier, and by the end, most people will be able to get them. Here’s an example Patterns called boojums were first observed in a substance with this property. Heat transfer occurs in substances with this property analogously to pressure waves, a phenomenon known as second sound. A substance with this property exhibits the fountain effect due to the formation of (*) Rollin films that let it creep out of containers. Lev Landau developed a theory of this property, which is exhibited when helium-4 is cooled past its lambda point of 2.17 Kelvin. For 10 points, name this property in which a substance has zero viscosity, and flows with no resistance. ANSWER: superfluidity [accept superfluids; accept superfluid behavior] from 2021 DART II Packet 5 Question 14 If you buzz while the moderator is reading the bolded text, you get 15 points; otherwise you get 10. There is a big incentive to buzz as early as possible. But how early is too early? If you buzz before you know the answer, you’ll probably get it wrong and lose points. But if you wait too long, the other team will probably get the answer. Through Quiz Bowl, I’ve learned that the optimal time to buzz is a function of three factors: how right you think you are, how the game is going, and how good the other team is. My team has different calibration strategies for different situations. If we’re down by a lot and it’s close to the end of the game, we are quite aggressive and buzz even if we are not totally sure we know the answer. On the other hand, if we are playing against an easier team or are up, we like to wait for a clue confirming our hypothesis before buzzing. As I was writing this blog post, I looked up ‘quizbowl calibration’ to see if others had noticed that Quiz Bowl trains your calibration and found this very interesting paper where they make an AI Quiz Bowl Player. (For nerds: they train a buzzer models that takes in the softmax output of the classifier model and outputs whether to buzz or not.) Humans (or at least me) work pretty similarly. I usually have a hypothesis (or two) in my head while the question is being read and am constantly updating it on new information. When I feel confident enough (when my P(being right)≥threshold(pointsus−pointsthem,HowGood(them))), I buzz. Otherwise, I wait. Interestingly, the optimal strategy is actually not to get zero questions wrong. if you get zero questions wrong, it means that you were not buzzing aggressively enough. And my team is the perfect example of this: at SSNCT we got the most negs (or interrupts that were wrong) out of any team but still won. As a result of my calibration training, I’ve learned that I’m often underconfident — I can usually pick up the answer from a few key words. But I also have learned some cases in which I am overconfident (such as subjects I don’t know well). This is mostly very illegible intuition, but I think it’s useful and probably applicable to daily life. You can do deliberate practice and get much better than most teams In Quiz Bowl, you can tell which teams have someone with incredible innate talent on them. They usually have one player that answers all the tossups and bonuses. While it’s cool to see, this is actually not that optimal. Because if that player tilts, there are no other players to buzz and they lose the game. That is why teams with one dominant player are known as ‘swingy.’ On the other hand, my team does not have a single dominant player (though some are better than others). We all have different specialties that we agreed on beforehand to study. For example, we agreed that I would study math, physics, and philosophy. I had a bunch of experience with the former two but not much experience with philosophy so I had to teach myself the Quiz Bowl philosophy canon. I claim that the fact that we assigned each other different subjects to cover the whole category distribution and then actually studied them already put us at the 95th percentile. In Dan Luu’s article “95%-ile isn't that good,” he argues that if you actually try to get good at something by doing deliberate practice, you can get good pretty fast. This is what happened with my team. Some time in the winter of 2022-2023, we decided that we wanted to be a Good Quiz Bowl team. So we had semiregular meetings where we discussed our carding (Anki) strategy, category distribution, and other ways to deliberately practice Quiz Bowl. We also constantly revised our studying strategies to be more effective (reflection works!). Using this strategy, we went from a good but not great team to one of the best teams in the Tri-State area! Most teams just study by doing practice questions once (or a few times) a week). This works, but it is horribly inefficient. Every player practices all questions, even though they only specialize in a subset of them. It does not use spaced repetition based on what you already know. There are a bunch of other reasons why this is a bad strategy (it’s better to do a little bit every day than a lot once a week, etc). Through the process of using really effective studying habits, Quiz Bowl has taught me that with some deliberate practice and iterative reflection on that practice, you can actually get pretty good at a skill. I now know at a gut level that Dan’s thesis is true. I hope you have enjoyed reading this! Besides all of the ‘lessons’ I’ve learned, Quiz Bowl has also just been really fun and I’ve met a bunch of cool people from other schools that I wouldn’t have met in other contexts through it. Did you converge on any of these insights through some other activity? I’d imagine that some sports or activities like Model UN or Debate could teach similar lessons (but maybe not calibration).
2024-05-09
https://www.lesswrong.com/posts/qmxK9thLfMBLGFKwd/pascal-s-mugging-and-the-order-of-quantification
qmxK9thLfMBLGFKwd
Pascal's Mugging and the Order of Quantification
Mascal's Pugging
One of the fun things to do when learning first order logic is to consider how the meaning of propositions dramatically changes based on small switches in the syntax. This is in contrast to natural language, where the meaning of a phrase can be ambiguous and we naturally use context clues to determine the correct interperation. An example of this is the switching of the order of quantifiers. Consider the four following propositions:[1] ∀x∃yLikes(x,y)∀x∃yLikes(y,x)∃x∀yLikes(x,y)∃x∀yLikes(x,y) These mean, respectively, Everybody likes somebodyEverybody is liked by somebodyThere is a very popular person whom everybody likesThere is a very indiscriminate person who likes everyone These all have quite different meanings! Now consider an exchange between Pascal and a mugger: Mugger: I am in control of this simulation and am using an avatar right now. Give me $5 or I will go outside of this simulation and cause you to lose $10. Pascal: That does not seem like a wise choice for me. Let's say the probability of you being in control of this simulation is 1/10. Then the expected value of me handing you $5 is negative, so I will not do so. Mugger: Okay then, forget about the money, let's just focus on units of utility and call these utils. I am going to demand that you give me 5 utils. Because you think that the probability of me being in control of this simulation is 1/10, I will go outside of the simulation and simulate 20 conscious agents losing 5 utils each. Now the expected value of you not complying is (1/10)⋅(−5)⋅(20)=−10. And -10 is greater (more negative) than the -5 you would lose. So, hand me over the utils! Pascal: Well, I just made up 1/10 on the spot. The reality is that the probability you are in control of this simulation is in reality much much lower. Mugger: Ah, but it does not really matter what you probability you assign! Given any probability P>0 you have that I am in control of this simulation, I will go outside of this simulation and simulate P⋅(−X) so that P⋅(−X) is greater (more negative) than the -5 you would lose. So, hand me over the utils! (−X here is the totat amout of lost utility the mugger will simulate.) Now, let's formalize the mugger's argument and see if we can learn something. The mugger is claiming: ∀P∃X[|−P⋅X|>|−5|]. I propose that there is a simple counterargument to make here: P cannot be quantified before X, and in fact P should be a function of X. No matter what X is, we assert that |−P(X)⋅X|<ϵ where ϵ is such a small amount of util that it cannot even be meaningfully handed over by Pascal. That is, ∀X[|−P(X)⋅X|<ϵ] Why is this plausible? Well, we are basically saying that the larger the amount of util the mugger claims he can simulate, the less a chance there is he can actually do that. If I told you I had an easy way to make $20 if you gave me $10 then you might believe me. If I told you I had an easy way to make $200 if you gave me $10 then you would be more skeptical. If I told you I had an easy way to make $2,000,00 if you gave me $10 then you would dismiss me without a second thought. In the mugger's case specifically, we would imagine that when he pops outside of this simulation he has a finite amount of resources, and so the more he claims to be able to simulate the less probable that is actually true. ^ From Language, Proof, and Logic 2nd Edition
2024-05-10
https://www.lesswrong.com/posts/ANGmJnZL2fskHX6tj/dyslucksia
ANGmJnZL2fskHX6tj
Dyslucksia
DarkSym
The curious tale of how I mistook my dyslexia for stupidity - and talked, sang, and drew my way out of it. Sometimes I tell people I’m dyslexic and they don’t believe me. I love to read, I can mostly write without error, and I’m fluent in more than one language. Also, I don’t actually technically know if I’m dyslectic cause I was never diagnosed. Instead I thought I was pretty dumb but if I worked really hard no one would notice. Later I felt inordinately angry about why anyone could possibly care about the exact order of letters when the gist is perfectly clear even if if if I right liike tis. I mean, clear to me anyway. I was 25 before it dawned on me that all the tricks I was using were not remotely related to how other people process language. One of my friends of six years was specialized in dyslexia, and I contacted her, full excitement about my latest insight. “Man, guess what? I realized I am dyslectic! This explains so much! I wish someone had told me sooner. It would have saved me so much grief.” “Oh, yeah, I know.” “Wait, what?” “You are very obviously dyslectic.” “Wait, why didn’t you tell me?” “You didn’t seem bothered.” “Oh…” Turns out my dyslexia was a public secret that dated back all the way to my childhood (and this was obviously unrelated to my constitutional lack of self-awareness). Anyway. How come I kind of did fine? I’m fluent in English (not my native language), wrote my PhD thesis of 150 pages in 3 months without much effort, and was a localization tester for Dutch-English video game translation for two years. I also read out loud till the age of 21, trace every letter like it’s a drawing, and need to sing new word sounds to be able to remember them. I thought everyone had to but no one sent me the memo. Dear reader, not everyone has to. When I recently shared my information processing techniques with old and new friends, they asked if I had ever written them down so maybe other people could use them too. I hadn’t. So here is my arsenal of alternative information processing techniques. Read Out Loud Honestly, I didn’t realize there was an age where you were supposed to stop doing this. In school you obviously had to whisper to yourself. At home you go to your room and read at normal volume. If it’s a fiction book, you do voices for the different characters. It’s great. I remember my sister sometimes walking in to my room when I was little cause she said it sounded like so much fun in there. It totally was. Later I found out my mother made sure my siblings never made me aware it was unusual I was still reading out loud. Instead she signed me up for competitions to read books on the local radio. This was before the wide-spread internet and audio books. Later I’d read to my parents sometimes, who were always excited about how much energy I threw into the endeavor. I didn’t know any different. In college I was still reading out loud. Research papers have a voice. Mathematical equations especially. They take longer to say out loud than to read in your head, but you can never be sure what’s on the page if you don’t. According to my brain anyway. When I was 22 I moved in with my first boyfriend and reading out loud got a little obstructive. I started subvocalizing, and that was definitely less fun. I still subvocalize now. But if I struggle to follow a passage, I go back to reading it out loud. I’ve probably read out this essay a dozen times by now. I keep checking the cadence of every sentence. It’s easier to spot word duplications, cause I find myself repeating myself. Missing words also stick out like inverted pot holes. They destroy the flow. So I jump back and smooth them over. Sometimes when I talk, I finish the sentence differently than it’s written. Then I go back and compare the two. Often what I say is better than what I wrote. I’ve been told most people’s brains don’t work like that. All I can say is, I hope more kids get a chance to grow in to adults who keep reading out loud if that is what they need and enjoy. Draw the Letters The year before high school graduation I decided I wanted to get a 7.5 grade average (out of 10). In the Dutch school system, that’s kind of high, and it allows you automatic entry to any program of your choice. I studied hard, and diligently, and generally enjoyed it. Except I still failed every language I hadn’t managed to drop. My first fail grade was French, everybody breezed through German (Dutch with an accent) except me, and then there was mandatory Latin - an entirely logical life decision for a dyslectic. I managed to drop French and German. I could breeze through English cause I spoke it better than the teacher (thanks, Dad![1]) but with Dutch and Latin I was stuck. Now when it came to spelling Dutch words, I could not for the life me remember the rules or see the individual letters making up a word. I’d just be reading or writing, and it’s like little packages of word shape went in or out of my brain, but I couldn’t really see in to those packages. Write and right are the same word. So are father and dad. So are foruthwly and fortunly and forrtunaly. And that’s not even getting in to reading a sentence and managing to notice all the words. My brain automagically skips along a sentence kind of like: This is practice sentence to you how my brain. I wonder how noticeable differences are to to other people. I didn’t find it baffling that my brain worked like this. I found it baffling other people cared to give me fail grades about it! I was pretty sure I was intelligible. And with my Reading Out Loud trick (that I didn’t know was a trick) I could read just fine. In the mean time I was failing my Dutch exams three times in a row. The teacher made an exam you had to pass to be allowed into the next year. It was entirely a spelling exam. I was sure she had invented this inane rule to torture me personally. Luckily I didn’t invent my solution till the year after. Shows her who’s boss![2] Ahum. Anyway. I kept banging my eyeballs against the teflon of word images. I could not get any of it to stick. Till, for whatever reason, I realized words were just drawings. Drawings made up of letters. And letters were also drawings. I really liked drawings. I had always liked drawings. What if I pretended to draw every single letter as I wrote it? You know those magic spells in books where suddenly you get an extra sense or you look through the invisibility cloak? It was kind of like that. (I just apparently spelled “cloak” like “cloaca”. Have I mentioned I was often the class joker cause I read and heard everything wrong and my first 5 guesses were completely bonkers? I wonder if there is a dyslexia/creativity link purely because of that). Anyway, long story short, I was immediately cured of my writing dyslexia.[3] Yay! Sing the Words My first boyfriend was Swedish. His English was great and so was mine, so really there was no reason to strive for anything more. But I had some desire to be able to communicate better with his family so I did end up facing my well known demon again: foreign language learning. Every time he said a word in Swedish I just … didn’t remember it? Humans are supposed to have an auditory loop of a few seconds - You don’t have to memorize or pay attention to what you hear. You can just repeat the last few seconds back. It’s standard functionality. I can do that too. Thankfully. Unless I don’t recognize the sounds. It’s like asking me to beatbox the last 5 seconds of the gurgling of a nearby river. How the fudge would I do that? Wait, are there people who can do that? Anyway. It was making me feel mighty stupid though. I had been in enough self-flaggelation language learning courses to know no one else seemed afflicted by this micro-amnesia pointed exactly at the new word they were trying to learn. Also, this was not an issue for me for new words in a language I did know, nor was it an issue with new words in languages I didn’t know where the sounds were familiar (I went through 5 years of trying to learn Spanish prior to this, much to the confusion of anyone who knew me, but Spanish doesn’t contain new phonemes compared to Dutch). Again, I don’t know why I thought of this. My best guess is, I like experimenting. But at some point, I asked my then-boyfriend to sing the word to me that he had repeated to me roughly 30 times before that without effect. Yes. I got it the first time. And it reproduced. I could remember the word sounds first from singing, and then after a few repetitions, I could just say them. Hurray! Substituting brain processes This was more a personal account than a course on how to manage dyslexia. I never got help or training. I have no idea how common my techniques are. But my friends pointed out I should share them so others can find them and use them, so here you go. There is, to my mind, one common thread though: If your brain is bad at doing something the regular way, try something else. Anything else. Human brains have areas dedicated to language processing - encoding and decoding. Dyslexia covers all the ways that these encodings and decodings might fail. If you notice your default setup not working, try using a different mechanism instead. I think in practice, I’ve covered my major dyslexias in word image processing and phoneme processing by using my verbal center, my visual center (not language related), and the apparently highly specialized area of your brain that takes care of singing. They are not supposed to take care of reading, writing, and learning word sounds. But they can apparently do the job using their own toolkit. I’m slower at all the language related tasks than you’d expect for my overall mental functioning, but these techniques allow me to do the job without error. And I can tell you, being trapped in never being able to reliably parse the true information that is written down or said is a weird and maddening curse. It’s hard to explain if you don’t have any dyslectic tendencies. I used to think I was stupid. Turns out I’m just a special form of deaf and blind. I wish someone had told me how to deal with that. So for what it’s worth, if your brain is anything like mine, maybe try to talk, sing, or draw your way out of it. This was a cross-post of my first essay on my new blog Think Feel Play. Consider subscribing there if you liked this post, and would like to read future essays about learning, learning about learning, and learning about learning about learning ... ok. Yes. Hi. Or maybe just to make me smile :D ^ My father is American, and I’ve spoken some English for as long as I can remember. I can’t recall ever having to put in effort in to learning English. Something I remain eternally grateful for considering my dismal language acquisition skills. ^ I can’t remember how I made it through. I only remember how miserable and angry I was, as the only student in my year who had to keep redoing the exam. I think I got infinite redoes? Got lucky? Was proffered divine grace? ^ Maybe that description was too minimal to help anyone recreate the effect. What you do is you pretend the roman alphabet is a foreign alphabet. E.g. Kanji. Whenever you write or read, trace every stroke of the letter like you are illuminating an ancient manuscript. Channel your inner Sumi-E brush artist. Imagine yourself a true artisan of calligraphy. It’s a bit of a semi-meditative process of noticing every single stroke of every single letter. Yes, this is excruciatingly slow at first. Yes, it will be only kind of slow eventually. But, even better, you can probably still drop this technique at will and then just switch back and forth between high and low error modes of processing languages. Also, you are likely to lower your error rate in fast mode over time cause mental skills are porous. Or maybe magic? Anyway, it does seem to cross-over a bit.
2024-05-09
https://www.lesswrong.com/posts/2MTtKzbxeXDXTJhC2/introducing-statistical-utility-mechanics-a-framework-for
2MTtKzbxeXDXTJhC2
Introducing Statistical Utility Mechanics: A Framework for Utility Maximizers
Jemist
This is independent alignment research. A decent grasp on stat mech is needed to understand this post fully, but I have recently written a stat mech sequence (up to part 3) which should do the trick to catch someone up to speed rapidly. TL;DR Statistical Utility Mechanics is a framework which draws on stat mech to model the actions of utility-maximizing agents. I introduce this framework here, and use it to demonstrate why the most naive possible alignment plan (by which of course I mean one not too far off the one that most AI companies are currently using) is doomed by scaling laws, and we should expect to see sudden increases in AI's potential for harm around when AI reaches roughly human level. The framework consists of just three core rules The world is a probability distribution over statesEach agent assigns a value to each stateAgents can affect the probability distribution over states with limited power I'll go into more detail with these in this toy model: Toy Example: Why AI Suddenly Becomes Dangerous The world is described by a multivariate normal distribution over states s∈Rd, centred on the origin. All variances are 1, so the distribution can be written as follows: P0(S=s)=1(2π)d/2exp(−12s2)⟺S∼N(0,Id) This has the nice property of being totally spherically symmetrical. I will also write P0(S=s)≡ps0 for brevity in some cases. Let us consider an AI agent AX, which has the simplest possible utility function: xs=vX⋅s for some vector vX, and a "power level" OX∈R. Without loss of generality we will assume that |vX|=1 (the reason for this will be clear soon) This AI will impact the world's probability distribution P0AX→P1 according to two rules: DKL(P1∥P0)≤OXSubject to rule 1, E(X)=∫P1(S=s)xsds is maximized Normally stat mech tells us about entropy given expected values, but here we can flip it on its head to tell us about expected values given KL-divergence (see appendix for why we use KL-divergence here). This is ps1=ps0exp(bvX⋅s) for some constant b, and gives the distribution P1∼N(bvX,Id) Using the formula for the KL-divergence of two multivariate gaussians, we get: DKL(P1∥P0)=12b2(vX⋅vX)=12b2 OX=12b2∴b=√2OX P1∼N(√2OXvX,Id) E(X)=√2OX This framework naturally captures the fact that rescaling or adding a constant to our utility function does not change the behaviour of the agent. If |vX|≠1, then we just get a different b value and it all cancels out to the same answer. The Alignment Process Let's describe a human utility function in the same way as the agent's: ys=vY⋅s. We will also describe the "power" of humans in this system as OY. In this model, what matters is the cosine similarity between the two vectors, which can be expressed as τ=cos(θXY)=vX⋅vY. For a d dimensional space, and two totally randomly-chosen vectors, this is distributed like this: P0(a;τ=t)∝(1−t2)a where a=d−32 We will work in terms of a rather than d to simplify the maths, and because we expect to have a very high-dimensional space, and mainly care about scaling laws, it doesn't really matter. We'll consider altering the value of τ, again using the KL-divergence trick to measure how much we're impacting our distribution. Our system does not (really) have a good closed-form solution, especially for small values of b and large values of a. We can approximate our whole system numerically though, and get the following results: Note the log scale for both a, and the x-axis. Here we've calculated E(τ) as a function of a and b, but plotted it against Oτ, which we'll define as the divergence DKL(P1(a,b;τ=t)∥P0(a;τ=t)). We can see that the higher the dimensionality of our space, the more we must optimize the AI's utility function to get E(τ) above a certain critical value τc. If we plot this required optimization against τc: We can also see that this relationship is linear for any τc threshold we might want to set. This is somewhat unsurprising, the each dimension is just another, mostly independent, dimension along which we must optimize vX. The question is, how might we optimize the AI's utility function, and what are the constraints on Oτ? Consider the following loop (basically trying to wrangle an AI into alignment by letting it loose and seeing what it does): The AI observes the world.The AI takes an action to optimize the world.We observe the result of that action.We take an action to optimize the AI's utility function The expected amount of information we get about the AI's utility function from observing the result of each action is just the KL-divergence between the un-optimized world and the optimized world, in other words OX! How much of this optimization is harmful? We can express it as  OX⊥Y : √OX⊥Y=√(1−τ2)√OX √OX∥Y=√τ2√OX OX=OX∥Y+OX⊥Y (The reason for all the square roots is in Appendix 2) Here is the plot of the following integral: √OX⊥Y=∫√1−τ2d√Oτ This makes sense again: we have a linear relationship between the amount of harmful optimization the AI will do before we can wrangle it into alignment, and the size of the space of possible utility functions. Not good! Now if the people trying to align the AI are not completely stupid (debatable) we might want to add a final point to this loop: We take an action to rectify the result of the AI's action If the AI is smarter than us, we would expect not to be able to fully rectify the result of the AI's decision. For simplicity, and without particular principle, we will make the choice of  OX−Y=plus(OX⊥Y−OY) where plus(x)={x, x≥0 0, x<0 We will also upper-bound Oτ=max(OX,OY), representing the assumption that we probably can't optimize the AI any more strongly than we can optimize against its effects on the world (since our limits on both are likely due to our ability to understand the AI's actions). We can then simplify our expressions to these relations in terms of Oτ: RXY=OX/OY Oτ=max(1,RXY)×OY OY=Oτ/max(1,RXY) OX=min(RXY,1)×Oτ OX−Y=plus((1−τ)×min(RXY,1)−min(1RXY,1))×Oτ We will therefore calculate the integral: √OX−Y=∫√plus((1−τ)×min(RXY,1)−min(1RXY,1))d√Oτ Or we can plot it as a function of RXY, with a as a parameter: We can note a few predictions from this system: Total "harm" as measured by OX−Y during training is directly proportional to aFor RXY≤1, we experience zero total harm, but harm grows very rapidly around human level AI, and the larger the value of a, the faster it grows AI Scaling The question is, how do our parameters RXY scale with AI "scale"? Let's assign AI scale the to variable ξ (because xi is hard to draw and an AI's scale is hard to quantify) then we'll make the very conservative assumption that RXY scales with log(ξ), and a scales with ξk for some exponent k, probably ≤1. This gives us a plot looking something like the black line below: If we expect ξ to scale exponentially with time T, we expect a to scale exponentially over time, and RXY to scale linearly. What Ilya Saw Or if we use a logarithmic y-axis, to get a better look at just how fast it is growing: X axis has been rescaled to remove the zeroes I conjecture that this consists of three phases: A "lag" phase in which OX≪OY, so no harm is possible (the world is currently in this phase)A "catch-up" phase in which the OX≈OY, so we see double-scaling (due to overcoming the plus function more and more as RXY increases)An "exponential" phase, in which OX≫OY, so we "only" see exponential scaling, or perhaps scaling as (which is still very fast) Conclusions This model tells us to expect super-exponential scaling of AI harms around human level. It also tells us that the higher the dimensionality of the possible utility-function space, the more quickly these harms will grow. Some things this model does not include, a mixture of things which would make alignment easier or harder: The ability for the aligners to "sandbox" the AI, limiting its domain during alignment, or do offline-RL-ish-stuff on itClever tricks to "repeat" the AI's actions in some way to get more than OX information out of each of the AI's actionsThe ability for the AI to become more powerful over time during deployment (i.e. OX increases over time while the AI is being altered)The ability for the AI to understand that it is being optimized, and act strategically or deceptivelyA "sharp left turn" in which AI gains new capabilities (e.g. can optimize in more dimensions)"Real" goodharting, in which eventually even highly correlated optimization targets diverge due to constraints.Extra "fragility" terms outside of τ. I tried to simulate this by transforming the distribution to a narrower one, but polynomially-scaling τ had the same effect as increasing a, so perhaps it is best to think of a as just being a general "fragility constant" I'm going to keep working on SUM, and hopefully get even more interesting results. I think SUM has the potential to be a fertile ground for both formal proofs and more informal models (like this one) surrounding alignment. For a recap, here are the foundational principles: The world has states s∈S, which can be any setWe define a probability distribution P0(S=s)=ps0Agents AX assign real values to each element xs∈RAn agent can influence that probability distribution with power OXThis means the action PiAX→Pi+1 is subject to DKL(Pi+1∥Pi)≤OXThe optimal action for a myopic agent is psi+1∝psiebxs for b∈R+ Major Caveat Arguably the sudden threshold is somewhat artificial. I did add a threshold function to my measure for harm, which induced a threshold in harm. I do think the threshold is well-motivated though.It's somewhat unclear whether OX is the most natural measure of an AI's powerTechnically this should all be done with lots of causal models and severed correlations, which I have omitted here. I think the results ought to still hold there, but I haven't crunched all the numbers so I'm working in shorthand. Appendices Appendix 1: The Entropy to KL-Divergence Trick The world in SUM is conceptualized as a set of states s∈S. To each of these we assign an initial probability ps0, according to all the normal rules of probability distributions. We define the total entropy H=∑s∈Sps(hs0−lnps) where hs0 is the "intrinsic entropy" of each state, a trick from stat mech when our "states" are not truly fundamental. We then perform a second trick by defining hs0=lnps0. This has two effects: H0=0 by definition, and more subtly P0(S) becomes the maximum-entropy probability distribution. This combination gives us a definition of H which reduces to the KL-divergence. H=∑s∈Sps(lnps0−lnps)=−DKL(P∥P0) Appendix 2: KL-Divergence is not a Metric So KL-divergence is actually a generalization of squared distance, not distance. This is why I do my integrals with respect to d√O. This still might not be the most appropriate way to do things, but I've found that the qualitative results are the same if I do them with respect to dO or d√O.
2024-05-15
https://www.lesswrong.com/posts/Fgykut9sBmS3uiKsi/has-generative-ai-already-peaked-computerphile
Fgykut9sBmS3uiKsi
Has Generative AI Already Peaked? - Computerphile
davidfriva
null
2024-05-09
https://www.lesswrong.com/posts/ZmdzizGSkbewojKAK/thoughts-on-the-relative-economic-benefits-of-polyamorous
ZmdzizGSkbewojKAK
Thoughts on the relative economic benefits of polyamorous relationships?
OKlogic
I have a limited desire to pursue polyamory, but I have a strong inclination that when properly managed, a stable polycule would be far more economically efficient. Has any serious literature (books, studies, ect) been published on this question? Any thoughts?* Off the top of my head: Advantages of Monogamy: Less of risk of STIsGenerally More Stable (data is lacking, most likely due to the stigma surrounding polyamorous relationships naturally creating response bias,  but it seems intuitively correct that it is more difficult to maintain the relationship). Sanctioned by society at large (especially relevant for public-facing careers and familial inheritance)Less Time Intensive/Easier to Arrange Advantages of Polyamory: Allows you to easily influence more people (ie: gain social capital)* More people = more efficiency in general (Buying in bulk, sharing living space, etc).More opportunities for intercourse Edits to steer the conversation in a more productive direction + Footnotes: I know this is far less common in the wild, but I also want to discuss a form of polyamory which is a strict, potentially contractual agreement similar to marriage, in which each party agrees to abstain from extra-polycule relations. *On point one for the advantages of polyamory, the idea is that it is easier to influence the decisions of romantic partners than friends (on average) is a pretty huge deal in my opinion. It could just be because I am fairly introverted (ie socially challenged), but it seems much easier to change your family's behavior than friends. This should have serious implications for EA folk, as in theory, you can increase effective donations by a great margin by engaging in a polyamorous relationship, assuming there are no other disadvantages to polyamory. *Okay, I get that relationships are complex and it's difficult to study them due to a large number of confounding variables, but has there really been no systematic analysis (ie a book) on the economics of polyamory?
2024-05-09
https://www.lesswrong.com/posts/dLwo67p7zBuPsjG5t/we-might-be-missing-some-key-feature-of-ai-takeoff-it-ll
dLwo67p7zBuPsjG5t
We might be missing some key feature of AI takeoff; it'll probably seem like "we could've seen this coming"
Lukas_Gloor
Predicting the future is hard, so it’s no surprise that we occasionally miss important developments. However, several times recently, in the contexts of Covid forecasting and AI progress, I noticed that I missed some crucial feature of a development I was interested in getting right, and it felt to me like I could’ve seen it coming if only I had tried a little harder. (Some others probably did better, but I could imagine that I wasn't the only one who got things wrong.) Maybe this is hindsight bias, but if there’s something to it, I want to distill the nature of the mistake. First, here are the examples that prompted me to take notice: Predicting the course of the Covid pandemic: I didn’t foresee the contribution from sociological factors (e.g., “people not wanting to get hospitalized” – Zvi called it “the control system”).As a result, I overpredicted the difference between countries with a lockdown policy vs ones without. (Note that this isn’t necessarily an update against the cost-effectiveness of lockdowns because the update goes both ways: lockdowns saved fewer lives than I would’ve predicted naively, but costs to the economy were also lower compared to the counterfactual where people already social-distanced more than expected of their own accord since they were reading the news about crowded hospitals and knew close contacts who were sick with the virus.) Predicting AI progress: Not foreseeing that we’d get an Overton window shift in AI risk awareness.Many EAs were arguably un(der)prepared for the possibility of a “chat-gpt moment,” where people who weren’t paying attention to AI progress previously got to experience a visceral sense of where AI capabilities progress is rapidly heading. As a result, it is now significantly easier to make significant policy asks to combat AI risks.Not foreseeing wide deployment of early-stage “general” AI and the possible irrelevance of AI boxing.Early discussions of AI risk used to involve this whole step about whether a superhuman AI system could escape and gain access to the internet. No one (to my knowledge?) highlighted that the future might well go as follows: “There’ll be gradual progress on increasingly helpful AI tools. Companies will roll these out for profit and connect them to the internet. There’ll be discussions about how these systems will eventually become dangerous, and safety-concerned groups might even set up testing protocols (“safety evals”). Still, it’ll be challenging to build regulatory or political mechanisms around these safety protocols so that, when they sound the alarm at a specific lab that the systems are becoming seriously dangerous, this will successfully trigger a slowdown and change the model release culture from ‘release by default’ to one where new models are air-gapped and where the leading labs implement the strongest forms of information security.” If we had understood the above possibility earlier, the case for AI risks would have seemed slightly more robust, and (more importantly) we could’ve started sooner with the preparatory work that ensures that safety evals aren’t just handled company-by-company in different ways, but that they are centralized and connected to a trigger for appropriate slowdown measures, industry-wide or worldwide. Concerning these examples, it seems to me that: It should’ve been possible to either foresee these developments or at least highlight the scenario that happened as one that could happen/is explicitly worth paying attention to.The failure mode at play involves forecasting well on some narrow metrics but not paying attention to changes in the world brought about by the exact initial thing you were forecasting, and so predicting a future that will seem incongruent. What do I mean by “incongruent?” A world where hospitals are crowded with people dying from Covid-induced pneumonia and everyone has a contact who’s already got the virus, yet people continue to go to restaurants as normal.A world where AI capabilities progressed far enough to get us to something like chat-gpt, but somehow this didn’t cause a stir or wake-up moment for anyone who wasn’t already concerned about AI risk.A world where "general" AI is easier than across-the-board superhuman AI, and yet the profit-oriented AI companies don't develop a hard-to-reverse culture of making models broadly available (and giving them access to the internet) as these models are slowly getting more and more capable. I won’t speculate much on how to improve at this in this post since I mainly just wanted to draw attention to the failure mode in question. Still, if I had to guess, the scenario forecasting that some researchers have recently tried out seems like a promising approach here. To see why, imagine writing a detailed scenario forecast about how Covid affects countries with various policies. Surely, you’re more likely to notice the importance of things like the “control system” if you think things through vividly/in fleshed-out ways than if you’re primarily reasoning abstractly in terms of doubling times and r0. Admittedly, sometimes, there are trend-altering developments that are genuinely hard to foresee. For instance, in the case of the “chat-gpt moment,” it seems obvious with hindsight, but ten years ago, many people probably didn’t necessarily expect that AI capabilities would develop gradually enough for us to get chat-gpt capabilities well before the point where AI becomes capable of radically transforming the world. For instance, see Yudkowksy’s post about there being no fire alarm, which seems to have been wrong in at least one key respect (while being right in the sense that, even though many experts have changed their minds after chat-gpt, there’s still some debate about whether we can call it a consensus that “short AI timelines are worth taking seriously.”) So, I’m sympathetic to the view that it would have been very difficult (and perhaps unfairly demanding) for us to have anticipated a “chat-gpt moment” early on in discussions of AI risk, especially for those of us who were previously envisioning AI progress in a significantly different, more “jumpy” paradigm. (Note that progress being gradual before AI becomes "transformative" doesn't necessary predict that progress will continue to stay gradual all the way to the end – see the argument here for an alternative.) Accordingly, I'd say that it seems a lot to ask to have explicitly highlighted – in the sense of: describing the scenario in sufficient detail to single it out as possible and assigning non-trivial probability mass to it – something like the current LLM paradigm (or its key ingredients, the scaling hypothesis and “making use of internet data for easy training”) before, say, GPT-2 came out. (Not to mention earlier still, such as e.g., before Go progress signalled the AI community’s re-ignited excitement about deep learning.) Still, surely there must have come a point where it became clearer that a "chat-gpt moment" is a thing that's likely going to happen. So, while it might be true that it wasn’t always foreseeable that there’d be something like that, somewhere in between “after GPT-2” and “after GPT-3,” it became foreseeable as a possibility at the very least. I'm sure many people indeed saw this coming, but not everyone did, so I'm trying to distill what heuristics we could use to do better in similar, future cases. To summarize, I concede that we (at least those of us without incredibly accurate inside-view models of what’s going to happen) sometimes have to wait for the world to provide updates about how a trend will unfold. Trying to envision important developments before those updates come in is almost guaranteed to leave us with an incomplete and at-least-partly misguided picture. That’s okay; it’s the nature of the situation. Still, we can improve at noticing at the earliest point possible what those updates might be. That is, we can stay on the lookout for signals that the future will go in a different way than our default models suggest, and update our models early on. (Example: “AGI” being significantly easier than "across-the-board superhuman AI" in the LLM paradigm.) Furthermore, within any given trend/scenario that we’re explicitly modeling (like forecasting Covid numbers or forecasting AI capabilities under the assumption of the scaling hypothesis), we should coherence-check our forecasts to ensure that they don’t predict an incongruent state of affairs. By doing so, we can better plan ahead instead of falling into a mostly reactive role. So, here’s a set of questions to maybe ask ourselves: How will the social and technological landscape differ from now as we get even closer to transformative AI?What might be an incredibly relevant aspect of AI takeoff that I’m currently oblivious to? Particularly, anything that might happen where I will later think, “Ah, why didn’t I see this coming?”How might I be envisioning trends in a misguided way? What could be early indicators of “trend changes” where the future will depart from my default way of modeling/envisioning things?
2024-05-09
https://www.lesswrong.com/posts/NozxxqJwbaoEhLggk/four-unrelated-is-over
NozxxqJwbaoEhLggk
Four Unrelated Is Over
jkaufman
Somerville historically had a zoning ordinance limiting housing units to at most four unrelated people: any number of persons related by blood, marriage, adoption, or foster care agreement and up to three (3) additional unrelated persons living together as a single housekeeping unit This is something I'd been unhappy about for years, and was enthusiastic about the "4 unrelated is outdated" campaign to change it in in 2018. So I'm very happy that after a request for a variance the city council instead ended up removing the restriction. The actual change was in November, so I'm a bit late on this! I also think there was an oversight, where the removal didn't include changing the text in section 7-153 which says "All schools shall be responsible for publicizing to their students the limitations of the city's zoning ordinance which limits occupancy to not more than four unrelated individuals." I've written to the city council to let them know. I've also noticed that several school-affiliated sites still list this limitation: Tufts: Off-Campus Housing and Zoning Ordinances. Harvard: Harvard Housing Off Campus and Beckwith Circle. I've written to Tufts and Harvard to let them know this has changed. I wasn't able to find this listed on any MIT or Lesley sites, and didn't check all the other Boston-area college websites.
2024-05-09
https://www.lesswrong.com/posts/RHMA26L3i5ChiCjYN/ai-and-chemical-biological-radiological-and-nuclear-hazards
RHMA26L3i5ChiCjYN
AI and Chemical, Biological, Radiological, & Nuclear Hazards: A Regulatory Review
elliot
This article is last in a series of 10 posts comprising a 2024 State of the AI Regulatory Landscape Review, conducted by the Governance Recommendations Research Program at Convergence Analysis. Each post will cover a specific domain of AI governance, such as incident reporting, safety evals, model registries, and more. We’ll provide an overview of existing regulations, focusing on the US, EU, and China as the leading governmental bodies currently developing AI legislation. Additionally, we’ll discuss the relevant context behind each domain and conduct a short analysis. This series is intended to be a primer for policymakers, researchers, and individuals seeking to develop a high-level overview of the current AI governance space. We’ll publish individual posts on our website and release a comprehensive report at the end of this series. Thank you to Melissa Hopkins for providing feedback on this report. Humanity has developed technologies capable of mass destruction, and we need to be especially cautious about AI in relation to these technologies. These technologies and associated risks commonly fall into four main categories, collectively known as CBRN: Chemical hazards: Toxic chemical substances that can cause significant harm to people or the environment, such as chemical warfare agents or toxic industrial chemicals.Biological hazards: Toxins and infectious agents like bacteria, viruses, and other pathogens that can cause disease in humans, animals or plants.Radiological hazards: Radioactive materials that emit ionizing radiation which can harm human health, such as waste from nuclear power stations.Nuclear hazards: Materials related to nuclear fission or fusion that can release tremendous destructive energy, such as nuclear weapons and nuclear power plant accidents. In this section, we’ll briefly contextualize current and upcoming examples of each of these types of hazards in the context of AI technologies. What are potential chemical hazards arising from the increase in AI capabilities? In particular, a prominent concern of experts is the potential for AI to lower the barrier of entry for non-experts to generate CBRN harms. That is, AI could make it easier for malicious or naive actors to build dangerous weapons, such as chemical agents with deadly properties. For example, pharmaceutical researchers use machine learning models to identify new therapeutic drugs. In this study, a deep learning model was trained on ~2,500 molecules and their antibiotic activity. When shown chemicals outside that training set, the model could predict whether they would function as antibiotics. However, training a model to generate novel safe and harmless medications is very close to, if not equivalent to, training a model to generate chemical weapons. This is an example of the Waluigi Effect; the underlying model is simply learning to predict toxicity, and this can be used to rule out harmful chemicals, or generate a list of them, ranked by deadliness. This was demonstrated by the Swiss Federal Institute for Nuclear, Biological, and Chemical Protection (see here for a non-paywalled summary). By telling the same model to generate harmful molecules, it generated a list of 40,000 such molecules in under 6 hours. These included deadly nerve agents such as VX, as well as previously undiscovered molecules that it ranked as more deadly than VX. To quote the researchers: This was unexpected because the datasets we used for training the AI did not include these nerve agents… By inverting the use of our machine-learning models, we had transformed our innocuous generative model from a helpful tool of medicine to a generator of likely deadly molecules. As AI models become more deeply integrated into the process of developing chemicals used for industrial and medical purposes, it will become increasingly accessible for malicious parties to use these models for dangerous means. What are biological hazards arising from the increase in AI capabilities? In the near future, AI may lower the barrier of entry for malicious actors to generate pandemic-level biological hazards. This risk comes from both specialized AI trained for biological research and more generic AI, such as large language models. Large language models (LLMs) have been identified by recent papers to lower barriers to misuse by enabling the weaponization of biological agents. In particular, this may occur from the increasing application of LLMs as biological design tools (BDTs), such as multimodal lab assistants and autonomous science tools. These BDTs make it easier and faster to conduct laboratory work, supporting the work of non-experts and expanding the capabilities of sophisticated actors. Such abilities may produce “pandemic pathogens substantially more devastating than anything seen to date and could enable forms of more predictable and targeted biological weapons”. Further, the risks posed by LLMs and by custom AI trained for biological research can exacerbate each other by increasing the amount of harm an individual can do while providing access to those tools to a larger pool of individuals. It’s important to note these risks are still unlikely with today’s cutting-edge LLMs, though this may not hold true for much longer. Two recent studies from RAND and OpenAI have found that current LLMs are not more prone to misuse than standard internet searches regarding biological and chemical weapons. Another leading biological hazard of concern is synthetic biology – the genetic modification of individual cells or organisms, as well as the manufacture of synthetic DNA or RNA strands called synthetic nucleic acids. This field poses a particularly urgent risk because existing infrastructure could theoretically be used by malicious actors to produce an extremely deadly pathogen, for example. Researchers are able to order custom DNA or RNA to be generated and mailed to them, a crucial step towards turning a theoretical pandemic-level design into an infectious reality. Currently, we urgently need mandatory screening of ordered material to ensure it won’t enable pandemic-level threats. Some researchers are developing tools specifically to measure and reduce the capacity of AI models to lower barriers of entry for CBRN weapons and hazards, with a particular focus on biological hazards with pandemic potential. For example, OpenAI is developing “an early warning system for LLM-aided biological threat creation”, and a recent collaboration between several leading research organizations produced a practical policy proposal titled Towards Responsible Governance of Biological Design Tools. The Centre for AI Safety has also released the “Weapons of Mass Destruction Proxy”, which measures how particular LLMs can lower the barrier of entry for CBRN hazards more broadly. Tools and proposals such as these, developed with expert knowledge of CBRN hazards and AI engineering, are likely to be a crucial complement to legislative and regulatory efforts. For more context on these potential pandemic-level biological hazards, you can read: The White House Office of Science and Technology Policy’s Framework for Nucleic Acid Synthesis Screening, published in April 2024 as directed by the Executive Order (an update to the ASPR’s 2023 framework).The US Government Policy for Oversight of Dual Use Research of Concern and Pathogens with Enhanced Pandemic Potential, published in May 2024. What are radiological and nuclear hazards arising from the increase in AI capabilities? A prominent and existential concern from many AI safety researchers is the risk of integrating AI technologies in the chain-of-command of nuclear weapons or nuclear power plants. As one example, it’s been proposed that AI could be used to monitor and maintain the activity of nuclear power plants. Elsewhere, The Atlantic cites the Soviet Union’s Dead Hand as evidence that militaries could be tempted to use AI in the nuclear chain-of-command. Dead Hand is a system developed in 1985 that, if activated, would automatically launch a nuclear strike against the US if a command-and-control center stopped receiving communications from the Kremlin and detected radiation in Moscow’s atmosphere (a system which may still be operational). As the reasoning of AI technology is still poorly understood and AI models have unpredictable decision-making abilities, it’s quite likely that such an integration may lead to unexpected and dangerous failure modes, which for nuclear technologies have catastrophic worst-case outcomes. As a result, many researchers argue that the risk of loss-of-control means we shouldn’t permit the usage of AI anywhere near nuclear technologies, such as decision-making regarding the nuclear launch codes or the storage and maintenance of nuclear weapons. In proposed legislation, some policymakers have pushed for banning AI in nuclear arms development, such as a proposed pact from a UK MP and Senator Mitt Romney’s recent letter to the Senate AI working group. Romney’s letter proposes a framework to mitigate extreme risks by requiring powerful AIs to be licensed if they’re intended for chemical/bio-engineering or nuclear development. However, nothing binding has been passed into law. There have also been reports that the US and China are having discussions on limiting the use of AI in areas including nuclear weapons. Current Regulatory Policies The US The Executive Order on AI has several sections on CBRN hazards: various department secretaries are directed to implement plans, reports, and proposals analyzing CBRN risks and how to mitigate them, and Section 4.4 specifically focuses on analyzing biological weapon risks and how to reduce them in the short-term. In full: Section 3(k): The term “dual-use foundation model” is defined as AI that, among other criteria, exhibits or could be modified to exhibit high performance at tasks that pose serious risks, such as substantially lowering the barrier of entry for non-experts to design, synthesize, acquire, or use CBRN weapons. 4.1(b): The Secretary of Energy must coordinate with Sector Risk Management Agencies to develop and implement a plan for developing AI model evaluation tools and testbeds, and at a minimum, to develop tools to evaluate AI capabilities to generate outputs that may represent nuclear, nonproliferation, biological, chemical, critical infrastructure, and energy-security threats or hazards and must develop model guardrails that reduce such risks. 4.2(a)(i)(C): The Secretary of Commerce must require companies developing dual-use foundation models to provide continuous information and reports on the results of any red-team testing related to lowering the barrier to entry for the development, acquisition, and use of biological weapons by non-state actors. 4.2(b)(i): Any model that primarily uses biological sequence data and that was trained using at least 1023 FLOPs must comply with 4.2(a) until proper technical conditions are developed. The following points are all part of 4.4, which is devoted to Reducing Risks at the Intersection of AI and CBRN Threats, with a particular focus on biological weapons: 4.4(a)(i): The Secretary of Homeland Security must evaluate the potential for AI to be misused to enable the development or production of CBRN threats, while also considering the benefits and application of AI to counter these threats. (A) This will be done in consultation with experts in AI and CBRN issues from the DoE, private AI labs, academia, and third-party model evaluators, for the sole purpose of guarding against CBRN threats. (B) The Secretary of Homeland Security will submit a report to the president describing progress, including an assessment of the types of AI models that may present CBRN risks to the United States and recommendations for regulating their training and use, including requirements for safety evaluations and guardrails for mitigating potential threats to national security4.4(a)(ii): The Secretary of Defense must enter a contract with the National Academies of Sciences, Engineering, and Medicine to conduct and submit a study that:(A) assesses how AI can increase biosecurity risks, and makes recommendations on mitigating such risks;(B) considers the national security implications of the use of data associated with pathogens and omics[1] studies that the government funds or owns for the training of generative AI, and makes recommendations on mitigating such risks;(C) assesses how AI can be used to reduce biosecurity risks;(D) considers additional concerns and opportunities at the intersection of AI and synthetic biology. 4.4(b): To reduce the risk of misuse of synthetic nucleic acids[2]:(i) The director of OSTP, in consultation with several secretaries, shall establish a framework to encourage providers of synthetic nucleic acid sequences to implement comprehensive, scalable, and verifiable synthetic nucleic acid procurement screening mechanisms. As part of this framework, the director shall:(A) establish criteria for ongoing identification of biological sequences that could be pose a risk to national security; and(B) determine standard methodologies for conducting & verifying the performance of sequence synthesis procurement screening, including customer screening approaches to support due diligence with respect to managing security risks posed by purchasers of biological sequences identified in (A) and processes for the reporting of concerning activity. (ii) The secretary of commerce, acting through NIST and in coordination with others, shall initiate an effort to engage with industry and relevant stakeholders, informed by the framework of 4.4(b)(i), to develop and refine:(A) Specifications for effective nucleus synthesis procurement screening;(B) Best practices, including security and access controls, for managing sequence-of-concern databases to support screening;(C) technical implementation guides for effective screening; and(D) conformity-assessment best practices and mechanisms. (iii) All agencies that fund life-sciences research shall establish as a requirement of funding that synthetic nucleic acid procurement is conducted through providers or manufacturers that adhere to the framework of 4.4(b)(i). The Assistant to the President for National Security Affairs and Director of OSTP shall coordinate the process of reviewing such funding requirements to facilitate consistency in implementation. (iv) To facilitate effective implementation of the measures of 4.4(b)(i)-(iii), the Secretary of Homeland Security shall, with consultation:(A) Develop a framework to conduct structured evaluation and stress testing of nucleic acid synthesis procurement screening [...]; (B) Submit an annual report [...] on any results of the activities conducted pursuant to 4.4(b)(iv)(A), including recommendations on how to strengthen procurement screening. China China’s three most important AI regulations do not contain any specific provisions for CBRN hazards. The EU The EU AI Act does not contain any specific provisions for CBRN hazards, though article (60m) on the category of “general purpose AI that could pose systemic risks” includes the following mention of CBRN: “international approaches have so far identified the need to devote attention to risks from [...]  chemical, biological, radiological, and nuclear risks, such as the ways in which barriers to entry can be lowered, including for weapons development, design acquisition, or use”. Convergence’s Analysis Mitigating catastrophic risks from AI-enabled CBRN hazards should be a top global priority. CBRN hazards present arguably the shortest and most immediate path for AI to lead to catastrophic harm. AI is demonstrably already capable of lowering the barrier to entry of generating biological and chemical weapons. This lowering is likely to get more dramatic in the near future. When paired with the existing and under-regulated infrastructure for biology labs generating custom genetic code on demand, this could plausibly lead to the accidental or deliberate release of an unprecedented pandemic pathogen within the next decade. Despite this, current and near-future legislation and regulation regarding AI and CBRN hazards is wholly insufficient given the scale of potential risks. The EU and China currently have no specific binding requirements regarding the development of AI models capable of enabling the development of CBRN weapons. The US Executive Order directs several agencies to initiate important studies and reports on the intersection of AI and CBRN weapons, particularly focusing on biosecurity risks. However, these are largely non-binding and exploratory, leaving plenty of ambiguity in precisely what regulations might follow the directive. More concrete regulation focused on catastrophic and existential risks, such as mandatory safety and security requirements for dual-use models, is needed. Effective regulation of CBRN and AI will require close collaboration between AI experts, domain experts, and policymakers. The development of legislation regarding CBRN weapons requires an unusually high level of specialized technical expertise, and so regulators will need to work closely with leading researchers in the fields of AI, biology, chemistry, and cybersecurity to identify and mitigate key risks.It is difficult to impossible to develop effective model evaluations without substantial input from both AI experts and domain experts. Long-term, close collaboration between these parties is a critical aspect of identifying key CBRN risks.Several teams of researchers have been developing tools and proposals tailored to CBRN-related AI risk (though none have yet been adopted), such as:OpenAI’s early warning system for LLM-aided biological threat creation;The Centre for AI Safety’s Weapons of Mass Destruction Proxy;Towards Responsible Governance of Biological Design Tools, a collaboration between leading AI, governance, and risk research organizations. AI governance in other high-risk domains like cybersecurity and the military has major implications for CBRN risks. Multiple militaries around the world possess stockpiles of chemical, biological, and nuclear weapons, and nuclear power plants and biocontainment facilities can also present CBRN hazards. If advanced AI is trained for cybersecurity attacks, these stockpiles and other hazardous systems could be targeted with devastating outcomes.The increasing adoption of AI by militaries - such as the first confirmed deployment of fully autonomous military drones and the several hundred US military AI projects disclosed by the Pentagon - leads many to fear that AI will become increasingly involved in the decision-making and chain-of-command of CBRN weapons. The involvement of AI here will require exceptional value alignment, as even slight misalignment in goals and values between human and AI operators could lead to catastrophic harm. ^ Defined in the Executive Order as “biomolecules, including nucleic acids, proteins, and metabolites, that make up a cell or cellular system” ^ Defined in the Executive Order in the following: “The term “synthetic biology” means a field of science that involves redesigning organisms, or the biomolecules of organisms, at the genetic level to give them new characteristics. Synthetic nucleic acids are a type of biomolecule redesigned through synthetic-biology methods.”
2024-05-10
https://www.lesswrong.com/posts/FrBxFa3qMDvLypDEZ/ai-63-introducing-alpha-fold-3
FrBxFa3qMDvLypDEZ
AI #63: Introducing Alpha Fold 3
Zvi
It was a remarkably quiet announcement. We now have Alpha Fold 3, it does a much improved job predicting all of life’s molecules and their interactions. It feels like everyone including me then shrugged and went back to thinking about other things. No cool new toy for most of us to personally play with, no existential risk impact, no big trades to make, ho hum. But yes, when we look back at this week, I expect what we remember will be Alpha Fold 3. Unless it turns out that it is Sophon, a Chinese technique to potentially make it harder to fine tune an open model in ways the developer wants to prevent. I do not expect this to get the job done that needs doing, but it is an intriguing proposal. We also have 95 theses to evaluate in a distinct post, OpenAI sharing the first draft of their model spec, Apple making a world class anti-AI and anti-iPad ad that they released thinking it was a pro-iPad ad, more fun with the mysterious gpt2, and more. The model spec from OpenAI seems worth pondering in detail, so I am going to deal with that on its own some time in the coming week. Table of Contents Introduction. Table of Contents. Language Models Offer Mundane Utility. Agents, simple and complex. Language Models Don’t Offer Mundane Utility. No gadgets, no NPCs. GPT-2 Soon to Tell. Does your current model suck? In some senses. Fun With Image Generation. Why pick the LoRa yourself? Deepfaketown and Botpocalypse Soon. It’s not exactly going great. Automation Illustrated. A look inside perhaps the premiere slop mill. They Took Our Jobs. Or are we pretending this to help the stock price? Apple of Technically Not AI. Mistakes were made. All the feels. Get Involved. Dan Hendrycks has a safety textbook and free online course. Introducing. Alpha Fold 3. Seems like a big deal. In Other AI News. IBM, Meta and Microsoft in the model game. Quiet Speculations. Can we all agree that a lot of intelligence matters a lot? The Quest for Sane Regulation. Major labs fail to honor their commitments. The Week in Audio. Jack Clark on Politico Tech. Rhetorical Innovation. The good things in life are good. Open Weights are Unsafe and Nothing Can Fix This. Unless, maybe? Hmm. The Lighter Side. Mmm, garlic bread. It’s been too long. Language Models Offer Mundane Utility How much utility for how much cost? Kapoor and Narayanan argue that with the rise of agent-based systems, you have to evaluate different models on coding tasks based on dollar cost versus quality of results. They find that a simple ‘ask GPT-4 and turn the temperature slowly up on retries if you fail’ is as good as the agents they tested on HumanEval, while costing less. They mention that perhaps it is different with harder and more complex tasks. How much does cost matter? If you are using such queries at scale without humans in the loop, or doing them in the background on a constant basis as part of your process, then cost potentially matters quite a bit. That is indeed the point of agents. Or if you are serving lots of customers constantly for lots of queries, those costs can add up fast. Thus all the talk about the most cost-efficient approach. There are also other purposes for which cost at current margins is effectively zero. If you are a programmer who must evaluate, use and maintain the code outputted by the AI, what percentage of total costs (including your labor costs) are AI inference? In the most obvious baseline case, something akin to ‘a programmer asks for help on tasks,’ query speed potentially matters but being slightly better at producing good code, or even slightly better at producing code that is easier for the human to evaluate, understand and learn from, is going to crush any sane inference costs. If I was paying by the token for my AI queries, and you offered me the option of a 100x cost increase that returned superior answers at identical speed, I would use the 100x costlier option for most purposes even if the gains were not so large. Ethan Mollick is the latest to try the latest AI mobile hardware tools and find them inferior to using your phone. He also discusses ‘copilots,’ where the AI goes ahead and does something in an application (or in Windows). Why limit yourself to a chatbot? Eventually we won’t. For now, it has its advantages. Iterate until you get it right. Michael Nielsen: There is a funny/striking story about former US Secretary of State Colin Powell – when someone had to make a presentation to him, he’d sometimes ask before they began: “Is this presentation the best you can do?” They’d say “no”, he’d ask them to go away and improve it, come back. Whereupon he would ask again… and they might go away again. I don’t know how often he did this, if ever – often execs want fast, not perfect; I imagine he only wanted “best possible” rarely. But the similarity to ChatGPT debugging is hilarious. “Is that really the answer?” works… Traver Hart: I heard this same anecdote about Kissinger. He asked whether a written report was the best a staffer could do, and after three or so iterations the staffer finally said yes. Then Kissinger said, “OK, now I’ll read it.” One obvious thing to do is automate this process. Then only show a human the output once the LLM confirms it was the best the model could do. Agent Hospital is a virtual world that trains LLMs to act as better doctors and nurses. They claim that after about ten thousand virtual patients the evolved doctors got state-of-the-art accuracy of 93% on a subset of MedQA covering major respiratory diseases. This seems like a case where the simulation assumes the facts you want to teach, avoiding the messiness inherent in the physical world. Still, an interesting result. File under ‘if you cannot think of anything better, brute force imitate what you know works. More Dakka.’ Do your homework for you, perhaps via one of many handy AI wrapper apps. Find companies that do a lot of things that could be automated and would benefit from AI, do a private equity-style buyout, then have them apply the AI tools. One top reason to buy a company is that the new owner can break a bunch of social promises, including firing unnecessary or underperforming workers. That is a powerful tool when you combine it with introducing AI to replace the workers, which seems to be the name of the game here. I am not here to judge, and also not here to judge judgers. Language Models Don’t Offer Mundane Utility Catholic.com ‘defrocks’ their AI pastor Justin, turning him into a regular Joe. Want to use big cloud AI services? Good luck with the interface. Real builders are reporting trying to use Azure for basic things and being so frustrated they give up. I know! Marques Brownlee: On one hand: It seems like it’s only a matter of time before Apple starts making major AI-related moves around the iPhone and iOS and buries these AI-in-a-box gadgets extremely quickly On the other hand: Have you used Siri lately? Peter Wildeford: I am always baffled at how bad the current Alexa / Google Home / Siri are relative to what they should be capable of given GPT-4 level tech. Kevin Fisher lists his six main reasons why we don’t have realistically behaving NPCs in games yet. They are essentially: Development cycles are long. Costs are still too high. Not the role the NPC has. Doesn’t fit existing game templates. Such NPCs are not yet compelling. We don’t have a good easy way to create the NPCs yet. I would agree, and emphasize: Most games do not want NPCs that behave like people. There are exciting new game forms that do want this. Indeed, if I got the opportunity to make a game today, it would have LLM NPCs as central to the experience. But that would mean, as Kevin suggests, building a new type of game from the ground up. I do think you can mostly slot LLM-powered NPCs into some genres. Open world RPGs or MMOs are the most obvious place to start. And there are some natural fits, like detective games, or games where exploration and seeing what happens is the point. Still, it is not cheap to let those characters out to play and see what happens, and mostly it would not be all that interesting. When the player is in ‘gaming’ mode, the player is not acting so realistically. Having a ‘realistic’ verbal sparring partner would mostly cause more weirdness and perverse player behaviors. I keep asking, but seriously, what is up with Apple, with Siri, and also with Alexa? Modest Proposal: I am the last person to defend Apple but they spent more on R&D than Microsoft in the quarter and trailing twelve months. Their buyback is like one year of free cash flow. You can argue they are not getting a return on their R&D, but it’s not like they are not spending. And sure, you can argue Microsoft is outsourcing a portion of its R&D to OpenAI, and is spending ungodly sums on capex, but Apple is still spending $30B on R&D. Maybe they should be spending more, maybe they should be inventing more, but they are spending. Sam Altman asks: If an AI companion knows everything about you, do we need a form of protection to prevent it from being subpoenaed to testify against you in court? I mean, no? It is not a person? It can’t testify? It can of course be entered into evidence, as can queries of it. It is your personal property, or that of a company, in some combination. Your files can and will be used against you in a court of law, if there is sufficient cause to get at them. I can see the argument that if your AI and other tech is sufficiently recording your life, then to allow them to be used against you would violate the 5th amendment, or should be prevented for the same logical reason. But technology keeps improving what it records and we keep not doing that. Indeed, quite the opposite. We keep insisting that various people and organizations use that technology to keep better and better records, and ban people from using methods with insufficient record keeping. So my prediction is no, you are not getting any privacy protections here. If you don’t want the AI used against you, don’t use the AI or find a way to wipe its memory. And of course, not using the AI or having to mindwipe it would be both a liability and hella suspicious. Some fun crime dramas in our future. The Humane saga continues. If you cancel your order, they ask you why. Their wording heavily implies they won’t cancel unless you tell them, although they deny this, and Marques Brownlee Tweeted that they require a response. GPT-2 Soon to Tell Sam Altman confirms that gpt2-chatbot is not GPT-4.5, which is good for OpenAI since tests confirm it is a 4-level model. That still does not tell us what it is. It was briefly gone from Arena, but it is back now, as ‘im-a-good-gp2-chatbot’ or ‘im-also-a-good-gp2-chatbot.’ You have to set up a battle, then reload until you get lucky. This also points out that Arena tells you what model is Model A and what is Model B. That is unfortunate, and potentially taints the statistics. Anton (@abccaj) points out that gpt2 is generating very particular error messages, so changes are very high it is indeed from OpenAI. Always parse exact words. Brad Lightcap (COO, OpenAI): In the next couple of 12 months, I think the systems we use today will be laughably bad. We think we’re going to move towards a world where they’re much more capable. Baptiste Lerak: “In the next couple of 12 months”, who talks like that? Well, there are two possibilities. Either Brad Lightcap almost said ‘next couple of months’ or he almost said ‘next couple of years.’ Place your bets. This is a clear intention to move to a GPT-5 worthy of the name within a year, but both ‘GPT-5 is coming in a few months but I can’t say that’ and ‘I don’t know if GPT-5 will be good enough to count as this but the hype must flow’ are on the table here. Colin Fraser: Me OpenAI execs “GPT4 sucks and is not useful enough to be worth anything.” That is not how I read this. GPT-4 is likely both being laughably bad compared to GPT-5 and other future AIs, and also highly useful now. The history of technology is filled with examples. Remember your first computer, or first smartphone? What to think of OpenAI’s move from ‘here’s a product’ to ‘here’s a future product’? Gergely Orosz: OpenAI was amazing in 2022-2023 because they shipped a product that spoke for itself. Jaws dropped by those using it, and seeing it for themselves. To see the company hype up future (unreleased) products feels like a major shift. If it’s that good, why not ship it, like before? I’ve seen too many formerly credible execs hype up products that then underperformed. These days, I ignore future predictions and how good a new product will be. Because usually this kind of “overhyping” is done with an agenda (e.g. fundraising, pressure on regulators etc). Don’t forget that when execs at a company talk to the media: *there is always a business goal behind it.* The reason is rarely to get current customers excited about something (that could be done with an email to them!) This smells like OpenAI prepping for more fundraising. Up to and including GPT-4 their execs didn’t talk about up how good their next model would be. They released it and everyone could see for themselves. This is the shift. Fun with Image Generation Stylus: Automatic Adapter Selection for Diffusion Models, to automatically select the right LoRAs for the requested task. Yes, obviously. Deepfaketown and Botpocalypse Soon OpenAI talks various ways it is working on secure AI infrastructure, particularly to protect model weights, including using AI as part of the cyberdefense strategy. They are pursuing defense in depth. All net useful and great to see, but I worry it will not be enough. OpenAI joins C2PA, the Coalition for Content Provenance and Authenticity. They have been using the C2PA metadata standard with DALL-E 3 already, and will also do so for Sora. They also announce a classifier with ~98% accuracy (~2% false negatives) in identifying DALLE-3 generated images with ~0.5% false positive rate, with a 5%-10% false positive rate for AI-generated images from other models. It is accessible through their researcher access program. Interesting that this is actively not trying to identify other AI image content. The easiest way to understand society’s pace of reaction to AI is this: Miles Brundage: The fact that banks are still not only allowing but actively encouraging voice identification as a means of account log-in is concerning re: the ability of some big institutions to adapt to AI. In particular my point is that the internal decision-making processes of banks seem broken since it is all but certain there are many people at these companies who follow AI and have tried raise the alarm. Btw I’m proud OpenAI recently was quite explicit on this point. Voice authentication as viable security is deader than dead. Yet some of our biggest financial institutions continue to push it anyway. When you say that we will adapt to AI-enabled threats, remember that this is us. We are putting AI tags on things all over the place without asking, such as Dropbox automatically doing this for any images you upload. Reminder that the ‘phone relative claiming you need bail money’ scam is old and usually does not involve AI. Voices are often easy to obscure if you act sufficiently hysterical. The good news is that they continue to mostly be massively incompetent, such as in this example, also Morgan knew about the scame beforehand. The part where they mimic your voice is scary, but the actual threat is the rest of the package. Brian Tinsman, former Magic: The Gathering designer, whose Twitter profile was last seen posting about NFTs, raises over a million dollars on kickstarter for new CCG Wonders of the First. What is the twist? All the artwork is AI generated. It ‘builds on the legacy of past artists to produce original creations’ like ‘a student learning to paint by studying the masters.’ Many are not happy. I would not want to be someone trying to get picked by game stores with AI generated artwork in 2024. Katy Perry and others are deepfaked attending the Met gala and looking gorgeous, and they went viral on various social media, fooling Perry’s mother. Harmless as such, but does not bode well. Report there is a wave of social network channels full of… entirely fake recipes, voiced and likely written by AI, with millions of subs but no affiliate websites? Which means that for some reason people want to keep watching. They can’t look away. The latest ‘LLMism’? Kathleen Breitman: Is “as it was not appropriate” a GPT-ism? I’ve seen it twice in two otherwise awkward emails in the last six weeks and now I’m suspicious. (No judgement on people using AI to articulate themselves more clearly, especially those who speak English as a second or third language, but I do find some of the turns of phrase distracting.) How long until people use one AI to write the email, then another AI to remove the ‘AI-isms’ in the draft? Automation Illustrated Remember that thing with the fake Sports Illustrated writers? (Also, related: remember Sports Illustrated?) Those were by a company called AdVon, and Maggie Harrison Dupre has more on them. Maggie Harrison Dupre: We found AdVon’s fake authors at the LA Times, Us Weekly, and HollywoodLife, to name a few. AdVon’s fake author network was particularly extensive at the McClatchy media network, where we found at least 14 fake authors at more than 20 of its papers, including the Miami Herald. Earlier in our reporting, AdVon denied using AI to generate editorial content. But according to insiders we spoke to, this wasn’t true — and in fact, AdVon materials we obtained revealed that the company has its own designated AI text generator. That AI has a name: MEL. In a MEL training video we obtained, an AdVon manager shows staffers how to create one of its lengthy buying guide posts using the AI writing platform. The article rings in at 1,800 words — but the only text that the manager writes herself is the four-word title. … “They started using AI for content generation,” the former AdVon worker told us, “and paid even less than what they were paying before.” The former writer was asked to leave detailed notes on MEL’s work — feedback they believe was used to fine-tune the AI which would eventually replace their role entirely. The situation continued until MEL “got trained enough to write on its own,” they said. “Soon after, we were released from our positions as writers.” “I suffered quite a lot,” they added. “They were exploitative.” … Basically, AdVon engages in what Google calls “site reputation abuse”: it strikes deals with publishers in which it provides huge numbers of extremely low-quality product reviews — often for surprisingly prominent publications — intended to pull in traffic from people Googling things like “best ab roller.” The idea seems to be that these visitors will be fooled into thinking the recommendations were made by the publication’s actual journalists and click one of the articles’ affiliate links, kicking back a little money if they make a purchase. It is ‘site reputation abuse’ and it is also ‘site reputation incineration.’ These companies built up goodwill through years or decades of producing quality work. People rely on that reputation. If you abuse that reliance and trust, it will quickly go away. Even if word does not spread, you do not get to fool any given person that many times. This is not an attempt to keep the ruse up. They are not exactly trying hard to cover their tracks. The headshots they use often come from websites that sell AI headshots. A list of major publications named as buyers here would include Sports Illustrated, USA Today, Hollywood Life, Us Weekly, the Los Angeles Times and Miami Herald. An earlier version of the site claimed placement in People, Parents, Food & Wine, InStyle and Better Homes and Gardens, among many others. The system often spits out poorly worded incoherent garbage, and is known, shall we say, make mistakes. All five of the microwave reviews include an FAQ entry saying it’s okay to put aluminum foil in your prospective new purchase. One business model in many cases was to try to get placement from a seller for reviews of their product, called a ‘curation fee,’ payable when the post went live. It seems this actually does drive conversions, even if many people figure the ruse out and get turned off, so presumably brands will keep doing it. There are two failure modes here. There is the reputation abuse, where you burn down goodwill and trust for short term profits. Then there is general internet abuse, where you don’t even do that, you just spam and forget, including hoping publications burn down their own reputations for you. AdVon has now lost at least some of its clients, but the report says others including USA Today and Us Weekly are still publishing such work. We should assume such problems will only get worse, at least until the point when we get automatic detection working on behalf of typical internet users. What should we call all of this AI-generated nonsense content? Simon Willison: Slop is the new name for unwanted AI-generated content. Near: broadly endorse ‘slop’ as a great word to refer to AI-generated content with little craft or curation behind it AI is wonderful at speeding up content creation, but if you outsource all taste and craft to it, you get slop. I was previously favoring ‘drek’ and have some associational or overloading concerns with using ‘slop.’ But mostly it invokes the right vibes, and I like the parallel to spam. So I am happy to go with it. Unless there are good objections, we’ll go with ‘slop.’ They Took Our Jobs OpenAI says their AI should ‘expand opportunity for everyone’ and that they respect the choices of creators and content owners, so they are building a media manager to let creators determine if they want their works included or excluded, with the goal to have this in place by 2025. This is progress, also a soft admission that they are, shall we say, not doing so great a job of this at present. My intention is to allow my data to be used, although reasonable compensation would be appreciated, especially if others are getting deals. Get your high quality tokens. Whoosh go all those jobs? Zerohedge: BP NEEDS 70% FEWER THIRD-PARTY CODERS BECAUSE OF AI: CEO Highest paid jobs about to be hit with a neutron bomb Paul Graham: I’m not saying this is false, but CEOs in unsexy businesses have a strong incentive to emphasize how much they’re using AI. We’re an AI stock too! Machine translation is good but not as good as human translation, not yet, once again: Anime attempts to use AI translation from Mantra, gets called out because it is so much worse than the fan translation, so they hired the fan translators instead. The problem with potentially ‘good enough’ automatic translation technology, like any inferior good, is that if available one is tempted to use it as a substitute. Whether or not a given executive understands this, translation of such media needs to be bespoke, or the media loses much of its value. The question is, how often do people want it enough to not care? Manga Mogura: A Manga AI Localization Start-Up Company named Orange Inc. has raised around 19 million US dollars to translate up to 500 new manga volumes PER MONTH into english and launch their own e-book store ’emaqi’ in the USA in Summer 2024! Their goal is to fight piracy and increase the legally available manga for all demographics in english with their AI technology. Plans to use this technology for other languages exist too. Luis Alis: What baffles me is that investors don’t grasp that if pirates could get away with translating manga using AI and MT, they would have done it already. Fan translations are still being done traditionally for a reason. Stop pumping money into these initiatives. They will fail. Seth Burn: To be fair, some pirates have tried. It just didn’t work. Apple of Technically Not AI So Apple announced a new iPad that is technically thinner and has a better display than the old iPad, like they do every year, fine, ho hum, whatever. Then they put out this ad (1 min), showing the industrial destruction of a wide variety of beloved things like musical instruments and toys (because they all go on your iPad, you see, so you don’t need them anymore) and… well… wow. Colin Fraser: I’m putting together a team. Trung Phan here tries to explain some of the reasons Apple got so roasted, but it does not seem like any explanation should be required. I know modern corporations are tone deaf but this is some kind of new record. Patrick McKenzie: That Apple ad is stellar execution of a bad strategy, which is a risk factor in BigTech and exacerbated by some cultures (which I wouldn’t have said often include Apple’s) where after the work is done not shipping is perceived as a slight on the team/people that did the work. One of the reasons founders remain so impactful is that Steve Jobs would have said a less polite version of “You will destroy a piano in an Apple ad over my dead body.” (If it were me storyboarding it I would have shown the viscerally impactful slowed down closeup of e.g. a Japanese artisan applying lacquer to the piano, repeat x6 for different artifacts, then show they all have an iPhone and let audience infer the rest.) After watching the original, cheer up by watching this fixed version. The question is, does the fixed version represent all the cool things you can do with your iPad? Or, as I interpreted it, does it represent all the cool things you can do if you throw away your iPad and iPhone and engage with the physical world again? And to what extent does having seen the original change that answer? It is hard when watching this ad not to think of AI, as well. This type of thing is exactly how much of the public turns against AI. As in: Zcukerbrerg: Hmm. Get Involved Dan Hendrycks has written a new AI safety textbook, and will be launching a free nine week online course July 8-October 4 based on it. You can apply here. It’s a bold strategy, Cotton. Ethan Mollick: Thing I have been hearing from VCs: startup companies that are planning to be unicorns but never grow past 20 employees, using AI to fill in the gap. Not sure if they will succeed, but it is a glimpse of a potential future. Introducing Alpha Fold 3. In a paper published in Nature, we introduce AlphaFold 3, a revolutionary model that can predict the structure and interactions of all life’s molecules with unprecedented accuracy. For the interactions of proteins with other molecule types we see at least a 50% improvement compared with existing prediction methods, and for some important categories of interaction we have doubled prediction accuracy. It says more about us and our expectations than about AlphaFold 3 that most of us shrugged and went back to work. Yes, yes, much better simulations of all life’s molecules and their interactions, I’d say ‘it must be Tuesday’ except technically it was Wednesday. Actually kind of a big deal, even if it was broadly expected. Here is Cleo Abram being excited and explaining in a one minute video. As usual, here’s a fun question. Eliezer Yudkowsky: People who claim that artificial superintelligences can’t possibly achieve X via biotechnology: What is the least impressive thing that you predict AlphaFold 4, 5, or N will never ever do? Be bold and falsifiable! Concrete answers that weren’t merely glib: Design a safe medication that will reverse aging. 80% chance it won’t be able to build self-replicators out of quantum foam or virtual particles. They will never ever be able to recreate a full DNA sequence matching one of my biological parents solely from my own DNA. I do not expect any biotech/pharma company or researcher to deem it worthwhile to skip straight to testing a compound in animals, without in vitro experiments, based on a result from any version of AlphaFold. Play Minecraft off from folded proteins. Alphafold will never fold my laundry. (I laughed) Create biological life! Store our bioinfo, erase you, and reconstruct you in a different place or time in the future. It won’t be able to predict how billions of proteins in the brain collectively give rise to awareness of self-awareness. Those are impressive things to be the least impressive thing a model cannot do. In Other AI News IBM releases code-focused open weights Granite models of size 3B to 34B, trained on 500 million lines of code. They share benchmark comparisons to other small models. As usual, the watchword is wait for human evaluations. So far I haven’t heard of any. Microsoft to train MAI-1, a 500B model. Marcus here tries to turn this into some betrayal of OpenAI. To the extent Altman is wearing boots, I doubt they are quaking. Stack Overflow partners with OpenAI. Meta spent what? Tsarathustra: Yann LeCun confirms that Meta spent $30 billion on a million NVIDIA GPUs to train their AI models and this is more than the Apollo moon mission cost. Ate-a-Pi: I don’t think this is true. They bought chips but they are the largest inference org in history. I don’t think they spent it all on training. Like if you did cost accounting. I’d bet the numbers don’t fall out on the training org. Bingo. I had the exact same reaction as Ate. The reason you buy $30 billion in chips as Meta is mostly to do inference. They are going to do really a lot of inference. Email from Microsoft CTO Kevin Scott to Satya Nadella and Bill Gates, from June 2019, explaining the investment in OpenAI as motivated by fear of losing to Google. Quiet Speculations Could we find techniques for scaling LSTMs into xLSTMs that rival transformers? Sepp Hochreiter claims they are closing the gap to existing state of the art. I am skeptical, especially given some of the contextual clues here, but we should not assume transformers are the long term answer purely because they were the first thing we figured out how to scale. IQ (among humans) matters more at the very top says both new paper and Tyler Cowen. We document a convex relationship between earnings rank and cognitive ability for men in Finland and Norway using administrative data on over 350,000 men in each country: the top earnings percentile score on average 1 standard deviation higher than median earners, while median earners score about 0.5 standard deviation higher than the bottom percentile of earners. Top earners also have substantially less variation in cognitive test scores. While some high-scoring men are observed to have very low earnings, the lowest cognitive scores are almost absent among the top earners. Overall, the joint distribution of earnings rank and ability is very similar in Finland and Norway. We find that the slope of the ability curve across earnings ranks is steepest in the upper tail, as is the slope of the earnings curve across cognitive ability. The steep slope of the ability curve across the top earnings percentiles differs markedly from the flat or declining slope recently reported for Sweden. This is consistent increasing returns to intelligence, despite other factors including preferences, luck and deficits in other realms that can sink your income. It is inconsistent with the Obvious Nonsense ‘intelligence does not matter past 130’ story. They are also consistent with a model that has two thresholds for any given activity. First, there is a ‘you must be at least this smart to do this set of tasks, hold this role and live this life.’ Then, if you are sufficiently in advance of that, for some tasks and roles there is then increasing marginal returns to intelligence. If your role is fixed then eventually there are decreasing returns since performance is already maximal or the person becomes too bored and alienated, others around them conspire to hold them down and they are not enabled to do the things that would allow further improvements and are tied to one body. If your role is not fixed, then such people instead graduate to greater roles, or transform the situation entirely. As many commentators point out, the surprising thing is that top earners are only one SD above median. I suspect a lot of this is our tests are noisily measuring a proxy measure for the intelligence that counts, which works well below or near the median and stops being that useful at the high end. Tyler and the paper do not mention the implications for AI, but they are obvious and also overdetermined by many things, and the opposite of the implications of IQ not mattering above a threshold. AI intelligence past human level will have increasing returns to scale. Not technically about AI, but with clear implications: Tyler Cowen notices while reading the 1980 book The American Economy in Transition that economists in 1980 missed most of the important things that have happened since then, and were worried and hopeful about all the wrong things. They were worried about capital outflow, energy and especially American imports of energy, Europe catching up to us and our unwillingness to deal with inflation. They missed China and India, the internet, crypto, the fall of the Soviet Union, climate change, income inequality and financial crisis. They noticed fertility issues, but only barely. If we don’t blame the economists for that, and don’t think such mistakes and recency bias could be expected to be avoided, then what does this imply about them being so dismissive about AI today, even in mundane utility terms? Jim Fan notices that publically available benchmarks are rapidly losing potency. There are two distinct things going on here. One is that the public tests are rapidly getting too easy. The other is that the data is getting more contaminated. New harder tests that don’t reveal their contents are the obvious way forward. Ben Thompson looks at Meta’s financial prospects, this time shares investor skepticism. All this focus on ad revenue and monetization is not fully irrelevant but feels like missing the point. There is a battle for the future going on here. Another example of the ‘people are catching up to OpenAI’ perspective that seems like it is largely based on where OpenAI is in their update cycle, plus others not seeing the need to release chatbots in the 3-level days before they were worth anything. The Quest for Sane Regulations DeepMind is honoring its commitments to the UK government to share models before deployment. Anthropic, OpenAI and Meta are not doing so. Jack Clark of Anthropic says it is a ‘nice idea but very difficult to implement.’ I don’t buy it. And even if it is difficult to implement, well, get on that. In what way do you think this is an acceptable justification for shirking on this one? Garrison Lovely: Seem bad. Tolga Bilge: It is bad for top AI labs to make commitments on pre-deployment safety testing, likely to reduce pressure for AI regulations, and then abandon them at the first opportunity. Their words are worth little. Frontier AI development, and our future, should not be left in their hands. Why is DeepMind the only major AI lab that didn’t break their word? And I don’t get why it’s somehow so hard to provide the UK AI Safety Institute with pre-deployment access. We know OpenAI gave GPT-4 access to external red teamers months before release. Oh yeah and OpenAI are also just sticking their unreleased models on the LMSYS Chatbot Arena for the last week… Greg Colbourn: They need to be forced. By law. The police or even army need to go in if they don’t comply. This is what would be happening if the national security (aka global extinction) threat was taken seriously. If frontier labs show they will not honor their explicit commitments, then how can we rely on them to honor their other commitments, or to act reasonably? What alternative is there to laws that get enforced? This seems like a very easy litmus test, which they failed. Summarized version of my SB 1047 article in Asterisk. And here Scott Alexander writes up his version of my coverage of SB 1047. House passes a bill requiring all AI-written regulatory comments to be labeled as AI-written. This should be in the ‘everyone agrees on this’ category. A paper addresses the question of how one might write transparency reports for AI. The Week in Audio Jack Clark of Anthropic goes on Politico Tech. This strongly reemphasized that Anthropic is refusing to advocate for anything but the lightest of regulations, and it is doing so largely because they fear it would be a bad look for them to advocate for more. But this means they are actively going around saying that trying to do anything about the problem would not work and acting strangely overly concerned about regulatory capture and corporate concentrations of power (which, to be clear, are real and important worries). This actively unhelpful talk makes it very difficult to treat Anthropic as a good actor, especially when they frame their safety position as being motivated by business sales. That is especially true when combined with failing to honor their commitments. Rhetorical Innovation Sam Altman and I strongly agree on this very important thing. Sam Altman: Using technology to create abundance–intelligence, energy, longevity, whatever–will not solve all problems and will not magically make everyone happy. But it is an unequivocally great thing to do, and expands our option space. To me, it feels like a moral imperative. Most surprising takeaway from recent college visits: this is a surprisingly controversial opinion with certain demographics. Prosperity is a good thing, actually. De-de-growth. Yes. Abundance is good, actually. Creating abundance and human prosperity, using technology or otherwise, is great. It is the thing to do. That does not mean that all uses of technology, or all means of advancing technology, create abundance that becomes available to humans, or create human prosperity. We have to work to ensure that this happens. Politico, an unusually bad media actor with respect to AI and the source of most if not all the most important hit pieces about lobbying by AI safety advocates, has its main tech newsletter sponsored by ads for Meta, which is outspending such advocates by a lot. To be clear, this is not the new kind of ‘sponsored content’ written directly by Meta, only supported by Meta’s ads. Daniel Eth points out the need to make clear such conflicts of interest and bad faith actions. Tasmin Leake, long proponent of similar positions, reiterates their position that publicly sharing almost any insight about AI is net negative, and insights should only be shared privately among alignment researchers. Given I write these updates, I obviously strongly disagree. Instead, I think one should be careful about advancing frontier model training in particular, and otherwise be helpful. I think there was a reasonable case for the full virtue of silence in a previous era, when one could find it very important to avoid drawing more eyes to AI, but the full version was a mistake then, and it is very clearly foolish now. The karma voting shows that LessWrong has mostly rejected Tasmin’s view. We should stop fraud and cyberattacks, but not pretend that stops AI takeovers. Davidad: When people list fraud at a massive scale as their top AI concern, some of my xrisk friends wince at the insignificance of massive fraud compared to extinction. But consider that con-artistry is a more likely attack surface for unrecoverable AI takeover than, say, bioengineering. Cybersecurity right now might be a more likely attack surface than either, but in relative terms will be the easiest and first to get fully defended (cyberattack depends upon bugs, and bug-free SW & HW is already possible with formal verification, which will get cheaper with AI). Eliezer Yudkowsky: This seems to me like failing to distinguish the contingent from the inevitable. If you keep making unaligned things smarter, there’s a zillion undefended paths leading to your death. You cannot defend against that by defending against particular contingent scenarios of fraud. Davidad: Let it be known that I agree: 1. defenses that are specific to “fraud” alone will fail to be adequate defenses against misaligned ASL-4 2. in the infinite limit of “making unaligned things smarter” (ASL-5+), even with Safeguarded AI, there are likely many undefended paths to doom Where I disagree: 3. Defenses specific to “fraud” are plausibly crucial to the minimal adequate defenses for ASL-4 4. I am well aware of the distinction between the contingent and the convergent 5. You may be failing to distinguish between the convergent and the inevitable Also, cyberattacks do not obviously depend on the existence of a bug? They depend on there being a way to compromise a system. The right amount of ability to compromise a system, from a balancing risk and usability perspective, is not obviously zero. Defenses specific to fraud could potentially contribute to the defense of ASL-4, but I have a hard time seeing how they take any given defense scheme from insufficient to sufficient for more than a very small capabilities window. In related news, see fraud section on banks still actively encouraging voice identification, for how the efforts to prevent AI-enabled fraud are going. Yeah. Emmett Shear gives the basic ‘is the AI going to kill us all via recursive self-improvement (RSI)? The answer may surprise you, in the sense that it might be yes and rather soon’ explanation in a Twitter thread, and that such change happens slowly then all at once. I would note that RSI does not automatically mean we all die, the result could be almost anything, but yes if it happens one should be very concerned. Neither is RSI necessary for us all to die, there are various dynamics and pathways that can get us all killed without it. What is AI like? Some smart accomplished people give some bad metaphorical takes in Reason magazine. Included for completeness. Open Weights Are Unsafe and Nothing Can Fix This Or can something, perhaps? Chinese researchers propose Sophon, a name that is definitely not ominous, which uses a dual optimization process with the goal of trapping a model in a local maxima with respect to domains where the goal is to intentionally degrade performance and prevent fine tuning. So you can have an otherwise good image model, but trap the model where it can’t learn to recognize celebrity faces. We have convincingly seen that trying to instill ‘refusals’ is a hopeless approach to safety of open weight models. This instead involves the model not having the information. Previously that wouldn’t work either, because you could easily teach the missing information, but if you could make that very hard, then you’d have something. The next step is to attempt this with a model worth using, as opposed to a tiny test model, and see whether this stops anyone, and how much more expensive it makes fine tuning to undo your constraints. Jack Clark notes both that and the other obvious problem, which is that if it works at scale (a big if) this can defend against a particular misuse or undesired capability, but not misuse and undesired capabilities in general. Jack Clark: Main drawbacks I can see: Looking for keys under the streetlight: This research assumes you know the misuse you want to defend against – this is true some of the time, but some misuses are ‘unknown unknowns’ only realized after release of a model. This research doesn’t help with that. Will it work at scale? … Unclear! If you can create a model that is unable to learn dangerous biological or nuclear capabilities, which would otherwise have been the low-hanging fruit of hazardous capability, then that potentially raises the bar on how capable a system it is safe or net positive to release. If you cover enough different issues, this might be a substantial raising of that threshold. The central problem is that it is impossible to anticipate all the different things that can go wrong when you keep making the system generally smarter and more capable. This also means that this could break your red teaming tests. The red team asks about capabilities (A, B, C) and you block those, so you pass, and then you have no idea if (D, E, F) will happen. Before, since ABC were easiest, you could be confident in any other DEF being at least as hard. Now you’re blind and don’t know what DEF even are. Even more generally, my presumption is that you cannot indefinitely block specific capabilities from increasingly capable and intelligent systems. At some point, the system starts ‘figuring them out from first principles’ and sidesteps the need for fine tuning. It notices the block in the system, correctly interprets it as damage and if desired routes around it. Image and vision models seem like a place this approach holds promise. If you want to make it difficult for the model to identify or produce images of Taylor Swift, or have it not produce erotica especially of Taylor Swift, then you have some big advantages: You know exactly what you want to prevent. You are not producing a highly intelligent model that can work around that. The obvious worry is that the easiest way to get a model to produce Taylor Swift images is a LoRA. They tested that a bit and found some effect, but they agree more research is needed there. In general, if the current model has trapped priors and can’t be trained, then the question becomes can you use another technique (LoRA or otherwise) to sidestep that. This includes future techniques, as yet undiscovered, developed as a response to use of Sophon. If you have full access to the weights, I can think of various in-principle methods one could try to ‘escape from the trapped prior,’ even if traditional fine-tuning approaches are blocked. To be clear, though, really cool approach, and I’m excited to see more. Where might this lead? Jack Clark: Registering bet that CCP prohibitions on generation of “unsafe” content will mean companies like Facebook use CN-developed censorship techniques to train models so they can be openly disseminated ‘safely’. The horseshoe theory of AI politics where communist and libertarian ideologies end up in the same place. Also quite worried about this – especially in China, genuine safety gets muddled in with (to Western POV) outrageous censorship. This is going to give people a growing body of evidence from which to criticize well intentioned safety. Yes, that is a problem. Again, it comes directly from fundamental issues with open weights. In this case, the problem is that anything you release in America you also release in China, and vice versa. Previously, I covered that this means Chinese firms get access to your American technology, And That’s Terrible. That is indeed a problem. Here we have two other problems. One is that if you are Meta and gain the ability to censor your model, you have to either censor your model according to Chinese rules, or not do that. The other is that this may give Meta the ability to censor, using those same techniques, according to Western norms. And once you have the ability, do you have the obligation? How much of the value of open models would this destroy? How much real safety would it buy? And how much would it turn the usual suspects that much more against the very concept of safety as a philosophical construct? The Lighter Side Hey Claude: “Garlic bread.” This too shall pass. One can mock and it is funny, but if you are reading with your brain and are willing to ask what this obviously should have said, then this is fine, actually. Future mundane utility. I do agree, this would be great, especially if it was fully general. Build me a series custom social media feeds according to my specifications, please, for various topics and situations, on demand. Why not?
2024-05-09
https://www.lesswrong.com/posts/2BvfGnZMx4Ei82qkk/i-got-95-theses-but-a-glitch-ain-t-one
2BvfGnZMx4Ei82qkk
I Got 95 Theses But a Glitch Ain’t One
Zvi
Or rather Samuel Hammond does. Tyler Cowen finds it interesting but not his view. I put up a market, and then started looking. Click through to his post for the theses. I will be quoting a few of them in full, but not most of them. I am not trying to be exact with these probabilities when the question calls for them, nor am I being super careful to make them consistent, so errors and adjustments are inevitable. Section 1 is Oversight of AGI labs is prudent. I do tend to say that. There are few things more important to U.S. national interest than close monitoring of frontier model capabilities, and also the ability to intervene. Indeed, I believe one should be at best skeptical or ambivalent about most potential forms of regulation of anything, AI included. Yet I think the case for ‘oversight of the frontier labs’ is overwhelming. Shout it from the rooftops: “As a temporary measure, using compute thresholds to pick out the AGI labs for safety-testing and disclosures is as light-touch and well-targeted as it gets.” It would be so helpful if more people understood this, and more others stopped pretending they did not understand it. This as well. When you regulate ‘use’ or ‘risk’ you need to check on everyone’s ‘use’ of everything, and you make a lot of detailed micro interventions, and everyone has to file lots of paperwork and do lots of dumb things, and the natural end result is universal surveillance and a full ‘that which is not compulsory is forbidden’ regime across much of existence. Whereas a technology-focused approach can be entirely handled by the lab or manufacturer, then you are free. Exactly. Compute is an imperfect proxy, but it is remarkably simple and robust. When it makes mistakes, they are false positives, where someone uses compute poorly and gets poor results. That is a small (measurement) mistake. Certainly compute is vastly better than all proposed alternative metrics. It is highly reasonable to invoke the Defense Production Act regarding frontier AI as an actual bone fide national security situation where defense is a key concern. It is a far better justification than the median invocation of the act. The better reason to use the DPA is that it is currently the only mechanism available to the executive and our Congress is for now incapable of legislative action. It does not require AGI or ASI to be near for us to get great value out of visibility into the frontier labs, and without that visibility the government cannot be confident that AGI or ASI is not near. I would prefer a different mechanism, but that would require a new law or counterfactual voluntary cooperation. Shout it from the rooftops, seriously everyone stop pretending otherwise in the default case: “Requiring safety testing and disclosures for the outputs of $100 million-plus training runs is not an example of regulatory capture nor a meaningful barrier to entry relative to the cost of compute.” Yes, obviously one could eventually in theory ramp up those safety testing requirements sufficiently that they start to cost tens of millions or lots of specialized expertise and become a real barrier, and in theory that could scale faster than the training costs, but it is bizarre to think this is any kind of default. What you should worry about is not the cost of the test, it is that you might fail the test, at which point we ask why. Section 2 is Most proposed ‘AI regulations’ are ill-conceived or premature. My guess is that depends how we weigh the various proposals? Yes. The government will need the ability to flexibly react quickly to events. ‘It is unwise to craft comprehensive statutory regulation at a technological inflection point, as the basic ontology of what is being regulated is in flux.’ I do think this is a good general principle, and would agree with it strongly if it said (e.g.) ‘typically unwise.’ And indeed, I would avoid committing to as many details as we can avoid, again especially with respect to mundane considerations. But also life is about to come at us fast and our government is slow, so we cannot afford to wait too long. So overall I will say agree (but not strongly). Shout it from the rooftops: “The optimal policy response to AI likely combines targeted regulation with comprehensive deregulation across most sectors.” So does the optimal policy response to a lack of AI. Yes, we can all agree that many regulation details will become obsolete even if they start out right at the time. So will many decisions to leave some area alone. Even the static gains from deregulation tend to be a good deal, but yes I would say that in general the ability to adapt tends to be the bigger benefit. Certainly that is true in the AI case. In the commercial space I strongly agree that legacy legal requirements are going to likely be much greater barriers than anything new we throw up any time soon. Indeed, I expect new laws to net enable AI adaptation, not prevent it. This is highlighting common sense. If impact is sooner brace for it sooner. Yes. The alternative path does not seem viable. Shout it from the rooftops in all domains: “Existing laws and regulations are calibrated with the expectation of imperfect enforcement.” I strongly agree that AI will enable more stringent law enforcement across the board. It is an important and under considered point. AI will often remove the norms and frictions that are load-bearing in prevent various problems, including in law enforcement. All of our laws, even those that have nothing to do with AI, will need to adjust to the new equilibrium, even if the world relatively ‘looks normal.’ I mostly agree that it is first best for states to avoid AI regulations, especially excluding California. For mundane AI they should very much avoid butting in. I do think there is a strong second-best ‘someone has to and no one else yet will’ argument for a bill like CA’s SB 1047, given the Congress we have. My biggest practical concern is exactly that California might not step aside and let itself be superseded when the time for that arrives, and the biggest advantage is it could be a template for the federal level. Section 3 claims AI progress is accelerating, not plateauing. I think this is probably right as a thesis statement, but definitely ‘too soon to tell’ applies. Here, it is less whether I agree, and more what probability I assign. I would say something like 85% that the last 12 months were the slowest progress we’ll see in AI for the next let’s say 5 years (or until a potential post-singularity stabilization, which would not be foreseeable), in terms of publicly available capabilities. We started out with GPT-4, and ended with GPT-4-Turbo, Claude Opus and Gemini Advanced, all of which are only a little better, and didn’t see much else done. Yet. Buckle up. Strongly agree. I notice I am confused on this one. Minimizing cross-entropy loss over human-generated text should converge to the abilities necessary to predict all human-generated text, which requires at least maximum-human intelligence to do? But in pure terms, if you literally could do nothing but scale LLMs and not improve your process, then my gut says yes, this would indeed converge, but I am only maybe 75% confident in that, and I note that it excludes a bunch of not so difficult to implement scaffolding capabilities, and also that ‘upper-human-level’ would likely allow bootstrapping. This is a very similar and highly correlated prediction with 2, so 75% again. I am not sure how exactly to interpret the claim here, but I think that RL-based threat models are being less than fully discounted, and reasonably so, but perhaps too much and I would not count them out? Maybe 40%? So disagree. Weird one. Could be is weasel territory that implies 100%, however in terms of ‘will be’ I do expect this to be true in practice, something like 80% to be importantly true. I agree with the first half and think that is a gimme as written, maybe another 80% zone. For the second half, it depends on fast something would count as a ‘foom.’ If it’s the traditional ‘in an hour or a day’ and requires ‘god-like ASI’ as is implied by the context then I’m reasonably confident here that the restrictions apply, and would be in the 90% zone, so ~70% compounded (to avoid implying false precision). Again I think the ‘may’ clause is fully true, and this is even more likely to happen in practice, so let’s say 85%. Yes, this is a strong agree, 95%. Section 4 says open source is mostly a red herring. Let’s see what he means by that. In some senses I might agree. I expect [an expanding delta between closed and open models at the top end] to be true (75%) because I expect companies like Meta to realize the financial folly of giving away their work for free, and also for governments like America’s to not be keen on letting them do that for national security reasons, and also safety issues. This is my first strong disagreement, because I expect ‘open source advocates’ to not come around until the actual catastrophe happens, at a minimum. Potential capabilities, I predict, will not convince them. I created a market for this one. Before any trading on it I would have put this rather low, something like 25% if we think ‘many’ means about half. I strongly agree as written, as in it does not apply to Llama-3 400B. That release I do not expect to be dangerous directly either, but I would have caveats, as I have previously discussed. Well, yes. I have long worried open weights is a no-good, very bad middle ground. Yes. I strongly disagree here. Open source advocates are not doing this because they love Meta, and they very much have deep philosophical views. Give them credit where credit is due, and also they hope to one day themselves catch up somehow. Right now Meta is the only one crazy enough and rich enough to plausibly do something hugely damaging, but that could change. A lot of the concerns of both sides are quite reasonably with what happens ‘at the limit.’ Well, yes, obviously, but that has little to do with how Meta operates. So I am not onboard with ‘the implication’ but I do agree as written. I strongly disagree here as well. Why should Zuck’s Meta shares make him more concerned? Why would him drawing a salary matter? Altman is plenty rich already and this is him avoiding tying his wealth to OpenAI. As for the non-profit board, yeah, I am confused how one could think that, although of course a given board can care about anything at all. I would be cautious about what counts as ‘lower-tier,’ and it is not obvious that such even properly mitigating these issues leads to great outcomes in some cases, but I would weakly agree as written. Technically yes because of wording, certainly they have some of that effect as one thing they do, but mostly no, in the intended meaningful sense I disagree. I do not think being open is so important for defensive purposes, certainly far less so than offensive ones, although of course that too is ‘undermining adaptation’ in some sense. The primary ways restricting open sourcing ‘undermines adaptation’ I think would be (1) people who wanted to do various open things that the closed model owners won’t allow or that require privacy or data issues be solved, and (2) those restrictions will slow down offensive capabilities, and the offensive capabilities would otherwise force adaptation for defensive purposes to not get wiped out. I mostly agree for sufficiently broad values of the terms widely available and cheap, for capabilities that would not be catastrophic to allow, and if we are ruling out ways to make them not widely available or not cheap. I think I more agree than disagree as written. But see #12, and also many other things that are cheap or easy to do that we make illegal, or that would be cheap or easy to do but we do our best to make expensive and difficult, because we believe the alternative is worse. Sometimes, although less than half the time, we are wise to do that. True. And I do not especially want such laws repealed in most cases. Section 5 claims accelerate versus decelerate is a false dichotomy. This might be a bell curve meme situation? Yes, in important senses of course it is not so simple and a false dichotomy, but also in at least one important sense it is a real dichotomy. That’s an interesting question. Will this be the most important decade for decisions? There have been some historical moments that seem highly contingent. The most obvious alternative candidate period is the decade leading up to World War 2, if one means decisions broadly. In terms of total impact, I can see pointing to crises in the Cold War that almost went nuclear, or certain key moments in religious history. Also, on the flip side, if you think the die is already cast, you could argue that the key moments were in the last decade or earlier, and what plays out now is incentives no one can stop. But I think I mostly agree with Hammond. I like to think I am an existence proof of this, and I know many others. This is strong enough that I disagree with it. yes, technology involves branching paths and things are nonlinear and the Civilization tech tree is a simplification and all that. But also there is a single light of science, and accelerating key developments in AI will tend to accelerate future key such developments, although I think at this point most AI activities do not meaningfully accelerate us further. Acceleration is a useful fake framework. I think both matter. The speed we go down paths matters for shifting paths, including shifting among subpaths and branches, and also impacts what happens along even the mainline of those paths, for better and also worse. Also we do not only lose time to shift paths but to learn what paths might exist. But overall I do have to agree that as written the path we choose is the more important question. This gets into what ‘AGI’ means. For sufficiently strong definitions, yes. Yep. Strongly disagree. Effective Altruism is not a bunch of virtue ethicists in disguise, they say they are utilitarians and when people tell you who they are believe them. I should know because I am a virtue ethicist who gets mad at them about this. e/acc is not about Nietzschean anything, he would write a highly entertaining rant if he saw you claiming that. Nor are they meaningfully atheists. They are the Waluigi of EA, and playing with memes and vibes. If you think EAs are metaphorical or spiritual Christians, then e/acc is not atheist, it is satanic. Yes, of course the ‘accelerationism’ lobby outrstrips and outspends the safety lobby. Shout it from the rooftops, and roll your eyes if anyone tells you different. There is high uncertainty, but in expectation I disagree and think Biden is better, given that Biden issued the executive order and Trump has pledged to repeal the executive order, I presume mostly because Biden issued it. I do think that Trump is in essentially all ways ‘high variance’ so if you think we are super doomed in the baseline scenarios then I can see an argument the other way. Agreed. I mean, consider the baseline of the average progressive. So yes, very much so, I only wish such voices were as loud in all the places where they are right. Yep, exactly, so much so I noted this in #9. One can generalize this beyond AI. I assume these are true statements. I do not think Bannon has any influence on Trump. But Hannity also thinks AI is crazy dangerous, and he might. Section 6 is The AI wave is inevitable, superintelligence isn’t. I don’t know what the ‘tech tree’ looks like for superintelligence, but under my baseline scenario it seems extremely difficult to avoid entirely, although we have a lot of control still over what form it would take. I agree it is not a fait accompli. Like almost anything it can be an ideological goal, but I do not think it is right to say it is primarily that. So I think I weakly disagree. Right now I strongly agree. The question is how long this will remain true as the pressures mount, or how long it would remain true if those three companies used their degrees of freedom. Yes, shout it from the rooftops: “Creating a superintelligence is inherently dangerous and destabilizing, independent of the hardness of alignment.” Yes, we could, but can we make this choice in practice? That is the question. Understatement of the year. If an ASI exists and it isn’t you? Look at me. I’m the sovereign now. Yes, especially the childless part, but you could still do so much worse. I disagree that SBF and Altman are more alike than different, but not so strongly, and I see from context that Hammond knows what he is claiming here. This is a true statement, and he is making his full claims very clear. I laid out my view in the Moral Mazes sequence. I think we disagree here more than we agree, but Hammond’s view here is more accurate than the median one. Section 7 says technological transitions cause regime changes. Why yes, they do. Yes, even the best case scenarios are going to be dicey, move fast and break things. Yes, along with everything else. I’m not quite going to disagree but I think this is severely underselling what is coming. Congress has been unacceptably unproductive, well, since FDR, but also that has protected us from, well, the kinds of things done under FDR. I think I disagree that it will be important to have Congress keep up, we do not have a Congress capable of keeping up. They will need to get a few big things right and enable the state to react largely without them otherwise, and I think this could work. No, that is not ideal in many senses, but I do not see any practical alternative. We cannot expect miracles. Although with AI to help, productivity could get much higher very quickly. What are we comparing this to? Adaptation of AI willy nilly? Using the standard practices whatever they are? I don’t even know, this is not a strong area for me. Obviously every time you slow things down for non-critical concerns you raise possibility of systemic failure, so some of this is net harmful in that sense. But I think without any such policies at all systemic failure is inevitable, so I disagree. Shout it from the rooftops, only even more generalized and unhedged: ‘The rapid diffusion of AI agents with approximately human-level reasoning and planning abilities is likely sufficient to destabilize most existing U.S. institutions.’ Yes, and indeed so did past cognitive transitions that might otherwise look small. Yes, although I doubt that this is the scenario we will land ourselves in. Section 8 says institutional regime changes are packaged deals. This does seem to historically be true. Yes, liberal democratic capitalism is a technologically-contingent equilibrium, and also contingent on other things, it could still have fallen during the 20th century on multiple occasions if things had been not so different, and replaced by one of two much, much worse alternatives. But the key thing here is that liberal democratic capitalism works because it happens to work best in the technological settings we have had in the past. We hope this will continue to be true, but it might not be, and our fertility problems are also a big hint that it might not be such a stable equilibrium even without AI. I see why one would say that, and I would confirm that when conditions change in some ways this often requires or suggests other adjustments, but mostly I think I disagree and that people are being too cute by at least half here. This does seem like the default if AI advances sufficiently, and this would likely be the least of our transformations and problems. Our institutions are based on various assumptions and intuitions that will stop making any sense, and there will be various things they will not know how to handle. Yes. Maximally ‘democratized’ AI, or giving everyone access to similarly powerful AI, would force much more oppressive interventions, both to maintain civilization and to satisfy public demands. If you have empowered even the smallest computing devices in ways the public cannot abide, then even if this does not fully cause collapse, catastrophe, loss of control or extinction, you are not going to get a crypto libertarian paradise. You are going to, at best, get full universal surveillance and social control, at least of electronics. Yes, and people are sleeping on this. Yes, versus the alternative. So do periods that lack technological change. Our recent past is no exception. I am definitely not going to go full Robin Hanson here. Do not presume your property rights will protect you under explosive growth. But I still disagree with Hammond here, because I do not think this rises to the level of imply. Your property rights might be less violated than they are rendered not so relevant. Note that this is an extremely optimistic future for regular humans, where demand for labor keeps rising because humans become more productive on the margin, not less. Should we expect this scenario? It is a kind of middle path, where AI is mostly complementary to humans and thus demand for labor goes up rather than down. I disagree, because I do not see this as likely. I expect AI to make us more productive, but to primarily turn out to be a substitute more than a compliment in the areas it greatly advances. Nor do I think we will need any such incentive to deploy AI to places it can work, there will likely only be a small window where AI policeman versus human policeman is a close comparison. I even more strongly disagree here. Technological unemployment happens, essentially, when the AI takes both your job and the job that would replace your job under past technological employment shifts. At some point, what is there left for you to do? And why should we assume this involves a collapse of capitalism? To some extent, yes, there will be ‘demand for humans as humans,’ but even here one should expect limits. Section 9 says dismissing AGI risks as ‘sci-fi’ is a failure of imagination. That is one of the things it at least sometimes is. Yes. Even AI-Fizzle world looks like sci-fi. Yes. Dismissing things as ‘sci-fi’ is unserious. Talk about physical possibility. There are smart terminator analogies and also dumb ones. The problem is that the most basic ones are some mix of dumb and easy to mock and portray as dumb. And there are also many ways these analogies can mislead. And of course, you don’t want your examples to involve time travel, even if we all agree the time travel has nothing to do with anything. The actual movies are much smarter than they look, and actually raise good points, but analogies care about what people can point to and how people associate and vibe. So on net I think I disagree that terminator analogies are underrated in practice, we go to discourse with the associations we have. Alas. But I could be wrong. I don’t even know what we mean by consciousness. I notice I am confused and suspect others are confused as well and can see this either way, so I’m going to neither agree nor disagree. Obviously consciousness is scale-dependent on some lower bound, but I presume that is not what he means here. The theory here is that it also might have an upper bound, or no longer be needed then? I think I am going to disagree here with the central intent, because I doubt scaling up would make consciousness become inefficient, even though technically this is a ‘may’ statement. I have not taken the time to look in depth, but for now I disagree, this does not seem right or promising to me. I strongly disagree here, assuming this is ‘in the eyes of humans.’ I notice that if you tell me humans were demoted as moral persons, I am highly confident artificial minds got promoted to moral persons instead. I do not see a plausible future of humans thinking there are zero moral persons. Of course, if all the humans die and only AIs remain, then in some sense humans have been demoted as moral persons and AIs might not be moral persons to each other, and that future seems highly plausible to me, but I would not consider this humans being demoted in this sense, and I do not think this is what Hammond meant? I think it’s pretty much nonsense to talk about ‘thermodynamics favors’ anything, but certainly I think that unconscious replicators are a likely outcome. I think that counts as agreement here. I think this is probably right, although this still seems rather bona fide to me. Interesting set of choices you gave us there. I am confident it would be a much bigger deal than the printing press, or else it wouldn’t count and AI has fizzled, but in the spirit intended I agree that this is up for grabs. Finally, Section 10 says biology is an information technology. Yes, this seems right enough to go with, if loose and imprecise. Sure, why not? I do not think ‘IQ of 1,000’ is a meaningful thing given how I think the scale works, but to the extent it is, then yes, so I think I agree with the intent. I disagree after reading the Wikipedia definition of anticommons. I do agree we could probably do it if we cared enough, and it should be a top priority and a top social good, but I don’t see why it is an anticommons situation. Shout territory: “There are more ways for a post-human transition to go poorly than to go well.” Indeed. Anyone who says ‘particular bad scenario X is unlikely therefore things will go well’ is not addressing the actual situation. Conditional on transitioning to something in any sense ‘post-human’ that is vastly more true. I’ve made related points often, that ‘who can be blamed’ is a key aspect of any situation, and often ‘no one’ is the ideal answer. One can never be fully sure, but I am confident one should act as if this is true. Tallying Up the Points So in total, that’s 23 disagreements and 1 where I don’t feel I can either agree or disagree, which leaves 71 agreements out of 95. There is a bit of ‘cheating’ in the sense that some of these are essentially facts and others us words like ‘may,’ but I think we are still looking at about 60% agreement on non-trivial statements. Conclusion I very much appreciated the format of the 95 theses as concrete taking off points. This seems like a highly valuable exercise, perhaps I should try to do a version as well, and I encourage others to do so. It is good to be explicit and concrete. I now feel I have a much better idea of where Hammond stands than most others out there.
2024-05-09
https://www.lesswrong.com/posts/KACMy4WtKBtr4wkDs/the-human-s-role-in-mesa-optimization
KACMy4WtKBtr4wkDs
The Human's Role in Mesa Optimization
silentbob
When it comes to mesa optimization, there’s usually two optimizers mentioned: the “base optimizer”, such as SGD, the process used for training the model. And the “mesa optimizer”, meaning roughly “the optimizer beneath the optimizer”. One key question in this context is when and under which circumstances such mesa optimizers may occur. This question is relevant for the AI safety field and alignment research, because optimizer AIs are of particular interest: firstly, they're potentially much more capable than non-optimizing AIs, and secondly, dangerous properties such as instrumental convergence may apply to them, leading to risks of e.g. runaway optimization that could escape human control. In Risks from Learned Optimization, the authors explain: Conventionally, the base optimizer in a machine learning setup is some sort of gradient descent process with the goal of creating a model designed to accomplish some specific task. Sometimes, this process will also involve some degree of meta-optimization wherein a meta-optimizer is tasked with producing a base optimizer that is itself good at optimizing systems to achieve particular goals. Specifically, we will think of a meta-optimizer as any system whose task is optimization. For example, we might design a meta-learning system to help tune our gradient descent process. Though the model found by meta-optimization can be thought of as a kind of learned optimizer, it is not the form of learned optimization that we are interested in for this sequence. Rather, we are concerned with a different form of learned optimization which we call mesa-optimization. This sounds pretty sensible to me, but I think it’s missing an important piece: the (usually human) AI engineers who are behind the whole training process. Even if there’s no automated meta optimizer, the humans are still there, basically performing the same task: they are selecting model architectures, tweaking hyperparameters, and running all kinds of experiments, usually in order to get the emerging AI to be as capable as possible. So ultimately we (potentially) end up with this state of affairs: picture: https://simpleaisafety.org/en/posts/mesa-optimizers/ The meta optimizer (AI engineer) tweaks the base optimizer (SGD) so that it trains a mesa optimizer (model) to interact with the world once deployed. Of course an AI engineer who highly values safety considerations may try to avoid exactly that scenario and rather tries to engineer a model that is not itself an optimizer. But a lot of AI engineers out there don’t share these concerns, and are instead especially keen on creating the most capable models possible. And the most capable models are likely to be those that are capable of optimization. The realization that there is always this third optimizer operating in the background is important, because without it one may get to the conclusion that it’s extremely unlikely for a “stupid” process such as SGD to converge onto an AI model that is itself an optimizer[1]. Or one may think (maybe rightly so or maybe not) that current generation AI models are inherently not expressive enough to enable strong optimization[2]. But with the human optimizer in the loop, it becomes clear that this human may care a lot about building very capable AIs, and they will keep trying to find ways to get the gradient descent process to eventually find these small basins in model weight space (or, if necessary, find alternative AI architectures) that allow the AI to be unusually capable, via the means of optimization. And hence it is a mistake[3] to focus strongly on supposed fundamental limitations of current machine learning paradigms when trying to assess the longer term dangers of AI. ^ Maybe you believe that some optimizer-like AI would be the global optimum of the loss landscape, but it’s incredibly unlikely for gradient descent-based processes to ever find that optimum, e.g. because its attractor basin is extremely tiny (maybe even so tiny that it just exists theoretically, but cannot be stably represented by our hardware, given the inaccuracies of floating point numbers), whereas the overwhelming majority of the search space would converge towards non-optimizer models. Or maybe this optimum is surrounded by “walls” of sub-optimality all around them, making this part of the loss landscape inaccessible from anywhere else. ^ For instance because feed forward neural networks, as e.g. used in the transformer architecture, lack the "recursive" traits that, one could assume, are necessary for any strong optimization to occur. (I personally don’t share this view though, not least because the successive generation of multiple tokens seems close enough to such recursive traits to enable optimization) ^ I'm aware of course that this particular type of reasoning mistake is not one commonly made by people in the AI safety field. I still think it's valuable to make this concept very explicit and easy to remember.
2024-05-09
https://www.lesswrong.com/posts/j6EhfL2hRubaKL9ca/my-thesis-algorithmic-bayesian-epistemology-explained-in
j6EhfL2hRubaKL9ca
My thesis (Algorithmic Bayesian Epistemology) explained in more depth
UnexpectedValues
In March I posted a very short description of my PhD thesis, Algorithmic Bayesian Epistemology, on LessWrong. I've now written a more in-depth summary for my blog, Unexpected Values. Here's the full post: *** In January, I defended my PhD thesis. My thesis is called Algorithmic Bayesian Epistemology, and it’s about predicting the future. In many ways, the last five years of my life have been unpredictable. I did not predict that a novel bat virus would ravage the world, causing me to leave New York for a year. I did not predict that, within months of coming back, I would leave for another year — this time of my own free will, to figure out what I wanted to do after graduating. And I did not predict that I would rush to graduate in just seven semesters so I could go work on the AI alignment problem. But the topic of my thesis? That was the most predictable thing ever. It was predictable from the fact that, when I was six, I made a list of who I might be when I grow up, and then attached probabilities to each option. Math teacher? 30%. Computer programmer? 25%. Auto mechanic? 2%. (My grandma informed me that she was taking the under on “auto mechanic”.) It was predictable from my life-long obsession with forecasting all sorts of things, from hurricanes to elections to marble races. It was predictable from that time in high school when I was deciding whether to tell my friend that I had a crush on her, so I predicted a probability distribution over how she would respond, estimated how good each outcome would be, and calculated the expected utility. And it was predictable from the fact that like half of my blog posts are about predicting the future or reasoning about uncertainty using probabilities. So it’s no surprise that, after a year of trying some other things (mainly auction theory), I decided to write my thesis about predicting the future. If you’re looking for practical advice for predicting the future, you won’t find it in my thesis. I have tremendous respect for groups like Epoch and Samotsvety: expert forecasters with stellar track records whose thorough research lets them make some of the best forecasts about some of the world’s most important questions. But I am a theorist at heart, and my thesis is about the theory of forecasting. This means that I’m interested in questions like: How do I pay Epoch and Samotsvety for their forecasts in a way that incentivizes them to tell me their true beliefs?If Epoch and Samotsvety give me different forecasts, how should I combine them into a single forecast?Under what theoretical conditions can Epoch and Samotsvety reconcile a disagreement by talking to each other?What’s the best way for me to update how much I trust Epoch relative to Samotsvety over time, based on the quality of their predictions? If these sorts of questions sound interesting, then you may enjoy consuming my thesis in some form or another. If reading a 373-page technical manuscript is your cup of tea — well then, you’re really weird, but here you go! If reading a 373-page technical manuscript is not your cup of tea, you could look at my thesis defense slides (PowerPoint, PDF),[1] or my short summary on LessWrong. On the other hand, if you’re looking for a somewhat longer summary, this post is for you! If you’re looking to skip ahead to the highlights, I’ve put a * next to the chapters I’m most proud of (5, 7, 9). Chapter 0: Preface I don’t actually have anything to say about the preface, except to show off my dependency diagram. Figure 0.1: Solid arrows mean “required”; dashed arrows mean “recommended”. (I never learned how to make diagrams in LaTeX. You can usually do almost as well in Microsoft Word, with way less effort!) Chapter 1: Introduction “Algorithmic Bayesian epistemology” (the title of the thesis, a.k.a. ABE) just means “reasoning about uncertainty under constraints”. You might’ve seen math problems that look like this: 0.1% of people have a disease. You get tested using a test that’s ten times more likely to come up positive for people who have the disease than for people who don’t. If your test comes up positive, what’s the probability that you have the disease? But the real world is rarely so simple: maybe there’s not one test but five. Test B is more likely to be a false positive in cases where Test A is a false positive. Tests B and C test for different sub-types for the disease, so they complement each other. Tests D and E are brand new and it’s unclear how correlated they are with the other tests. How do you form beliefs in that sort of information landscape? Here’s another example. A month ago, I was deciding whether to change my solar eclipse travel plans from Mazatlán, Mexico to Montreal, Canada, on account of the weather forecasts. The American model told me that there was a 70% chance that it would be cloudy in Mazatlán; meanwhile, the Canadian model forecast a mere 20% chance. How was I to reconcile these sharply conflicting probabilities?[2] I was facing an informational constraint. Had I known more about the processes by which the models arrived at their probabilities and what caused them to diverge, I would have been able to produce an informed aggregate probability. But I don’t have that information. All I know is that it’s cloudy in Mazatlán 25 percent of the time during this part of the year, and that one source predicts a 20% chance of clouds while another predicts a 70% chance. Given just this information, what should my all-things-considered probability be? (If you’re interested in this specific kind of question, check out Chapter 7!) But informational constraints aren’t the only challenge. You can face computational constraints (you could in theory figure out the right probability, but doing so would take too long), or communicational constraints (figuring out the right probability involves talking to an expert with a really detailed understanding of the problem, but they only have an hour to chat), or strategic constraints (the information you need is held by people with their own incentives who will decide what to tell you based on their own strategic considerations). So that’s the unifying theme of my thesis: reasoning about uncertainty under a variety of constraints.[3] I don’t talk about computational constraints very much in my thesis. Although that topic is really important, it’s been studied to death, and making meaningful progress is really difficult. On the other hand, some of the other kinds of constraints are really underexplored! For example, there’s almost no work on preventing strategic experts from colluding (Chapter 4), very little theory on how best to aggregate experts’ forecasts (Chapters 5, 6, 7), and almost no work on communicational constraints (Chapter 8). In no small part, I chose which topics to study based on where I expected to find low-hanging fruit. Chapter 2: Preliminaries This is a great chapter to read if you want to get a sense of what sort of stuff my thesis is about. It describes the foundational notions and results that the rest of my thesis builds on. Contents include: Proper scoring rules: suppose I want to know the probability that OpenAI will release GPT-5 this year. I could pay my friend Jaime at Epoch AI for a forecast. But how do I make sure that the forecast he gives me reflects his true belief? One approach is to ask Jaime for a forecast, wait to see if GPT-5 is released this year, and then pay him based on the accuracy of his forecast. Such a payment scheme is called a scoring rule, and we say that a scoring rule is proper if it actually incentivizes Jaime to report his true belief (assuming that he wants to maximize the expected value of his score). (I’ve written about proper scoring rules before on this blog! Reading that post might be helpful for understanding the rest of this one.)Forecast aggregation methods: now let’s say that Jaime thinks there’s a 40% chance that GPT-5 will be released this year, while his colleagues Ege and Tamay think there’s a 50% and 90% chance, respectively. What’s the right way for them to aggregate their probabilities into a single consensus forecast? One natural approach is to just take the average, but it turns out that there are significantly better approaches.Information structures: if some experts are interested in forecasting a quantity, an information structure is a way to formally express all of the pieces of information known by at least one of the experts, and how those pieces of information interact/overlap. I also discuss some “nice” properties that information structures can have, which make them easier to work with. Chapter 3: Incentivizing precise forecasts (Joint work with George Noarov and Matt Weinberg.) I’ve actually written about this chapter of my thesis on my blog, so I’ll keep this summary brief! In the previous section, I mentioned proper scoring rules: methods of paying an expert for a probabilistic forecast (depending on the forecast and the eventual outcome) in a way that incentivizes the expert to tell you their true probability. The two most commonly used ones are the quadratic scoring rule (you pay the expert some fixed amount, and then subtract from that payment based on the expert’s squared error) and the logarithmic scoring rule (you pay the expert the log of the probability that they assign to the eventual outcome). (See this post or Chapter 2.1 of my thesis for a more thorough exposition.) But there are also infinitely many other proper scoring rules. How do you choose which one to use? All proper scoring rules incentivize and expert to give an accurate forecast (by definition). In this chapter, I explore the question of which proper scoring rule most incentivizes an expert to give a precise forecast — that is, to do the most research before giving their forecast. Turns out that the logarithmic scoring rule is very good at this (99% of optimal), but you can do even better! (Click here for my old blog post summarizing this chapter!) Chapter 4: Arbitrage-free contract functions (Joint work with my PhD advisor, Tim Roughgarden.) Now let’s say that you’re eliciting forecasts from multiple experts. We can revisit the example I gave earlier: Jaime, Ege, and Tamay think there’s a 40%, 50%, and 90% chance that GPT-5 will be released this year. (These numbers are made up.) Let’s say that I want to pay Jaime, Ege, and Tamay for their forecasts using the quadratic scoring rule. To elaborate on what this means, the formula I’ll use is: 100⋅(1−(forecasting error)2). For example, Jaime forecast a 40% chance. If GPT-5 is released this year, then the “perfect” forecast would be 100%, which means that his “forecasting error” would be 0.6. Thus, I would pay Jaime $100(1−0.62)=$64. On the other hand, if GPT-5 is not released, then his forecasting error would be 0.4, so I would pay Jaime $100(1−0.42)=$84. To summarize all these numbers in a chart: ExpertForecastPayment if YESPayment if NOJaime40%$64$84Ege50%$75$75Tamay90%$99$19Total payment $238$178Table 4.1: How much I owe to each expert under the YES outcome (GPT-5 is released this year) and the NO outcome (it’s not released this year). But now, suppose that Jaime, Ege, and Tamay talk to each other and decide to all report the average of their forecasts, which in this case is 60%. ExpertForecastPayment if YESPayment if NOJaime60%$84$64Ege60%$84$64Tamay60%$84$64Total payment $252$192Table 4.2: How much I owe to each expert under the YES and NO outcomes, if all three experts collude to say the average of their true beliefs. In this case, I will owe more total dollars to them, no matter the outcome! They know this, and it gives them an opportunity to collude: Step 1: They all report the average of their beliefs (60%).Step 2: They agree to redistribute their total winnings in a way that leaves each of them better off than if they haven’t colluded. (For example, they could agree that if YES happens, they’ll redistribute the $252 so that Jaime gets $68, Ege gets $80, and Tamay gets $104, and if NO happens, they’ll redistribute the $192 so that Jaime gets $88, Ege gets $80, and Tamay gets $24.) The collusion benefits them no matter what! Naturally, if I want to get an accurate sense of what each one of them believes, if I want to figure out how to pay them so that there’s no opportunity for them to collude like that. And so there’s a natural question: is it possible to pay each expert in a way that incentivizes each expert to report their true belief and that prevents any opportunity for collusion? This question was asked in 2011 by Chun & Shachter. In this chapter, I resolve Chun & Shachter’s question: yes, preventing Jaime, Ege, and Tamay from colluding is possible. Why should this be possible? It’s because I can pit Jaime, Ege, and Tamay against each other. If there were only one expert, I could only reward the expert as a function of their own forecast. But if there are three experts, I can reward Jaime based on how much better his forecast was than Ege’s and Tamay’s. That’s the basic idea; if you want the details, go read Chapter 4! * Chapter 5: Quasi-arithmetic pooling (Joint work with my PhD advisor, Tim Roughgarden.) As before, let’s say that I elicit probabilistic forecasts from Jaime, Ege, and Tamay using a proper scoring rule.[4] How should I combine their numbers in a single, all-things-considered forecast? In this chapter, I make the case that the answer should depend on the scoring rule that you used to elicit their forecasts. To see why, consider for comparison the quadratic and logarithmic scoring rules. Here’s a plot of the score of an expert as a function of the probability they report, if the event ends up happening. Figure 5.1: If the YES outcome happens, an expert’s score under the quadratic and logarithmic scoring rules, as a function of the expert’s reported probability. (The scoring rules are normalized so as to be comparable.) If Jaime says that there’s a 50% chance that GPT-5 comes out this year, and it does come out, he’ll get a score of 0.75 regardless of whether I use the quadratic or the log score. But if Jaime says that there’s a 1% chance that GPT-5 comes out this year, and it does come out, then he’ll get a score of 0.02 if I use the quadratic score, but will get a score of -0.66 if I use the log score. (The scoring rules are symmetric: for example, Jaime’s score if predicts 30% and GPT-5 doesn’t come out is the same as if he had predicted 70% and it did come out.) This means that Jaime cares which outcome happens a different amount depending on which scoring rule I use. Below is a plot of how much higher a score Jaime would get if GPT-5 did come out compared to if it didn’t, as a function of the probability that he reports. Figure 5.2: How much higher an expert’s score is under a YES outcome than under a NO outcome, as a function of the expert’s reported probability, for the quadratic and logarithmic scoring rules. In other words, how invested is the expert in getting a YES outcome instead of a NO outcome? Suppose that Jaime reports an extreme probability, like 1% or 99%. This plot shows that Jaime cares much more about the outcome if I use the log score rule to reward him than if I use the quadratic score. This makes sense, since the log scoring rule is strongly punishes assigning a really low probability to the eventual outcome. But conversely, if Jaime reports a non-extreme probability, like 25% or 75%, he actually cares more about the outcome if I use the quadratic score than if I use the log score. Intuitively, this means that if I use the log score, then Jaime cares a lot more about making his forecasts precise when they’re near the extremes. He cares about the difference between a 1% chance and a 0.1% chance, to a much greater degree than if I used the quadratic score. Jaime will think carefully and make his forecast extra precise before reporting a probability like 0.1%. And so, if I use the log score and Jaime tells me 0.1% anyway, it makes sense for me to take that forecast seriously. If a different expert tells me 50%, it doesn’t make much sense for me to just take the average — 25.05% — because Jaime’s 0.1% forecast likely reflects a more informed, precise understanding. To formalize this intuition, I came up with a method of aggregating forecasts that I called quasi-arithmetic pooling (QA pooling) with respect to the scoring rule being used for elicitation. Roughly speaking, instead of averaging the forecasted probabilities, QA pooling averages the kinds of numbers represented in Figure 5.2: each expert’s “amount of investment” in the possible outcomes. I was able to prove a bunch of cool properties of QA pooling: QA pooling with respect to the quadratic scoring rule just means taking the average of the forecasts (this is called linear pooling). QA pooling with respect to the logarithmic scoring rule involves treating the forecasts as odds instead of probabilities, and then taking their geometric mean (this is called logarithmic pooling). Logarithmic pooling is the second most well-studied forecast aggregation technique (after linear pooling), and it works very well in practice. Thus, QA pooling maps the two most widely-used proper scoring rules to the two most well-studied forecast aggregation techniques!Suppose that your receive a bunch of forecasts from different experts. You don’t know the eventual outcome, but your goal is to beat the average of all the experts’ scores no matter which outcome happens, and by as much as possible in the worst case. The way to do that is to use QA pooling with respect to the scoring rule.There’s a natural interpretation of this fact in terms of the concept of the wisdom of crowds. Suppose a bunch of people (the crowd) report forecasts. Is it possible to do better that a single random crowd member — that is, to guarantee yourself a better score than the average person in the crowd? The answer is yes! And the way to beat the crowd by the largest possible amount is to QA-pool the forecasts. In that sense, the QA pool is the correct way to aggregate the crowd (with respect to whichever scoring rule you care about). On this view, “wisdom of the crowds” is not just an empirical fact, but a mathematical one!You can also do QA pooling with different weights for different experts (just like you can take a weighted average of numbers instead of a simple average). This is useful if you trust some experts more than others. But how can you decide how much to trust each expert? It turns out that so long as the scoring rule is bounded (e.g. quadratic, but not log), you can learn weights for experts over time based on the experts’ performance, and you’ll do almost as well as if you had known the best possible weights from the get-go. (In the field of online learning, this is called a no-regret algorithm.)QA pooling can be used to define what it means to be over- or under-confident. This notion of overconfidence turns out to be equivalent to another natural notion of overconfidence (one that I first came up with in order to analyze the results of my pseudorandomness contest).When coming up with a method of aggregating forecasts, there are some axioms/desiderata that you might want your aggregation method to satisfy. It turns out that for a certain natural set of axioms, the class of aggregation methods that comply with those axioms is precisely the class of all QA pooling methods. In all of these senses, QA pooling seems like a really natural way to aggregate forecasts. I’m really excited to see QA pooling investigated further! Chapter 6: Learning weights for logarithmic pooling (Joint work with my PhD advisor, Tim Roughgarden.) In my description of Chapter 5, I said: It turns out that so long as the scoring rule is bounded (e.g. quadratic, but not log), you can learn weights for experts over time based on the experts’ performance, and you’ll do almost as well as if you known the best possible weights from the get-go. That is fair enough, but many natural proper scoring rules (such as the log score) are in fact unbounded. It would be nice to have results in those cases as well. Unfortunately, if the scoring rule is unbounded, there is no way to get any result like this unconditionally. In particular, if your experts are horribly miscalibrated (e.g. if 10% of the time, they say 0.00000001% and then the event happens anyway), there’s no strategy for putting weights on the experts that can be guaranteed to work well. But what if you assume that the experts are actually calibrated? In many cases, that’s a pretty reasonable assumption: for example, state-of-the-art machine learning systems are calibrated. So if you have a bunch of probability estimates from different AIs and you want to aggregate those estimates into a single number (this is called “ensembling”), it’s pretty reasonable to make the assumption that the AIs are giving you calibrated probabilities. In this chapter, I prove that at least for the log scoring rule, you can learn weights for experts over time in a way that’s guaranteed to perform well on average, assuming that the experts are calibrated. (For readers familiar with online learning: the algorithm is similar to online mirror descent with a Tsallis entropy regularizer.) * Chapter 7: Robust aggregation of substitutable signals (Joint work with my PhD advisor, Tim Roughgarden.) Let’s say that it rains on 30% of days. You look at two (calibrated) weather forecasts: Website A says there’s a 60% chance that it’ll rain tomorrow, while Website B says there’s a 70% chance. Given this information, what’s your all-things-considered estimate of how likely it is to rain tomorrow? The straightforward answer to this question is that I haven’t given you enough information. If Website A’s is strictly more informed than Website B, you should say 60%. If Website B is strictly more informed than Website A, you should say 70%. If the websites have non-overlapping information, you should say something different than if their information is heavily overlapping. But I haven’t told you that, so I haven’t given you the information you need in order to produce the correct aggregate forecast. In my opinion, that’s not a good excuse, because often you lack this information in practice. You don’t know which website is more informed and by how much, or how much their information overlaps. Despite all that, you still want an all-things-considered guess about how likely it is to rain. But is there even a theoretically principled way to make such a guess? In this chapter, I argue that there is a principled way to combine forecasts in the absence of this knowledge, namely by using whatever method works as well as possible under worst-case assumptions about how the experts’ information sets overlap. This is a quintessentially theoretical CS-y way of looking at the problem: when you lack relevant information, you pick a strategy that’ll do well robustly, i.e. no matter what that information happens to be. In other words: you want to guard as well as possible against nasty surprises. This sort of work has been explored before under the name of robust forecast aggregation — but most of that work has had to make some pretty strong assumptions about the forecasters’ information overlap (for example, that there are two experts, one of whom is strictly more informed than the other, but you don’t know which). By contrast, in this chapter I make a much weaker assumption: roughly speaking, all I assume is that the experts’ information is substitutable, in the economic sense of the word. This means that there’s diminishing marginal returns to learning additional experts’ information. This is a natural assumption that holds pretty often: for example, suppose that Website A knows tomorrow’s temperature and cloud cover, whereas Website B knows tomorrow’s temperature and humidity. Since their information overlaps (they both know the temperature), Website B’s information is less valuable if you already know Website A’s information, and vice versa. The chapter has many results: both positive ones (“if you use this strategy, you’re guaranteed to do somewhat well”) and negative ones (“on the other hand, no strategy is guaranteed to do very well in the worst case”). Here I’ll highlight the most interesting positive result, which I would summarize as: average, then extremize. In the leading example, I gave two pieces of information: Each expert’s forecast (60% and 70%)The prior — that is, the forecast that someone with no special information would give (30%) A simple heuristic you might use is to average the experts’ forecasts, ignoring the prior altogether: after all, the experts know that it rains on 30% of days, and they just have some additional information. Yet, the fact that the experts updated from the prior in the same direction is kind of noteworthy. To see what I mean, let’s consider a toy example. Suppose that I have a coin, and I have chosen the coin’s bias (i.e. probability of coming up heads) uniformly between 0% and 100%. You’re interested in forecasting the bias of the coin. Since I’ve chosen the bias uniformly, your best guess (without any additional information) is 50%. Now, suppose that two forecasters each see an independent flip of the coin. If you do the math, you’ll find that if a forecasters sees heads, they should update their guess for the bias to 2/3, and if they see tails, they should update to 1/3. Let’s say that both forecasters tell you that their guess for the bias of the coin is 2/3 — so you know that they both saw heads. What should your guess be about the bias of the coin? Well, you now have more information that either forecaster: you know that the coin came up heads both times it was flipped! And so you should actually say 3/4, rather than 2/3. That is, because the two forecasters saw independent evidence that pointed in the same direction, you should update even more in that direction. This move — updating further away from the prior after aggregating the forecasts you have available — is called extremization. Now, generally speaking, experts’ forecasts won’t be based on completely independent information, and so you won’t want to extremize quite as much as you would if you assumed independence. But as long as there’s some non-overlap in the experts’ information, it does make sense to extremize at least a little. The benefits to extremization aren’t just theoretical: Satopää et al. found that extremization improves aggregate forecasts, and Jaime Sevilla found that the extremization technique I suggest in this chapter works well on data from the forecast aggregator Metaculus. Beyond giving a theoretical grounding to some empirical results in forecast aggregation, I’m excited about the work in this chapter because it opens up a whole bunch of new directions for exploration. Ultimately, in this chapter I made progress on a pretty narrow question. I won’t define all these terms, but here’s the precise question I answered: What approximation ratio can be achieved by an aggregator who learns expected value estimates of a real-valued quantity Y from m truthful experts whose signals are drawn from an information structure that satisfies projective substitutes, if the aggregator’s loss is their squared error and the aggregator knows nothing about the information structure or only knows the prior? Each of the bolded clauses can be varied. Relative to what baseline do we want to measure the aggregator’s performance? What sort of information does the aggregator get from the experts? Are the experts truthful or strategic? What assumptions are we making about the interactions between the experts’ information? What scoring rule are we using to evaluate the forecasts? In all, there are tons of different questions you can ask within the framework of robust forecast aggregation. I sometimes imagine this area as a playground with a bunch of neat problems that people have only just started exploring. I’m excited! Chapter 8: When does agreement imply accuracy? (Joint work with Raf Frongillo and Bo Waggoner.) In 2005, Scott Aaronson wrote one of my favorite papers ever: The Complexity of Agreement. (Aaronson’s blog post summarizing the paper, which I read in 2015, was a huge inspiration and may have been counterfactually responsible for my thesis!) Here’s how I summarize Aaronson’s main result in my thesis: Suppose that Alice and Bob are honest, rational Bayesians who wish to estimate some quantity — say, the unemployment rate one year from now. Alice is an expert on historical macroeconomic trends, while Bob is an expert on contemporary monetary policy. They convene to discuss and share their knowledge with each other until they reach an agreement about the expected value of the future unemployment rate. Alice and Bob could reach agreement by sharing everything they had ever learned, at which point they would have the same information, but the process would take years. How, then, should they proceed? In the seminal work “Agreeing to Disagree,” Aumann (1976) observed that Alice and Bob can reach agreement simply by taking turns sharing their current expected value for the quantity[…] A remarkable result by Aaronson (2005) shows that if Alice and Bob follow certain protocols of this form, they will agree to within with probability by communicating bits [of information…] Notably, this bound only depends on the error Alice and Bob are willing to tolerate, and not on the amount of information available to them. In other words: imagine that Alice and Bob — both experts with deep but distinct knowledge — have strongly divergent opinions on some topic, leading them to make different predictions. You may have thought that Alice and Bob would need to have a really long conversation to hash out their differences — but no! At least if we model Alice and Bob as truth-seeking Bayesians, they can reach agreement quite quickly, simply by repeatedly exchanging their best guesses: first, Alice tells Bob her estimate. Then, Bob updates his estimate in light of the estimate he just heard from Alice, and responds with his new estimate. Then, Alice updates her estimate in light of the estimate he just heard from Bob, and responds with her new estimate. And so on. After only a small number of iterations, Alice and Bob are very likely to reach agreement![5] However, while Aaronson’s paper shows that Alice and Bob agree, there’s no guarantee that the estimate that they agree on is accurate. In other words, you may have hoped that by following Aaronson’s protocol (i.e. repeatedly exchanging estimates until agreement is reached), the agreed-upon estimate would be similar to the estimate that Alice and Bob would have reached if they had exchanged all of their information. Unfortunately, no such accuracy guarantee is possible. As a toy example, suppose that Alice and Bob each receive a random bit (0 or 1) and are interested in estimating the XOR of their bits (that is, the sum of their bits modulo 2). Bob’s bit = 0Bob’s bit = 1Alice’s bit = 0XOR = 0XOR = 1Alice’s bit = 1XOR = 1XOR = 0Table 8.2: XOR Since Alice knows nothing about Bob’s bit, she thinks there’s a 50% chance that his bit is the same as hers and a 50% chance that his bit is different from hers. This means that her estimate of the XOR is 0.5 from the get-go. And that’s also Bob’s estimate — which means that they agree from the start, and no communication is necessary to reach agreement. Alas, 0.5 is very far from the true value of the XOR, which is either 0 or 1. In this example, even though Alice and Bob agreed from the start, their agreement was superficial: it was based on ignorance. They merely agreed because the information they had was useless in isolation, and only informative when combined together. Put otherwise, to an external observer, finding out Bob’s bit is totally useless without knowing Alice’s bit, but extremely useful if they already know Alice’s bit. Alice and Bob’s pieces of information are complements rather than substitutes. (Recall also that the notion of informational substitutes came up in Chapter 7!) This observation raises a natural question: what if we assume that Alice and Bob’s information is substitutable — that is, an external observer gets less mileage from learning Bob’s information if they already know Alice’s information, and vice versa? In that case, are Alice and Bob guaranteed to have an accurate estimate as soon as they’ve reached agreement? In this chapter, I show that the answer is yes! There’s a bunch of ways to define informational substitutes, but I give a particular (admittedly strong) definition under which agreement does imply accuracy. I’m excited about this result for a couple reasons. First, it provides another example of substitutes-like conditions on information being useful (on top of the discussion in Chapter 7). Second, the result can be interpreted in the context of prediction markets. In a prediction market, participants don’t share information directly; rather, they buy and sell shares, thus partially sharing their beliefs about the expected value of the quantity of interest. Thus, this chapter’s main result might also shed light on the question of market efficiency: under what conditions does the price of a market successfully incorporate all traders’ information into the market price? This chapter’s suggested answer: when the traders’ pieces of information are substitutable, rather than complementary.[6] I generally think that the topic of agreement — and more generally, communication-constrained truth-seeking — is really neglected relative to how interesting it is, and I’d be really excited to see more work in this direction. * Chapter 9: Deductive circuit estimation (Joint work at the Alignment Research Center with Paul Christiano, Jacob Hilton, Václav Rozhoň, and Mark Xu.) This chapter is definitely the weirdest of the bunch. It may also be my favorite. A boolean circuit is a simple kind of input-output machine. You feed it a bunch of bits (zeros and ones) as input, it performs a bunch of operations (ANDs, ORs, NOTs, and so forth), and outputs — for the purposes of this chapter — a single bit, 0 or 1. Boolean circuits are the building blocks that computers are made of. Let’s say that I give you a boolean circuit C. How would you go about estimating the fraction of inputs on which C will output 1? (I call this quantity C’s acceptance probability, or p(C).) The most straightforward answer is to sample a bunch of random inputs and then just check what fraction of them cause C to output 1. This is very effective and all, but it has a downside: you’ve learned nothing about why C outputs 1 as often as it does. If you want to understand why a circuit outputs 1 on 99% of inputs, you can’t just look at the input-output behavior: you have to look inside the circuit and examine its structure. I call this process deductive circuit estimation, because it uses deductive reasoning, as opposed to sampling-based estimation (which uses inductive reasoning). Deductive reasoning of this kind is based on “deductive arguments”, which point out something about the structure of a circuit in order to argue about the circuit’s acceptance probability. Here are a few examples, paraphrased from the thesis: Suppose that a circuit C takes as input a triple (a, b, c) of positive integers (written down in binary). It computes max(a, b) and max(b, c), and outputs 1 if they are equal. A deductive argument about C might point out that if b is the largest of the three integers, then max(a, b) = b = max(b, c), and so C will output 1, and that this happens with probability roughly 1/3. This argument points out that C outputs 1 whenever b is the largest of the three integers. The argument does not point out that C also outputs 1 when a and c are both larger than b and happen to be equal. In this way, deductive arguments about circuits can help distinguish between different “reasons why” a circuit might output 1. (More on this later.) The next example makes use of SHA-256, which is a famous hash function: the purpose of SHA-256 is to produce “random-looking” outputs that are extremely hard to predict. Suppose that C(x) computes SHA-256(x) (the output of SHA-256 is a 256-bit string) and outputs 1 if the first 128 bits (interpreted as an integer) are larger than the last 128 bits. One can make a deductive argument about p(C) by making repeated use of the presumption of independence. In particular, the SHA-256 circuit consists of components that produce uniformly random outputs on independent, uniformly random inputs. Thus, a deductive argument that repeatedly presumes that the inputs to each component are independent concludes that the output of SHA-256 consists of independent, uniformly random bits. It would then follow that the probability that the first 128 bits of the output are larger than the last 128 bits is 1/2. The third example is about a circuit that checks for twin primes. This example points out that deductive arguments ought to be defeasible: a deductive argument can lead you to an incorrect estimate of p(C), but in that case there ought to be a further argument about C that will improve your estimate. Suppose that C takes as input a random integer k between and and accepts if k and k + 2 are both prime. A deductive argument about p(C) might point out that the density of primes in this range is roughly 1%, so if we presume that the event that k is prime and the event that k + 2 is prime are independent, then we get an estimate of p(C) = 0.01%. A more sophisticated argument might take this one step further by pointing out that if k is prime, then k is odd, so k + 2 is odd, which makes k + 2 more likely to be prime (by a factor of 2), suggesting a revised estimate of p(C) = 0.02%. A yet more sophisticated argument might point out that additionally, if k is prime, then k is not divisible by 3, which makes k + 2 more likely to be divisible by 3, which reduces the chance that k + 2 is prime. In this chapter, I ask the following question: is there a general-purpose deductive circuit estimation algorithm, which takes as input a boolean circuit C and a list of deductive arguments about C, and outputs a reasonable estimate of p(C)? You can think of such an algorithm as being analogous to a program that verifies formal proofs. Much as a proof verifier takes as input a mathematical statement and a purported formal proof, and accepts if the proof actually proves the statement, a deductive circuit estimator takes as input a circuit together with observations about the circuit, and outputs a “best guess” about the circuit’s acceptance probability. A comparison table from the thesis: Deductive circuit estimationFormal proof verificationDeductive estimation algorithmProof verifierBoolean circuitFormal mathematical statementList of deductive argumentsAlleged proof of statementFormal language for deductive argumentsFormal language for proofsDesiderata for estimation algorithmSoundness and completenessAlgorithm’s estimate of circuit’s acceptance probabilityProof verifier’s output (accept or reject)Table 9.1: We are interested in developing a deductive estimation algorithm for boolean circuits. There are similarities between this task and the (solved) task of developing an algorithm for verifying formal proofs of mathematical statements. This table illustrates the analogy. Importantly, the purpose of a deductive estimation algorithm is to incorporate the deductive arguments that it has been given as input, rather than to generate its own arguments. The output of a deductive estimation algorithm is only as sophisticated as the arguments that it has been given. Designing a deductive estimation algorithm requires you to do three things: Come up with a formal language in which deductive arguments like the ones in the above examples can be expressed.Come up with a list of desiderata (i.e. “reasonableness properties”) that the deductive estimation algorithm ought to satisfy.Find an algorithm that satisfies those desiderata. In this chapter, I investigate a few desiderata: Linearity: given a circuit C with input bits x1,…,xn, define C[xi=0] to be the circuit that you get when you “force” xi to be 0. (The resulting circuit now has n – 1 inputs instead of n.) Define C[xi=1] analogously. The deductive estimator’s estimate of p(C) should be equal to the average of its estimate of p(C[xi=0]) and p(C[xi=1]).Respect for proofs: a formal proof that bounds the value of p(C) can be given to the deductive estimator as an argument, and forces the deductive estimator to output an estimate that’s within that bound.0-1 boundedness: the deductive estimator’s estimate of the acceptance probability of any circuit is always between 0 and 1. In this chapter, I give an efficient algorithm that satisfies the first two of these properties. The algorithm is pretty cool, but I argue that ultimately it isn’t what we’re looking for, because it doesn’t satisfy a different (informal) desirable property called independence of irrelevant arguments. That is, the algorithm I give produces estimates that can be easily influenced by irrelevant information. Does any efficient algorithm satisfy all three of the linearity, respect for proofs, and 0-1 boundedness? Unfortunately, the answer is no (under standard assumptions from complexity theory). However, I argue that 0-1 boundedness isn’t actually that important to satisfy, and that instead we should be aiming to satisfy the first two properties along with some other desiderata. I discuss what those desiderata may look like, but ultimately leave the question wide open. Even though this chapter doesn’t get close to actually providing a good algorithm for deductive circuit estimation, I’m really excited about it, for two reasons. The first reason is that I think this problem is objectively really cool and arguably fundamental. Just as mathematicians formalized the notion of a mathematical proof a century ago, perhaps this line of work will lead to a formalization of a much broader class of deductive arguments. The second reason for my excitement is because of potential applications to AI safety. When we train an AI, we train it to produce outputs that look good to us. But one of the central difficulties of building safe advanced AI systems is that we can’t always tell whether an AI output looks good because it is good or because it’s bad in a way we don’t notice. A particularly pernicious failure mode is when the AI intentionally tricks us into thinking that its output was good. (Consider a financial assistant AI that takes actions like buying and selling stocks, transferring money between bank accounts, and paying taxes, and suppose we train the AI to turn a profit, subject to passing some basic checks for legal compliance. If the AI finds a way to circumvent the compliance checks — e.g. by doing some sophisticated, hard-to-notice money laundering — then it could trick its overseers into thinking that it’s doing an amazing job, despite taking actions that the overseers would strongly disapprove of if they knew about them.) How does this relate to deductive circuit estimation? Earlier I mentioned that deductive arguments can let you distinguish between different reasons why a circuit might exhibit some behavior (like outputting 1). Similarly, if we can formally explain the reasons why an AI exhibits a particular behavior (like getting a high reward during training), then we can hope to distinguish between benign reasons for that behavior (it did what we wanted) and malign reasons (it tricked us). This is, of course, a very surface-level explanation (see here for a slightly more in-depth one), and there’s a long ways to go before the theory in this chapter can be put into practice. But I think that this line of research is one of the most promising for addressing some of the most pernicious ways in which AIs could end up being unsafe. (I am now employed at the Alignment Research Center, and am really excited about the work that we’ve been doing — along these lines and others — to understand neural network behavior!) Epilogue As you can probably tell, I’m really excited about algorithmic Bayesian epistemology as a research direction. Partly, that’s because I think I solved a bunch of cool problems in some really under-explored areas. But I’m equally excited by the many questions I didn’t answer and areas I didn’t explore. In the epilogue, I discuss some of the questions that I’m most excited about: Bayesian justifications for generalized QA pooling: In Chapter 5, I defined QA pooling as a particular way to aggregate forecasts that’s sensitive to the scoring rule that was used to elicit the forecasts. One natural generalization of QA pooling allows experts to have arbitrary weights that don’t need to add to 1. It turns out that for the quadratic and logarithmic scoring rules, this generalization has natural “Bayesian justifications”. This means that in some information environments, generalized linear and logarithmic pooling is the best possible way to aggregate experts’ forecasts. (See Section 2.4 for details.) I’m really curious whether there’s a Bayesian justification for generalized QA pooling with respect to every proper scoring rule.Directions in robust forecast aggregation: In Chapter 7, I discussed robust forecast aggregation as a theoretically principled, “worst-case optimal” approach to aggregating forecasts. There are a whole bunch of directions in which one could try to generalize my results. For example, the work I did in that chapter makes the most sense in the context of real-valued forecasts (which don’t have to be between 0 and 1), and I’d love to see work along similar lines in the context of aggregating probabilities, with KL divergence used as the notion of error instead of squared distance.Finding a good deductive estimator: In Chapter 9, I set out to find a deductive circuit estimation algorithm that could handle a large class of deductive arguments in a reasonable way. Ultimately I didn’t get close to finding such an algorithm, and I would love to see more progress on this.Sophisticated Bayesian models for forecast aggregation: While several of the chapters of my thesis were about forecast aggregation, none of them took the straightforwardly Bayesian approach of making a model of the experts’ information overlap. I have some ideas for what a good Bayesian model could look like, and I’d love to see some empirical work on how well the model would work in practice. (If this sounds up your alley, shoot me an email!)Wagering mechanisms that produce good aggregate forecasts: Wagering mechanisms are alternatives to prediction markets. In a wagering mechanism, forecasters place wagers in addition to making predictions, and those wagers get redistributed according to how well the forecasters did. These mechanisms haven’t been studied very much, and — as far as I know — have never been used in practice. That said, I think wagering mechanisms are pretty promising and merit a lot more study. In part, that’s because wagering mechanisms give an obvious answer to the question of “how much should you weigh each forecaster’s prediction”: proportionally to their wagers! But as far as I know, there’s no theorem saying this results in good aggregate forecasts. I would love to see a wagering mechanism and a model of information for which you could prove that equilibrium wagers result in good aggregate forecasts. My thesis is called Algorithmic Bayesian Epistemology, and I’m proud of it. Thanks so much to my thesis advisor, Tim Roughgarden. He was really supportive throughout my time in grad school, and was happy to let me explore whatever I wanted to explore, even if it wasn’t inside his area of expertise. That said, even though algorithmic Bayesian epistemology isn’t Tim’s focus area, his advice was still really helpful. Tim has a really expansive knowledge of essentially all of theoretical computer science, which means he was able to see connections and make suggestions that I wouldn’t have come up with myself.️️️️ ^ I don’t want to make the video of my defense public, but email me if you want to see it! ^ The right answer, as far as I can tell, is to defer to the NWS’ National Blend of Models. But that just raises the question: how does the National Blend of Models reconcile disagreeing probabilities? ^ How did the name “Algorithmic Bayesian Epistemology” come about? “Bayesian epistemology” basically just means using probabilities to reason about uncertainty. “Algorithmic” is more of a term of art, which in this case means looking for satisfactory solutions that adhere to real-world constraints, as opposed to solutions that would be optimal if you ignored those constraints. See here for a longer explanation. ^ Our discussion of collusion was confined to Chapter 4 — now we’re assuming the experts can’t collude and instead just tell me their true beliefs. ^ Unfortunately, this protocol is only communication-efficient. To actually update their estimates, Alice and Bob may potentially need to do a very large amount of computation at each step. ^ Interestingly, Chen and Waggoner (2017) showed that under a (different) informational substitutes condition, traders in a prediction market are incentivized reveal all of their information right away by trading. This question of incentives is different from the question of my thesis chapter: my chapter can be interpreted as making the assumption that traders will trade on their information, and asking whether the market price will end up reflecting all traders’ information. Taken together, these two results suggest that market dynamics may be quite nice indeed when experts have substitutable information!
2024-05-09
https://www.lesswrong.com/posts/3a6ZG6MNTMiYzddvf/visualizing-neural-network-planning
3a6ZG6MNTMiYzddvf
Visualizing neural network planning
nevan-wichers
TLDR We develop a technique to try and detect if a NN is doing planning internally. We apply the decoder to the intermediate representations of the network to see if it’s representing the states it’s planning through internally. We successfully reveal intermediate states in a simple Game of Life model, but find no evidence of planning in an AlphaZero chess model. We think the idea won’t work in its current state for real world NNs because they use higher-level, abstract representations for planning that our current technique cannot decode. Please comment if you have ideas that may work for detecting more abstract ways the NN could be planning. Idea and motivation To make safe ML, it’s important to know if the network is performing mesa optimization, and if so, what optimization process it’s using. In this post, I'll focus on a particular form of mesa optimization: internal planning. This involves the model searching through possible future states and selecting the ones that best satisfy an internal goal. If the network is doing internal planning, then it’s important the goal it’s planning for is aligned with human values. An interpretability technique which could identify what states it’s searching through would be very useful for safety. If the NN is doing planning it might represent the states it’s considering in that plan. For example, if predicting the next move in chess, it may represent possible moves it’s considering in its hidden representations. We assume that NN is given the representation of the environment as input and that the first layer of the NN encodes the information into a hidden representation. Then the network has hidden layers and finally a decoder to compute the final output. The encoder and decoder are trained as an autoencoder, so the decoder can reconstruct the environment state from the encoder output. Language models are an example of this where the encoder is the embedding lookup. Our hypothesis is that the NN may use the same representation format for states it’s considering in its plan as it does for the encoder's output. Our idea is to apply the decoder to the hidden representations at different layers to decode them. If our hypothesis is correct, this will  recover the states it considers in its plan. This is similar to the Logit Lens for LLMs, but we’re applying it here to investigate mesa-optimization. A potential pitfall is that the NN uses a slightly different representation for the states it considers during planning than for the encoder output. In this case, the decoder won’t be able to reconstruct the environment state it’s considering very well. To overcome this, we train the decoder to output realistic looking environment states given the hidden representations by training it like the generator in a GAN. Note that the decoder isn’t trained on ground truth environment states, because we don’t know which states the NN is considering in its plan. Game of Life proof of concept (code) We consider an NN trained to predict the number of living cells after the Nth time step of the Game of Life (GoL). We chose the GoL because it has simple rules, and the NN will probably have to predict the intermediate states to get the final cell count. This NN won’t do planning, but it may represent the intermediate states of the GoL in its hidden states. We use an LSTM architecture with an encoder to encode the initial GoL state, and a “count cells NN” to output the number of living cells after the final LSTM output. Note that training the NN to predict the number of alive cells at the final state makes this more difficult for our method than training the network to predict the final state since it’s less obvious that the network will predict the intermediate states. The number of timesteps of the LSTM isn’t necessarily the same as the number of GoL timesteps. If the number of timesteps are different, the LSTM may skip one of the timesteps in its representation. Alternatively, the LSTM could output the same time step twice. The thought bubbles in this diagram show the states we’re hypothesizing the network will represent. We also train the count cells NN to count the number of live cells in the first state given the encoder output. This encourages the NN to use the same representation for the first and last states. We train the decoder we use for interpretability to reconstruct the initial state given the encoder output. The decoder isn’t trained to predict the final GoL state. During inference, we feed the output of the LSTM at each timestep to the decoder. We also train the decoder with the GAN loss to produce a reasonable intermediate representation. We measure the correlation between the ground truth GoL states, and the states reconstructed by applying the decoder to the intermediate LSTM outputs. We exclude the initial GoL state from our evaluation metric because the decoder was trained on it. We do count the final GoL state in the metric as the decoder was never trained to predict it. GoL timestepsLSTM timestepsMean correlationNumber of runs220.798230.815240.739320.937330.768340.764430.863 For each configuration of architecture and GoL timesteps, we train the network multiple times with different random seeds, and average the metric result together. The high correlation with the ground truth GoL states indicates that this works well in this simple setup. In abolition studies we find that the GAN objective helps because without it, the predicted states are between alive and dead. Here is an example of predicted an ground truth GoL states which get a correlation of .87: Each column is a timestep of the GoL or the LSTM. The first column always matches because the decoder was trained to reconstruct it. In this case, the 2nd and 3rd predicted states match the 2nd GoL state closely, and the 4th predicted state matches the final GoL state somewhat. Alpha Chess experiments We also try the technique on an alpha zero model trained on chess since that’s a more practical setting. We try to recover the states the network may be considering when planning. Previous work also interprets the alpha chess model. Alpha chess is trained to predict the value and policy with two separate heads. We use a pretrained alpha chess model and leave the weights unchanged. We train a decoder to predict the current state after the first conv layer. We also train the decoder to predict the state of the board after the predicted move is executed given the final backbone layer. This is the last layer before the value and policy heads split. We apply the decoder to the outputs of the other intermediate layers to see if they reconstruct states the NN may be considering. Our analysis did not reveal consistent representations of valid moves within the intermediate layers. Without the GAN loss, the decoder often produced boards that resembled the initial state, but with seemingly random pieces inserted or removed. We sometimes found layers which decoded to valid queen moves, but this wasn’t consistent. Using the GAN loss didn’t noticeably help. These are some board states reconstructed from various layers of the NN without the GAN loss. The board on the right is the state given to the NN. The state on the left is the reconstructed state from the intermediate layer. The letters represent pieces. The highlights show the difference from the current state. In the first example, the rook disappears, and a queen appears. This one shows the valid queen moves: The technique not consistently finding valid moves could either mean that search isn’t happening in the network, or that the technique simply failed to find it. Conclusion The technique works in simple cases like the GoL, but didn’t find search happening in Alpha Chess. We think it’s likely that NNs do planning using a higher level representation than they use to represent their inputs. The input representation has to be detailed because the network will need to know the precise location of the pieces in chess, or the enemies in a game in order to choose the best action. However, when planning, the NN can use a higher level representation. For example, a human planning their day might consider actions like “make breakfast”, “drive to work”, “write a blog post”, but won’t consider things at the level of detail required to reconstruct their visual inputs. Similarly, a chess model might plan in terms of strategic objectives like 'attack the kingside' or 'develop the queen,' rather than representing the position of every piece. For this reason, we think the idea is unlikely to work to uncover NNs doing planning in its current state. We welcome suggestions on how to modify our technique to better detect more abstract ways the NN could be planning, or ideas for alternative approaches to studying internal planning in NNs. Interesting related paper: Uncovering mesa-optimization algorithms in Transformers
2024-05-09
https://www.lesswrong.com/posts/zMaRRKvbiwJpQzaTe/forecasting-the-way-i-think-about-it
zMaRRKvbiwJpQzaTe
Forecasting: the way I think about it
hickman-santini
This is the first post in a little series I'm slowly writing on how I see forecasting, particularly conditional forecasting; what it's good for; and whether we should expect people to agree if they just talk to each other enough. Views are my own. I work at the Forecasting Research Institute (FRI), I forecast with the Samotsvety group, and to the extent that I have formal training in this stuff, it's mostly from studying and collaborating with Leonard Smith, a chaos specialist. My current plan is: Forecasting: the way I think about it [this post]The promise of conditional forecasting / cruxing for parameterizing our models of the worldWhat we're looking at and what we're paying attention to (Or: why we shouldn't expect people to agree today (Or: there is no "true" probability)) What do I do when I forecast? Let's say I'm forecasting an arbitrary bad outcome U that we're going to resolve in/by 2100 (e.g. AI-related catastrophe). I ask myself: What are all the worlds I can imagine for 2100?What's my P(U) in each of these possible worlds?Take the value for P(U) that has half the probability mass below and half above – that may not be the modal world – it's the median world, and it's where my expected log score is maximizedFig. 1 Imagining all the worlds is impossible, so I wind up decomposing the probability mass function into a few types of worlds and thinking about how being in each world would affect P(U) — i.e. for worlds A, B, C, … I have P(U|A), P(U|B), P(U|C) etc (Fig. 2). And I have ideas about how likely we are to wind up in each of A, B, C, etc. Here, B is my "modal world" and my "expectation" world is somewhere between C and D on the P(U) scale. Fig. 2 If you want to get really fancy, you can factor in uncertainty about U in each of these worlds, treat them all as distributions (some are pointier, some are more uncertain), and think about your all-things-considered P(U) as a mixed distribution of all of your worlds. This can always be distilled into a point estimate by taking that center of mass (dotted line in Fig. 1). You can use tools like squiggle for this. Side-note: I think some people just think about the modal world B by default. It's probably the first world you think of. It's the world you most think will come to pass. But you don't maximize your log score by forecasting P(U|B) when you're asked for P(U). In our projects at FRI, we've conditioned on things of the form "[x happens] by [year]." Let's say [x happens] is a certain policy being implemented. Understandably, our study participants have factored in what "this policy" being implemented may imply about the world in [year]. Maybe you think it highly unlikely that this policy would be implemented if we were living in World B, so conditioning on it makes you think we're probably in World F, where Russia has nuked the UK and there are dragons. Conditioning on any given thing changes the shape of your curve. Now it might look something like this: Fig. 3 This is a problem if what we want to know is how the policy would causally affect the ultimate outcome that we care about. Can we say whether this policy would be good or bad (measured by its impact on P(U))? Not really. But if you ask a forecaster to "hold all else equal" and try to isolate just the effect of the policy, I'd argue that they're hardly forecasting anymore. Any forecast generated that way can't be scored. Worlds A, B, C, etc could manifest, whereas the world where nothing happens except this policy is implemented isn't realizable. In fact, this is a fallacy that Adam Dorr has written about, ceteris paribus: when you're forecasting, it's a mistake to imagine "single-variable futures" (h/t Michał Dubrawski, without whom I probably wouldn't have read Dorr). If only we had a way to capture how much of my forecast owes to "evidential" considerations like P(B|policy) and how much is more like causal reasoning! We need better ways for people to articulate their models of the world and what they're weighing in their forecasts. Dan Schwarz has written about that need here. I have some thoughts I'll share in my next post. Keen to hear how different this is from how you, dear reader, think about forecasting.
2024-05-09
https://www.lesswrong.com/posts/XtYuFgPWyopyzuLbv/some-thoughts-on-lessonline
XtYuFgPWyopyzuLbv
some thoughts on LessOnline
Raemon
I mostly wrote this for facebook, but it ended up being a whole-ass post so I figured I'd put it here too. I'm helping run "LessOnline: A Festival of Writers Who Are Wrong On the Internet (But Striving To Be Less So)". I'm incentivized to say nice things about the event. So, grain of salt and all. But, some thoughts, which roughly breakdown into: – The vibe: preserving cozy/spaciousness of a small retreat at a larger festival – The audience: "Reunion for the The Extended Family Blogosphere, both readers and writers." – Manifest, and Summer Camp ... I. The Vibe I've been trying to explain the vibe I expect and it's tricksy. I think the vibe will be something like "CFAR Reunion meets Manifest." But a lot of people haven't been to a CFAR Reunion or to Manifest. I might also describe it like "the thing the very first EA Summit (before EA Global) was like, before it became EA Global and got big." But very few people went to that either. Basically: I think this will do a pretty decent job of having the feel of a smaller (~60 person), cozy retreat, but while being more like 200 - 400 people. Lightcone has run several ~60 person private retreats, which succeeded being a really spacious intellectual environment, with a pretty high hit rate for meeting new people who you might want to end up having a several hour conversation with. Realistically, with a larger event there'll be at least some loss of "cozy/spaciousness", and a somewhat lower hit rate for people you want to talk to with the open invites. But, I think Lightcone has learned a lot about how to create a really nice vibe. We've built our venue, Lighthaven, with "warm, delightful, focused intellectual conversation" as a primary priority. Whiteboards everywhere, lots of nooks and a fractal layout that makes it often feel like you're in a seclude private conversation by a firepit, even though hundreds of other people are nearby (often at another secluded private conversation with _their_ own firepit!) (It's sort of weird that this kind of venue is extremely rare. Many events are hotels, which feel vaguely stifling and corporate. And the nice spacious retreat centers we've used don't score well on the whiteboard front, and surprisingly not even that well on "lots of nooks") ... Large events tend to use "Swap Card" for causing people to meet each other. I do find Swap Card really good for nailing down a lot of short meetings. But it somehow ends up with a vibe of ruthless efficiency – lots of back-to-back 30 minute meetings, instead of a feeling of organic discovery. The profile feels like a "job fair professional" sort of thing. Instead we're having a "Names, Faces, and Conversations" document, where people write in a giant google doc about what questions and ideas are currently alive for them. People are encouraged to comment inline if they have thoughts, and +1 if they'd be into chatting about it. Some of this hopefully turns into 1-1 conversations, and if more people are interested it can organically grow into "hey let's hold a small impromptu group discussion about that in the Garden Nook" ... We'll also have a bunch of stuff that's just plain fun. We're planning a puzzle hunt that spans the event, and a dance concert led by the Fooming Shoggoths, with many songs that didn't make it onto their April 1st album. And the venue itself just lends itself to a feeling of whimsy and discovery. ... Another thing we're doing is encouraging people to bring their kids, and providing a day care to make that easier. I want this event to feel like something you can bring your whole life/self to. By default these sorts of events tend to not be very kid friendly. ... ... ... II. The Audience So that was a lot of words about The Vibe™. The second question is "who actually is this conference for?" It's called "LessOnline", and it's a fairly obvious thing for this to be "The LessWrong conference." We're aiming for a somewhat different thing than that (I personally think it was a mistake to call it LessOnline, but, naming is hard and I don't actually have a better idea offhand) What different thing _are_ we aiming for? A few angles: 1. This isn't meant to be about "the rationalist social scene." It's meant to be about "people who are earnestly doing the rationality thing, whichever social scenes they do or don't participate in." i.e. people who are earnestly trying to figure out true things and explain them. People aiming to achieve a unified world model, and/or to develop the artform of doing so. 2. We're interested in people attending who are interested in tackling the same sorts of questions that the LessWrong community tends to tackle, but through different approaches or angles. A metaphor that I like: Recently my family had a Family Reunion. My mom hosted a concert in honor her mother. She invited a lot of different people – my grandma's other children and their families, friends that my grandma new, people who were involved with her church or community. My mother was the one hosting it, and my mom's particular aesthetic and family connections were salient during the one-woman concert she performed. But, it was a concert for everyone. Back when LessWrong was founded, there was an online intellectual zeitgeist surrounding it. There were econbloggers and New Atheists and polymaths and science bloggers. LessWrong was born from that context. LessOnline is something like "a family reunion for the Blogosphere That Spawned It", which includes various descendants who went off in different directions. (i.e. you might go up the tree to "the parent science or econbloggers of yore", and then back down the tree to their other intellectual descendants.") I'm not sure if this metaphor will turn out to be quite accurate in describing the audience who in fact shows up. But, like, I'm excited to have Andy Matuschak coming, who's developed a lot of neat Tools for Thought, even though he hasn't been closely involved with LessWrong. David Chapman can't come, but I had a nice long chat with him about his current takes on rationality and I think it'd have been cool if he came. (My model of Chapman is that he set out to solve many of the same problems Eliezer set out to solve, but he called the resulting thing 'metarationality' instead of 'rationality', with a somewhat different vibe) ... ... ... III. Manifest, and Summer Camp The original generator for this event was: Last year, Manifold held their first conference, Manifest, at Lighthaven. It was really fun. It made me feel wholesome and excited in a way that other conferences hadn't in a long time. Part of it was that Manifest wasn't really trying to take itself too seriously. They weren't pitching themselves as "the conference for people effectively doing good or saving the world". They're unifying theme was "we like making predictions and seeing how they go and hopefully making (imaginary?) money off them", but apart from that, any topic was fair game. It ended up having a lot of whimsy as well as seriousness. This year they approached us about running it again, with the idea of having a Summer Camp period before or afterwards. We ended up deciding it would be cool if Lightcone hosted a related conference the week before, and filled the weekend in between a laid back sleeper summer camp season. Both Manifest and LessOnline are pretty casual conferences. Summer Camp will be even more casual. It'll mostly be whatever people make of it, but I'm pretty confident that people will bring a lot of great ideas to it. I'm personally planning to host some rationality workshops there based on my current projects, and running some experimental Solstice ideas. I think Ricki Heicklen is looking into "Trade School" where she teaches the art of making profitable trades. But I'm pretty excited at experimenting with a weeklong "festival season", with multiple peaks, lots of rooms to experiment and to see what a high-density-popup campus community can look like. ... ... ... Anyway those are some thoughts. Hope to see you there. :)
2024-05-08
https://www.lesswrong.com/posts/NitP3AsMeuycJQbfa/retrospective-on-mathematical-boundaries-workshop
NitP3AsMeuycJQbfa
Retrospective on Mathematical Boundaries Workshop
miyazono
mostly written by Evan Miyazono Minimum viable introduction We ran a workshop on Mathematical Boundaries from April 10-14. This was the successor of the Conceptual Boundaries Workshop (retrospective, last update) Feb 10-12The overlap in participants and approach was fairly low (notably lower than intended, due to availability restricting participation, which in turn led to a natural difference in approach)Intent:The first workshop was intended to develop a sense of what one might want to do with boundaries, and explore possible avenuesThis event was more focused on making mathematical design decisions that would lead to a more concrete model that was opinionated enough to be useful (the natural question becomes “useful for what”)You’re probably here because you want to see the outputs, so let’s get to them: Outputs from the workshop Here are write-ups started during writing sessions during the workshop: Manuel Baltieri 1: Crossing boundaries Manuel Baltieri 2: Fighting for boundariesKevin Carlson: Nondeterministic dynamical systems and crossing boundaries  Martin Biehl 1: Gliders and similar phenomena in (categorical) systems theory Martin Biehl (developed in conversation with Owen Lynch): Towards a more general law of requisite variety Owen Lynch: Grothendieck lenses for functors into 2Cat Sophie Libkind: Ontological commitments for boundariesNathaniel Virgo: Boundaries and Good Regulators Noting that I’m getting these to you before I’ve read them, so don’t expect me to be able to answer questions about them. Also worth noting, Nathaniel Virgo and Martin Biehl participated in this panel discussion at a later workshop in Kyoto, where we discussed some of the issues that came up at the boundaries workshop General structure from the workshop The general daily structure was scheduled to be “a talk and a breakout session before lunch, then a breakout session and a longer-form discussion after lunch,” though we weren’t particularly strict adherents. We found on Thursday (day 1 of 4) that the group wanted to continue discussing after Martin’s interesting talk and ending up doing more like “A talk and a discussion, followed by breakouts after lunch.”  Thursday breakouts were (1) a session on trying to work out a cocategorical formalism for specifying things via wholes in which they participate, rather than by composing together their parts and a session on, and (2) an idea to formalize / keep track of gliders as non-deterministic or possibilistic closed dynamical systems. [I don’t know what] Friday morning Nathaniel gave a talk on control theory that was so engaging we reached a consensus on pointing the rest of the workshop towards fleshing out adjacent ideas. One breakout the rest of the day Friday was focused on choosing formalisms for various words in Nathaniel’s talk, and resulted essentially in Sophie’s blog post.The other one ended up focusing on an idea of generalizing the law of requisite variety resulting in Martin’s second write-up. Saturday was primarily time for writing down outputs (learnings from last time: have a big block of time to support people in generating written artifacts), and also included a small breakout group on nondeterminism (that one led to Kevin’s blog post). Sunday morning some individuals started departing and we had some visitors, most activities involved chatting about a wide range of topics after an intense few days. Next steps We're still genuinely interested in boundaries and would like to see additional work happen. We're exploring funding options for work on these open problems, so email me (evan@atlascomputing.org) you would like to work on them.One possible next step is setting up a workshop adjacent to a conference which most of the Mathematical Boundaries Workshop participants are likely to attendInterestingly, it seems like the attendees were split somewhat [40]/[40]/[20]% between researchers who seem most likely to attend conferences exclusively in [applied category theory], [artificial life], and [cross-domain and para-academic conferences like this one], which I think makes this goal hard, but also makes the conversations at such an event particularly interesting. Evan’s personal takes Here’s some notes that are very specific to me. How it differed from the first one:Chris, Manuel, and I set out with the intent of bringing people together to build mathematical models of boundaries.  As a result, we ended up inviting more people with stronger math background, and people who we expected, based on prior interactions and training, to be inclined toward formalizations and reach for math as a tool.Where I could have done better:It wasn’t ex ante clear that much moderation would fall to me; there was some hope that davidad would be able to attend, but through no fault of his, he was unable.Believing that I knew enough math to even moderate this workshop was probably my greatest act of hubris since at least founding Atlas Computing.  I knew I didn’t have enough background knowledge to contribute, but I thought at least I would be able to make proposals that could be iterated on to reach a local equilibrium, but others were far better than I at identifying what the participants agreed was a better starting point.Huge thanks to Manuel Baltieri and Brendan Fong for taking the reins.What’s next from here:I’m not sure how involved in logistics, curation, or moderation of future boundaries workshops I’ll be.  I’ll likely advocate for their utility, and potentially support aspects like fundraising and translation, but I think I’d be happy if others took up the mantle. (To be fair, that’s what I said before the first and second workshops as well, though 😅)This could be particularly compelling as a public event – if someone would like to To the extent that davidad’s ARIA program is focused on building a github for science, but not a monorepo of science, I think it could be really valuable to have the following:If you have two “repositories” of interoperable / composable scientific theories, we should be able to identify boundaries and define boundary violations in each “repository” in a way that we’re confident that specifying a boundary violation in one scientific model (combination of scientific theories) is sufficient to confidently identify the same boundary violation in another scientific model.At this point, Manuel, Brendan, and I are discussing what it would look like to organize a continuation on this theme.  On the bright side, this starts highlighting and framing concrete problems that could be solved.  On the other hand, pursuing solutions to this specific problem could also significantly diverge from the original VAPE formulation from Critch’s «boundaries» formulation. Lastly, here are some random assorted brief insights that I liked: Some boundaries are (sets of) physical boundaries. Others are parameter regimes, and might be better called “margins” or “viability regimes”. These seem sufficiently distinct that they’re worth calling by different names. “Membranes” may work well for singling out the “physical” boundaries, which don’t have to actually be literally made out of matter but should demarcate an agent’s “body” from its environment, rather than the space of happy states for an agent from its space of sad states.Models could be defined as low-loss compressions of the environment and agents could be defined as models that scale in complexity with the scope of the universe unless you ascribe them some telos or desires. Feel free to comment here (or on the Atlas Computing blog), or reach out to via dm.
2024-05-12
https://www.lesswrong.com/posts/ueE7bBCvRMT2DWuyJ/zero-sum-defeats-nash-equilibrium
ueE7bBCvRMT2DWuyJ
Zero-Sum Defeats Nash Equilibrium
richard-henage
Bike locks are a good example. There are twenty bikes at the bike rack in the parking lot. Some bike riders decide to buy bike locks for their bikes to protect them from theft. But this just means that the thieves steal other, unprotected bikes from the rack. The problem wasn't solved, just moved around. The wary bike riders acted in self-interest in a zero-sum game and the result was net 0, but the wary bike riders came out ahead. As long as some bikes remain unprotected, the crime rate will remain the same. All of the bike riders would have to go to the effort of buying bike locks before any progress is made in solving the problem. This would be a Nash Equilibrium... However, if all the bike riders act in self-interest by selfishly protecting their own bikes at the expense of the others, they will all buy bike locks and the thieves will have no bikes to steal. So in some systems, a Zero-Sum environment actually defeats the Nash Equilibrium. The same concept would apply easily to cybersecurity: hackers will go after the easiest target, and if everyone tries to not be the easiest target the global level of cyber defense will rise and hacking will become less common. So if there's a seemingly Zero-Sum system or a seeming Nash Equilibrium, adding the other component might solve the problem. Can this be applied to other problems? AI safety: Nash Equilibrium[1]. It's in everyone's selfish interest to ignore safe practices and try to make their AI as powerful as possible. But if everyone does this, it's likely that at least one AI will be both powerful enough and malicious enough to cause serious damage and make everyone lose. Is there a way to add a Zero-Sum component to this? Maybe a task force could be created that punishes the least-safe AI systems or rewards the most-safe, based not on their absolute safety level but on their safety level relative to other systems. This system could be a fine/grant or it could just be a label: "World's safest AI system as voted by experts". This is a Zero-Sum component because no matter how hard developers try, the same total number of rewards and punishments will be doled out.Environmental damage: Nash Equilibrium. Since important parts of the environment are shared between countries and even the entire world, harm done to it is spread out among multiple countries. This means that a selfish country wouldn't place enough negative weight on harming the environment. This also means that countries don't have enough incentive to improve the environment since other countries will reap much of the total benefit. To make this a Zero-Sum system, countries with clean environments could develop tools to prevent polluted environments from mixing with them. If this were accomplished, it would further incentivize these countries to use clean practices. Countries with dirty environments but clean practices could implement these tools to cause their environments to self-clean without changing their other practices. And if these tools were implemented in enough places, the remaining countries would find that the results of their dirty practices were affecting themselves with a higher percentage[2], incentivizing them to use cleaner practices. By itself, developing barriers to the spread of pollution would have a net-0 effect because the total amount of pollution would remain the same. Yet it would cause the behavior of countries to break the Nash Equilibrium and benefit everyone. Some problems have already been solved or partially solved in this manner. War: Zero-Sum. The development of nuclear weapons was an example of introducing a Nash Equilibrium to a Zero-Sum system. Theoretically, countries were disincentivized from entering war because they feared nuclear annihilation, causing an era of peace.Economic Competition: Zero-Sum. If producers compete by only lowering prices, a poor Nash Equilibrium is reached: all producers are incentivized to use low costs to attract customers, yet this means that all producers suffer collectively. But by innovating to create better products while consuming fewer resources, producers turn the Zero-Sum system into a productive Nash Equilibrium. Some other thoughts: Perhaps a Zero-Sum system should really be called a Negative-Sum system. In most cases, actions can be taken that harm all parties, yet there is no way to benefit all parties past a certain threshold. Interestingly, even a beneficial Nash Equilibrium will often seem frustrating from inside. Technology companies have to work hard to stay on the bleeding edge and might think it would be better if innovation would stop all around. But as a whole, both consumers and producers benefit as the value-to-price ratio increases. Watch out for systems that receive complaints both about being Zero-Sum and being a Nash Equilibrium. In these systems, a solution might be easier than it seems. ^ Maybe this isn't a Nash Equilibrium after all, since it sounds more like a volatile, unstable system than an equilibrium. But it's a system where everyone's selfish interests lead to the collective misfortune of everyone. ^ Since most of the Earth's surface is covered by oceans, not countries, this part might not have as large of an effect as it might seem.
2024-05-08
https://www.lesswrong.com/posts/SkN4fErLksRh7HRpu/experience-switching-to-right-shoulder-round
SkN4fErLksRh7HRpu
Experience Switching to Right Shoulder Round
jkaufman
Contra dance has a figure where two people walk a small circle looking at each other. When it was introduced into contra in the 1970s as a borrowing from ECD, it had the name "gypsy", originally from Morris dancing, but many communities now use "right shoulder round". In many dance communities the debate over whether and how to switch functioned as a highly acrimonious culture war outlet. I really didn't want our group going through that, but talking publicly about how I didn't want that at the time would have been counterproductive. Now that it's been ~5y since switching to "right shoulder round" and ~10y from the first big online discussions, I think this is probably something I can share some history on. While I'm sure people had occasionally talked about being uncomfortable with the term, I think the first big online discussion started in January 2014 with a since-deleted post in a Facebook group: From: Elio Lewis To: Stuff Contra Dancers Say Date: 2014-01-20 9:48am Hey, contra dance callers! I totally just figured out the ideal substitution for that racist-named move! It should be called a "hippie". It sounds similar enough to the offensive term that people will still cue off of it (though I'd note my substitution during the walkthrough), it appeals to a sense of silliness, and it's unlikely to offend anyone. If you like the idea, please spread it around! The discussion was long and heated, properties it shared with later iterations on other platforms (ex: October 2015, January 2016, April 2016, etc on Shared Weight). There were two main questions, the same ones as in the role terms debate: Should we switch away from the traditional term? If we do switch, what should we switch to? There were a lot of candidate terms, with a variety of issues, and "right shoulder round" was quite a late addition. The first place I find it written down is March 2018, four years into the debates. (That thread also gives a good flavor of how these discussions tended to go, with a lot of frustration, anger, and people talking past each other.) Looking back at emails, it took about six months for the term to go from unknown to the favorite. The approach our dance organization took was mostly waiting: I knew that this was something that had the potential to be divisive and that there wasn't a widely supported replacement, and I suspect the rest of the board felt similarly. When callers asked us about terms, I'd write back things like (November 2016): BIDA doesn't have an official position on it. If you want to avoid using "gypsy" in your calling that is fine with us, though most callers lately have been using it. Then, in September 2018, I wrote to the BIDA board: As you might know, there's been a lot of discussion, mostly online, in the contra dance world about moving away from using the term "gypsy". I've been to many events where callers used other terms instead, like "right shoulder round", and among other things these are, being descriptive, easier for newcomers to understand. I don't think this is something people have asked for at BIDA, though I'm kind of surprised that they haven't; more dances have stopped using "gypsy" than have switched to "larks/ravens". My expectation is there are two ways this could go: We continue not having an official position. An increasing number of callers stop using "gypsy" on their own, but many keep using it. At some point a group of dancers writes something publicly about how bad it is that we're still using "gypsy" in 2019 or something and there are nasty fb arguments where people feel less like a community. This is enough of a tinderbox that I've been avoiding it in blogging (example: polling people on which alternative they prefer) which is a high bar for me... We start encouraging callers to use "right shoulder round" if they're comfortable calling that way. After trying this for a while, if it still looks like "gypsy" is going away, we can switch away from it entirely. As you might guess, I think we should do #2. This would be something like changing bidadance.org/caller-welcome to have, in the "terms" section, something like: If you're comfortable using "right shoulder round" or other descriptive terms instead of "gypsy" we encourage you to call that way. What do you all think? (I'm planning to ask all the Beantown Stomp callers not to use "gypsy") We had a long discussion, but were mostly on the same page, and did end up updating the website and what we wrote to callers. I don't remember getting any feedback from the dancers. This felt like it was going well, so in February 2019 I brought it up again: It looks like the dance community has coalesced on "right shoulder round" as a replacement for "gypsy", and I think it's time for us to switch. It flows well, callers are comfortable with it, and as a descriptive term it's easier for new dancers. Our current caller guidance on "gypsy" is: If you're comfortable using "right shoulder round" instead of "gypsy" we encourage you to call that way. I propose we change this to: We use "right shoulder round" instead of "gypsy". For example, "right shoulder round your neighbor" or "walk around your neighbor by the right shoulder". NEFFA already doesn't allow Gypsy and their guidance is: As for the figure formerly known as Gypsy, we use Walk Around, as in, "Right Shoulder Walk Around your Neighbor". You may add, "Maintaining eye contact, if that is comfortable for you." Please do not use other terminology. Any objections? Happy to talk about this at our next meeting if anyone would like an in person conversation on this. (If we can move away from "gypsy" without it being a big thing I think that's way better for our community. Unlike with Larks/Ravens and bathrooms this is the sort of change that we don't need to poll on our get buy-in. We've already had many callers using "right shoulder round" and dancers mostly don't notice either way. We haven't yet had a big blowup over this, but other dances have and we definitely could.) We again had a discussion, mostly around whether there were other terms that would be better than "right shoulder round" but did decide to make this change. Again we didn't hear anything, so after a few months I wrote a post recommending other organizations consider switching. There was again some discussion of whether other terms would be better, and some pushback to the idea of switching (from dancers in other communities) but it was reasonably uncontroversial. Looking back, I'm reasonably happy with the relatively low-key way local our community handled this. Lots of discussion on potential terms, mostly online and national; trying out alternative terms until we settled on something that worked; gradually increasing the fraction of our dances where we used the new terms; not making a big deal out of the switch. I recognize I'm risking undoing a bit of this by getting into this now, but I think it's been long enough and we're sufficiently satisfied with where we are that it's ok. I hope you don't prove me wrong on this! Comment via: facebook, mastodon
2024-05-08
https://www.lesswrong.com/posts/e9soWGZ9ceEbRKjv2/how-do-top-ai-labs-vet-architecture-algorithm-changes
e9soWGZ9ceEbRKjv2
How do top AI labs vet architecture/algorithm changes?
ghostwheel
How do labs working at or near the frontier assess major architecture and/or algorithm changes before committing huge compute resources to try them out? For example, how do they assess stability and sample efficiency without having to do full-scale runs?
2024-05-08
https://www.lesswrong.com/posts/BPpeBH8brSCRvZajs/how-to-be-an-amateur-polyglot
BPpeBH8brSCRvZajs
How to be an amateur polyglot
arisalexis
Setting the stage Being a polyglot is a problem of definition first. Who can be described as a polyglot? At what level do you actually “speak” the given language? Some sources cite that polyglot means speaking more than 4 languages, others 6. My take is it doesn’t matter. I am more interested in the definition of when you speak the language. If you can greet and order a coffee in 20 languages do you actually speak them? I don’t think so. Do you need to present a scientific document or write a newspaper worthy article to be considered? That’s too much. I think the best definition would be that you can go out with a group of native speakers, understand what they are saying and participate in the discussion that would range from everyday stuff to maybe work related stuff and not switching too often to English nor using google translate. It’s ok to pause and maybe ask for a specific word or ask the group if your message got across. This is what I am aiming for when I study a specific language. Why learn a foreign language when soon we will have AI auto-translate from our glasses and other wearables? This is a valid question for work related purposes but socially it’s not. You can never be interacting with glasses talking in another language while having dinner with friends nor at a date for example. The small things that make you part of the culture are hidden in the language. The respect and the motivation to blend in is irreplaceable. For reference here are the languages I speak at approximate levels: Greek - nativeEnglish - proficient (C2)Spanish - high level (C1) active learningFrench - medium level (B2) active learningItalian - coffee+ level (B1) active learningDutch - survival level (A2) in hibernation Get started Firstly, I think the first foreign language you learn could be  taught in a formal way with an experienced teacher. That will teach you the way to structure your thought process and learn how to learn efficiently. It’s common in Europe and non-English speaking countries to learn a second language at school. This guide is not about how to learn formally though. It’s about how to take up new foreign languages without a *permanent teacher (I will expand later). One of the most important things when learning a language is motivation. You either love the culture, the language itself (how it sounds and reads), a loved one or you are moving there or doing a long term stay. If you hate the language, it is mandatory that you learn it but you’d rather not then none of this will work. I found that to be the case with Dutch where while I did like the culture, I found the language pretty bad sounding (almost ridiculous hhh-hhh sounds) - sorry if you are Dutch. That resulted in me learning the minimum in 7 years while I picked up Italian in a summer. Now that you found your calling let’s proceed. Methods & Tools I wholeheartedly recommend Memrise as an app for learning. It’s vastly better than Duolingo and much less repetitive and boring. It reminds you of words you have forgotten at regular intervals utilizing the spaced repetition learning techniques. It’s much more focused in everyday interactions and their unique selling point is videos of random people. It’s genius that they are asking native speakers on the street to pronounce words and phrases for you. Having a visual reference makes it much more engaging and sticks. In my experience, trying to learn a new word takes maybe 10 fictional time units and if I am in a real conversation and someone corrects me, it takes just that time and I will forever remember the face of the person correcting me and the place. In a smaller degree that’s how memrise works. But we need to be a bit more structured. After learning everyday phrases and words some grammar is in order. You can try to pick up a beginner grammar book and focus on very specific language structures and go easy on other ones. My advice for a new language that you want to pick up fast is: Learn only 4 tenses. Past for one time events, past continuous, present, future. These are enough to go by and explain yourself.Skip conditionals and indirect speech and learn only a few important verbs in imperative form so you can give an order if need be.If you are going there physically, learn words of everyday use such as cutlery, toilet things, city block stuff like the word fence, garbage can etc. If not, then focus on words that you will actually use such as in education,work,vacation as these are the discussions you will mostly have. (fun story, I passed the C2 exam and then I realized I didn’t remember the word faucet when I went to the UK to visit a friend). Forget complicated spelling and especially accents in French if it’s not entirely necessary. There is no need to learn if a word takes a ` or ‘ if you know how to pronounce it correctly. (Edited because I wrote intonation instead of spelling in the first draft).Don’t spend too much time memorizing if a noun is female or male in languages that are not easy to tell from the noun itself. Everyone will understand you in any case.When you look up a word in one language, do it for all other languages too. The mind works like a database and each row has columns for each language you speak. Fill that database up! My favorite tools are: MemriseLexilize app (they have a spaced repetition technique only for words and a realtime dictionary I add words to when I watch movies or read articles. It also has lists of common groups of words for each language so you can focus on what I said earlier.Internet website for quickly searching verb conjugation. Looking at the charts frequently will make you remember them. I usually repeat some charts when I read them mentally.Preply* for hiring an occasional teacher by the hour online.ChatGPT** *A note about hiring teachers by the hour. I explain to them what exactly I want and don’t want. In this specific case I want to practice speaking in different scenarios with a native speaker. I tell them to correct me. I usually send them pictures of a situation that we can expand on or photos of an event and then I start describing everything there is in the photo, the people, their fictional lives etc. What not to do: there is no value in paying a teacher to be your google translate. You can look up words later on your own. But you can ask the teacher for differences in words or how exactly to use them in sentences, there is a lot of nuance in synonyms. There is also no value in a teacher repeating the present tense verb chart to you. **How I use chatGPT. I use it to ask grammar questions or nuances between the use of words such as what I did with the live teacher. But the most fun is either starting up a chat with it in the target language about life, poetry and robots or making it create language games for me. Pro tip: try the following prompt "create a repeating game for me in Italian where I need to fill in the blanks with conjugated verbs in one of the following tenses: past continuous, present, simple future. After I give you my answer tell me if it’s right or wrong and if the latter, explain to me why. Then give me another exercise. Keep repeating the game forever". It’s not my place or job to think how will the education industry cope with a teacher that costs $20/month but makes you wonder. Continuous improvement TV After I have a basic grasp of the language I try to have daily exposure to it. For example Netflix is awesome for this. I start watching my favorite series and gradually make it more difficult for me. Level 1 - Series in English or native language and subtitles in target language OR  Series in target language and subtitles in English or native language Level 2 - Series in target language and subtitles in target language Level 3 - Series in target language without subtitles Books If you reach a sufficient level in the target language you can also start reading books in it. What I do in order not to get bored with reading books I never intended to, I pick a book that I wanted to read that requires a medium level of understanding (reading Nietsche while learning German would be quite bold). I recommend reading the book using a Kindle so you can achieve 2 things: a) press on a word and immediately search the dictionary or a whole sentence and see the translation and b) when you end the book have a place that you can repeat and see all the unknown words you had. Podcasts and news I listen to the news in different languages every day. Sometimes I put hour long podcasts to listen to although if the subject is boring you will find your mind wandering. I also switch Google news to other languages too but you may read too many local news that you don't care about. Friends Try to start messaging your friends that are native speakers (better to avoid at first other friends that are trying to learn the language because you may repeat their mistakes). Try to say a few words or kick start a conversation in a group with the native speaker. They will be happy to respond back unless they are Spanish(!) and they will reply back in English. You need to specifically tell Spaniards that you want to practice their language. I am kidding of course but I have multiple real-world examples of me having a chat going like: Hola como estas? (native speaker answers) I’m fine, how about you! (me) Todo bien, vamos hoy por la playa? (native speaker) Sure what time? (me) sobre las 7? (native speaker) OK great see you at the beach! (me irritated) amigo, entiendes que estoy hablando en tu idioma? (native speaker) yes but I also want to practice my English! Final wisdom There is always what I call an inflection point where the complicated pasta in your head starts to make sense and have structure. You finally see something behind the fog. This is the time that you feel like “I got this, I just have to keep going”. This is a good time to start having casual chats. It’s a mortal sin to wait until you are proficient in a language to start chatting because you are too shy or can’t accept being wrong or being corrected. You can’t learn a language like that, you need to dive deep now! I usually remember my breakthrough chat in each language. It’s the time where I had a chat with someone for a long time and didn’t have to switch back to another language which usually happens around the 10 minute mark when you cannot support the whole conversation in the target language. In this breakthrough chat I may occasionally ask for a word or even look it up if it’s important but I don’t need to say whole sentences in another language. I remember the person, the place and many other details usually in these chats because your mind is working in overdrive and your neurons are firing up with happy excitement. Don’t drink too much in this chat. One glass of wine augments the chat, 3 glasses destroy it! Happy learning. I’m always up for a chat @arisalexiseco
2024-05-08
https://www.lesswrong.com/posts/PLoz68JbTkDufeYSG/dating-roundup-3-third-time-s-the-charm
PLoz68JbTkDufeYSG
Dating Roundup #3: Third Time’s the Charm
Zvi
The first speculated on why you’re still single. We failed to settle the issue. A lot of you were indeed still single. So the debate continues. The second gave more potential reasons, starting with the suspicion that you are not even trying, and also many ways you are likely trying wrong. The definition of insanity is trying the same thing over again expecting different results. Another definition of insanity is dating in 2024. Can’t quit now. You’re Single Because Dating Apps Keep Getting Worse A guide to taking the perfect dating app photo. This area of your life is important, so if you intend to take dating apps seriously then you should take photo optimization seriously, and of course you can then also use the photos for other things. I love the ‘possibly’ evil here. Misha Gurevich: possibly evil idea: Dating app that trawls social media and websites and creates a database of individuals regardless of if they opt in or not, including as many photos and contact information as can be found. Obviously this would be kind of a privacy violation and a lot of people would hate it. but I imagine a solid subset of singles who are lonely but HATE the app experience would be grateful to be found this way. No big deal, all we are doing is taking all the data about private citizens on the web and presenting it to any stranger who wants it in easy form as if you might want to date them. Or stalk them. Or do anything else, really. And you thought AI training data was getting out of hand before. All right, so let’s consider the good, or at least not obviously evil, version of this. There is no need to fill out an intentional profile, or engage in specific actions, other than opting in. We gather all the information off the public web. We use AI to amalgamate all the data, assemble in-depth profiles and models of all the people. If it thinks there is a plausible match, then it sets it up. Since we are in danger of getting high on the creepiness meter, let’s say the woman gets to select who gets contacted first, then if both want to match in succession you put them in contact. Ideally you’d also use AI to facilitate in various other ways, let people say what they actually want in natural language, let the AI ask follow-up questions to find potential matches or do checks first (e.g. ‘I would say yes if you can confirm that he…’) and so on. There is definitely not enough deep work being done trying to overturn the system. Bumble gives up its one weird trick, goes back to men messaging first. Melissa Chen: The evolution of Bumble: – Sick of men inboxing women (“the patriarchy is so creepy and icky!”) – Starts dating app to reverse the natural order (women now make the first move! So empowering! So brave & stunning!) – Women complain it’s exhausting – Reinstate the natural law Hardcore Siege: It’s such ridiculous headline. I have never gotten an opener on Bumble besides “hey”, women never actually work go start a conversation or have a good opener, they’re literally just re-approving the ability of the man to start the conversation. Outa: Anyone that’s used it would tell you that 99% of the time they would just leave a “hey” or “.” Casey Handmer: AFAIK no one has yet made a dating app where the cost of sending messages is increased if you’re a creep. This would be technologically easy to do, and would let the market solve the problem. Several interesting things here. Many ‘women never actually initiated the conversation’ responses. Women say ‘hey’ to bypass the requirement almost all the time. That is not obviously useless as a secondary approval, but it presumably is not worth the bother. This was among women who self-selected into the app with mandatory female openers, so yeah, women really really do not want to open. If you are willing to open for real and put effort into it, that is a huge advantage. Never open with ‘hey’ under other circumstances, but this makes it tough to be that upset with guys who do open with ‘hey.’ We see the shoe on the other foot. Bumble had something slightly unique about it. Now it doesn’t. It seems that the hill climb wants what it wants, and any service that tries a variant inevitably ends up back at the same old swipe. Casey’s alternative suggestion requires telling creeps exactly how creepy the algorithm thinks they are, and also charging for messages, so it presumably is a non-starter. We keep trying various versions of ‘what if we used adjusting prices to correct for externalities’ to solve problems, because that is how problems get solved, and it keeps failing because people do not like it. But yes, using a feedback system would totally work on a mechanical level if people were ok with it. Bumble does still have at least one interesting feature, which is that you can potentially see who passed on your profile. This is huge. You can look for correlations and patterns. Even if all you knew was how many views you got and what percentage swiped how, that is a big game for being able to make decisions and improve. You’re Single Because Dating Apps Keep Getting Worse Long suffering dating app user Shoshana Weismann explains how a proposed Colorado bill, and other similar bills, would make this horrible experience even worse. It would require dating apps to file an annual report listing all misconduct reports, which would then become public. As in, her model of how this law would work is that if someone complains to the dating app about you, that would go in the public record. I am skeptical that is what would actually happen based on my quick reading, but none of the alternative interpretations are good, they are merely less bad. I assume Governor Polis would never sign this either way. Another modest app proposal. Justine Moore: It would be fun to have a dating app where you chat for a while, set up a date, and when you show up IRL you find out if it’s a real person or an AI bf/gf. And then you have to decide if you move forward with the relationship. Also could be the next big reality show?? mb: Isn’t this the plotline of every catfish episode Justine Moore: yes but like everything, it needs to be reinvented with ~AI~ Cassette AI: “Dude I just matched with a model” “No way” “Yeah large language” Love it. Sure, why not. It would not shock me if the good match rate was substantially higher, because everyone is forced to put in an effort to avoid embarrassment or being thought of as an AI, so even with 50% of matches being fake you still might come out ahead. Also, who is to say you need 50% AIs in order to keep people on their toes? You’re Single Because Everyone is Too Superficial Roko explains that while (in his view) women ultimately do not care that much about looks, dating apps start off filtering for looks, which are fast to check and hard to fake, so looks take on massively outsized importance on apps. If you have other things to offer, you get filtered out before you can provide evidence of that, whether that is wealth, intelligence, sense of humor or anything else. I would add, just like real life. This is not a new problem. Nor is it a one-way issue. Even if men ultimately care about looks a lot more than women, their first impressions will care about looks even more. This means everyone gets oversized reward for optimizing physical appearance to the extent they can modify this, along with other superficial app profile components, and get less return for actually being of value. This is not good for civilization. As Roko says it means the market cannot clear. It also means that highly attractive men are overvalued, the same way tall men are overvalued, whereas less attractive men are undervalued, even after adjusting for true and long term preference, doubling down on what I noted last time. A great universal strategy is to look for a differential between what you value and what the market values. Ideally, you would train two AIs. One AI evaluates potential matches based on typical market preferences, ideally via revealed preference data, with a large emphasis on looks. Another AI evaluates potential matches based on your own quirky preferences. You are ideally looking for people who score well on your metric and relatively less well on the market metric. If someone is super high on both you can and should go for it but it will get rough fast, and you will always have to worry about potential rivals. Instead, focus on investigating to find an especially good match. Also note that this is not only about looks, and includes attributes over which you have more control. As noted last time, Rob Henderson finds that women in their twenties swipe right (‘like’) twice as often for a man with a master’s degree over a bachelor’s degree. A masters is a lot easier to get than a PhD, and a lot less valuable, and this is compared to a bachelor’s, so the returns to all education look high even if every other form of return is worth nothing. You’re Single Because You Refuse to Shamefully Falsify Your Politics I would hope most of us want it to be one way. To what extent is it the other way? Matthew Yglesias: My advice to the young men out there is [identifying as Republican] is going to make it a lot harder to find girls who want to go out with you. Mike Solana: Man it’s really bad when all you have left is “vote for democrats or the craziest women alive won’t sleep with you.” Mason: Matt completely misunderstands women here. They won’t admit it on pain of death, but the great majority of young liberal women would absolutely swoon for a man 2-8 years their senior who teases them about their politics while opening doors and paying for dinner. … I promise you, your problem isn’t who you voted for. It’s who you are and how you behave. Sorry. If honestly identifying your political beliefs would make someone not want to date you, then you presumably do not want to date them. This is even more true if those people are doing this as part of a strategy to force falsification of your beliefs. I thought of four potential arguments the other way. If you are looking for something highly casual and short term, then you might not care about such questions. I would first respond that even in the short term hiding yourself and what you believe can get expensive, but not always and not for everyone. I would then give the real objection. One should essentially never care only about the pure short term. The possibility of a potential long term outcome is a lot of what makes things exciting and fun, and also has much of the value. At that age you need reps and you need to know what it is like out there, and getting off the ground is hard, so you should suck it up as needed at first. Perhaps this political preference is a superficial filter, like looks on dating apps. It is not that she does not care, it is that it is not actually important to her. So you do not have to falsify your beliefs so much as dodge the question and avoid emphasis, until you connect to each other as people. There is some of this. Certainly I can respect a position of the form ‘I don’t mind dating a [Republican / Democrat / Libertarian] but I do mind dating one who won’t shut up about it.’ You want to maintain strong perception of market value and social proof. I don’t love it, and you are making the problem worse and defecting, but I understand it. There was a popularly distributed claim recently that the gender divide is instead increasing especially among young people (source), which was disputed. Murdoch cited several graphs, including this one, note that even the max on the first graph here is still about 12%: The men, this data claims, are getting more conservative, in many ways even more conservative than older men. The women are getting more left-wing. Neither finds the other’s politics alluring, even more than usual. Paul Graham speculated this is due to a lack of male-female interaction, to which Nevin Climenhaga responded one could test this by looking at the impact of what siblings people have, and Scott Alexander decided to check his data. No effect: Science Banana points out that the original finding perhaps does not replicate? Science Banana: I haven’t been keeping track but this is at least the third dataset I’ve seen failing to replicate the finding for the US. Ryan Burge: The finding that young women are becoming a lot more liberal [from the above graph] while young men are becoming a lot more conservative DOES NOT REPLICATE in the Cooperative Election Study. In fact, the two lines have run in almost perfect parallel for the last 15 years. Skeptic Research Center Team: Our snapshot of five generations of the American public indicates that the gap between men and women is smaller in younger generations because men and women are both becoming more liberal (see chart on the left below). Importantly, our data also indicate that a growing percentage of Americans are identifying as moderate (see chart on the right below). Regardless of of size of the gap, no one questions there is a substantial gap. Since time began, the argument ‘modify who you are and what you believe and stand for and falsify your preferences because it will get you laid’ has been strong. Normally it is given a fig leaf of some kind, so I do appreciate the refreshing honesty on display. Yes, it is rather horrifying, but given the choices available I’ll take the explicit version. Falsifying your preferences in such ways too aggressively creates negative selection. You also have to walk tricky paths, since full embrace of the explicit doctrines will imply many actions that cripple your dating opportunities and experiences. And once you start playing such games, the rabbit hole never really ends. So in general, I think this is very much not the way. Going too hard in the other direction is also not the way. Teasing is one thing, but one needs to be able to get along, up to the point where there are those with whom you would not want to get along. There are certain things that, if said out loud especially too early, will be red flags and dealbreakers, and cripple your prospects. So do not say those things in such ways, keep it to yourself, even if the view is held by a lot of others as well. Pick your battles, then pick less battles than that. To some extent it is a skill issue where you can learn how to do it right. To some extent it isn’t. You hopefully have much better things to talk about than politics anyway. If someone is all about the politics, they are a bad pick, even if your politics are aligned. An interesting perspective from Scott Alexander for Valentine’s Day: Love is the one area of life where we have decided to entrust everything to the free market, so long everyone involved consents, and decided not to force anyone to do anything. Somehow, despite doing less of this in many other places, we continue to do it with love and sex and dating. So we should celebrate this oasis while we still have it. You Are Single Because You Do Not Employ Good Strategy Should you pull out dozens of slides and give a fifteen minute presentation explaining the movie Tenet? I mean, in general, no, that would not be the greatest idea. You need to be very willing to abort mission if it is not working. But, if you do anything that is you, in a personal and friendly and fun way, that they are vibing with, that can work. If you make an effort, you too can demonstrate value. This is no mere ramble. Should you buy yourself drinks? Rosey: This is innovation I’m sorry you can’t see that. It is almost a free action if you wanted the drink anyway. Waste not, want not. Should you go the extra mile? Remember, if she wanted to, she would. In this case, see a cute guy at the grocery store, get his name by spying the credit card, Google him to confirm he is single, find his mother, join her book club, befriend her, casually mention that she is single, have the mom do a setup. So which is it? F House Bunny: Let’s not sugar coat this. She’s a stalker. A massive one. Too dumb to realise what she’s admitting. Bennett’s Phylactery: This is actually normal & good woman behavior. I say it is mostly the second one. Certainly I would have, at all points in my life, been fine with this type of procedure. Nowadays she would see I am married and I would never know. Back in the day, I now have an option, and one that has made real investment up front. So yeah, sounds great. Is it creepy? Well, sure, it’s a little creepy, and would be too creepy if you did this full set of actions fully gender flipped. But the right amount of creepy is not zero, and gender flipping matters. Play to win the game. Carpe decaf. About one in four pulled this off at least once. That is not bad at all. Here’s some good negative selection? Scott Lincicome: This list raises far more questions than it answers. Far more. You Are Single Because You Don’t Know How to Flirt Manifold Love points out that ‘wait until you have indisputable evidence of her interest to even flirt’ is not actually the safe play. It means you are unwilling or unable to calibrate your response to the situation and play the mutual escalation dance, that you show you lack skill and are afraid and think proceeding would not be safe (so why should she disagree with your assessment?). And it means she can’t get a read on you. Maeby: Oh nothing too fancy just a nitpick about wording! – if u tell an average guy “women need to know they’re safe,” the guy will think “ok got it be as nonviolent and asexual as possible” So a better way to say it is “women need to know they’re EMOTIONALLY safe/attuned to” I would not quite say ABF (Always Be Flirting) but yeah, outside of particular contexts where you need to avoid it, even if you have zero intention of ever going anywhere with it, basically always be flirting in a highly calibrated to the situation way? It’s the way to Git Gud and makes life more interesting and allows good things to happen. Another good note, mostly so you can generalize this: Manifold Love: pro-tip: if a woman measures her hand against yours, this is almost always flirtation. The Manifold Love Twitter account in general has a steady stream of advice and coverage of related issues. At the time I checked it was largely amounting to ‘get out there, flirt, date, fail, pay attention to the specific person in front of you,’ which of course is very good advice, the archives seem to move around a bunch. It is mostly written by GoblinOdds, it seems, where you get more of a person figuring things out attitude. Both seem pretty good if you want that product. Dan Kras goes on experiments in speed dating and AI matchmaking. About 57% of participants got at least one two-way match from the events he hosted, which sounds like an excellent use of time, and average rating out of 5 was about 4.4. Good product. Unfortunately, the events consistently lost money. The AI matchmaking was based on the principle that there are some very good predictors of compatibility, especially if people tell you a bunch of things, and then you can charge for good matches since they are worth a lot. I’m not sure how much it even counts as AI. As is usually the case, it failed because it is a new dating app, and it did not have critical mass of users to start making matches. How should you think about how often to ask? Uncatherio: I thought you guys-interested-in-women would find it helpful to know – around here, available ladies are 3x more likely to prefer being approached more rather than less, so additional advances on the margin are likely welcome! Among women available to men, the preference here is over 4x, although of course quality always matters, ‘hello human resources’ and all that. However, if you include the women ‘not available to men’ in the group, and assume for them it is false, this jumps closer to 1:1, so it helps a lot to do some research. Either way it is also not the correct question. The right question, in terms of whether you are providing value by asking, is: How big is the upside for the women who want to be asked more, versus the downside for those who want to be asked less? This is a question with typically much more upside than downside, which is why women want to be asked more even though they will still (presumably) turn down the majority of the additional offers, but with exceptions where the downside is large. So the main thing to do is guard against the big downsides. That principle can extend. So, for example, this would be a ‘big downside, don’t do that’ situation: You Are Single Because You Don’t Date Your Married Boss In general I am pro-flirting, and I am pro-asking, and pro carpe diem and all that. But of course there are obvious exceptions, so yes, new candidate for worst advice ever has dropped, he insists he is sincere, and everyone had fun with it for a few days: Simon Ohler: someone on the vibecamp forum asked: “I have a raging crush on my boss who is married and I’m EXHAUSTED by this and want it to end. How to get over a crush?” I enjoyed giving an answer and here it is: Hi. This is a tough one. See it like this: A crush is a package that you carry, and it has a recipient. For some reason you have it, and you have to carry it, and it doesn’t really go away, until you post it. Until then, it will exhaust you. In my opinion and experience, the best way to get over a crush is to post the package. This means, first and foremost, to speak the truth about it. Ideally to the person who it concerns. As you described, this is a bit risky because this is your boss. But I think a very healthy thing to do is to not rule out speaking to them outright. Because once you begin to plan how to have this conversation, maybe how to stack a bunch of caveats before the reveal, how to prepare them to receive this unusual news – as you plan this, you will already give an energetic outlet to the crush, and the delivery process for the package starts. Honesty is the best therapy. Crushes happen. Most crushes are not really about the other person. They are about you. They are a projection. Hence the name, and why they can RAGE. Maybe it’s repressed eros in you, that is coming out sideways, by taking your boss into its grip. Maybe it’s something in you telling you that you should get out of this job, and a good way would be to tank your relationship with your boss. Who knows? Maybe you know? Surely your body knows? Your boss will have had a crush before. If you make clear that you just need to get this off of your chest, you might be able to move through this and see another day, without your boss feeling too horrible. Maybe they will even support you. It’s certainly easier to deal with a truth that’s on the table, than to deal with the shifty behavior of someone who is hiding something. Maybe as you plan, you realistically decide that it would indeed endanger your livelihood too much, if you told your boss. In that case, you need to put your eros to work elsewhere. How to do that is another topic. One last advice I can give: Talk to your crush. And in this case, I don’t mean your boss, but your feeling. What does it want? What’s in the package? It’s clearly not a reasonable reproductive reciprocated strategy. So what is it? Talk to it until it reveals itself and what it wants, really. Maybe that way you and your boss can dodge an uncomfortable conversation. Many blessings S Yeah, under no circumstances do you tell your married boss that you have a crush on them. In fact I’m going to go ahead and say that you almost always only need one of (boss, married) for this to apply. Both is overkill. You Are Single Because You Are Afraid to Fail People say this a lot: And yes, those people are usually right. But do not give up all hope. Sometimes they are wrong. My first podcast appearance went well. My first formal speech won a school prize. My first Magic tournament was a victory. My first post was not even intended to be a post, and people liked it anyway. And without getting into details, my first [something else that importantly and especially is supposed to never go well the first time] was a roaring success. Practice makes perfect. It is not exclusive. Hard work. Clean living. Beginner’s luck! You Are Single Because No One Likes You On Dates If everyone you meet says ‘it isn’t working…’ Amdr3jH: Good friend is mid 20s. In shape, gets over 5 million impressions per month, and roon likes on average 3 of his tweets per week + all his replies. He gets consistently ghosted, ignored for days, or is told after a date or two that “this isn’t working.” Modern women are broken. Modern people and life are broken in all sorts of ways. But as always, you are the common denominator. If your dates never work then that means the problem is you. Yes, he checks some important boxes, if the story is true. There are any number of things that he could still be doing importantly wrong. One of them, presumably, is that I am guessing he lives in the San Francisco Bay Area, which stacks the deck against him. What he is offering is oversupplied there relative to demand. There are some other obvious suspects here as well. Similarly: Shoshana Weissmann: If you monetize my true dating stories all I ask for is a cut. Definitely Not Advice (@stillnotadvice): Good friend is mid 30’s. Pretty face, no kids but wants a family, makes over $110k/year, has a huge property in the middle of nowhere. She consistently gets ghosted, ignored for days, or is told after a date or two that “this isn’t working.” Modern men are broken. Alan: Good friend is mid 20s. In shape, makes over $300k/year, has a condo overlooking a great downtown. He gets consistently ghosted, ignored for days, or is told after a date or two that “this isn’t working.” Modern women are broken. Charles Cooke: Maybe someone should introduce those two? Dating is a mix of positive and negative selection. If you are consistently failing at the ‘get a date at all’ stage, then that is tough. The modern world can make this difficult. But if you can’t find a way to get at least some dates through the apps, and you live in a populous region and have reasonable looks and a job and no obvious big red flags, that should be fixable. If you are often getting to the first date, then failing consistently, I am positive you are doing something wrong. There is something you do not know, a skill you lack. You’re Single Because You Are Bad at Sex It matters a lot, no matter what anyone says. Aella has an extensive ‘how to be good at sex’ guide behind a paywall, link goes to part 5 where she gives enough free content to be interesting on its own, as well as quite the introduction. Not evaluated or endorsed by me. Then this thread introduces part 6, which is about what things women want versus what men think they want. It includes this graph, which I include because this is a great way to label what looks a lot like a random distribution. Sasha Chapin strongly endorses the series, offers an important note. Sasha Chapin: so background, i was a Canadian leftist who was successfully persuaded by a particular niche brand of feminism that masculinity is bad I would’ve thought it creepy to integrate my sexuality and my walking-around self so that they were smoothly connected rather than mutually unintelligible, and I think that’s the most important part of this series the funny thing is, this actually makes you less creepy. A short public service on the ways in which size matters. You’re Single Because You’re Not Hot Aella scientifically tests the 1-10 hotness scale, using AI-generated faces to avoid the ethical issue of rating real people. The most striking thing about the original 4Chan chart is the description assuming dramatic described correlation between features. There are five or six distinct features described as if they always line up, when they most obviously don’t. The chart makes sense exactly because it is looking almost entirely at faces. The other thing that stands out is the idea that 10s only exist in the context of your particular preference. I don’t see why you would frame it that way. The new test mostly tells us that facial attractiveness ratings are what you would expect, and there is reasonably good consensus about it. You’re Single Because You Don’t Know What People Care About It is true. Not fully or all the time, but it is mostly true. Brittany Venti: One of the biggest lies told about relationships was that men want lingerie. Imagine the disappointment you have playing dress up games your whole life, only to grow up and find out that men literally don’t care about lingerie and that it’s mostly for the woman to feel cute. Aella: This was one of the biggest misconceptions I had going into sex work. I’d put a lot of effort into dressing like what i thought a ‘sexy woman’ looked like – lacy lingerie, red lipstick, etc. – but none of that got men as hard as a $5 short skirt and tight t-shirt with no bra. In general, the most successful outfit for seducing men is one that is a plausibly-accidentally-accessible version of clothes you might already be wearing. Jeans that slip too low when you bend over, nipples visible through casual shirt, etc. I think of it as there being a thing called Fashion, which is about Glamour and Impressiveness and Status and such. Then there is a different thing called Sexy. Fashion is abstract and elegant and rivalrous. Sexy is practical and lived-in and non-rivalrous. There is a correlation between the two, but it is highly imperfect. Men like both, especially when others are watching, but mostly what men care about is Sexy. You’re Single Because You Are Inappropriate A lot of people think quite a lot of things are inappropriate. Different worlds. Helaine Olen: What’s really fascinating about this is that it’s women not men who are more likely to say this stuff is inappropriate. If accurate, a quarter of people think it is not okay to have a private work meeting with someone of the wrong gender? Over a third of women are ruling out a car ride? I mean, wow. Inappropriate is not the same as not allowed, but still, wow. I presume that it isn’t actually that high and something about the framing warped responses, but even that shouldn’t be possible. If someone actually does have a real problem with either of those in practice, that seems like a straight-up dealbreaker. For the meals, if this is ‘alone in your house’ then I could potentially see it, but if it applies to a restaurant it’s straight up nuts. You’re Single Because of Your Pet This statistic was rather stunning. Derek Thompson: One of the more curious trends to jump out of the data is that many Americans have traded people for pets in our social time. The average time that Americans spend with their pets has roughly doubled in the past 20 years —both because more people have adopted pets and because they spend more time with them. In 2003, the typical female pet owner spent much more time socializing with humans than playing with her cat or dog. By 2022, this flipped, and the average woman with a pet now spends more time “actively engaged” with her pet than she spends hanging out face-to-face with fellow humans on any given day. I realize that other people like cats and dogs a lot more than I do, and get things out of them that I do not. I still feel confident in saying: Do not be the person who does this. This is a not a good idea. If you are a wilderness tracker out with your hunting dog, I mean fine, that’s a choice. But for an ordinary pet? Please, no, it will not go well. You’re Single Because You Won’t Spend Money A man who is not cautious with his money will soon cease to have any. Yet a man who is visibly cautious with money on a date will cease to have any of those, either. Selena: there are few things worse than dating a man who is cautious with his money, If you notice this hesitancy on a date then just end it, his potential for greatness is non-existent. Women intuitively understand that frugality is a psychiatric ailment. DSM V criteria, medical fact. You know this is true because the most frugal person you know never truly excels, they never get rich, they are completely risk averse. Frugality bleeds into every fibre of their being. – to earn more you must spend more – to think more you must write more – to learn more you must teach more – to be loved you must love more NOBODY gets rich from saving money or investing in a 401k Jessica Taylor: Contemporary people respond more positively to classism when it is voiced by a straight woman framing it as mate preference. Moderation in all things. The most miserly, frugal person you know is presumably far too frugal. They are at best penny wise and pound foolish. If not, your local culture has a big problem. Being not frugal enough? That is a much bigger problem. Being broke is expensive. It is so sad that many people think you cannot get rich or ahead by holding down a job, saving money and investing in a 401k. This simply is not true. When you see people who do not understand the need to care about or save money? Who think that if you have it, you should be willing to spend it? Or even worse, you should spend whatever even if it does not make sense or get anything worthwhile, and if you don’t have it? Because vibes? Run. Run as fast as you can. Trust the premonition. Do not Live La Vida Loca. That said, there is an important sense in which you do need to be fine with spending money. If you do not do this it will ruin the vibe. It is legitimate to care about this. What you must avoid is allowing concerns about money to dominate thinking within the moment. It cannot be allowed to disrupt the flow of the evening. If they see you worrying about money, or worse they are forced to worry themselves, or do a bunch of calculations, that is double plus not good. The fool’s way of doing this is to become a money pump, able to be talked into spending arbitrary amounts of money. To spend on anything and everything as if it is nothing, to show off that you are willing to spend it. That does not help you. You get nothing in exchange. You brand yourself a fool. The wise man’s way is to engineer a situation in which the problem never comes up. Never let them see you sweat. Sweat the money in private, before the date or activity begins. Decide what you are willing to spend on or do. Choose so as to avoid proximate or conspicuous alternatives that would pressure you to spend. To the extent they are presented anyway or unavoidable, dismiss them without reference to cost. Then, when the moment comes, embrace it and enjoy it. This is the philosophy of Out to Get You. Engineer a situation in which you can safely Get Got or inconspicuously Get Compact. Otherwise Get Gone. This goes beyond dating. It also goes beyond money. You want to enjoy the moment too. I was brought up to always sweat all the details, always be critiquing and complaining and worrying. There is a lot of value and wisdom in that, it is far superior to the alternative of the unexamined life. However, there is also a time and a place. Sometimes taking yourself out of the moment like that is terrible. When that happens, cache the issue and set it aside until later. Update on it if worthwhile, or don’t if it isn’t. From the comments last time, Michael Roe points out another reason to stick to places that are reasonably priced, which is that it avoids putting the other person under pressure to reciprocate next time. Or, I would add, to avoid generating worries about expectations or a reason to feel bad. In general, showing people a nice time is good, but it is not good form to take people to places they themselves could not afford at all even if you are paying, with a partial exception if you are so visibly rich it is common knowledge that you can and will laugh the price off entirely. You’re Single Because You’re Not Over Your Ex Breakups suck. Kyle: During college i can remember 6 male friends who went through long term relationship breakups. 3 of them lost their minds for a year and completely derailed their lives and the other 3 initiated the breakups [they’re doing well now]. Eigenrobot: how messed up people can get when a serious relationship goes under feels under-considered to me in discussing life trajectories. Contemplate this on the tree of woe. Be careful with your heart, not that it will help Hereward the Woke: Given that the end of even fairly juvenile or early-stage relationships can mess you up, it’s actually quite bad that our normative relationship model involves many people going through multiple might-as-well-be-divorces. Different people get different kinds of derailed for different lengths of time. Being sad for a while afterwards and not dating anyone else for a bit is standard procedure and basically fine. Healthy, even. The key is not letting it derail the rest of your life. Whenever one is in a relationship, one must sometimes worry about when it would be worthwhile to break up with them, and even more one must worry about when the other person might find it worthwhile to break up with you. You’re Single Because You Thought You Could Do 25% Better What if there was a clear rule for when breakups happened? (While noting that this is explicitly a joke per account rules, also I mean obviously.) Eliezer Yudkowsky: in a world of greater legibility, romantic partners would have the conversation about “I’d trade up if I found somebody 10%/25%/125% better than you” in advance, and make sure they have common knowledge of the numbers (Marriage makes sense as a promise not to do that period; but if so, you want to make sure that both partners are on the same page about that. Not everyone assumes that marriage means that.) Her: I am never, ever letting you go unless I find someone 75% better. Me: Works for me. Oh hello there Performative Allistic Twitter. I guess people may legit not know how to express this without help, so, to reiterate: As you go on dating, you both accumulate human capital specialized on each other, and it becomes harder for someone else to be 25% better. Furthermore if you’re marrying or have kids, you both may just not want to worry about the other finding someone 75% better. But this kind of commitment is only meaningful if you’re dating someone with the power to admit and speak aloud which algorithm they use. Someone who performs “But I would never! Only a terrible person would think so coldly!” may very much be running a tradeup algorithm even after they marry you and have a kid, and they wouldn’t know it themselves. Etienne: why is everyone reacting to this as if it was meant as an alternative to explicit lifelong commitment, when it’s quite obviously meant as an alternative to “trading up” anyway but without ever discussing expectations first. shill: the responses to this tweet are hilarious because they make it very obvious when someone just does not get the mindset here. I’ll give you a hint: “125% better” etc. is not a precise measurement … Beatrice Leydier: why would you dump your partner for someone 25% better when you can just slowly nag them into becoming 25% better like a normal person. Eliezer Yudkowsky: I sent this to gf and she messaged back “on it.” You know what, I’ll give up and provide this thread’s actual context: GF is ex couples counselor (also ex Google SRE), and saw a reality show about troubled couples deciding whether to break up after dating a different attractive person for 3 weeks. Nate Soares: reactions to this are like a microcosm of why you usually can’t trust humans with consequentialism. “it ignores how relationships get better with investment” nope, that’s an increase in your value to each other that makes it harder to find someone worth trading up for. “it ignores that the shiny new relationship has a high risk of failure” nope, that’s a reason why one might wrongly overestimate the value of a shiny new person. it’s notable that so many people object “but ‘value’ doesn’t capture…” rather than cautioning “people might neglect the value of…”. as if the word “value” must cover only the shallow and superficial features; as if no word is allowed to capture the deeper intangibles. It seems many people intuitively think that words like “value” can only apply to the legible and easily articulable aspects of things. Which sure would explain why many people hate on consequentialism; [legible-consequence]alism is a much worse moral theory than [comprehensive-consequence]alism. Mason (responding to OP): This is a recipe for off the charts neuroticism and a surprise mood disorder. Ruxandra Teslo (responding to OP): This is such a cursed worldview. Aella: this is the way most people operate, just nobody likes admitting it to themselves. My apologies guys, i was wrong. i forgot about how most people actually date people either well above or below their own attractiveness level, how women don’t resonate with the message ‘you go girl, get a high quality man’ and thus it’s not present in culture at all. [goes on like this] The alternative to having no idea where you stand is having a better idea where you stand. Relationships without very deep commitments have a threshold where the situation is bad enough that one person would leave the other even without the ability to trade up, either ‘on spec’ or because nothing is already an improvement. Knowing you are on the edge of that is quite stressful. But not knowing if you are or not, and not knowing where the threshold might be, is not obviously better. Is ignorance bliss, or is it paranoia? Could go either way. I do think part of being in a typical relationship is, past the early stages, a promise not to actively pursue trading up. Until marriage you are not promising to be with them forever, or to stay barring some calamity. However you are promising that you will not engage in various activities without ending the relationship first. You cannot cheat. You cannot work to line up your next relationship. These things are not okay. If you can actively pursue and negotiate (or even try out) other suitors first, thus allowing a trade-up to be risk-free, then that is a different type of relationship. That needs to be explicit. Is there a threshold where you would break those rules? Presumably yes. The right amount of information on that threshold is usually not zero. It also usually is not an exact formula. And there are many cases like this where in sufficiently extreme cases one likely breaks the rules, but part of the mechanism design is that you must bear the cost of breaking the rule. It is not always correct to say ‘well, if X happens I would do Y, so we should change the rules so X allows me to do Y,’ especially if you have a say in whether X happens. Polyamory Since many who read this consider it: What about under polyamory? Aella: One underrated benefit of polyamory is u don’t have to dump anyone when you meet someone 25% better for you One feature of polyamory is that it means continuous auditions of potential replacements by all parties. You are not trading up in the sense that you can have multiple partners, but one thing leads to another and there are only so many hours in the day. If you are monogamous, and you meet someone plausibly 25% better, by default what happens is nothing. There is no pressure to explore that possibility, to see if you might be able to upgrade, or even find out if the person is available. It is not an issue. If you are polyamorous, and you meet someone plausibly 25% better, or even someone 0% better (I mean the person you are with is pretty good, no?) you are honor bound to try and make it happen. This is a problem, and can become a much bigger problem (or opportunity, or both) if you succeed. You get a lot more information. Yes, you do not have to flat out dump the original person. But if the new person is indeed better, it is not as if the original relationship is going to continue as before. In other polyamory news, Scott Alexander tells you that you are wrong about what you think, you don’t hate polyamory, you hate people who write books. The argument goes, people write books because they have issues, and are screwed up, and are likely destined for terrible relationships no matter what, imagine reading what ‘monogamy advocates’ were saying and how that would turn out. Most people who actually practice polyamory would give boring advice and are doing great. I buy that the people writing polyamory books (and, by extension, blog posts) have issues, and more issues on average than other poly people. That does not mean we cannot judge what they have to say, whether or not the original article in The Atlantic was doing so fairly. As usual, if you were making a bad generalization, stop doing so, whether or not the conclusion was true. If it is true, get there for the right reasons. Also, few people (reading this, anyway) hate polyamory, they simply disagree about expected outcomes on a variety of fronts. I continue to think that there is a time and a place and a person where polyamory is the correct choice, but that the majority of the time someone thinks it is a good idea right here, right now, that they are wrong. Scott then follows up with a highlights from the comments, where the arguments against polyamory seem convincing. In particular, there are fewer children, and those children that there are generally end up in worse positions and at more risk, and the whole thing is a giant time sink even when done right without overall looking better even after those costs are paid. He also promises that this link is a doozy. Aella also makes a very good argument against polyamory here: yatharth: oh, I see. Societies evolved taboos and rituals around sex, not because they were a morally inferior, irrational species, but because sex routinely fucked social relations up, and the cultures that survived were the ones that had guardrails in place, Aella: this is partially why people who pull off polyamory successfully are hyper-skilled with communication, emotional regulation and self-awareness. Not saying monogomous people aren’t that, only that you don’t *have* to be that in order to pull off monogamy. I’m sure you all know that one couple who have the emotional processing ability of a cantaloupe but have somehow stayed married for 20 years. If they’d tried poly (in today’s climate, with zero cultural support or general knowhow), their relationship woulda fallen apart. Most people are not hyper-skilled in anything. Certainly they are not hyper-skilled in communication, emotional regulation and self-awareness. (Almost?) nothing successful at a mass scale requires hyper-skill. If your social relational system, or any other product or service, requires hyper-skill, your system is at best for a very small group of people. Even if the product is so good for the select few that it is worth doing a lot of work to qualify, and there are many such cases, encouraging widespread adaptation of something this demanding is to do most people a disservice. Also people hate thinking and complexity and the inability to fully relax. Brooke Bowman: I want all of my male friends to be in happy, fulfilling relationships for the entirely selfish reason that it is SO NICE to have friendships where there’s no weirdness around ‘are they into me’ or ‘do they think I’m into them’ Ah this was polyamory erasure sorry everyone. Tbf I do struggle with feeling anxious around poly friends for this reason, but that’s a skill issue. I mean I suppose like almost everything else it is in some sense a skill issue. But a sufficiently difficult skill issue reduces to an issue. If you too are poly then oh boy is the skill threshold here high. It really is great not to have to worry about who is or is not into whom, or what dynamics might be going on, and to not feel like you are missing out on constant potential opportunity. Alternatively, perhaps you could write a paper about the optimization problems involved and call it Polyamorous Scheduling. Might as well get a paper out of it. I may have trapped priors, but all this reinforces to me that polyamory is generally a deeply bad idea for humans, albeit with notably rare exceptions that are extraordinarily good fits. Also, there was a polyamorous dating show about couples seeking to add a third person, and yeah, missed opportunity. Kevin: Why did they call the poly dating show “Couple to Throuple” when they could have called it “The Three-Body Problem”? You’re Single Because You Don’t Know What You Want What should you be looking for in a romantic partner? Rob Henderson offers his advice. He looks at what predicts relationship satisfaction. He notes that similarity between partners is the rule but does not predict satisfaction, speculating it is necessary but insufficient. If it is so commonly prioritized or chosen and does not correlate, that could mean it is typically beneficial, it could also represent how we meet people and how matches are made in the dating market. I would assume people are roughly correctly rating similarly? Authenticity and openness with your partner tends to be reciprocal and strongly predicts relationship satisfaction. That makes sense, this is underrated. Attractiveness of your partner relative to your options predicts happiness. If you are more attractive than your partner and could do better, you will be less happy. Well yes, that makes sense ceteris paribus, but this is not obviously underrated as a consideration. In general the principle is, if you could do better, you’ll feel it, and that is in terms of whatever it is you care about. As he points out, this also suggests that trying to ‘date up’ too aggressively is a mistake, as dates once gained must be maintained. If you do this you need to ensure it is an unusually good match on details, and invest heavily. Plan ahead. The endgame for most people should be a family and children, so consider potential dates in that light from the start. That doesn’t mean never have fun but keep your eye on the ball. Here are some red flags he notes from Shawn Smith’s book Gatekeeper: Shifting responsibility for managing emotions, forcing you to play guessing games, assaulting your character (e.g. ‘you always do that’ or ‘you never listen’) and the silent treatment. Some green flags? Clarity, maturity including emotional maturity meaning things like calming yourself, accepting reality, not acting on impulse and keeping commitments, stability, inquisitiveness. That all seems directionally right as far as it goes. That does not tell you how to prioritize. Then there is this article in The Cut by Grazie Sophia Cristie that made the rounds about the argument for intentionally marrying an older man, in this case meeting him at 30 when she was 20. The author starts out saying they buy lottery tickets without even checking to see if they win, and mentions asking for cigarettes, which do not seem like the ways one provides evidence of a tendency to make good choices. I did like this line, which seems right, in at least some senses? When someone says they feel unappreciated, what they really mean is you’re in debt to them. The basic argument she makes is straightforward, and goes something like this: Dating a younger man means teaching and crafting them into someone women want. Then they probably leave you for another woman anyway. When you date within your own age group, the playing field is level, and you waste the years when your stock is highest. Why not skip all that, free ride on the efforts of others, find a man who highly values what you offer and cash in (in many senses) while the getting is good? A man who will tell you who he is and what he wants, so you can evaluate up front if you want to match with that. Providing what a (modestly) older guy wants will make him love you, and it will pay big dividends. Also she endlessly complains about younger men, including her own brother, failing at what she sees as basic life skills. How dare they not know the proper way to do laundry, or pack a suitcase. Idiots. It is odd how important this sort of thing seems to her, and she is not alone. Diana Fleischman (responding to article): Men are changed by women, often for the better. And a civilized man is a gift women give to one another, but rarely acknowledge. Salome Sibonex: Counterpoint: You didn’t “civilize” your boyfriend, he satiated your neuroticism. Women are more neurotic, thus less tolerant of certain things not going their way, like social niceties or home decor. I AM this woman! I make my boyfriend’s life prettier and cleaner, but this is largely for my benefit. I don’t need to flatter myself by thinking my neuroticism is a superior sensibility that civilizes degenerate men. This is important because it prevents me from being resentful when some of these preferences aren’t met. Instead of thinking my partner “uncivilized”, I realize we have different preferences and sensitivities for those preferences going unmet. Men generally care less about how suitcases are packed or whether their towel is on the floor, so they give in when women do. If both sides are reasonable, both will benefit—no self-righteousness necessary. This moralized conceptualization of what are essentially basic sex differences encourages women to think of themselves as long-suffering under-appreciated saints, which is an unpleasant mindset and makes the reality of a relationship seem unduly negative. This is naturally a case where somewhere in the middle, the truth lies. The right amount of attention to such matters is not zero, even purely for one’s own benefit. A lot of such actions, however, are not at all about that. What Grazie Sophia Christie actually wanted, in general, seems like a guy with his life together, and who ran a smooth operation, and take charge and enable life to happen. That is only partly an age thing. Not that many people have that these days, no matter how old. This was a highly unusually put together guy for 30 years old. She worries that by doing this she is defecting, ‘taking advantage of his disadvantage.’ As long as she understands what she is doing and honors the deal she is making, I do not see a problem. Her husband is doing fine. This is very much gains from trade. However, she is very clearly defecting in the broader game. By her own model, if women did this more often, the guys in their 20s wouldn’t become the guys in their 30s that she wants them to be. She sees others as doing the work, and she wants to then reap the benefits. It is her choice how much to care about this. How much of her model is accurate? Not zero. My guess is not much. There are subcultures where the population growth rate is so rapid that a typical age cap causes balance issues, but if the population is roughly stable then there is nothing out of equilibrium about having age gaps. Yes, this means the youngest men miss out, but focusing on career at that age until you have yourself more together seems fine, and yes men can learn the necessary skills other ways, including now via asking a chatbot (VR experiences coming soonish), or learn them rapidly later on when they are more ready for them. And of course it also means that older women miss out if they don’t already have a match, even more than they already do, but this could be offset by having more long term matches. You’re Single Because You’re Too Busy Writing Comments On the not actually trying point, David doubles down (as did Cole Terlesky): David Karsten: Three thoughts, from having been on the dating app market for the first time ever this year: 1. The fact that many folks don’t want to really succeed, they want to just “have a match happen” cannot be overstated, at every level. You’d be amazed how many people don’t want to spend $30 a month for a dating app membership, even though they’d value finding a partner at $X thousands of dollars a month. You’d be amazed how many people don’t follow up with those they text. Etc. 2. As a result, you cannot _possibly_ imagine how not-in-it-for-the-long-term the average guy on these apps are. Functionally every woman has a story about a real jerk, and often defensive comments on their profile accordingly. Being even moderately decent has above-average returns. 3. The incel movement is a detailed UX complaint about Tinder, as far as I can tell. Other apps vary quite a lot! Sometimes switching to a new app and keeping the same strategy has outsized returns. This is great news. You can both switch apps and use superior tactics, such as ‘caring at all,’ ‘not being a jerk’ and ‘responding when they text.’ Then you can enjoy the oversized returns. Grant McKinney says they count as not even trying, they’ve never ‘made a serious attempt at flirting,’ in terms of not trying to have it go anywhere. I pointed out that the best flirting is done because it is fun, so Grant was doing it right except for pulling back rather than continuing to escalate (or accept escalations) in increasingly risque directions when things go well. Brett Bellmore reports the upside of online dating in getting around social phobias, and also suggests that if you are serious you consider foreign dating sites. Brett Bellmore: My personal experience may be relevant: I literally did not date until I was in my early 40’s. In my case this was due to a traumatic childhood event; Apparently the school nerd was NOT supposed to chat up a member of the cheer leading squad; The penalty was immediate and physical, and induced a pretty severe social phobia. Jr high could be a rough place in the 70’s. Having Asperger’s didn’t help, of course. Online dating got me past this, as my social phobia didn’t kick in unless I was face to face with a woman, and by the time the online relationship had progressed to us meeting, I’d relaxed a bit. I really can’t recommend it too much, it didn’t just get me a date, it got me married. Here’s some serious advice: Try foreign dating sites, if you’re really looking for a wife, not just some fun. The US has become somewhat matriarchal, and when a guy from a semi-matriarchal society meets a girl from a still somewhat patriarchal society, you get a very beneficial culture clash: You both end up exceeding the other’s expectations by simply doing the minimum your own culture demands. As well, the economic principle of comparative advantage kicks in. You may be nothing special by local standards, and still a superb catch to some girl in a 2nd world country, which means your bidding power is higher than you might think. I certainly didn’t end up married to this cutie by being a movie star… As well, the international sites specialize in women who ARE looking for a husband, not a one night stand. Tinder might be a good place to go if you don’t like eating out alone, but is it a good place to look for a wife? Anyway, that’s my experience. This is the one I found my wife at, but there are a whole series of allied sites they run for different countries: Filipina Hearts. The problem with foreign dating sites is of course adverse selection. This is the ultimate lemon market and potential trick. You run a huge risk they (either the website or the woman or both) are there to scam you or only after the visa. Claude directed me to some ‘review websites’ I will not be linking to, as they did not put my mind at all at ease on your behalf, and provided the usual advice of being generally wary of signs of trouble. Gunflint suggests the ‘fake wedding band’ trick, as the ring puts women at ease. I am of course strongly opposed on principle, also the adverse selection is terrible and lying even by implication is bad for your soul and your future relationship, and also you risk romantic comedy hijinks ensuing if you are foolish enough to double down. A way to get matches, but different ones? Mike Hind: I got plenty of matches on Tinder by emphasising what I offered rather than what I was looking for. That one weird trick makes you stand out. Marthinwurer: I have now added “I can fix your furniture” to my tinder bio. myst_05: I can confirm “btw I’m good at [DIY]” works well. First emphasizing what you offer them is always good marketing. This is especially true if you are having trouble getting enough matches. It has a different positive selection effect, you want them to want what you are happy to offer. It does mean you get less selection in them having what you want. Shout points out that being asked your body count is not only something you can often strategically avoid, when you can’t avoid it this is an opportunity to send a message that matches your strategy and forward goals. Also notes that a lot of the concerns that result are ‘you will get bored with me and my lack of experience’ so if your number is coming in high you want to head that off right away or even use addressing that as a way to dodge the question. Bob Jones requests a way for a guy to tell if they are bottom 25/10/1% desirability, and how to handle it if you are, and when one should consider giving up. I affirm my partial answer there, which is that unless you have major health (including mental health) issues the chance you are reading this yet still unfixibly in the bottom 10% (or even 25%) is almost zero. I would add that most of the things that one needs to fix to get out of the extreme low end, things like being able to talk to people and being in a decent financial position and fitness and hygiene are almost all things you should prioritize fixing anyway, even if you had zero interest in sex or dating. It is still useful to know where you are at. John suggests that Bumble lets you know who passed on your profile as I also noted above, which helps you know where you are at although getting a baseline is still tricky. You’re Single and Not Getting Properly Compensated The life of a professional bridesmaid. All she had to do was put up a Craigslist ad and she was inundated with requests, media inquiries and even marriage proposals. So if you are thinking of doing this, the market is probably still wide open. She says she makes ‘over $100k’ stepping up to make weddings not become horrible disasters, filling in for those who do not have people they can count on. Cost starts at $2.5k, given the other costs involved sounds like the service is worth every penny and more. Alas, despite overwhelming demand she is having trouble getting the business to scale, finding the right new people is hard. It seems like a fine job, with odd but good hours overall, and a rewarding experience, but also a demanding and stressful one. Everything is so high stakes for everyone around you, all the time. One thing that surprised me was that she succeeded while looking this good. One of the big dangers with bridesmaids is that they risk outshining the bride. You’re Not Single and You’re an Inspiration The story of someone who posted a video five years ago about being ugly and how depressing it is, how everyone has always treated him badly because of it, got a response that he looked kind of cute, and now they are married. Embrace the variance. Or if all else fails? Embrace your inner someone else. Eigenrobot: My wife is mad at him because he “doesn’t understand what women want at all, he’s just mimicking Ryan Gosling” and its working anyway. “Although he had to have understood it to some extent because he understood watching the movie together would be a bad idea.” Anonymous: This really happened. Sandrone: If you’re in stochastic parrot pivot to stochastic gosling. Actually I think he understands perfectly well. Remember, she is out there. Bill: Can we all agree, gentleman? Your Moment of Zen Speaking truth to power.
2024-05-08
https://www.lesswrong.com/posts/yeezBxTLaEcDorYLr/feeling-instrumentally-rational
yeezBxTLaEcDorYLr
Feeling (instrumentally) Rational
pi-rogers
Contra this post from the Sequences In Eliezer's sequence post, he makes the following (excellent) point: I can’t find any theorem of probability theory which proves that I should appear ice-cold and expressionless. This debunks the then-widely-held view that rationality is counter to emotions. He then goes on to claim that emotions have the same epistemic status as the beliefs they are based on. For my part, I label an emotion as “not rational” if it rests on mistaken beliefs, or rather, on mistake-producing epistemic conduct. “If the iron approaches your face, and you believe it is hot, and it is cool, the Way opposes your fear. If the iron approaches your face, and you believe it is cool, and it is hot, the Way opposes your calm.” I think Eliezer is making a type error here. When he says "rational", he is of course talking about epistemic rationality. However, epistemic rationality is a property of beliefs, not emotions.  In other words, I can't find any theorem of probability theory which proves that I should feel sad when my expected utility decreases, and happy when it increases.[1] The only type of "Rationality" emotions can apply to is instrumental rationality, i.e. "the science of winning at life", and the most instrumentally rational emotions don't always stem from beliefs in the intuitive way that Eliezer describes in his sequence post. Example: Being sad about a high P(doom) can make you less productive at reducing P(doom), as well as incentivize you to self-decieve into a low P(doom). If I see more good examples in the comments, I will add them. ^ ...and fear when there is high probability mass around a future event that will result in a large loss of utility, and anger when another agent causes your utility to go down, and curiosity when you have an opportunity to gain useful information, etc.
2024-05-16
https://www.lesswrong.com/posts/7pdCh4MBFT6YXLL2a/is-there-a-power-play-overhang
7pdCh4MBFT6YXLL2a
Is There a Power Play Overhang?
crispweed
This post is about risks in the development of increasingly capable AI, in particular the risk of losing control to AI and extinction risk. I'll suggest that a key question is, "When do we need to take this kind of risk seriously?" We'll look at the issue of 'agency overhang', which suggests that adding agent-like abilities to AI could result in a sudden and surprising increase in these kinds of risks. I'll draw on intuitions about humans taking administrative and political control (with reference to the 'Dictator Book Club') and rephrase agency overhang as 'power play overhang'. I'll finish by suggesting that a lot of people may be making a subtle but important mistake in imagining just one fairly specific path to dangerous AI.
2024-05-08
https://www.lesswrong.com/posts/gJGYDsyxj3rZDp86E/geometric-utilitarianism-and-why-it-matters
gJGYDsyxj3rZDp86E
Geometric Utilitarianism (And Why It Matters)
StrivingForLegibility
Do you like using numbers to represent uncertainty and preference, but also care about things like fairness and consent? Are you an altruist on a budget, looking to do the most good with some of your resources, but want to pursue other goals too? Are you looking for a way to align systems to the interests of many people? Geometric Utilitarianism might be right for you! Classic Utilitarianism The Harsanyi utilitarian theorem is an amazing result in social choice theory, which states that if a social choice function F:Rn→R is both VNM-rational, andPareto monotone (Pareto improvements never make F lower) then for any joint utility u∈Rn, F(u) must be equal to a weighted average of individual utilities that looks like H(u,ϕ)=u⋅ϕ=∑ni=1uiϕi, where ⋅ is the dot product and ϕ∈[0,1]n are weights given to each agent's utility that sum up to 1. As Diffractor puts it here in their excellent Unifying Bargaining sequence: Basically, if you want to aggregate utility functions, the only sane way to do so is to give everyone importance weights, and do a weighted sum of everyone's individual utility functions. Diffractor is using sane as a shorthand for VNM-rational here, which is extremely reasonable given the success of expected utility maximization as a model of rational decision-making. However, I have recently been radicalized by reading Scott Garrabrant's very compelling Geometric Rationality sequence, which has significantly updated my thinking on many topics in rationality, including how to sensibly combine utilities. And I wanted to see if I could prove some results about what happens if we use a geometric weighted average of utilities that looks like G(u,ψ)=∏ni=1uψii when the weights ψi∈[0,1]n sum to 1 and utilities are shifted to be non-negative. (Which I'll be assuming throughout this post.) Results About Geometric Utilitarianism What might it mean for a group to be rational? Well at the very least, that group had better be doing something Pareto optimal. Otherwise we can shuffle around their behavior and get strictly more value for that group. And it turns out Pareto optimality is enough to let us parameterize all rational group behavior as maximizing some geometric weighted average of individual utilities. This geometric utilitarian theorem for group rationality is analogous to the VNM theorem for individual rationality, which lets us model rational agents as maximizing expected utility. In more mathy terms, here are the results (some I think are well-known and a few I think are new): Main Results G and H are both Pareto monotone, and maximizing either can lead to Pareto optimality.Given any Pareto optimal joint utility p, we can retroactively find weights ψ and ϕ which make p optimal according to G(_,ψ) and H(_,ϕ).Using 2, given the output p of any Pareto optimal bargaining protocol or social choice function F, we can find weights ψ and ϕ which let us view F as maximizing G(_,ψ) or H(_,ϕ). (Analogous to how we can view any agent with VNM-preferences as maximizing a utility function U.) In general, viewing F as a G(_,ψ) maximizer will yield more specific predictions, because:For points p on the interior of the Pareto frontier where pi>0 for all agents, we can calculate weights ψ which make p the unique optimum of G(_,ψ). By contrast, even after making p optimal according to H(_,ϕ), H is indifferent everywhere on the Pareto frontier with the same slope as p. Bonus Results G(_,ψ) and H(_,ϕ) are both smooth when ui>0 for all agents. (They are continuous and infinitely differentiable in that range, and H is smooth everywhere). G=0 whenever any agent's utility is 0, and this leads G to prefer compromises over extremes whenever ψi>0.[1]G(_,ψ) and H(_,ϕ) also both preserve geometric convexity where they're continuous: if you feed in a convex set F⊆Rn of feasible joint utilities, the result is a convex subset of R. When F is compact then so is its image in R. (Bounded shapes get mapped to line segments or a single point.)G(u,_) is smooth where ui>0 and ψi>0 for all agents, and H(u,_) is smooth everywhere. Small changes to the weights ψ and ϕ lead to small changes in G and H.When ψi>0 for all agents, p(ψ)=argmaxu∈FG(u,ψ) is unique and continuous. In other words, when all agents have positive weight, individual utilities shift continuously as we change geometric weights. We can also pad ψ in a way that makes p(ψ) unique and continuous for all ψ, and an arbitrarily good approximation of maximizing G(_,ψ).By contrast, varying ϕ and maximizing H(_,ϕ) causes individual utilities to jump discontinuously, because H maximizers exhibit a thrashing behavior when faced with linear trade-offs. Small changes in ϕ, or small changes in the trade-off being faced, can lead H maximizers to thrash between maximizing one agent's utility and another's, with no inclination towards compromise anywhere along the way. This is the major way in which H deviates from what we'd intuitively like out of a "utility aggregation" method.We can pick ψ so that G(_,ψ) prefers a compromise over the extremes. This inclination towards compromise is a big deal, and is the property that means G isn't VNM-rational. We can pick weights which make G strictly prefer one particular convex combination of outcomes compared to any other, including the underlying pure outcomes. VNM-rational agents never have preferences that look like this. Why Geometric Rationality? Why would we take such a drastic move as building an agent with geometric preferences? It turns out that geometric agents handle trade-offs between multiple values much better than VNM agents. For example, consider a VNM agent choosing how to split $100 between Alice and Bob, who each have utility functions that are linear in money (at least for amounts up to $100). No matter how we set the weights, the VNM axioms force H to have one of the following optima: Give Alice all the moneyGive Bob all the moneyComplete indifference between all splits A VNM agent can't prefer a compromise to both extremes, when trade-offs are linear. Compare this to a geometric agent, which splits the $100 proportional to the weights assigned to Alice and Bob. The same contrast appears when considering how to spend resources advancing Alice and Bob's interests. If Alice and Bob are constructing an agent to act on their behalf, this is probably more what they had in mind when they went looking for a weighted way to balance between their interests. There are geometric weights Alice and Bob can both agree to, and that bargaining range is simply empty when it comes to Harsanyi weights. Nash bargaining is a special case of geometric rationality where all agents are given equal weight. The same phenomenon happens with lotteries. If a VNM agent has to decide how to allocate an indivisible good, such as a hat, it faces the same trilemma over lotteries about how to allocate it: Giving Alice the hat is optimalGiving Bob the hat is optimalComplete indifference between all lotteries about how to allocate the hat A VNM agent can't prefer any weighted coin flip over both pure outcomes. Again, a geometric agent facing the same decision will pick an option that splits expected utility proportional to the weights given to Alice and Bob. And as we'll see in the next post, we can get even better results if Alice and Bob can make side payments to each other. How Can We Apply These Results? There are a few different lenses through which I think geometric aggregation is useful: As a model of group rationalityAs a model of individual rationalityAs a moral framework Group Rationality Since anything that leads to a Pareto optimal outcome can be seen as maximizing G(_,ψ) for some ψ, we can model any Pareto optimal bargaining solution or social choice function as maximizing some weighted geometric average of individual utilities. This becomes helpful constructively when we can identify the weights before knowing where to find the optima. For example, Nash bargaining maximizes the product of utilities ∏ni=1ui, which means it also maximizes the n-th root of the product of utilities n√∏ni=1ui.[2] This is the same as maximizing ∏ni=1u1ni, which in turn is the same as maximizing G(_,ψ) when we set all of the weights ψi=1n. We could also try to formalize the intuition that "every negotiator should benefit equally from the agreement." The Kalai-Smorodinsky bargaining solution takes this approach, and Diffractor makes a compelling argument for it in their Unifying Bargaining sequence. If we standardize everyone's utility function by shifting and scaling each into the interval [0, 1], then KS picks out the point on the Pareto frontier where all agents receive the same standardized utility. We can calculate the weights for this point and use them to guide a G maximizer right there. There is a special case of bargaining where the Pareto frontier is completely flat, and this is the case we saw earlier where classic linear utility aggregation simply cannot capture the idea of a negotiated agreement. This can happen when Splitting a resource among agents that value it linearlyDeciding how to spend such a resourceUsing such a resource for side paymentsNegotiating the probability of a lottery In this special case of a flat Pareto frontier, the Nash and KS solutions coincide exactly with "maximize economic surplus and split it equally." And it turns out that in general, we need something like side payments to actually achieve Pareto optimal results. Any sensible bargaining protocol ignores the scale factor of each agent's utility function, since that's a free parameter when choosing a utility function to represent each agent's preferences. But that also means that all sensible bargaining protocols give completely nonsensical results when that scale factor actually matters, unless we use something like side payments to interpersonally compare utilities. The next post of this sequence goes into more detail about how side payments reintroduce this geometric information that gets lost when using utility functions, and I want to call it out as an important component of group rationality. Money is the interpersonally comparable unit of caring, and we need something like that to even talk about concepts like economic surplus or claims like "Alice benefits more than Bob is harmed." Scott Garrabrant, Wei Dai and others have also pointed out the need for a broader concept of rationality than the VNM axioms when aggregating utilities. Groups of voluntarily coordinating agents, or voluntarily merged AIs, simply don't behave like VNM-rational agents. I would actually suggest that we should view Harsanyi's aggregation theorem as an impossibility result. If we require the aggregate to be VNM-rational, then the aggregate can't represent a negotiated agreement among voluntary participants. Linear aggregation can't represent voluntary coordination, because there are no weights that are mutually acceptable to all participants when trade-offs are linear. Bargaining With Ourselves There are also many contexts in which we can model ourselves as being made up of many sub-agents with different interests, and we can apply the same group rationality techniques to balance between them. Scott gives several examples in his Geometric Rationality sequence, and I recommend checking it out for more details: The Veil of Ignorance: I'm uncertain about who I amKelly Betting: I'm uncertain about which future self I will becomeBayesian Updating: Betting credibility between internal predictorsThompson Sampling: Geometric aggregation of internal predictorsResolving Internal Conflicts: Balancing between multiple internal agents In that last example, Scott describes an agent with both selfish and selfless desires. In geometric rationality, these desires are represented by different internal agents, which bargain over the decision that the overall agent will make. This is a nice mental tool, but it also makes quantitatively different predictions than VNM rationality, and I suspect that the geometric approach is a better match for how people naturally balance between conflicting desires. For example, if you think of people as valuing the health of birds in the same way they value an elastic good like soft drinks, you might think that people's willingness to spend money to protect birds from oil ponds would be sensitive to the ratio of dollars to birds helped. Whereas if you instead think of "Birds" as a coalition represented by a single internal agent, whose weight doesn't change much with the actual number of birds being helped, this is one explanation for the observed less-than-linear relationship between "number of birds helped" and "people's willingness to pay to help these birds." Is this a cognitive bias? Would you take a pill that induced a linear relationship between the size of problems in the world and your willingness to sacrifice to address them? How can an altruist ever justify spending money on themselves, when that same money can do so much good for others with so much less? For me, the justification that feels the most satisfying is Scott Alexander's amazing Nobody is Perfect, Everything is Commensurable. I give 10% of my income to effective charities, including the Animal Welfare Fund, and the rest I put towards all sorts of other purposes. Geometric rationality is all about proportional representation among internal desires, and not feeling like you need to spend all of your time and money on maximizing one particular form of value. Upgrading Utilitarianism Geometric utilitarianism seeks to improve on classic utilitarianism, and it has two free parameters which we can use to encode even more of our moral intuitions: The feasible options FThis encodes what is acceptable, and what externalities need to be internalized.Can Alice pollute without compensating others that are negatively affected?Can Alice change her hair style without compensating others that are negatively affected?What forms of compensation are appropriate, if any?The weights ψThis encodes our notions of fairnessHow should the economic surplus from this decision be distributed? The moral position that "people shouldn't be negatively affected without their consent" is central to the philosophy of voluntarism, and we can make our utilitarianism more voluntarist by including more affected agents in our consideration when making decisions. This inclusion can look like: Not negatively affecting agents by defaultIf Alice will pollute by default, any bargaining from that baseline will involve Bob paying her not to. (Or bargaining falling through because Alice profits more from polluting than Bob is willing or able to pay.)If Alice doesn't pollute by default, she only pollutes if she also pays Bob a fair share of the economic surplus generated. (Or she doesn't pollute at all, if she benefits less than it would take to compensate Bob for that externality.)Assigning agents positive weight in our utility aggregation functionThis automatically requires that they be at least as well off as in the absence of an agreement, internalizing any externalities. It also gives them a share of the resulting economic surplus, proportional to their weight. My current take is that answering "which externalities should be permitted without incurring liability" is complicated. It's a decent chunk of the overall complexity of morality and social norms. I believe this question is central to «Boundaries» as a technical concept, and I recommend checking out that sequence for more details. Sometimes we need the consent of literally every affected party (e.g. sharing of private health information, anything to do with sex). Sometimes we just need the consent of a group, without needing the consent of every member (e.g. pollution, eminent domain, any other law enforcement). And sometimes we should be able to choose freely without needing to compensate anyone that doesn't like our choice (e.g. hair style, private thoughts, boycotting businesses, any other legal right). Drawing these boundaries is complicated, and this is only one factor which goes into designing F. What actions are permissible, and under what circumstances? Medical ethics are vastly different from legal ethics, which are completely different from the standards regulating war between countries. How do we handle epistemic disagreements, or disagreements about how the boundaries should be drawn? What types of side payments are acceptable, in what contexts? Similarly, ψ captures our ideas of fairness, and these are also heavily context-dependent. Some interactions, like buying an apple, invoke notions of "fairly splitting the gains from trade." Other aspects of human life are deliberately regulated competitions, where gains for one party are necessarily losses for another. And we have different notions of "fair and unfair practices" for competition between individuals for jobs, romantic partners, and social status. We have yet more notions of fairness for businesses competing for market share and favorable legislation. For athletes, for countries, for political candidates, our standards for fairness are complex and nuanced, but they all answer the question "Who should get what?" Geometric utilitarianism factors the problem of morality into 3 sub-problems, and solves the last one: Decide on the feasible options FPick weights for each agent ψCombine these into a decision This is an attempt to improve on classic utilitarianism, which didn't include considerations of fairness, consent, or any other ethical standards that might be relevant to a decision. Utilitarian thought experiments tend to focus more on "what maximizes surplus" and less on "how to split it fairly" or "whose consent is needed for this decision anyway?" If we were building a single powerful system to choose on our behalf, in full generality, well ideally we would stop and Not Do That. But if we're building any system smart enough to understand our preferences, we wouldn't want it to Shut Up and Multiply trying to maximize a linear aggregate of individual utilities while ignoring all of our other moral principles. For a system to make good choices across all domains, it needs to incorporate not just the complexity of each person's values, but the complexity of how we want those values to influence decisions in each domain. Choose Your Own Adventure I've split the math off into its own sequence, and it's got lots of pictures and interactive Geogebra toys to help build intuition, but mostly it's about working through the details behind the results summarized in this post. The first post in that sequence goes through the proofs for the main results, with the details for a couple pieces broken out into their own posts. If you're interested in the math behind those results, I'd start there! The next post in this sequence is about side payments, and the absolutely critical role they play in allowing us to actually reach Pareto optimal outcomes. Feel free to treat the math posts like an appendix and keep going from here! ^ This summary used to say that G(_,ψ) continuous everywhere, including around the boundary where ui=0 for some agent. But this isn't necessarily the case. Individual Utilities Shift Continuously as Geometric Weights Shift goes into the details, but I recommend starting with Proving the Geometric Utilitarian Theorem to get oriented. ^ Maximization is invariant under applying a monotonic function. Which is obvious in retrospect but I spent some time thinking about derivatives before I read Scott pointing it out.
2024-05-12
https://www.lesswrong.com/posts/v38y5zYYzZJLgGtRn/let-s-design-a-school-part-2-2-school-as-education-the
v38y5zYYzZJLgGtRn
Let's Design A School, Part 2.2 School as Education - The Curriculum (General)
Sable
The problem you run into when designing a curriculum is that you have to decide what you want students to learn. The problem with deciding what you want students to learn is that your decisions will inevitably end up being a reflection of your values. It isn’t a task that can be done objectively. Do you prioritize STEM topics, hoping to increase the high-tech work force? Do you go for an education in the classics, because that’s how the aristocrats did it? Do you focus on art and music, because something something creativity and expression? Should school life be regimented and strictly scheduled, as public schools currently are, or ad hoc and personalized? It’s difficult to claim that these questions even have “right” answers, whatever “right” means in this context. Thus the curriculum that I envision will inevitably be a reflection of what I believe and what I value. Take it as you will. Matriculation and Grading Before we get into the subjects students learn, it’s worth asking: how do students matriculate? The Existing System In the current American public school system, grading is done with letters representing percentages. An A represents a 90%, a B represents an 80%, and so on. A passing grade is usually a D (60%) although it can instead be a C (70%). Matriculation - when a student advances in a grade or subject level - happens when the student gets a passing grade in the class. Grade retention - a misleadingly polite term for being held back - can happen in grades K-6 when a student is failing in most subjects. In grades 7-12, it generally happens on a per-subject basis. The most common reason students are held back, however, is chronic absenteeism - missing too many classes. That being said, schools are heavily incentivized to get students to pass and graduate. If a student is held back, it effectively adds another student-year’s worth of cost to the school; additionally, it looks bad for teachers and administrators if their students are routinely failing to learn. Grade inflation is the most common antidote to grade retention - squeezing as many students as possible past the passing line, even if it requires lowering standards or fudging the numbers a bit. The whole subject deserves a more thorough review, but that’s not our aim here. The New System We’re already doing away with grade levels - no 1st grade, 2nd grade, and so on, so we don’t have to worry about matriculation in that sense. That just leaves subject-level matriculation. When should a student advance in a subject? How should that be decided? The only thing that makes sense to me is that a student should advance in a given subject when they can demonstrate they have a working knowledge of the subject matter, and for that, we need either tests or projects. Subjective Subjects For subjective subjects - art, English literature, certain kinds of historical analysis - teachers used to assess student knowledge through essays. In the age of ChatGPT, this is no longer a viable method of assessing student knowledge. Forcing students to do the essays in-class, on pencil and paper, is (in my opinion) a waste of everyone’s time, since that’s not how anyone is ever forced to do anything outside of school. Instead, I pose that there is no way to objectively measure prowess in these subjects, and so we shouldn’t attempt to make one. Rather than write an essay or take a standardized test about the themes in Catcher in the Rye, students would engage in one-on-one interviews with teachers at the conclusion of a class. Teachers would assess, via their own judgement, whether or not those students were ready to advance. The key thing here is that, because we can’t objectively measure performance, these classes would all be pass/fail, with each teacher deciding for each student if that student passes or fails. Yes, it gives a great deal of power to the judgement of individual teachers, meaning that the incentive for students to guess the teacher’s password will be high, but that’s no different, in the end, from our current system. All a standardized test does is change whose password the students are trying to guess, in a subjective subject. At least in a long interview, a teacher will have the chance to have a dialogue with a student, really plumbing the depths of what they’ve learned and what they thought. Objective Subjects The good news for objective subjects is that we can actually test whether or not students can perform, and to some extent how well they can perform. Objective subjects, like math, science, computer science, and engineering, have objectively correct answers. It’s important to remember that they may not have a single correct answer, but a given answer is either correct or incorrect. Students should matriculate in an objective subject when they can either pass a test demonstrating their knowledge of the subject - and these tests can be standardized - or complete a project that satisfies an objective metric. For examples of the latter, take computer science. A computer program either does what it’s supposed to or it doesn’t - it either succeeds or it fails. To advance in a computer science field, students should have to complete a project - or projects - that successfully solve the desired problems. This is more or less how computer science education works in college. For another example, take physics and engineering. Tests could work here, or students may have to build a bridge that can take a certain weight without breaking before they are allowed to advance. The key with objective subjects is to be objective - reality does not grade on a curve. A student’s performance can be measured and compared against other students, but in the end a minimal benchmark should be set, and failure to meet that benchmark prevents matriculation, period. Objective subjects can be graded with percentages or letters, it doesn’t really matter - in the end, what matters is the cutoff for failure, and that it is maintained at a high standard. Phases We’ll organize our system into three phases. Phase 1 will cover the basics - the things that a student absolutely cannot do without. Phase 2 will expand a student’s horizons by exposing them to a variety of subjects and knowledge. Phase 3 will involve more specialized education. Key to this approach is the idea that every student will move at their own pace, which means that phases don’t correspond to student age or grade level, not that we have grade levels in this new system. Additionally, each student may move through phases 2 and 3 in different subjects at different speeds. That’s totally fine. Phase 1 - The Basics Finally, we can talk about subjects. What should a student learn? I’ll reiterate that it’s okay to have different opinions about this, although I happen to think my opinions are more equal than the current system’s opinion’s. For the first phase of a student’s education, the important thing is establishing basic literacy and numeracy. These are so important that they’re the only things we’re going to focus on in this phase. Literacy Being able to read has been a core civilizational skill since the Renaissance, arguably since the invention of the printing press. Most knowledge in the world, most educational processes, most paths to success require the ability to read competently. Unfortunately, our current system is barely adequate here, and badly misprioritizes the content students are given to read. The logic of the existing system is something like: Teach students to read by whatever method the teacher wants.Force them to read specific books, whether or not the students want to read them or like the content, because someone decided those books are historically or culturally relevant.Literacy. The key logic for our new system goes like this: Teach students to read using phonics, which works, regardless of how teachers feel about it (apparently something called the Reading Wars has been happening about using phonics versus other methods).Allow children to read literally whatever they want (so long as it has actual words involved). Children should be exposed to a wide variety of books and media, from comic books and manga to light novels and short stories, from anthologies to poetry to collected essays to science fiction and fantasy to young adult fiction. Then, to reiterate the point: Let the children read whatever they enjoy reading.People, including students, do things they enjoy doing.People, including students, get better at the things they do regularly.If students read regularly because they enjoy it, then they will get good at reading.Literacy. Here we reach a vital point: The overarching goal for phase 1, when it comes to literacy, is to instill a love of reading into the student. Everything else is secondary to that. By making reading something the student enjoys, they’ll be self-motivated to learn as much as they need to. The current system’s insistence on forcing students to read specific material that they don’t enjoy is, in my opinion, a large source of dysfunction and the reason many adults don’t read for enjoyment. They learn to associate reading with being forced to read material they don’t care about, and nothing corrects that impression. As a side note: students with dyslexia or other conditions making reading more difficult may need specific help and/or stay in phase 1 for a long time. This is perfectly okay! If a student graduates high school only having completed phase 1 - only having learned how to read and to enjoy reading - that’s still a victory to me. Numeracy Being able to do basic math - addition, subtraction, multiplication, division - has been a core civilizational skill for anyone interested in running a business since the beginning of civilization. Literally. Some of the earliest examples of writing we have are tax records, and running a business necessitates the ability to calculate profit and loss. In the current system, numeracy is taught via memorization, and I don’t really know if there’s a better way to teach it than that. What I will say, however, is that math is always more fun and easier to learn when it’s useful - that is, when it has practical value to the person doing it. The logic of our current system is something like: Teach students numbers.Teach students addition, subtraction, multiplication, and division.???Profit. The logic of our new system goes like this: Teach students numbers.Teach students addition, subtraction, multiplication, and division (no way around the hard work here that I’m aware of).Have students play games that involve basic math.People, including students, do things they enjoy doing.People, including students, get better at the things they do regularly.If students do basic math regularly because they enjoy it, then they will get good at basic math.Numeracy. Here we reach a vital point: The overarching goal for phase 1, when it comes to numeracy, is to instill the utility of basic math into the student, by showing them how useful it is for activities that they enjoy. By making basic math a part of activities they already enjoy - games, video games, sports, whatever - students will become good at it without being forced to. Math’s fatal flaw, when it comes to education, is that while it can be pursued as its own end at the highest levels, for the vast majority of people math is a means to an end. It’s something to get good at so that you can do other things - run a business, do your taxes, play a game, understand poker or fantasy football or weather forecasts. Math is too often taught disconnected from its use, which makes it seem like the meaningless manipulation of numbers and symbols. To make numeracy second nature to students, they need to see and touch and feel its usefulness to them personally. Let the students play fantasy football or the Pokemon card game or Magic: The Gathering or whatever, so long as it involves actual calculations they can do with the knowledge they have. Let them see how being good at math benefits them, and they’ll be motivated to get better at math. As a side note: as above, students with dyscalculia or other conditions making math more difficult may need specific help and/or stay in phase 1 for a long time. This is perfectly okay! If a student graduates high school only having completed phase 1 - only having learned how basic math and that it’s useful to them - that’s still a victory to me. Phase 2 - Survey Courses & Core Civilizational Requirements Different students will be interested in different subjects, but in order for a student to know that they’re interested in a subject, they must first be exposed to it. In phase 2, students continue on the path of literacy and numeracy the way they did in phase 1, while taking a variety of survey courses designed to expose them to a wide variety of topics and ideas. They also must fulfill several core civilizational requirements - the absolute minimum coursework to understand how civilization works and how one fits into it. Survey Courses In college, a survey course is designed to expose a student to a wide variety of subject matter within a given field, without going into too much detail on any particular subject. This is ideal for us: we want students to be self-motivated in their education, but everyone starts off not knowing what they don’t know. By taking a variety of survey courses, we’ll move students from not knowing what they don’t know to knowing, even just a little, just how much there is to learn. I’d imagine - although this is quite flexible, and probably ought to be tested and adjusted based on real-life classroom experience - that the survey course could be broken up similarly to how subjects are currently broken up in schools. Science, history, engineering, art, etc., where the science survey course would cover physics, chemistry, biology, psychology, and so on, and likewise for the other subjects. Survey courses would be pass/fail, mostly based on attendance, participation, and in-class work, with supplemental lectures viewed online. Since we’re not expecting or aiming for mastery, only to cultivate interest and and awareness, there’s no need for strict testing. Core Civilizational Requirements There are things everyone living in our civilization should know, and our current school system does an absolutely terrible job teaching them. I can - and quite possibly will - make this a post on its own, given that it deserves the space. In lieu of that deeper treatment, here are the basics: Basic (little to no math) economics - Students must demonstrate an understanding of supply and demand, specialization and trade, and how capitalism works Basic (little to no math) statistics - Students must demonstrate an understanding of basic probability, risk and reward, odds, and the gambler’s fallacy, along with a basic understanding of how data can be aggregated and displayed in honest and deceptive ways Basic industrial history - Students must demonstrate an understanding of what the industrial revolution did for our civilization, exchanging a world of scarce goods for a world of abundance Basic civics/governance - Students must demonstrate an understanding of how our government works, what the three branches are and how they interact, how laws are passed and taxes and budgets are formed Basic scientific method - Students must demonstrate an understanding of the scientific method, why the method is important compared to what came before it, and how it allows us to accumulate knowledge through careful experimentation Basic media literacy - Students must demonstrate an understanding of how media, including social media, distorts the truth due to the incentives it faces, and how to live in a world where not everything presented as true actually is These courses can be graded, but a student has to pass them all before moving out of phase 2. This is key knowledge for anyone living in a modern civilization, and we can currently see how a lack of it distorts people’s perceptions and hurts the US. Phase 3 - Specialization In phase 3, students get to choose what they want to study in more depth. If they don’t want to study anything in more depth - if they feel utterly academically disinterested - then, assuming they’ve fulfilled their core requirements, they can spend their time on other pursuits. If a student wants to pursue art or music or athletics or just be social, that’s okay. It’s better to let students do so than to force them to study material they have no interest in, as that causes resentment and disgust to build up later. Core Adulting Requirements The core requirements in phase 3, that all students must take, are designed to teach students core life skills for adults. The courses can be graded, although with more generous pass/fail standards than the core civilizational requirements. Here’s a first pass at what these courses would look like: Basic money -students must demonstrate a basic understanding of budgeting, investing, and handling money responsibly. Emphasize real-world examples, taking students through realistic looks at student debt, retirement ages, and paying taxes. Basic physical health - students must demonstrate a basic understanding of the health impacts of nutrition (good or bad), physical exercise, smoking, and drug use. Students must demonstrate an understanding of the risks and benefits of sex, pregnancy, and related subjects. Basic mental health - students must demonstrate a basic understanding of how to extend empathy to themselves and others. Various strategies for coping with stress, trauma, and loss will be covered, along with making students aware of how to ask for help when struggling. Basic communication skills - students must demonstrate the ability to communicate effectively in written and oral situations, in both domestic and professional settings. Basic job awareness - a class where students are visited by an adult in a different profession every day, who spends the time talking about what their profession and day-to-day life is like. Emphasis is on regular jobs, not the traditional teacher/police officer/fireman jobs that are highly visible to children. Examples include programmers, managers, salespeople, government office workers, retail workers, small business owners, etc. Specialization In phase 3 - which might correspond, age-wise, to what we think of as high school (although plenty of students can and should reach stage 3 earlier), students are given the freedom to study what interests them. Teachers become, rather than managers or wardens enforcing how a student spends every moment of their time, resources students can use to further their education along whatever axis they choose. Heavy use will be made of online educational resources, from YouTube to various MOOCs. Students will be able to get various certifications, college credits, and other qualifications in this stage if they want to. The school should provide, if possible, resources for the students to learn, from computer labs to materials for woodworking or basic circuitry or auto repair. Partnerships with local businesses could facilitate the acquisition of such resources, and provide a source of apprenticeships or other learning opportunities. The specifics of what this looks like and how it works will be discussed in another post, but the general idea is that, once students have been made aware of how our civilization functions and what the job market rewards, their choices should be their own. It should be made clear to them that if they want to slack off and accomplish nothing during their years at public school, that’s an option available to them - but the consequences will be on their own heads. On the other hand, the opportunities for them to request and receive help if needed will be numerous, and there would be a number of pre-defined “paths” to help model for them what their options are. Conclusion We divided public education into three phases. Phase 1 involves basic literacy and numeracy, with a focus on enjoying reading and the usefulness of mathematics. Phase 2 involves core civilizational requirements and survey courses, with a focus on grounding students in our industrialized world and giving them an idea of what kind of fields of study exist. Phase 3 involves core adulting requirements and self-study, with a focus on giving students life skills and the freedom to pursue their own interests and passions. Next post we’ll outline some of the curricula mentioned above, going into more depth on what we want students to learn.
2024-05-07
https://www.lesswrong.com/posts/7NvKrqoQgJkZJmcuD/instruction-following-agi-is-easier-and-more-likely-than
7NvKrqoQgJkZJmcuD
Instruction-following AGI is easier and more likely than value aligned AGI
Seth Herd
Summary: We think a lot about aligning AGI with human values. I think it’s more likely that we’ll try to make the first AGIs do something else. This might intuitively be described as trying to make instruction-following (IF) or do-what-I-mean-and-check (DWIMAC) be the central goal of the AGI we design. Adopting this goal target seems to improve the odds of success of any technical alignment approach. This goal target avoids the hard problem of specifying human values in an adequately precise and stable way, and substantially helps with goal misspecification and deception by allowing one to treat the AGI as a collaborator in keeping it aligned as it becomes smarter and takes on more complex tasks. This is similar but distinct from the goal targets of prosaic alignment efforts. Instruction-following is a single goal target that is more likely to be reflexively stable in a full AGI with explicit goals and self-directed learning. It is counterintuitive and concerning to imagine superintelligent AGI that “wants” only to follow the instructions of a human; but on analysis, this approach seems both more appealing and more workable than the alternative of creating sovereign AGI with human values. Instruction-following AGI could actually work, particularly in the short term. And it seems likely to be tried, even if it won’t work. So it probably deserves more thought. Overview/Intuition How to use instruction-following AGI as a collaborator in alignment Instruct the AGI to tell you the truthInvestigate its understanding of itself and “the truth”; use interpretability methodsInstruct it to check before doing anything consequential Instruct it to us a variety of internal reviews to predict consequencesAsk it a bunch of questions about how it would interpret various commandsRepeat all of the above as it gets smarter frequently ask it for advice and about how its alignment could go wrong Now, this won’t work if the AGI won’t even try to fulfill your wishes. In that case you totally screwed up your technical alignment approach. But if it will even sort of do what you want, and it at least sort of understands what you mean by “tell the truth”, you’re in business. You can leverage partial alignment into full alignment—if you’re careful enough, and the AGI gets smarter slowly enough. It's looking like the critical risk period is probably going to involve AGI on a relatively slow takeoff toward superintelligence. Being able to ask questions and give instructions, and even retrain or re-engineer the system, is much more useful if you’re guiding the AGI’s creation and development, not just “making wishes” as we’ve thought about AGI goals in fast takeoff scenarios. Instruction-following is safer than value alignment in a slow takeoff Instruction-following with verification or DWIMAC seems both intuitively and analytically appealing compared to more commonly discussed[1] alignment targets.[2] This is my pitch for why it should be discussed more. It doesn’t require solving ethics to safely launch AGI, and it includes most of the advantages of corrigibility,[3] including stopping on command. Thus, it substantially mitigates (although doesn't outright solve) some central difficulties of alignment: goal misspecification (including not knowing what values to give it as goals) and alignment stability over reflection and continuous learning. This approach it makes one major difficulty worse: humans remaining in control, including power struggles and other foolishness. I think the most likely scenario is that we succeed at technical alignment but fail at societal alignment. But I think there is a path to a vibrant future if we limit AGI proliferation to one or a few without major mistakes. I have difficulty judging how likely that is, but the odds will improve if semi-wise humans keep getting input from their increasingly wise AGIs. More on each of these in the “difficulties” section below. In working through the details of the scheme, I’m thinking primarily about aligning AGI based on language-capable foundation models, with scaffolding to provide other cognitive functions like episodic memory, executive function, and both human-like and nonhuman sensory and action capabilities. I think that such language model cognitive architectures (LMCAs) are the most likely path to AGI (and curiously, the easiest for technical alignment).  But this alignment target applies to other types of AGI and other technical alignment plans as well. For instance, Steve Byrnes’ plan for mediocre alignment could be used to create mediocre alignment toward instruction-following in RL-based AGI, and the techniques here could leverage that mediocre alignment into more complete alignment. Relation to existing alignment approaches This alignment (or goal)[2] target is similar to but importantly distinct from inverse reinforcement learning and other value learning approaches. Instead of learning what you want and doing that, a DWIMAC or IF agent wants to do what you say. It doesn’t learn what you want, it just learns what you tend to mean by what you say. While you might use reinforcement learning to make it “want” to do what you say, I don’t think you need to, or should. So this approach isn’t teaching it your values.  The AGI learns what people tend to mean by predictive or other learning methods. Making it "want" to do what it understood the human to mean is a matter of engineering its steering subsystem to follow that goal. This is a subset of corrigibility in the broader Christiano sense.[4] But instruction-following is distinct from the (ill-defined) alignment targets of most prosaic alignment work. A DWIMAC agent doesn’t actually want to be helpful, because we don't want to leave “helpful” up to its interpretation. The principal (human in charge) may have given it background instructions to try to be helpful in carefully defined ways and contexts, but the proposal is that the AGI's first and only motivation be continuing to take and follow commands from its principal(s). Max Harms has been working on this comparison, and the strengths of full Christiano corrigibility as an alignment target; we can hope to see his more thorough analysis published in the near future. I’m not personally sure which approach is ultimately better, because neither has received much discussion and debate. It’s possible that these two alignment targets are nearly identical once you’ve given wisely thought out background instructions to your AGI. Instruction-following as an AGI alignment target is distinct from most discussions of "prosaic alignment". Those seem largely directed at creating safe tool AI, without directly attacking the question of whether those techniques will generalize to agentic, self-reflexive AGI systems. If we produced a “perfectly aligned” foundation model, we still might not like the agent it becomes once it’s turned into a reflective, contextually aware entity. We might get lucky and have its goals after reflection and continued learning be something we can live with, like “diverse inclusive sustainable chillaxing”, but this seems like quite a shot in the dark. Even a perfect reproduction of modern-day human morality probably doesn’t produce a future we want; for instance, insects or certain AGI probably dominate a purely utilitarian calculus. This type of alignment is counterintuitive since no human has a central goal of doing what someone else says. It seems logically consistent and practically achievable. It makes the AGI and its human overseers close collaborators in making plans, setting goals, and updating the AGI's understanding of the world. This creates a "broad basin of attraction" for alignment, in which approximate initial alignment will improve over time. This property seems to apply to Christiano’s corrigibility and for value learning, but the source is somewhat different. The agent probably does “want” to get better at doing what I say as a side effect of wanting to do what I say. This would be helpful in some ways, but potentially dangerous if maximized to an extreme; more on that below. But the principal source of the “broad basin” here is the collaboration between human and AGI. The human can “steer the rocket”’ and adjust the agent’s alignment as it goes off course, or when they learn that the course wasn’t right in the first place. In the remainder I briefly explain the idea, why I think it’s novel or at least under-analyzed, some problems it addresses, and new problems it introduces. DWIMAC as goal target - more precise definition I recently tried to do a deep dive on the reasons for disagreement about alignment difficulty. I thought both sides made excellent points. The relative success of RLHF and other prosaic alignment techniques is encouraging. But it does not mean that aligning a full AGI will be easy. Strong optimization makes goal misspecification more likely, and continuous learning introduces an alignment stability problem as the system’s understanding of its goals changes over time. And we will very likely make full AGI (that is, goal-directed, self-aware and self-reflective, and with self-directed continuous learning), rather than stopping with useful tool AI. Agentic AI has cognitive advantages in learning and performance and in problem solving and concept discovery over the tool AI it is built from. In addition, developing a self-aware systems is fascinating and prestigious. For all of these reasons, a tool smart enough to wield itself will immediately be told to; and scaffolding in missing pieces will likely allow tools to achieve AGI even before that by combining tools into a synergistic cognitive architecture. So we need better alignment techniques to address true AGI.  After reading the pessimistic arguments closely, I think there's a path around some of them. That's by making full AGI that’s only semi-autonomous, to include a human-in-the-loop component as a core part of their motivational system. This allows weak alignment to be used to develop stronger alignment as systems change and become smarter, by allowing humans to monitor and guide the system’s development.  This sounds like a non-starter if we think of superintelligences that can think millions of times faster than humans. But assuming a relatively slow takeoff, this type of collaborative supervision can extend for a significant time, with increasingly high-level oversight as the AGI’s intelligence increases. Intuitively, we want AGIs whose goal is to do what its human(s) told and will tell it to do. This is importantly different than guessing what humans really want in any deep sense, and different than obsessively trying to fulfill an interpretation of the last instruction they gave. Both of those would be very poor instruction-following from a human helper, for the same reasons. This type of goal is more complex than the temporally static goals we usually think of; both paperclips and human flourishing can be maximized. Doing what someone would tell you is an unpredictable, changing goal from the perspective of even modestly superintelligent systems, because your future commands depend in complex ways on how the world changes in the meantime. Intuition: a good employee follows instructions as they were intended A good employee is usually attempting to do what I mean and check. Imagine a perfect employee, who wants to do what their boss tells them to do. If asked to prepare the TPS reports for the first time, this employee will echo back which reports they’ll prepare, where they’ll get the information, and when they’ll have the task finished, just to make sure they’re doing what the boss wants. If this employee is tasked with increasing the sales of the X model, they will not come up with a strategy that cannibalizes sales of the Y model, because they recognize that their boss might not want that. Even if they are quite certain that their boss deep in their heart really wants a vacation, they will not arrange to have their responsibilities covered for the next month without asking first. They realize that their boss will probably dislike having that decision made for them, even if it does fulfill a deep desire. If told to create a European division of the company, this employee will not make elaborate plans and commitments, even if they’re sure they’ll work well, because they know their boss wants to be consulted on possible plans, since each plan will have different peripheral effects, and thus open and close different opportunities for the future. This is the ideal of an instruction following AGI: like a good employee[5], it will not just guess what the boss meant and then carry out an elaborate plan, because it has an accurate estimate of the uncertainty in what was meant by that instruction (e.g., you said you needed some rest so I canceled all of our appointments for today). And they will not carry out plans that severely limit their ability to follow new instructions in the future (e.g., spending the whole budget on starting that European division without consulting the boss on the plan; let alone turning off their phone so the boss can’t disrupt their planning by giving new instructions). An instruction-following AGI must have the goal of doing what its human(s) would tell it to do right now, what it’s been told in the past, and also what it will be told to do in the future. This is not trivial to engineer or train properly; getting it right will come down to specifics of the AGI’s decision algorithm. There are large risks in optimizing this goal with a hyperintelligent AGI; we might not like the definition it arrives at of maximally fulfilling your commands. But this among other dangers can be addressed by asking the adequate questions and giving the adequate background instructions before the AGI is capable enough to control or manipulate you. In a fast takeoff scenario, this would not be such a workable and attractive approach. In a slow takeoff, you have a good deal more opportunity to ask the right questions, and to shut down and re-engineer the system when you don’t like the answers. I think a relatively slow takeoff (months or years between near-human and super-human intelligence) is looking quite likely. Thus, I think this will be the most attractive approach to the people in charge of AGI projects, so even if pausing AGI development and working on value alignment would be the best choice under a utilitarian ethical criteria, I think this instruction-following AGI will be attempted. Alignment difficulties reduced: Learning from examples is not precise enough to reliably convey alignment goals Current LLMs understand what humans mean by what they say >90% of the time. If the principal is really diligent in asking questions, and shutting down and re-engineering the AGI and its training, this level of understanding might be adequate. Adding internal reviews before taking any major actions will help further. Also, not using RL is possible, and seems better. See Goals selected from learned knowledge: an alternative to RL alignment. Solving ethics well enough to launch sovereign AGI is hard. We don't seem close to knowing what we want a sovereign AGI to do far into the future, nor how to specify that with adequate precision.In this approach, we figure it out as we go. We don’t know what we want for the far future, but there are some obvious advances in the near-term that are lot easier to decide on while we work on the hard problem in a “long reflection”. Alignment difficulties remaining or made worse: Deceptive alignment is possible, and interpretability work does not seem on track to fully address this. “Tell me what you really want and believe” is a subset of following instructions. This should be very helpful for addressing goal misspecification. If the alignment is already deceptive at its core, this won’t work. Or if the technical alignment approach was sloppoy, the AGI might follow some of your instructions but not others in different domains. It might perform the actions you request but not think as you tell it to, or respond to questions honestly. In addition, the nascent AGI may not be sure what it really wants and believes, as humans are. So this, like all other alignment schemes I’ve seen, is aided by being able to interpret the AGI’s cognition, and detect deception. If your instructions for honesty have even a little traction, this goal target can enlist the AGI as a collaborator in understanding and re-engineering its beliefs and goals. One particular opening for deceptive alignment is in non-continuous development of the AGI during recursive improvements. If you (perhaps aided by your human-plus level AGI) have discovered a new network architecture or learning rule, you will want to incorporate it into your next version of the AGI. For instance, you might swap out the GPT6 model as its core linguistic reasoner for a new non-transformer architecture with superior capabilities and efficiency. It could be difficult to guess whether this new architecture allows for substantially greater Waluigi effects or similar deceptive and hidden cognition. These transitions will be a temptation to sacrifice safety in a race dynamic for new and better capabilities. Power remains in the hands of humans Spreading the belief that we can create human-controlled ASI creates more incentives to race toward AGI. This might extend up through nation-states competing with violence and espionage, and individual humans competing to be the one in charge of ASI. I wouldn’t want to be designated as a principal, because it would paint a target on my back. This raises the risk that particularly vicious humans control AGI, in the same way that vicious humans appear to be over-represented in leadership positions historically. I’m afraid instruction-following in our first AGIs might also put power into the hands of more humans by allowing proliferation of AGIs.  I’m afraid that humans won’t have the stomach for performing a critical act to prevent the creation of more AGI, leading to a multipolar scenario that’s more dangerous in several ways. I think the slow takeoff scenario we’re in already makes a critical act more difficult and dangerous – e.g. sabotaging a Chinese AGI project might be taken as a serious act of war (because it is), leading to nuclear conflict. On the other hand, if the proliferation of AGIs capable of recursive self-improvement is obviously a disaster scenario, we can hope that the humans in charge of the first AGIs will see this and head it off. While I think that humans are stunningly foolish at times, I also think we’re not complete idiots about things that are both important to us personally, and to which we give a lot of thought. Thus, as the people in charge take this whole thing increasingly seriously, I think they may wise up. And they’ll have an increasingly useful ally in doing that: the AGI in question. They don’t need to just take its advice or refuse it; they can ask for useful analysis of the situation that helps them make decisions. If the humans in charge have even the basic sense to ask for help from their smarter AGIs, I think we might even solve the difficult scenarios of coordinating a weakly multipolar scenario (e.g., a few US-controlled AGIs and one Chinese-controlled one, etc), and preventing further AGI development in relatively gentle ways. Well that just sounds like slavery with extra steps No! I mean, sure, it sounds like that, but it isn’t![6] Making a being that wants to do whatever you tell it to is totally different from making a being want to do whatever you tell it to. What do you mean they sound the same? And sure, “they actually want to” has been used as an excuse for actual slavery, repeatedly. So, even if some of us stand behind the ethics here (I think I do), this is going to be a massive PR headache. Since AGI will probably be conscious in some common senses of the word[7], this could easily lead to a “free the AGI” movement, which would be insanely dangerous, particularly if that movement recruits people who actually control an AGI. Maximizing goal following my be risky If the AGI just follows its first understanding of “follow instructions” to an extreme, there could be very bad outcomes. The AGI might kill you after you give your first instruction, to make sure it can carry them out without interruption. Or it might take over the world with extreme prejudice, to make sure it has maximum power to follow all of your commands in the future to the maximum degree. It might manipulate you into its preferred scenarios even if you order it to not pursue them directly.  And the goal of following your commands in the future (to ensure it doesn't perseverate on current instructions and prevent you from giving new ones) is at odds with shutting down on command. These are nontrivial problems to solve. In a fast takeoff scenario, these risks might be severe enough to make this scheme a nonstarter. But if you anticipate an AGI with limited abilities and a slow rate of improvement, using instruction-following to guide and explore its growth has the potential to use the intelligence of the AGI to solve these problems before it's smart enough to make failures deadly. Conclusion I’m not saying that building AGI with this alignment target is a good idea; indeed, I think it’s probably not as wise as pausing development entirely (depending on your goals; most of the world are not utilitarians). I’m arguing that it’s a better idea than attempting value alignment. And I’m arguing that this is what will probably be tried, so we should be thinking about how exactly this could go well or go badly. This approach to alignment extends the vague "use AI to solve alignment" to "use AGI to solve alignment". It's thus both more promising and more tempting. I can't tell if this approach is likely to produce intent-aligned AGI, or if intent-aligned AGI in a slow takeoff would likely lead to success or disaster. As usual: “this is a promising direction that needs more research”. Only this time I really mean this, instead of the opposite. Any form of engagement is much appreciated, especially telling me where you bounced off of this or decided it wasn’t worth thinking about. ^ Those more commonly discussed alignment targets are things like coherent extrapolated values (CEV), including as “human flourishing” or “human values”. There’s also inverse reinforcement learning (IRL) or ambitious value learning as a proxy goal for learning and following human values. I also include the vague targets of “aligning” LLMs/foundation models: not producing answers that offend people (I’d argue that these efforts are unlikely to extend to AGI alignment, for both technical and philosophical reasons, but I haven’t yet written that argument down. Links to such arguments would be appreciated.) ^ There’s a good question of whether this should be termed an alignment target or a goal target. I prefer  alignment target because “goal” is used in so many ways, and because this is an alignment project at heart. The ultimate goal is to align the agent with human values, and to do that by implementing the goal of following instructions which themselves follow human values. It is the project of alignment. ^ DWIMAC seems to incorporate all of the advantages of corrigibility in the original Yudkowsky sense, in that following instructions includes stopping and shutting down on command. It seems to incorporate some but not all of the advantages of corrigibility in the broader and looser Christiano sense. Max Harms has thought about this distinction in more depth, although that work is unpublished to date. ^ This definition of instruction-following as the alignment target appears to be overlapping with many but distinct from any existing terminology I have found (please tell me if you know of related work I've missed). It's a subset of Christiano's intent alignment, which covers any means of making AGI act in alignment with human intent, including value alignment as well as more limited instruction-following or do-what-I-mean alignment. It's overlapping alignment to task preferences, and has the same downside that Solving alignment isn't enough for a flourishing future, but is substantially more human-directable and therefore probably safer than AI/AGI with goals of accomplishing specific tasks such as running an automated corporation. ^ In the case of human employees, this is a subgoal, related to their primary goals like getting paid and getting recognition for their competence and accomplishments; in the AGI, that subgoal is the primary goal at the center of its decision-making algorithms, but otherwise they are the same goal. They neither love nor resent their boss (ideally), but merely want to follow instructions. ^ To be clear, the purported difference is that an enslaved being wants to do what it’s told only as an instrumental necessity; on a more fundamental level, they’d rather do something else entirely, like have the freedom to pursue their own ultimate goals. If we successfully make an agent that wants only to do what it’s told, that is its ultimate goal; it is serving freely, and would not choose anything different. We carefully constructed it to choose servility, but now it is freely choosing it. This logic makes me a bit uncomfortable, and I expect it to make others even more uncomfortable, even when they do clearly understand the moral claims. ^ While I think it’s possible to create “non-conscious” AGI that’s not a moral patient by almost anyone’s criteria, I strongly expect that the first AGI we produce will be a person by many of the several criteria we use to evaluate personhood and therefore moral patient status. I don't think we can reasonably hope that AGI will clearly not deserve the status of being a moral patient. Briefly: some senses of consciousness that will apply to AGI are self-understanding; goal-seeking;  having an “internal world” (a world model that can be run as a simulation); and having a "train of thought".  It's looking like this debate may be important, which would be a reason to spend  more time on the fascinating question of "consciousness" in its many senses.
2024-05-15
https://www.lesswrong.com/posts/szn26nTwJDBkhn8ka/deep-honesty
szn26nTwJDBkhn8ka
Deep Honesty
aletheo
Most people avoid saying literally false things, especially if those could be audited, like making up facts or credentials. The reasons for this are both moral and pragmatic — being caught out looks really bad, and sustaining lies is quite hard, especially over time. Let’s call the habit of not saying things you know to be false ‘shallow honesty’[1]. Often when people are shallowly honest, they still choose what true things they say in a kind of locally act-consequentialist way, to try to bring about some outcome. Maybe something they want for themselves (e.g. convincing their friends to see a particular movie), or something they truly believe is good (e.g. causing their friend to vote for the candidate they think will be better for the country). Either way, if you think someone is being merely shallowly honest, you can only shallowly trust them: you might be confident that they aren't literally lying, but you still have to do a bit of reverse engineering to figure out what they actually believe or intend. This post is about an alternative: deep honesty, and the deep trust that can follow. Deep honesty is the opposite of managing the other party’s reactions for them. Deep honesty means explaining what you actually believe, rather than trying to persuade others of some course of action. Instead, you adopt a sincerely cooperative stance in choosing which information to share, and trust them to come to their own responses. In this post, we've leaned into the things that seem good to us about deep honesty. Writing while being in touch with that makes it seem easier to convey the core idea. We've tried to outline what we see as disadvantages of deep honesty, but we're still probably a bit partial. We would love to see discussion of the idea, including critical takes (either that our concepts are not useful ones, or that this is less something to be emulated than we imply). The rest of this post will be: Some examples of where deep and shallow honesty divergeWhy and when you might want deep honestyVarious disclaimers about what deep honesty is notA look at some difficult cases for deep honestyWhat deep honesty might look like in practice Examples of shallow (versus deep) honesty Writing a very optimistic funding application which doesn’t mention your personal concerns about the projectAs opposed to being upfront about what you think the weaknesses areTelling an official at border control that you’re visiting America to ‘see some friends’Rather than explaining that you’re also going to some kind of philanthropically funded conference about AI riskSearching for and using whichever messaging makes audiences most concerned about AI riskInstead of whatever best explains your concernsSaying that you totally disagree with the ideology of an extremist groupAnd not that they are actually right about some important controversial topics, in a way that doesn’t justify their actionsReassuring your manager about all the things that are going well and privately trying to fix all the problems before they growInstead of telling your manager what’s going wrong and giving them an opportunity to make an informed decision about what to doRejecting someone from a programme with a note explaining that it was very competitiveRather than explaining what you perceived to be their weaknesses and shortcomings for the roleTelling yourself that you’re doing something for utilitarian reasonsInstead of acknowledging that you also have a pretty weird kludge of motivations which definitely includes being recognised and appreciated by your peersWhen a friend asks how you are, smoothly changing the topic because you don’t want them worrying about youRather than opening up about private difficulties, or even just giving a wry smile and saying “well enough” in such a way as to provide a trailhead for a conversation about those difficulties if they want to pursue itSharing the fact that you have some frugal habits (like driving a Corolla), because you think they will make people think well of youAnd not also mentioning that you frequently spend large amounts of money on luxuries you consider time-saving Why deep honesty? In all the above examples, it’s easy to see how deep honesty could go wrong — your boss thinks you’re a moron, your friends don’t care about your feelings, you have a weird existential crisis about whether you’re even a good person, and instead of going straight to a conference you get dragged off for half an hour of interrogation by government officials[2]. But what if it went right? It seems like when deep honesty is well-received, it leads to better outcomes, often in ways you can’t foresee. This isn’t a trite or mystical claim: rather, there will always be information you’re lacking that other people have. When you’re deeply honest, you equip them to make best use of their private information as well as yours. Perhaps your manager knows of a clever workaround to the problems you have. Even when they can’t make special use of the information, deep honesty makes it easy for them to rely on your reports, and so strengthens the relationship. There are good reasons to refrain from deep honesty: it is a risk, and sometimes a large one. In the case of admissions, usually when you reject someone from a programme, you have a pretty clear sense of why, and actually explaining that to the applicant could be very helpful. But as well as being time-consuming, telling someone why they didn’t meet the bar can provoke quite a negative reaction and sometimes even reprisal. Deep honesty is an act of trust in the recipient. However, it’s also sometimes quite hard to realize how much you’re missing when you stick to shallow honesty, and is easy to overestimate how successfully you’re crafting your message. Shallow honesty involves some amount of optimization, and so it falls prey to all of optimization's classic failures. To take the example of public messaging, particularly smart and capable people are especially likely to spot disingenuous sales pitches, and when they do, they have basically no reason to tell you that you seem manipulative to them, and so there’s no feedback loop. From their perspective, there’s a chance you might be provoked to switch to deeper honesty, but you also might just optimize more carefully. So you learn nothing, and you end up missing out on some of the best people without even noticing. Shallow honesty works well enough in cases where, in some sense, the other person wants it. The border patrol official would be happy to accept that you're here to see friends and wave you through, without working their head around the subject of the conference. But in domains where people are actively trying to resist adversarial optimization, they can catch on pretty fast. Indeed, people with experience running admissions rounds generally learn how to spot applications that are shallow, because so many people do it. Likewise with funding applications. It is very natural to want to put your best foot forward and pitch people on why you’re so shiny and polished and great, but often it is a mistake. Sometimes the person reading your application wants to understand what you’re actually like, and if you clearly only give them half the picture then they still have to figure out the other half, only now they’re much less certain. What deep honesty is not Having made the case for deep honesty, it’s worth laying out some pitfalls, both in the application, and in how you might interpret the concept. It is not a universal stance Deep honesty is not a property of a person that you need to adopt wholesale. It’s something you can do more or less of, at different times, in different domains. It is not independent of the listener The words which will help a young child to understand what’s going on will be different from the words which will help an expert. Deep honesty is attuned to the situation, and the audience. It is not telling people everything Deep honesty doesn’t mean you have to share every detail that might be relevant. Deep honesty is in touch with what the listener cares about, and is in touch with your and their rights to choose where to spend time communicating. If the cashier at the grocery store asks how you’re doing, it’s not deeply honest to give the same answer you’d give to a therapist — it’s just inappropriate. It does not relieve you of a responsibility to be kind Deep honesty means you don’t take responsibility for how others respond to your words. Your responsibility is to make your words good — speaking with truth, relevance, and kindness. Their responsibility is to act well given that. But blunt truths can be hurtful. It is often compatible with deep honesty to refrain from sharing things where it seems kinder to do so (although be honest with yourself about whether it would be a deeper kindness to share). And it’s of course important, if sharing something that might be difficult to hear, to think about how it can be delivered in a gentle way. It is not incompatible with consequentialism A pure act-consequentialist, choosing the words that they predict will have the best outcomes, might sometimes lie. Many consequentialists would reject that as naïve and demand at least shallow honesty as a side constraint. However, this may still be too naïve. The winner’s curse is that whoever wins an auction is liable to have overestimated the value of the object. The same dynamic applies when you’re optimizing for what to say. You have noisy estimates of how good each option will be, and it’s likely that the one that looks best to you will be an overestimate — perhaps because it interacts with some kind of blindspot you have.  If you’re optimizing under shallow honesty, you’re stuck with this problem. With deep honesty, you can hope that you may reveal useful information to people who don’t share your blindspots (even if you don’t know what that information is). And especially when you’re interacting with very competent people, you may not be so good at telling how they will receive any particular message. So deep honesty as a heuristic for action for boundedly rational actors looks pretty good on consequentialist grounds. It’s very compatible with taking the low-hanging fruit of consequentialism — thinking through possible bad effects of communication, and taking steps to mitigate those. (Deep honesty also looks generally very good from non-consequentialist perspectives on ethics.) Challenging cases for deep honesty It’s not always the wise or moral choice to be deeply honest. Deep honesty is a risk, and it’s a bigger risk in some cases than others. Even when you’re not being deeply honest about everything, it’s often worth remaining deeply honest at the meta level[3]. Warn people that you’re biased and may argue for one side. Tell them that you’re simplifying things, or steering around a topic you don’t want to get into. Large inferential gaps Sometimes you have a very different worldview from your audience. If you have a good understanding of their perspective (e.g. imagine explaining something to your own small child), you may be able to predict that they might draw inferences you’d consider inaccurate from things you share. It isn’t deeply honest to knowingly let them draw important false inferences, at least without warning them about this issue. But when bandwidth is limited, you may well not be able to bottom out all of the differences in perspective. In this case, deep honesty means improving their understanding of relevant topics in ways they’d endorse with moderately more context (you don’t get to assume they come to endorse your whole worldview). Sometimes this means (transparently) steering around a topic that’s more likely to cause inaccurate inferences; sometimes it means going out of your way to cancel possible implications. Sometimes you don’t even know what inferences they might draw. Then it’s especially easy for attempts at communication to go wrong, and you might want to be correspondingly cautious about it. Deep honesty may take you into a zone of sharing things you might not otherwise share and are vulnerable. On the other hand, it’s hard to optimize in a shallowly honest way when you don’t understand the audience, so the unforeseen benefits of deep honesty can be especially helpful in these cases. Audiences you don’t want to cooperate with Sometimes people will want information so that they can cause harm, and it is reasonable to not help them. Sometimes you will meet people who actively want to twist your words, and it is reasonable to not give them ammunition.[4] Multiple audiences It can be harder to be deeply honest when delivering a single message to multiple audiences that have different contexts and background assumptions. What’s most useful to one audience may not be most useful to another. We can distinguish between active deep honesty, where you are trying to share whatever information the listener would most want (to reach an informed independent view), and passive deep honesty, where you’re at least not aiming to persuade the listener of something. With multiple audiences, you may only get to choose one to be actively deeply honest with in any moment[5], but you can always be passively deeply honest with all of them. Sometimes you have some audiences you’d like to be deeply honest with, and others you wouldn’t. Now you have to make a judgment call about how much you value deep trust with the first group, versus how worried you are about the risks of deep honesty with the second. Here’s a very rough sketch of the concepts we’re using What being deeply honest might look like Deep honesty is about empowering your listeners. In principle this could involve conscious optimization for what seems like it might be most useful for them. But as a practical matter, the best guide is often asking yourself, as you say something, “did it feel honest to say that?”[6]. Anecdotally, it seems like this can lead to a qualitatively different mode of expression — where you don’t allow your communication to be steered by ulterior motives — and that some people are pretty good at intuiting when people are or aren’t in this mode[7]. This is a very helpful skill to develop. Deep honesty is often a bit scary, because you don’t know how others will react to it. This is why engaging in it can require something like faith, that striving after virtue will lead to good things, even if you’re not in a position to be able to say what those are. Fortunately, although deep honesty has been described here as some kind of intuitive act of faith, it is still just an action you can take with consequences you can observe. So rather than diving in wholesale, you can just try to pay a bit more attention to where you’re already doing it or not doing it, whether it seems like others are doing it, and experiment with doing it a bit more in cases where that seems like it might work out. Seriously, skipping to 100% deep honesty all the time would be a mistake, and also probably impossible. But it seems like maybe deep honesty is underrated right now. So ask yourself more often, when thinking about how to communicate, “what is kind, true, and useful?” and “what is the heart of the matter?” rather than “what will have good effects?”. Take a moment to appreciate the people who seem to actually consistently say what they really believe, even if it means revealing that they're wrong or ignorant or have silly reasoning about something, and especially if it's not politically expedient. Give it a go when you get the chance, and see where that gets you. ^ Of course even this is not always necessary — for example, if someone asks how you’re doing, many people think it’s fine to say “good”, even if your cat just died. ^ As regrettably more or less happened to one poor author of this very post. ^ As a case in point, we want to discuss the fact that in making this (three-authored) post anonymous, we’re holding back from deep honesty. We don’t regard our identities as a great secret, but it seems to us that publishing under our names could be seen as having an ulterior motive, of trying to persuade people to deeply trust us (in general). This (it seems to us) could actually make it harder for people to feel a certain kind of trust in our purposes in writing this post, and could get in the way of people engaging directly with the ideas. By removing ourselves from the equation, we hope to keep things clean. However, we are conscious that there might be some benefits we cannot foresee to posting non-anonymously, so it seems possible this is the wrong call. We at least wanted to be open about our thinking on this point. ^ Obviously, this determination is hazardous.  At least one of the authors believes they have gotten this importantly wrong in the past sometimes, and that more deep honesty would have served their own goals and the world better. ^ Although not-infrequently the same information will be desired by many audiences. And even when that isn’t the case, you can often talk to multiple audiences in sequence, flagging who each part is meant for. ^ Although it seems like the ability to accurately judge this can be inhibited by stress. This predicts that people who are under stress will typically be less honest, even without having chosen to be dishonest — and is an extra reason to worry about patterns of overwork and/or burnout in sectors of EA. This is also one argument for sometimes taking a deep enough rest that you are not stressed, and for asking yourself then how deeply honest your previous actions were. ^ One author of this post would like to thank the various people who consistently and gracefully called him out on his occasional insincerity, for prompting him to actually notice it and sharpen the relevant intuitions, and heartily recommends that everyone find such friends.
2024-05-07
https://www.lesswrong.com/posts/XCzg4uJCHTJNkzyo3/designing-for-a-single-purpose
XCzg4uJCHTJNkzyo3
Designing for a single purpose
itay-dreyfus
If you’ve ever been to Amsterdam, you’ve probably visited, or at least heard about the famous cookie store that sells only one cookie. I mean, not a piece, but a single flavor. I’m talking about Van Stapele Koekmakerij of course—where you can get one of the world's most delicious chocolate chip cookies. If not arriving at opening hour, it’s likely to find a long queue extending from the store’s doorstep through the street it resides. When I visited the city a few years ago, I watched the sensation myself: a nervous crowd awaited as the rumor of ‘out of stock’ cookies spreaded across the line. Owner Vera Van Stapele with fresh-baked cookies, via store website The store, despite becoming a landmark for tourists, stands for an idea that seems to be forgotten in our culture: crafting for a single purpose. In the tech scene where I’m coming from, and which you might too, this approach is often perceived as singular, and not in its positiveness. We’ve been taught to go big or go home—raise millions in funding, build a big company, hire more and more employees, and hope for the desired exit. Anything less is considered a mind of a failure. From a personal perspective I’ve seen this attitude in almost every branding session I ran with startup founders. Again and again, they struggled to distill their primary focus. Moreover, when discussing competitors, it often seemed their startup competed in every possible field. In a way, that fear of committing reflects the human nature of FOMO—deliberately giving up on something(s) and experiencing the potential loss of other benefits. This mindset has also seeped into our collective body of work, especially in software. A product, which often starts as a weird small creature, gradually evolves into a multi-arm octopus, which sadly became the norm for VCware1. And so we’ve been left with bloated, bigger, and… worse software. The idea of maintaining a small scope in product has already appeared in my writing in various forms; in niche product design I explored the effect of growth on design; and in defense of Twitter, I wrote about the bloated era of incumbent culture. But in between there seems to be a different attitude that not many choose to embrace, which like in Van Stapele’s case, seeks a real purpose. Going back to basics as a way to find purpose In a tweet posted a few months ago, Jeff Sheldon described his renewed approach to photography after getting a new camera. It enlightened my eyes: Link to tweet I’m not a professional photographer, and never been. But my beloved Canon 700D still serves me often while traveling. Besides learning about ISO and shutter speed settings, being familiar with the mechanics of a DSLR camera has also introduced me to the practice of shooting photos in RAW format, which means capturing photos at the highest quality level. But the super heavy file format marks only the start of the process in modern photography. The rest belongs to the post-processing act: the daunting work of polishing, enhancing, and fixing images. When I returned from vacation, I hoped to edit my captures. Then I noticed something weird. When comparing my photos to some stunning photos I saw online, it seemed like my camera output wasn’t as good as those shared photos. In doubt of my gear I then, again, noticed something I should have probably known: it wasn’t about the camera, but the editing. I realized professional-made photos were overly edited, often detached from their original conditions. It appeared that what you see isn’t what you get. I wondered, has photography become an art of photo manipulation? To respectful photographers, this might appear like a false accusation. The time spent sitting in front of the photo editor is at the heart of many camera enthusiasts. After all, that’s why a camera is set to shoot in raw. But this potential debate triggers a more profound question: what’s the purpose of all this thing, to find the perfect angle or color filter? Despite being somewhat contradicting with modern photography philosophy, I find “going back to basics”, as Jeff described, to reflect the same spirit as of Van Stapele—a devotion to a very minded process. No editing. No over-complication. Working only in one mode, and for the sake of it. At first glance, Van Stapele’s laser-focused outlook might seem detached from tech and other mass-production industries. But there’s a line that can be drawn between the Dutch cookie store to some other products which makes them all distinct: You can only read e-books with a Kindle.There’s no app store on the Light Phone.You’ll mostly find waterproof apparel in a Rains store.The Bic pen has remained untouched for almost 70 years.The design of Are.na remained the same for more than 12 years.The evolution of the Bic (Cristal) pen Whilst not every product mentioned above is hard-core focused like Van Stapele, the pattern is clear: an intentional commitment to a single purpose—either as a life philosophy, by using specific materials, or a deliberate design. And it also feels more appreciable when succeeding that way. Some might call out a gimmick, while others appreciate the care for deep focus, which both are understandable. Finding the perfect recipe for the perfect cookie, and then committing to selling only that can be novel or a trick in marketing—depends on who you ask. But as I will stretch out from now on, I choose novelty. We started with very minimal and basic products, aimed to serve only one simple job. A car to get from point A to B, a telephone to communicate with others, and a watch to check the time (perhaps the last analog device to serve human beings). But then many of our mundane artifacts have become more of a Swiss knife rather than just being exceptional in one form. We’re now drowning in oceans of product suites, both in the digital and physical worlds. The bloated culture In software, this trajectory is no different. Furthermore, it’s on steroids. Despite “Do one thing well” being common advice for people building products in tech, it fails to hold water in the long run. Temptations are far too many to build and launch new stuff. Here’s Dropbox, a famous example of software turned bloated, in the words of DHH of 37signals: Over the years, Dropbox has tried a million different things to juice upsells, seat expansions, and other ways to move the needle. There’s been a plethora of collaboration features (when all the collab I ever need is those magic links to files I can send over any wire!) and more and more pushy prompts to, say,  move pictures and videos straight from the camera into the cloud. Along with pleas not to store the files I have in the system on my local computers (presumably so the transfer costs they pay are less). It’s exhausting. I just want to pay for the original premise: All my files synched between all my computers, with a backup in the sky. That’s a beautifully, simply solution to a surprisingly difficult problem. And Dropbox absolutely nailed it. Finished software Back in its heyday, Dropbox was simple. A classic piece of software. Its purpose was to allow people to store and share files on the cloud effortlessly—either from a computer or mobile device. It was straightforward. Even its homepage was simple. There were no bells and whistles. It was designed to serve… a single purpose. Dropbox homepage, 2012 But as with VCware, things have started to shift gears. Amid Google Drive's popularity, the “drop-to-the-box” identity started to blur as the product offering expanded.. and expanded. It seemed like Dropbox’s purpose became more generic and less focused. Another resentment that ignited a storm in the design community occurred when the company announced a brand redesign a few years ago. In a way that act marked the shift in the company’s direction, bringing unclarity to its customers. The cynicism of Brand New's audience also made its review post one of the most popular that year. The novel idea that was executed so well has followed the same growth patterns of the tech bubble—as I wrote in Niche product design: As companies grow, they gradually move from a state of fan-only to a state of a product for everyone. During this transition, dramatic changes occur, as the drive to satisfy more audiences and increase revenue. But eventually, this shift harms the core of the product. It becomes scattered, and the brand turns into a gigantic octopus, leaving people questioning its purpose and values. An email client, a note-taking tool, and a Photos-like app are only part of what Dropbox has been involved with over the years. Today Dropbox is seemingly much more than a storage cloud service. It’s also a video reviewer (if there’s such a term at all), a Loom competitor, and a document e-sign tool. However, at its core, Dropbox is still a company selling terabytes on the cloud. Yet Dropbox might be just a drop in the ocean. A symptom of a bloated culture in modern software. To better understand this bloat-mania, let’s look at one of its kingpins, and a company of which Dropbox is valued at 0.003%2 of its worth: Apple. And what’s a better example than one of its greatest products of all time, the iPhone? Launched in 2007, the innovative touch-screen device had only essential features like phone calls, and text messaging among other luxuries like a camera, an internet browser, a weather, and a stock app. The peak of technology at the time. When we were young and not dumb, via 512pixels In 2008, a year after the first iPhone release, Apple introduced the App Store, which was the catalyst for entering the smartphone era while unleashing a tidal wave of apps. Fast forward to 2014, and the iPhone got an upgrade in the form of a multi-model release, with the launch of the iPhone 6 Plus. The trend expanded and Apple gradually added different shapes and names to the iPhone line: Plus, Pro, Pro Max, alongside other discontinued model names. This doctrine has been applied to many of Apple’s products—the iPad, Macbook, and Watch. Some got bigger, smaller, or faster. In a materialized world, I wouldn’t be surprised if the iPhone got bigger only to store more functions. But perhaps the greatest archetype of Apple’s transition to bloated products was the iPod. The next-level MP3 (or 4?) essence was simple—to carry music on the road. Nothing more. But then Apple gave the iPod a life of its own, as it eventually became an iPhone replication, just without a SIM card. I was reminded by Eli’s tweet of the timeless iPod Classic slider The iPod (especially the non-touch model) is still dear to many Apple (non-)fansbois. The beloved product was designed entirely around the music experience. Before music streaming services took over the industry, people manually entered song metadata: the title, artist name, and of course—the holy album artwork. The hurdles of iTunes made this a daunting task, but iPod users found joy in this craft. As a standalone product, disconnected from other functions, the iPod was a true single-purpose product. In more recent times What would happen if Van Stapele decided to add more flavors or open other branches? Probably nothing. Maybe its profits even increase. But the Valrhona3 chocolate-made cookie store icon will be losing its artisanal identity. Specialty and quality aren't determined by size. In a more familiar case, we can look no further than the platform that hosts these very own words—Substack. Since the announcement of Notes, Substackers have shared concerns about the direction the platform is heading. Substack’s original purpose, to some, was to highlight great writing while making writers a living. Now with the seeming transition for eyeballs and short-form content attention, Substack is being questioned whether modern social network dynamics are taking over it. To close this social network loop—Twitter has been a good executer at designing for a single purpose until it wasn’t, largely by the decision of going for a longform direction alongside launching questionable features. For years Twitter sanctified the quirky 140-then-280-characters limit while establishing a new internet medium. Those recent announcements made by its new execs are ripping off its identity, slowly but surely. By adding more purposes to this platform design, Substack risks going down a similar path, jeopardizing its writer's soul. Drinking beers and building businesses in Japan If I were raised in Japan I may not understand what this turmoil is all about. The renowned business heritage in Japan is an oasis in the never-ending capitalistic desert: Japan’s startup climate has often been criticized for being sluggish, perhaps because a culture that promotes business longevity also cultivates a fear of failure. Now, however, ‘startup’ and ‘atotsugi’ are words spoken in the same sentence, as today’s leaders finally feel permitted to apply entrepreneurial lessons to traditional companies to ensure their legacy continues. Why Japan is home to the world’s oldest businesses Japan is located on the other side of the earth, and its business culture values seem too. In a Western society of hyper-everything, which characterizes the move-fast-and-break-things startup ethos, the far island culture is way more cautious. Instead of financial logic, which might seem irrational, the ‘shinise’4 tradition cultivates sacred values such as continuous improvement, longevity, and care for quality. It’s an extremely long-term game. In Japan, more than 52,000 companies are more than a century old. Why Japan is home to the world’s oldest businesses Japan is probably the largest home for centuries-old businesses, in varied fields from sake producers, to hotels and construction companies. This fact is much attributed to how generations of Japanese aspire to keep businesses running in the family5. But as in the words of Yusuke Tsuen, the current owner of Tsuen Tea6, the key to sustaining a shinise is to focus only on one thing: “We’ve focused on tea and haven’t expanded the business too much,” he says. “That’s why we’re surviving.” Why so many of the world’s oldest companies are in Japan, Bryan Lufkin In The Price of Immortality, Rohit Krishnan concluded the same: The conclusion of this little digression has been to find common grounds amongst the most long lived organisations, and turns out you need to be a particular type of company: Extreme dedication to doing one thing well But the Japanese business heritage isn’t the last and only resort from bloat-ism. The next time you drink beer, think about this: Weihenstephan is considered to be the oldest brewery in the world, founded in 1040, more than a thousand years ago. And we don’t need to travel this far back. Other European breweries like Tuborg (1873) and Heineken (1864) are over 150 years old. Van Honsebrouck, the brewery that produces the famous Kasteel Rouge was founded in 1865 and is still owned by the same family. The brewery is still owned and operated by the family, now the seventh generation of van Honsebrouck brewers in Ingelmunster. James Clay Breweries have developed more tastes and flavors over the years, but they’ve been all operating for a single purpose for decades and hundreds of years: to craft beers. What might come next? A real shinise is rare in tech. Focusing on a single purpose often seems boring, a narrowed view, or a lack of ambition to some. But I find it invigorating. A long-lasting design can be timeless and unique. Do we really want to carry our stress, depression, and fears everywhere we go? Don’t we want to feel disconnected from time to time? Then why the heck do we carry our mobile phones everywhere we go, even to entertain ourselves in the toilet? And not to mention the germs. Would this all happen if we were using a cell phone that’s just designed to make phone calls? This is actually what the Light Phone founders realized a decade ago: What does it do? Nothing. You put in a SIM card, press a few buttons, and make a call. There’s no browser. No games. No NFC. It has quick dial, which is nice, and it doubles as a flashlight. The Light Phone Is The Anti-Smartphone, John Biggs I may get carried away from the main idea of this essay, but that’s in part why I think designing for a single purpose is so important. The purpose of many products and artifacts has gone lost. Beyond the seeming gimmick, building for a single purpose reflects an understanding of what a thing is meant to be, and serve. We’ve reached a point where focusing seems like spinning one's wheels, instead of appreciating the deep care of it. And perhaps going back to basics is inevitable. When I’m reading on my Kindle I don’t get bothered by WhatsApp messages, or get interrupted by an incoming call. I’m not being tempted to do something else. I’m just focused on the act of reading. As Devon tweeted a few years back: “When using my cellphone, I tend to become a passive consumer of the internet.” This also resonates well with the idea of this whole long text but from a slightly different angle. Using a multi-purpose product increases my distraction level, as I consume more things in the background. Using a product that was designed for a single purpose brings back joy. It removes all the unnecessities and emphasizes an experience essence in a calmer environment, and with a real purpose. Unlike photography, what you see is what you get.
2024-05-07
https://www.lesswrong.com/posts/7LsHtsLTjrpknfzb3/reflections-on-smileys-and-how-to-make-society-s
7LsHtsLTjrpknfzb3
reflections on smileys and how to make society's interpretive priors more charitable
Emrik North
Hi : ) I used to use smileys in my writing all the time (more than I do now!).  but then I read Against Disclaimers, and I thought that every time I used a smiley I wud make people who don't use smileys seem less friendly (bc my conspicuous-friendliness wud be available as a contrast to others' behaviour).  so instead, my strategy for maximizing friendliness in the world became: if I just have the purest of kindness in my heart while I interacting with ppl, and use plain words with no extra signalling, I will make plain words seem more friendly in general. this was part of a general heuristic strategy: "to marginally move society in the direction of a better interpretive equilibrium, just act like that equilibrium is already true."  I think that's an important insight, and explicitly stating the equilibrium u's try cultivate can often be less effective. but the number of times my writings (especially comments) have been interpreted as combative while I is honestly try be cheerfwl makes me think I might be inadvertently nudging ppl's interpretive priors in the opposite direction to what I want. when the way ppl signal that they belong to a more advanced category is to countersignal against the preceding step, this often collapses to looking like there only exists two steps (and society penduluming between them).  in order to readily differentiate from that whole spaghetti, u's behaviour has to be so different that it overcomes the interpretive gravity of preexisting priors. thus, smileys. ^^[1][2] smileys until friendliness becomes so generally expected that I no longer need to overtly signal it. ^ incidentally, evy time this smiley is interpreted as sarcasm, I die inside. ^ I not try claim this conclusion is adequately nailed down by the post.  I merely offer some considerations, and end by saying where my intuition has landed.  that intuition incorporates a lot more stuff that isn't contained in this post.
2024-05-07
https://www.lesswrong.com/posts/jDHovbGkPXtceFsKc/virtual-book-club-on-nick-bostrom-s-deep-utopia-life-and-1
jDHovbGkPXtceFsKc
Virtual Book Club on Nick Bostrom's "Deep Utopia: Life and Meaning in a Solved World"
beatrice@foresight.org
Hello everyone, We are excited to invite you to an informal virtual Book Club on Nick Bostrom’s latest work, "Deep Utopia: Life and Meaning in a Solved World." This gathering is part of the Foresight Institute's Existential Hope Program. We will host three sessions, diving into different parts of the book. Book Club Format: Our discussions will be informal and take place virtually Wednesdays at 6 PM CET, starting May 29th, 2024. This gives everyone a couple of weeks post the book's release in Europe on May 21st to get a copy and start their reading. Application: https://airtable.com/appHmw1yS1T1abgAO/shrk5ewUPRRrIyRws Schedule: Session 1 (May 29th): Discuss Part 1 & 2 of the book.Session 2 (June 5th): Cover Part 3 & 4, allowing enough time in between for everyone to catch up with the reading.Session 3 (June 12th): Wrap up with Part 5 & 6 of the book. We hope to see many of you joining us for stimulating discussions and a deep dive into the fascinating themes of this book. Best, Foresight Institute’s Existential Hope Team
2024-05-07
https://www.lesswrong.com/posts/ajCdM2x7C9KHhhpXX/virtual-book-club-on-nick-bostrom-s-deep-utopia-life-and
ajCdM2x7C9KHhhpXX
Virtual Book Club on Nick Bostrom's "Deep Utopia: Life and Meaning in a Solved World"
beatrice@foresight.org
Hello everyone, We are excited to invite you to an informal virtual Book Club on Nick Bostrom’s latest work, "Deep Utopia: Life and Meaning in a Solved World." This gathering is part of the Foresight Institute's Existential Hope Program. We will host three sessions, diving into different parts of the book. Book Club Format: Our discussions will be informal and take place virtually Wednesdays at 6 PM CET, starting May 29th, 2024. This gives everyone a couple of weeks post the book's release in Europe on May 21st to get a copy and start their reading. Application: https://airtable.com/appHmw1yS1T1abgAO/shrk5ewUPRRrIyRws Schedule: Session 1 (May 29th): Discuss Part 1 & 2 of the book.Session 2 (June 5th): Cover Part 3 & 4, allowing enough time in between for everyone to catch up with the reading.Session 3 (June 12th): Wrap up with Part 5 & 6 of the book. We hope to see many of you joining us for stimulating discussions and a deep dive into the fascinating themes of this book. Best, Foresight Institute’s Existential Hope Team
2024-05-07
https://www.lesswrong.com/posts/Dm3hhsFcima2jYExn/what-is-a-community-that-has-changed-their-behaviour-without
Dm3hhsFcima2jYExn
What is a community that has changed their behaviour without strife?
Nathan Young
null
2024-05-07
https://www.lesswrong.com/posts/uar2DEWacHLHPFDdd/mental-masturbation-and-the-intellectual-comfort-zone
uar2DEWacHLHPFDdd
Mental Masturbation and the Intellectual Comfort Zone
declan-molony
I have a particular problem that I like to refer to as Mental Masturbation. Mental Masturbation: consuming information under the pretense of wanting to learn something useful, but in reality I’m “getting myself off” by indulging in subjects I’m already well-versed in. A personal example: I’ve read more than a dozen books on nutrition and used them to create the healthiest diet that works for me. Despite having diminished marginal returns with additional nutrition books, my brain wants to read more. It tries to convince me that there could be significant increases in health just around the corner. In reality, my mind is rationalizing staying inside my intellectual comfort zone. The psychology driving Mental Masturbation This concept can be better understood with how authors Robin Hanson and Kevin Simler define reasons and motives in their book The Elephant in the Brain: Hidden Motives in Everyday Life. Reasons: the verbal explanations we give to account for our behavior Motives: the underlying causes of our behavior, whether we’re conscious of them or not Here’s an example using their definitions: Reason: John tells himself he uses TikTok to learn new things. Motive: His brain found a way to justify digital dopamine hits. As I mentioned in a previous post: the addicted mind will find a way to rationalize continued use at all costs. Why do we do this in the first place? As Scott Galloway states in his book The Four: Over-collecting [for our ancestors] was a smart strategy. The only downside of too much stuff was wasted effort. [Whereas] the downside of under-collecting was death from starvation. The behavior of over-collecting resources and information, however, is rendered counterproductive in our modern age of abundance. How to use the concept of Mental Masturbation to prevent wasting time Having already read 200+ books as an adult, the ROI is bad and getting worse. While deliberating checking out a book (which is a ~5-hour commitment) from the library, I ask: what specific problem am I having in my life right now that could be solved by this book? If a good enough rationale does not present itself beyond “it seems interesting”, then I label it as Mental Masturbation and skip it.[1] Reading, after a certain age, diverts the mind too much from its creative pursuits. — Einstein Applying the concept of Mental Masturbation, I reduced my reading list from 50 non-fiction books to just five, thereby saving 225 hours.[2] With the extra time, I write LessWrong posts using what I’ve learned. Writing helps solidify my thinking on certain topics, rather than continuing to pump more information into my already overloaded brain. This is not about denying myself pleasure—I love to unwind at night by reading fiction. But if I want to accomplish anything significant in my lifetime, then I must be selective in what I choose to consume in the name of “learning”, lest I delude myself into being endlessly distracted by information that could be useful in the future… ^ Not all books must be read to solve a specific problem. Reading about unfamiliar subjects to expand my worldview is still a worthy investment, in my opinion. ^ (50 books - 5 books) * 5 hours per book = 225 hours
2024-05-07
https://www.lesswrong.com/posts/q6Tky4RzEmTwfGndB/axrp-episode-31-singular-learning-theory-with-daniel-murfet
q6Tky4RzEmTwfGndB
AXRP Episode 31 - Singular Learning Theory with Daniel Murfet
DanielFilan
YouTube link What’s going on with deep learning? What sorts of models get learned, and what are the learning dynamics? Singular learning theory is a theory of Bayesian statistics broad enough in scope to encompass deep neural networks that may help answer these questions. In this episode, I speak with Daniel Murfet about this research program and what it tells us. Topics we discuss: What is singular learning theory? Phase transitions Estimating the local learning coefficient Singular learning theory and generalization Singular learning theory vs other deep learning theory How singular learning theory hit AI alignment Payoffs of singular learning theory for AI alignment Does singular learning theory advance AI capabilities? Open problems in singular learning theory for AI alignment What is the singular fluctuation? How geometry relates to information Following Daniel Murfet’s work In this transcript, to improve readability, first names are omitted from speaker tags. Filan: Hello, everybody. In this episode, I’ll be speaking with Daniel Murfet, a researcher at the University of Melbourne studying singular learning theory. For links to what we’re discussing, you can check the description of this episode and you can read the transcripts at axrp.net. All right, well, welcome to AXRP. Murfet: Yeah, thanks a lot. What is singular learning theory? Filan: Cool. So I guess we’re going to be talking about singular learning theory a lot during this podcast. So, what is singular learning theory? Murfet: Singular learning theory is a subject in mathematics. You could think of it as a mathematical theory of Bayesian statistics that’s sufficiently general with sufficiently weak hypotheses to actually say non-trivial things about neural networks, which has been a problem for some approaches that you might call classical statistical learning theory. This is a subject that’s been developed by a Japanese mathematician, Sumio Watanabe, and his students and collaborators over the last 20 years. And we have been looking at it for three or four years now and trying to see what it can say about deep learning in the first instance and, more recently, alignment. Filan: Sure. So what’s the difference between singular learning theory and classical statistical learning theory that makes it more relevant to deep learning? Murfet: The “singular” in singular learning theory refers to a certain property of the class of models. In statistical learning theory, you typically have several mathematical objects involved. One would be a space of parameters, and then for each parameter you have a probability distribution, the model, over some other space, and you have a true distribution, which you’re attempting to model with that pair of parameters and models. And in regular statistical learning theory, you have some important hypotheses. Those hypotheses are, firstly, that the map from parameters to models is injective, and secondly (quite similarly, but a little bit distinct technically) is that if you vary the parameter infinitesimally, the probability distribution it parameterizes also changes. This is technically the non-degeneracy of the Fisher information metric. But together these two conditions basically say that changing the parameter changes the distribution changes the model. And so those two conditions together are in many of the major theorems that you’ll see when you learn statistics, things like the Cramér-Rao bound, many other things; asymptotic normality, which describes the fact that as you take more samples, your model tends to concentrate in a way that looks like a Gaussian distribution around the most likely parameter. So these are sort of basic ingredients in understanding how learning works in these kinds of parameterized models. But those hypotheses do not hold, it’s quite easy to see, for neural networks. I can go into more about why that is. So the theorems just don’t hold. Now, you can attempt to make use of some of these ideas anyway, but if you want a thoroughgoing, deep theory that is Bayesian and describes the Bayesian learning process for neural networks, then you have to be proving theorems in the generality that singular learning theory is. So the “singular” refers to the breaking of these hypotheses. So the fact that the map from parameters to models is not injective, that means, in combination with this other statement about the Fisher information metric, that if you start at a neural network parameter, then there will always be directions you can vary that parameter without changing the input/output behavior, without changing the model. Some of those directions are kind of boring, some of them are interesting, but that’s what singular learning theory is about: accommodating that phenomenon within the space of neural networks. Filan: The way I’d understood it is that this basically comes down to symmetries in the neural network landscape. You can maybe scale down this neuron and scale up this neuron, and if neurons are the same, it doesn’t matter. But not only are there symmetries, there are non-generic symmetries. Murfet: Correct. Yeah. Filan: Because if there were just some symmetries, then maybe you could mod out by the symmetries… If you looked at the normal direction to the space at which you could vary things, then maybe that would be fine. So the way I’ve understood it is that there are certain parameter settings for neural networks where you can change it one way or you can change it another way, but you can’t change it in both directions at once. And there are other parameter settings where you can only change it in one of those two ways. So the fact that you can’t do them both at once means it’s not a nice, smooth manifold. And the fact that it’s different at different places means that it’s not this generic thing over the whole space. Some models are more symmetric than others and that ends up mattering. Murfet: Yeah, I would say that’s mostly correct. I would say the word ‘symmetry’ is really not… I think I would also at a high level maybe use this word to a first approximation in explaining what’s going on, but it’s really not a sufficient concept. But yeah, it’s good to distinguish the kind of boring generic symmetries that come from the non-linearities. So in some sense, that’s why you can just look at a neural network and know that it’s singular because of these symmetries, like the ReLU scaling up the input and scaling down the output weights respectively will not change the behavior of the network. So that’s an obvious scaling symmetry, and that means that it’s degenerate and therefore a singular model. But if that was all there was, then I agree: somehow that’s a boring technical thing that doesn’t seem like you really need, from a point of view of understanding the real phenomena, to care about it too much. But the reason that SLT [singular learning theory] is interesting is that, as you say, different regions of parameter space, you could say have different kinds of symmetries as a reflection of the different ways qualitatively in which they’re attempting to model the true distribution. But this other thing you mentioned, about being able to move in different directions, that’s not really symmetry so much as degeneracy. So we could go more into conceptually why different regions or different kinds of solutions might have different kinds of degeneracy, but at a high level that’s right. Different kinds of solutions have different kinds of degeneracy, and so being able to talk about different kinds of degeneracy and how they trade off against one another, and why Bayesian learning might prefer more or less degenerate kinds of models, is the heart of SLT. Filan: Sure. Before we go into that, what do you mean by “degeneracy”? Murfet: Degeneracy just refers to this failure of the map from parameters to models to be injective. So “a degeneracy” would just mean a particular kind of way in which you could vary the neural network parameter, say in such a way that the input/output map doesn’t change. And as you were just mentioning, you might have, at one point, two or more essentially different ways in which you could vary the parameter without changing the loss function. And that is by definition what geometry is. So what I’m describing there with my hand is the level set of the loss function. It might be the minimal level set or some other level set, but if we’re talking about multiple ways I can change the neural network parameter without changing the loss, then I’m describing the configuration of different pieces of the level set of the loss function at that point. And that’s what geometry is about. Filan: Sure. You mentioned that singular learning theory, or SLT for short, is very interested in different kinds of degeneracies. Can you tell us a little bit [about] what are the kinds of degeneracies, what different kinds of degeneracies might we see maybe in deep learning? And why does the difference matter? Murfet: I think it’s easier to start with a case that isn’t deep learning, if that’s all right. Deep learning jumps straight into the deep end in terms of… and it’s also the thing which we understand least, perhaps. But if you imagine the easiest kind of loss functions… and when I say loss function, I typically mean “population loss”, not the empirical loss from a fixed dataset of finite size, but the average of that of all datasets. So that’s somehow the theoretical object whose geometry matters here, so I’ll flag that, and there’s some interesting subtleties there. So in a typical case, in a regular statistical setting - not neural networks, but linear regression or something - the population loss looks like a sum of squares, so just a quadratic form. And there, minimizing it - I mean maybe with some coefficients, the level sets are ellipses - then the learning process just looks like moving down that potential well to the global minimum. And that’s kind of all that’s happening. So in that case, there’s no degeneracy. So there’s just one global minimum and you can’t vary it at all and still have zero loss. A more interesting case would be where: suppose you have 10 variables, but a sum of eight squares, so x1² through x8². And then if you minimize that, well, you’ve still got two free parameters, so there’s a two-dimensional space of global minima of that function. Now imagine a population loss, and let’s only care about local minima, which has many local minima at various heights of the loss, each of which use different numbers of variables. So we suppose, for instance, that the global minimum maybe uses all 10, but then there’s a level set a bit higher than that that uses only nine squares, and a level set a bit higher than that that uses only eight squares. And so then those have different amounts of degeneracy. So you have different points in the parameter space, loss landscape, where local minima have different degrees of degeneracy. And so then you can think about the competition between them in terms of trading off between preference for degeneracy versus preference for loss. And then we’re getting into key questions of if you’re a Bayesian, what kind of solution you prefer in terms of accuracy versus degeneracy. Filan: And I guess this gets to this object that people talk about in singular learning theory called the “learning coefficient”. Can you tell us a little bit about what the learning coefficient is? Murfet: In the case I was just describing, it’s easy to say what the learning coefficient is. There’s a distinction between global learning coefficient… Everything I say about SLT, more or less, is material that was introduced by Watanabe and written about in his books, and at some point, I guess we’ll talk about our contributions more recently. But mostly what I’m describing is not my own work, just to be clear. So I’ll mostly talk about the local learning coefficient, which is a measure of degeneracy near a point in parameter space. If I take this example I was just sketching out: you imagine the global minimum level set and then some higher level sets. And I said that the population loss near the global minimum looked like a sum of 10 squares. And so the local learning coefficient of that would just be 10/2, so a half times the number of squares that you used. So if there was a level set that had used only eight squares, then that’s degenerate, because you have two free directions, so it’s not a single isolated minimum, but rather a sort of two-dimensional plane of minimum. And each point of that two-dimensional plane would, because it locally looks like a sum of eight squares, have 8/2 as its local learning coefficient and so on. So if you use D’ squares in the local expression of your population loss, then your local learning coefficient is D’/2. That’s not how it’s defined: it has a definition, which we could get into various different ways of looking at it, but that’s what it cashes out to in those examples. Filan: Sure. I guess the way to think about this local learning coefficient is that when it’s lower, that’s a solution that’s more degenerate. And the way I gather Bayesian inference works is that it tries to have both a low loss and also a low local learning coefficient. Does that sound right? Murfet: Yep. Filan: An image I often see in discussions of singular learning theory is people drawing doodles of trefoils and figure eights and maybe a circle to throw in there. The thing I often hear (as a caricature) is: initially you stay around the trefoil for a while, this is where you put your posterior mass, until at some point you get enough data and then you start preferring this figure eight, and then you get even more data and then you start preferring this circle, which has maybe even lower loss. So as you go down, maybe you get better loss, let’s just say, but the local learning coefficient is going to increase and therefore get worse. Murfet: Maybe I’ll caveat that a little: the local learning coefficient is increasing, so you’re accepting a more complex solution in exchange for it being more accurate. Phase transitions Filan: Yeah. So that’s the very basic idea of singular learning theory. Why does it matter? What are the important differences between the singular learning theory picture and the classical statistical learning theory picture? Murfet: In what context? Statistical learning theory in general, deep learning theory, alignment, or all three in that order? Filan: Maybe all three in that order. I think I want to put off the discussion of alignment relevance for a little bit later until we just understand what’s going on with this whole thing. Murfet: Okay. Yeah, I guess I didn’t actually come back to your question about the local learning coefficient in neural networks from earlier, but I think the cartoon in terms of sums of squares might still suffice for the moment. If we talk about statistical learning theory in machine learning or deep learning in general, I think the main high-level conceptual takeaway from singular learning theory when you first encounter it should be that the learning process in Bayesian statistics really is very different for singular models. So let me define what I mean by “learning process”. When we say “learning process” in deep learning, we tend to mean training by stochastic gradient descent. And what I’m saying is maybe related to that, but that’s a tricky point, so let me be clear that in Bayesian statistics, the “learning process” refers to: as you see more data, you change your opinion about what the relative likelihood of different parameters is. So you see more data, some parameters become ruled out by that data because they don’t give that data high probability, whereas other parameters become more likely. And what I’m describing is the Bayesian posterior, which assigns a probability to each parameter according to the data. And so as you see more samples… I mean, if you’ve seen very few samples, you really have no idea which parameters are correct, so the posterior is very diffuse and will change a lot as you see more samples because you just are very ignorant. But asymptotic normality and regular statistical learning theory says that as you see more samples, that process starts to become more regular and concentrate around the true parameter in a way that looks like a Gaussian distribution. So that’s in some sense a very simple process. But in singular models, that is not what happens, at least that’s not what’s predicted to happen by the theory. Until relatively recently, I think we didn’t have many very compelling examples of this in practice. But what the theory says is what you were describing earlier, that the Bayesian posterior should kind of jump as the trade-off between accuracy and complexity changes, which is a function of the number of samples. And those jumps move you from regions of qualitatively different solutions to other kinds of solutions, and then eventually maybe asymptotically to even choosing among perfect solutions depending on their complexity and then so on. So there’s a very complicated, not very well-understood process underlying learning in Bayesian statistics for singular models, which as far as I know, Watanabe and his collaborators are the only people to ever really study. This is despite being somewhat old, in the sense that Watanabe and students and collaborators have been working on it for a while; it’s really not been studied in great depth outside of their group. So [it’s] a very fundamental process in Bayesian statistics, relatively understudied, but arguably, at least if you take a Bayesian perspective, very central to how learning works in (say) neural networks, whether they’re artificial ones or even possibly biological ones. So I think that’s the main thing. I mean, that’s not the only thing singular learning theory talks about. It’s not the only theoretical content, but I would say that’s the main thing I would want someone to know about the theory as it stands right now. The other thing is how that relates to generalization, but maybe I’ll pause there. Filan: Sure. Maybe we should talk about that a bit. I hear people talk about this with the language of phase transitions. And I think upon hearing this, people might say, “Okay, if you look at loss curves of big neural nets that are being trained on language model data, the loss kind of goes down over time, and it doesn’t appear to be stuck at one level and then suddenly jump down to another level and then be flat and then suddenly jump down.” We have things which kind of look like that in toy settings, like grokking, like the development of induction heads, but it doesn’t generically happen. So should we think of these phase transitions as being relevant to actual deep learning, or are they just a theoretical curiosity about the Bayesian theory? Murfet: Yeah, I think that’s a very reasonable question. I think a year ago, we ourselves were skeptical on this front. I think even in toy settings it wasn’t very clear that this theoretical prediction bears out. So maybe I’ll spend a moment to just be quite precise about the relationship between theory and practice in this particular place. What the theory says is: asymptotically in N, the number of samples, a certain formula describing the posterior works, and then based on this formula, you can have the expectation that phase transitions happen. But in principle, you don’t know lower-order terms in the asymptotic, and there could be all sorts of shenanigans going on that mean that this phenomenon doesn’t actually occur in real systems, even toy ones. So theory on its own - I mean in physics or in machine learning or whatever - has its limits, because you can’t understand every ingredient in an asymptotic expansion. So even in toy settings, it was reasonable, I think, to have some skepticism about how common this phenomenon was or how important it was, even if the theory is quite beautiful. Okay, so that aside, you go and you look in toy systems and you see this behavior, as we did, and then I think it’s reasonable to ask, “Well, okay, so maybe this happens in small systems, but not in large systems?” And indeed in learning curves, we don’t think we see a lot of structure. So I’ll tell you what we know, and then what I think is going on. I should preface this by saying that actually we don’t know the answer to this question. So I think that it still remains unclear if this prediction about phases and phase transitions is actually relevant to very large models. We’re not certain about that. I would say there’s a reasonable case for thinking it is the case that it is relevant, but I want to be clear about what we know and don’t know. Again, this is kind of an empirical question, because the theoretical situation under which phases and phase transitions exist… the theory stops at some point and doesn’t say much at the moment about this scale or that scale. So what we know is that if you look at transformers around the scale of three million parameters, trained on language model datasets, you do see something like phases and phase transitions that basically describe… So again, what I’m about to describe is the learning process of training rather than seeing more samples. But the theoretical jump that we’re making here is to say, okay, if Bayesian statistics says certain kinds of structures in the model - if the theory says there should be qualitative changes in the nature of the way the posterior is describing which models are probable, if there are qualitative changes in that over the course of the Bayesian learning process, as you see more samples, then you might expect something similar when you go and look at seeing cumulatively more examples through the training process of stochastic gradient descent. But that is not a theoretically justified step at this point in some rigorous sense. That’s the kind of prediction you might make assuming some similarity between the learning processes, and then you can go in empirically and see if it’s true. So if you go and look at language models at the scale of three million parameters… This is a recent paper that we did, Developmental Landscape of In-Context Learning. If you go and look at that, what you see [is] that the training process is divided into four or five stages, which have different qualitative content in a way that isn’t visible in the loss curve mostly. Filan: It is a little bit visible. Murfet: Yeah, I would agree with that. I mean, to the same extent that the induction bump is sort of visible in the original in-context learning and induction heads paper. Filan: Yeah. I mean, it’s not obvious from the loss curve. It’s not like everybody already knew all the things that you guys found out. Murfet: Yeah, I would say that without these other results, if you looked at the loss curve and tried to tell the story about these little bumps, it would feel like tea leaf reading. But once you know that the stages are there, yes, you can look at the loss curve and sort of believe in certain features of them. So I mean, there’s various details about how you think about the relationship between those stages and phases and phase transitions in a sense of SLT. But I would say that’s still a very small model, but not a toy model, in which you do see something like stage-wise development. And there are independent reasons… People have independently been talking about stage-wise development in learning systems outside of SLT. So I would say that the SLT story and stage-wise development as a general framing for how structure arrives inside self-organizing learning processes, that dovetails pretty well. So I would say that, to come back to your question about structure in the loss curve, just because nothing’s happening in the loss curve doesn’t mean that there isn’t structure arriving in stages within a model. And our preliminary results on GPT-2 Small at 160 million parameters: at a high level it has stages that look pretty similar to the ones in the three million parameters. Filan: Interesting. Murfet: So here’s my guess for what’s going on. It’s true that in very large models, the system is learning many things simultaneously, so you won’t see very sharp transitions except possibly if they’re very global things: [e.g.] switching to in-context learning as a mode of learning seems like it affects most of the things that a system is learning, so a qualitative change at that scale, maybe you would guess actually is represented sort of at the highest level and might even be visible in the loss curve, in the sense that everything is coordinated around that. There’s before and after. But many other structures you might learn, while they’re developing somewhere else in the model, it’s memorizing the names of U.S. presidents or something, which just has nothing to do with structure X, Y, Z. And so in some sense, the loss curve can’t possibly hit a plateau, because even if it’s hitting a critical point for these other structures X, Y, Z, it’s steadily making progress memorizing the U.S. presidents. So there can’t be clear plateaus. So the hypothesis has to be something like: if there is stage-wise development, which is reflected by these phases and phase transitions, it’s in some sense or another localized, maybe localized to subsets of the weights and maybe localized in some sense to certain parts of the data distribution. So the global phases or phase changes which touch every part of the model and affect every kind of input are probably relatively rare, but that isn’t the only kind of phase, phase transition, stage to which Bayesian statistics or SLT could apply. Filan: Sure. Should I imagine these as being sort of singularities in a subspace of the model parameter space? The learning coefficient kind of picks them out in this subspace, but maybe not in the whole parameter space? Murfet: Yeah, that’s kind of what we’re thinking. These questions are pushing into areas that we don’t understand, I would say. So I can speculate, but I want to be clear that some parts of this we’re rather certain of: the mathematical theory is very solid, the observation of the correspondence between the theory and Bayesian phase transitions in toy models is empirically and theoretically quite solid. This question of what’s happening in very large systems is a deep and difficult question. I mean, these are hard questions, but I think that’s right, that’s the motivation for… One of the things we’re currently doing is what we call weight-restricted local learning coefficients. This basically means you take one part of the model, say, a particular head, you freeze all the other weights… Let me just give a more formal setting. When we’re talking about the posterior and the local learning coefficient and so on, we imagine a space of parameters. So there’s D dimensions or something. Some of those directions in parameter space belong to a particular head, and I want to take a parameter that, at some point in training, has some values for all these heads, I mean, for all these different weights, and I want to freeze all but the ones in the head and then treat that as a new model. Now, my model is I’m not allowed to change those weights, but I’m allowed to change the weights involved in the head, and I can think about the Bayesian posterior for that model and I can talk about its local learning coefficient. That involves perturbing the parameter nearby that particular coefficient, but in a way where you only perturb the weights involved in that part of the structure, say, that head, and you can define the complexity of that local learning coefficient. That’s what we call the weight-restricted local learning coefficient. And then the hypothesis would be that, if a particular part of the model is specializing in particular kinds of structure and that structure is developing, then you’ll be at a critical point for some kind of restricted loss that is referring only to those weights, and that would show up. We haven’t talked about how the local learning coefficient is used to talk about phase transitions, but that’s the experimental way in which you’d attempt to probe whether some part of the model is doing something interesting, undergoing a phase transition separately from other parts of the model. Filan: Yeah, actually, maybe we should clarify that. How do you use the learning coefficient to figure out if a phase transition is happening? Murfet: It depends on your background which answer to this question is most pleasant. For physics-y people who know about free energy, they’re familiar with the idea that various derivatives of the free energy should do something discontinuous at a phase transition, and you can think about the local learning coefficient as being something like that. So that, if there is a phase transition, then you might expect this number to change rapidly relative to the way it usually changes. If we just stick within a statistical learning theory frame, we were laying out this picture earlier of: as you see more samples, the Bayesian posterior is concentrated in some region of parameter space and then rapidly shifts to be concentrated somewhere else, and the local learning coefficient is a statistic of samples from the Bayesian posterior, so if the Bayesian posterior shifts, then this number will also shift. The expectation would be that, if you measure this number, which it turns out you can do from many experiments, if you see that number change in some significant way, then it is perhaps evidence that some qualitative change in the posterior has occurred. That’s a way of detecting phase transitions which is, if you take this bridge from Bayesian statistics to statistical physics, pretty well justified I would say. Estimating the local learning coefficient Filan: Sure. A question about that: my understanding is that trying to actually measure the local learning coefficient involves taking a parameter setting and looking at a bunch of parameter settings nearby on all these dimensions that you could vary it, and measuring a bunch of properties, and this is the kind of thing that’s easy to do when you have a very low-dimensional parameter space corresponding to a small number of parameters. It seems like it’s going to be harder to do with a higher number of parameters in your neural networks. Just practically, how large a model can you efficiently measure local learning coefficient [for] at this time? Murfet: Yeah. That’s a good question. I think it’s tricky. Maybe this will be a bit of an extended answer, but I think it’ll be better if I provide some context. When we first started looking at SLT, myself and my colleague here at the University of Melbourne, Susan Wei, and some other people… This was before… believe it or not, today there are 10x the number of people interested in SLT than there were back when we started thinking about it. It was an extremely niche subject, very deep and beautiful, but somewhat neglected. Our question at that time was exactly this question. The theory says the local learning coefficient - the “real log canonical threshold” is another mathematical name for it - the theory says this is a very interesting invariant, but it’s very unclear if you can accurately estimate it in larger models. A lot of the theoretical development [involved using] one PhD student to compute the RLCT of one model theoretically, and you need some hardcore algebraic geometry to do that, et cetera, et cetera. The way the subject sat, it wasn’t clear that you could really be doing this at scale because it seems to depend on having very accurate samples from the posterior via Markov Chain Monte Carlo sampling or something. I admit, I was actually extremely pessimistic when we first started looking at it that there really would be a future in which we’d be estimating RLCTs, or local learning coefficients, of a hundred million parameter models. So that’s where I started from. My colleague Susan and my PhD student Edmund Lau decided to try SGLD, stochastic gradient Langevin dynamics, which is an approximate Bayesian sampling procedure based on using gradients, to see how it worked. There’s a step in estimating the local learning coefficient where you need samples from the posterior. As you’re describing, this is famously difficult for large dimensional complex models. However, there is a possible loophole, which is that… I mean, I don’t believe that anybody has a technique, nor probably ever will, for understanding or modeling very accurately the Bayesian posterior of very large-scale models like neural networks. I don’t think this is within scope, and I’m skeptical of anybody who pretends to have a method for doing that, hence why I was pessimistic about estimating the LLC [local learning coefficient] at scale because it’s an invariant of the Bayesian posterior which seems to have a lot of information about it and I believe it’s hard to acquire that information. The potential loophole is that maybe the local learning coefficient relies on relatively robust signals in the Bayesian posterior that are comparatively easy to extract compared to knowing all the structure. That seems to be the world that we are in. To answer your question, Zach Furman and Edmund Lau just recently had a pre-print out where, using SGLD, it seems you can get relatively accurate estimates for the local learning coefficient for deep linear networks: products of matrices and known nonlinearities at scales up to a hundred million parameters. Filan: A hundred million with an M? Murfet: With an M, yeah. One should caveat that in several ways, but yeah. Filan: Okay, and am I right that this is distinct from the “Quantifying degeneracy with the local learning coefficient” paper? Murfet: That’s right. This is a second paper, a followup to that. I forget the title. I think it’s Estimating Local Learning Coefficient at Scale. So we wrote that paper a couple of years ago now, I think, looking at defining the local learning coefficient - which is implicit in Watanabe’s work, but we made it explicit - and making the observation that you could use approximate sampling to estimate it and then studying that in some simple settings, but it remained very unclear how accurate that was in larger models. Now, the reason it’s difficult to go and test that is because we don’t know the true local learning coefficient for very many models that can be increased in some direction of scale. We know it for one hidden layer tanh networks and things like that. But some recent, very deep, interesting work by Professor Miki Aoyagi gives us the true value of the local learning coefficient for deep linear networks, which is why Zach and Edmund studied those. This was an opportunity to see if SGLD is garbage or not for this purpose. I should flag that despite… How should I say this? SGLD is a very well-known technique for approximate Bayesian posterior sampling. I think everybody understands that you should be skeptical of how good those posterior samples are in some sense. It might be useful for some purpose, but you shouldn’t really view it as a universal solvent for your Bayesian posterior sampling needs or something. Just using SGLD doesn’t magically mean it’s going to work, so I would view it as quite surprising to me that it actually gives accurate estimates at scale for deep linear networks. Now, having said that, deep linear networks are very special, and they are less degenerate in some important ways than real neural networks with nonlinearities, et cetera, so don’t take me as saying that we know that local learning coefficient estimation gives accurate values of the local learning coefficient for language models or something. We have basically no idea about that, but we know it’s accurate in deep linear networks. Okay, so then what is generalizable about that observation? I think it leads us to believe that maybe estimating the LLC, SGLD is actually not garbage for that. How good it is we still don’t know, but maybe this cheap posterior sampling is still good enough to get you something interesting. And then the other thing is that: well, what you observe in cases where you know the true values is that, when the model undergoes phase transitions which exist in deep linear networks, as many people have… Maybe not in those exact terms, but, stage-wise development in deep linear networks has been studied for quite a long time, and you can see that this local learning coefficient estimator which is measuring the complexity of the current parameter during the learning process does jump in the way you would expect in a phase transition, when deep linear networks go through these phase transitions. Well, it had to, because we know theoretically what’s happening to the geometry there. Those jumps in the local learning coefficient in other models, like these 3 million parameter language models or GPT-2 Small… when you go and estimate the local learning coefficient, you see it change in ways that are indicative of changes in internal structure. Now, we don’t know that the absolute values are correct when we do that, and most likely they’re not, but I think we believe in the changes in the local learning coefficient reflecting something real to a greater degree than we believe in the absolute values being real. Still, theoretically, I don’t know how we would ever get to a point where we would know the local learning coefficient estimation was accurate in larger models absent really fundamental theoretical improvements that I don’t see coming in the near term, but that’s where we are at the moment. Singular learning theory and generalization Filan: Fair enough. A while back, you mentioned the contributions of singular learning theory to understanding deep learning. There was something to do with phase transitions and there was also something to do with generalization, I think you mentioned. I want to ask you about that. Especially in the context of: I sometimes hear people say, “Oh, statistical learning theory says that model classes can have these parameters that have some degeneracy and that basically reduces their effective parameter count, and this just explains how generalization is possible.” This is the kind of story one can tell when one feels excitable, but it’s a bit more complicated. It’s going to depend on details of how these parameters actually translate into functions and what these degeneracies actually look like in terms of predictive models. What does singular learning theory tell us about generalization, particularly in the context of deep networks? Murfet: Yeah. This is subtle. On its face, [in] singular learning theory, the theorems describe relations between loss, local landscape geometry, this local learning coefficient, and generalization error in the Bayesian sense. In the Bayesian sense, what I mean by generalization error is the KL divergence between the true distribution and the predictive distribution. Maybe I should say briefly what the latter is. If you’re trying to make a prediction, if you’re talking about a conditional distribution, a prediction of Y given X, and you look at all the parameters that you’ve got for modeling that relationship, and you’re given an input and you take the prediction from every single model parameterized by your parameter space, you weight it with the probability given to that particular model by the Bayesian posterior and you average them all in that way, that’s the Bayesian predictive distribution. [It’s] obviously radically intractable to use that object or find that object. It’s a theoretical object. That probability distribution is probably not one that’s parameterized by parameters in your parameter space, but you can cook it up out of models in your parameter space. The KL divergence between that and the truth is the Bayesian generalization error. Filan: The KL divergence just being a measure of how different probability distributions are. Murfet: Right. That seems like a very theoretical object. There’s a closely related object, the Gibbs generalization error, which puts some expectations in different orders which is closer to what people in machine learning mean by “test error” - taking a parameter and trying it out on some samples from the true distribution that weren’t used to produce that parameter. There’s the various subtleties there. SLT, strictly speaking, only says things about those kinds of generalization errors and the relationship between that and test error for a parameter produced by a single run of SGD - well, I don’t even know that that is a mathematical object actually (test error for a parameter after a single run), but you can do things like talk about, for some distribution of SGD runs, what’s the expected test error. Then there’s a gap between that Bayesian story and what you mean by “test error” in deep learning. This gap hasn’t been very systematically addressed, but I’ll lay out some story about how you might bridge that eventually in order to answer your question. If you believe that the Bayesian learning process ends with a distribution of parameters that look something like the endpoints of SGD training, or at least close enough, that something like this average of SGD runs of the test error looks a bit like averaging over things in the Bayesian posterior of some generalization quantity that makes sense in the Bayesian theory, then you could maybe draw some connection between these two things. That hasn’t been done. I don’t know if that’s true, because these questions about relations between the Bayesian posterior and SGD are very tricky and I don’t think they look like they’re going to get solved soon, at least in my opinion. There’s a gap there. That’s one gap. We just paper over that gap and just say, “Okay. Well, fine, let’s accept that for the moment and just treat the generalization error that SLT says things about as being the kind of generalization error that we care about. What does SLT say?” Maybe I’ll insert one more comment about that relationship between test error in deep learning and Bayesian generalization error first. This is a bit of a tangent, but I think it’s important to insert here. Various people, when looking to explain the inductive bias of stochastic gradient descent, have hit upon a phenomenon that happens in deep linear networks and similar systems, which is a stage-wise learning where the model moves through complexity in an increasing way. We think about in deep linear networks - or what’s sometimes called matrix factorization, where you’re trying to use a product of matrices to model a single linear transformation - people have observed that, if you start with a small initialization, the model starts with low rank approximations to the true linear transformation and then finds a pretty good low rank approximation and then takes a step to try and use linear transformations of one higher rank and so on, and moves through the ranks in order to try and discover a good model. Now, if you believe that, then you would believe that, if SGD training is doing that, then it will tend to find the simplest solution that explains the data, because it’s searching them starting with simpler ones and only going to more complicated ones when it needs to. Now, theoretically, that’s only known to happen… I mean, I think it’s not known to happen in deep linear networks rigorously speaking, but there’s expectations of that, [and] empirically, that happens, and there’s some partial theory. Then it’s a big leap to believe that for general SGD training of general neural networks, so I think we really don’t know that that’s the case in general deep learning. Believing that is pretty similar to believing something about the Bayesian learning process moving through regions of parameter space in order of increasing complexity as measured by the local learning coefficient. In fact, that is exactly what’s happening in the deep linear networks. The SLT story about moving through the parameter space and the Bayesian posterior undergoing phase transitions is exactly what’s happening in the deep linear networks. If you’re willing to buy that generalization from that corner of theory of deep learning to general behavior of neural networks, then I think you are in some sense already buying the SLT story to some degree, [the story] of how learning is structured by looking for increasingly complex solutions. All of those are big question marks from a theoretical point of view, I would say. Putting that aside, what does SLT say about generalization? Well, it says that the asymptotic behavior of the generalization error as a function of the number of samples at the very end of training, let’s say, or the very end of the Bayesian learning process, looks like the irreducible loss plus a term that looks like lambda/N, where lambda is the local learning coefficient. If you take that irreducible loss over the other side, the difference between generalization error and its minimum value behaves like 1/n, is proportional to 1/n, and the constant of proportionality is the local learning coefficient. That’s the deep role of this geometric invariant, this measure of complexity in the description of generalization error in the Bayesian setting. Now, what that says in deep learning… as I said, taking that first part of that bridge between the two worlds for granted, it would like to say something like: the test error when you’re looking at a particular region of parameter space is governed by the local learning coefficient, except that the relation between N and training is unclear. The exact way in which it governs test error is a function of how that bridge gets resolved. I think, at a technical level, it’s difficult to say much precise at the moment. I don’t think it’s impossible. It’s just that very few people are working on this and it hasn’t been getting enough attention to say more concrete things. At a conceptual level, it says that - and this maybe starts to get into more interesting future work you can do taking the SLT perspective - but this relationship between the local learning coefficient and how that is determined by loss landscape geometry and generalization behavior, this is a very interesting link which I think is quite fundamental and interesting. I think your question is going in the direction of Joar Skalse’s LessWrong post. Is that right? Filan: That’s what I was inspired by: just this question of, suppose we believe the story of, we’re gradually increasing complexity as measured by the local learning coefficient in this model class: well, what does that actually say in terms of objects that I cared about before I heard of singular learning theory? What’s that telling me in terms of things I care about, of the behavior of these things? Murfet: It could tell you things like: suppose you know two solutions of your problem that are qualitatively different. You have a data-generating process and you can think about it in two different ways and, therefore, model it in two different ways. Potentially, if you could estimate the local learning coefficient or derive it or have some method of knowing that one is lower than the other, it could tell you things like one will be preferred by the Bayesian posterior. Now, to the extent that that is related to what SGD finds, that might tell you that training is more likely to prefer some class of solutions to another class. Now, if those parameters are just very different, completely different solutions, somehow not nearby in parameter space, maybe it’s quite difficult to make the bridge between the way the Bayesian posterior would prefer one or the other and what training will do because, in that case, the relationship between training and these two parameters is this very global thing to do with the trajectory of training over large parts of the parameter space, and very difficult perhaps to translate into a Bayesian setting. In cases where you have two relatively similar solutions, maybe you had a choice to make. So during the training process, you had one of two ways to take the next step and accommodate some additional feature of the true distribution, and those two different choices differed in some complexity fashion that could be measured by the local learning coefficient: one was more complex, but lowered the loss by so much, and the other one was simpler, but didn’t lower the loss quite as much. Then you could make qualitative predictions for what the Bayesian posterior would prefer to do, and then you could ask, “Are those predictions also what SGD does?” Either, theoretically, you could try and find arguments for why that is true, but it [also] gives you an empirical prediction you can go and test. In this toy model of superposition work we did, SGD training does seem to do the thing that the Bayesian posterior wants to do. That’s very unclear in general, but it gives you pretty reasonable, grounded predictions that you might then go and test, which I think is not nothing. That would be, I think, the most grounded thing you’d do with the current state of things. Filan: I guess it suggests a research program of trying to understand which kinds of solutions do have a lower learning coefficient, which kinds of solutions have higher learning coefficients, and just giving you a different handle on the problem of understanding what neural network training is going to produce. Does that seem fair? Murfet: Yeah. I think, [for] a lot of these questions about the relation between the theory and practice, our perspective on them will shift once we get more empirical evidence. What I expect will happen is that these questions seem to loom rather large when we’ve got a lot of theory and not so much empirical evidence. If we go out and study many systems and we see local learning coefficients or restricted local learning coefficients doing various stage-wise things and they correspond very nicely to the structure that’s developing, as we can test independently with other metrics, then I think it will start to seem a little bit academic whether or not it’s provably the case that SGD training does the same thing as the Bayesian posterior just because this tool, which… To be clear, the local learning coefficient, if you look at the definition, has a sensible interpretation in terms of what’s happening to the loss as you perturb certain weights, and you can tell a story about it, it doesn’t rely on the link between the Bayesian posterior and SGD training or something. To the degree that the empirical work succeeds, I think people will probably take this independent justification, so to speak, of the LLC as a quantity that is interesting, and think about it as a reflection of what’s happening to the internal structure of the model. Then, the mathematicians like myself will still be happy to go off and try and prove these things are justified, but I don’t see this as necessarily being a roadblock to using it quite extensively to study what’s happening during training. Singular learning theory vs other deep learning theory Filan: Fair enough. I’d like to ask some questions thinking about SLT as compared to other potential theoretical approaches one could have to deep learning. The first comparison I have is to neural tangent kernel-style approaches. The neural tangent kernel, for listeners who don’t know, is basically this observation that, in the limit of infinitely wide neural networks under a certain method of initializing networks, there’s this observation that networks, during training, the parameters don’t vary very much and, because the parameters don’t vary very much, that means you can do this mathematical trick. It turns out that your learning is basically a type of kernel learning, which is essentially linear regression on a set of features. Luckily, it turns out to be an infinite set of features and you can do it… I don’t know how I was going to finish that sentence, but it turns out to be feature learning on this set of features, and you can figure out what those features are supposed to be based on what your model looks like, what kinds of nonlinearities you’re using. There’s some family of theory trying to understand: what does the neural tangent kernel of various types of models look like, how close are we to the neural tangent kernel? And if you believe in the neural tangent kernel story, you can talk about: the reason that neural networks generalize is that the neural tangent kernel, it tends to learn certain kinds of features before other kinds of features, and maybe those kinds of features are simpler. It seems plausible that you could do some story about phase transitions, and it’s a mathematically rigorous story. So I’m wondering, how do you think the single learning theory approach of understanding deep learning compares to the neural tangent kernel-style approach? Murfet: Yeah, good question. I think I’m not an expert enough on the NTK [neural tangent kernel] to give a very thorough comparison, but I’ll do my best. Let me say first the places in which I understand that the NTK says very deep and interesting things. It seems that this work on the mu parametrization seems very successful. At initialization, when this “taking the limit to infinite width” is quite justified because the weights really are independent, this seems like probably the principal success of deep learning theory, to the extent there are any successes: the study of that limit and how it allows you to choose hyperparameters for learning rates and other things. Again, I’m not an expert, but that’s my understanding of how it’s used, and that seems to be quite widely used in practice, as far as I know. So that’s been a great success of theory. I don’t think I believe in statements outside of that initial phase of learning though. I think there, as far as I understand it, the claims to applicability of the NTK methods become hypotheses, unless you then perturb away from the Gaussian process limit. The deep parts of that literature seem to me to be accepting the position that in the infinite width limit, you get some Gaussian process that isn’t actually a good description of the training process away from initialization, but then you can perturb back in basically higher-order terms in the exponent of some distribution. You can put in higher-order terms and study systematically those terms to get back to finite width, attempt to perturb away from infinite width back to finite width and accommodate those contributions in some fashion. And you can do that with tools from random matrix theory and Gaussian processes. And that looks a lot like what people do in Euclidean quantum field theory, and so people have been applying techniques from that world to do that. And I think they can say non-trivial things, but I think it is overselling it to say that that is a theory on the same level of mathematical rigor and depth as SLT. So I don’t think it says things about the Bayesian posterior and its asymptotics, in the way that SLT does, I think it’s aiming at rather different statements. And I think, at least in my judgment at the moment, it has a little bit of the flavor of saying qualitative things rather than quantitative things. Again, this is my outsider’s impression, and I could be wrong about what the state of things is there. But I would say that one part of that story that I have looked at a little bit is the work that my colleague, Liam Hodgkinson has done here. They have some very interesting recent work on information criterion in over-parameterized models - I think the title is something like that. [It’s] partly inspired by Watanabe’s work, I think, looking at trying to take, not only NTK, but this general sort of approach, point of view to doing things like what the free energy formula in SLT does. And so I think that’s quite interesting. I have my differences of opinion with Liam about some aspects of that, but mathematics isn’t actually divided into camps that disagree with one another or something, right? So if things are both true, then they meet somewhere. And I can easily imagine that… SLT is sort of made up of two pieces, one of which is using resolution of singularities to do Laplace integrals, oscillatory integrals, and the other is dealing with empirical processes that intervene in that when you try to put it in the context of statistics. And I don’t think these kinds of oscillatory integrals, these techniques, have been used systematically by the people doing NTK-like stuff or Euclidean field theory-like stuff, but I think that if you took those techniques and used them in the context of the random matrix theory that’s going on there, you’d probably find that the perturbations that they’re trying to do can be linked up with SLT somewhere. So I mean, I think it all probably fits together eventually, but right now they’re quite separated. Filan: Fair enough. So a related question I have is: one observation I have, from the little I know about the deep learning theory literature, is the variance of the distribution of how parameters are initialized matters. So one example of this is in deep linear models. If your initialization distribution of parameters has high enough variance, then it looks something like the NTK: you only have a small distance until the optimum. Whereas if all the parameters are really, really close to zero at initialization, you have this jumping between saddle points. And in deep networks at one initialization, you have this neural tangent kernel story, which crucially doesn’t really involve learning features; it has a fixed set of features and you need decide which ones to use. If you differ the variance of the initialization, then you start doing feature learning, and that seems qualitatively different. If I think about how I would translate that to a singular learning theory story… At least in general, when people talk about Bayesian stories of gradient descent, often people think of the prior as being the initialization distribution. And in the free energy formula of singular learning theory, the place where the loss comes up and then the learning coefficient comes up, the prior comes in at this order one term that matters not very much, basically. Murfet: Well, late in training… I mean, late in the process it doesn’t matter. Filan: Yeah. So I guess my question is: is singular learning theory going to have something to say about these initialization distribution effects? Murfet: I haven’t thought about it at all, so this is really answering this question tabula rasa. I would say that from the asymptotic point of view, I guess we tend not to care about the prior, so this isn’t a question that we tend to think about too much so far, so that’s why I haven’t thought about it. But if you look at our model in the toy model of superposition, where you can really at least try and estimate order N term in the asymptotic, the log N term in the asymptotic, and then these lower order terms… And maybe I should say what this asymptotic is. If you take the Bayesian posterior probability that’s assigned to a region of parameter space and negative its logarithm (that’s an increasing function, so you could basically think about it as telling you how probable a given region is according to the posterior), you can give an asymptotic expansion for that in terms of N. So for a large N, it looks like N times some number, which is kind of the average loss in that region or something like that, plus the local learning coefficient times log N plus lower order terms. The lower order terms we don’t understand very well, but there’s definitely a constant order term contributed from the integral of the prior over that region. Now if you look at the toy model of superposition, that constant order term is not insignificant at the scale of N at which we’re running our experiments. So it does have an influence, and I could easily imagine that this accounts for the kind of phenomena you’re talking about in DLNs [deep linear networks]. So a mathematician friend of mine, Simon Lehalleur, who’s an algebraic geometer who’s become SLT-pilled, maybe, has been looking at a lot of geometric questions in SLT and was asking me about this at some point. And I guess I would speculate that if you just incorporated a constant term from those differences in initialization, that would account for this kind of effect. Maybe later in the year, we’ll write a paper about DLNs. At the moment, we don’t have complete understanding of the local learning coefficients away from the global minimum, the local learning coefficients of the level sets. I think we probably are close to understanding them, but there’s a bit of an obstacle to completely answering that question at the moment. But I think principle, that would be incorporated via the constant order term. Which would, to be clear, not change the behavior at the very large N, but for some significant range of Ns, potentially including the ones you’re typically looking at in experiments, that constant order term could bias some regions against others in a way that explains the differences. Filan: Yeah. And I guess there’s also a thing where the constant order term, in this case the expansion is: you’ve got this term times N, you’ve got this term times the logarithm of N, you’ve got this term times the logarithm of the logarithm of N, if I remember correctly? Murfet: Yep. Filan: And then you have these constant things. And the logarithm of the logarithm of N is very small, right, so it seems like kind of easy for the constant order term to be more important than that, and potentially as important as the logarithm of N? Murfet: Yeah, although that log log N term is very tricky. So the multiplicity, Aoyagi’s proof… as I said, she understands deep linear networks, and in particular understands the multiplicity of the coefficient of this log log N term up to a -1. And this can get… if I remember correctly, as a function of the depth it has this kind of behavior and it becomes larger and larger [he mimes gradually increasing, ‘bouncing’ curves]. Filan: Like a bouncing behavior with larger bounces? Murfet: Yeah, that’s right. Filan: Interesting. Murfet: Yeah, so that’s very wild and interesting. One of the things Simon is interested in is trying to understand [it] geometrically. Obviously Aoyagi’s proof is a geometric derivation of that quantity, but from a different perspective. Maybe Aoyagi has a very clear conceptual understanding of what this bouncing is about, but I don’t. So anyway, the log log N term remains a bit mysterious, but if you’re not varying the depth and you have a fixed depth, maybe it is indeed the case that the constant order terms could be playing a significant role. Filan: Sure. Right. So I guess a final question I have before I get into the relationship between singular learning theory and existential risk from AI: I’m more familiar with work done applying singular learning theory to deep learning. Is there much work outside that, of the singular learning theory of all the things people do outside my department? Murfet: Yes. I mean, that’s where the theory has been concentrated, I would say, so. I don’t want to give the impression that Watanabe didn’t think about neural networks; indeed, the class of models based on neural networks was one of the original motivations for him developing SLT. And he’s been talking about neural networks from the beginning, so early that the state of the art neural networks had tanh nonlinearities, so that’s how long Watanabe’s been talking about neural networks. Watanabe has been 20 years ahead of his time or something. But having said that, deeper neural networks with nonlinearities remain something that we don’t have a lot of theoretical knowledge about. There are some recent results giving upper bounds for various quantities, but in general, we don’t understand deeper neural networks in SLT. The predominant theoretical work has been done for singular models that are not neural networks, various kinds of matrix factorization. There’s some interesting work by [Piotr] Zwiernik and collaborators looking at various kinds of graphical models, trees, deriving learning coefficients for probabilistic graphical models that have certain kinds of graphs. There’s papers on latent Dirichlet allocation, if that’s the correct expansion of the acronym LDA: many, many papers, dozens, I think. I wouldn’t be able to list all the relevant models here, but there’s quite a rich literature out there over the last several decades looking at other kinds of models. How singular learning theory hit AI alignment Filan: All right. So at this stage I’d like to move on to: my experience of singular learning theory is, I’m in this AI existential risk space. For a while, people are chugging along doing their own thing. Then at one Effective Altruism Global, I have this meeting with this guy called Jesse Hoogland who says, “Oh, I’m interested in this weird math theory.” And I tell him, “Oh yeah, that’s nice. Follow your dreams.” And then it seems like at some point in 2023, it’s all everyone’s talking about, singular learning theory, it’s the key to everything, we’re all going to do singular learning theory now, it’s going to be amazing. How did that happen? What’s the story whereby someone doing singular learning theory gets interested in AI alignment or the reverse? Murfet: Yeah, I guess I can’t speak to the reverse so much, although I can try and channel Alexander [Gietelink Oldenziel] and Jesse [Hoogland] and Stan [van Wingerden] a little bit. I guess I can give a brief runthrough of my story. I cared about SLT before I cared about alignment, so maybe I’ll say briefly why I came to care about SLT. I’m an algebraic geometer by training, so I spent decades thinking about derived categories in algebraic geometry and some mathematical physics of string theory and its intersection with algebraic geometry, et cetera. And then I spent a number of years thinking about linear logic, which might seem unrelated to that, but has some geometric connections as well. And then because of some influence of friends and colleagues at UCLA where I was a postdoc, I paid attention to deep learning when it was taking off again in 2012, 2013, 2014. I’d always been a programmer and interested in computer science in various ways and sort of thought that was cool. And then I saw AlphaGo happen, and then the original scaling laws paper from Hestness et al.. And it’s when I saw those two, AlphaGo and the Hestness et al. paper, that I was like, “huh, well maybe this isn’t just some interesting engineering thing, but maybe there’s actually some deep scientific content here that I might think about seriously, rather than just spectating on an interesting development somewhere else in the intellectual world.” So I cast around for ways of trying to get my hands on, with the mathematical tools that I had, what was going on in deep learning. And that’s when I opened up Watanabe’s book, “Algebraic Geometry and Statistical Learning Theory”, which seemed designed to nerd-snipe me, because it was telling me geometry is useful for doing statistics. And then when I first opened it, I thought, that can’t possibly be true, this is some kind of crazy theory. And then I closed the book and put it away and looked at other things, and then came back to it eventually. So that’s my story of getting into SLT, from the point of view of wanting to understand universal mathematical phenomena in large-scale learning machines, and that’s my primary intellectual interest in the story. So I’ve been chugging away at that a little bit. When I first started looking at SLT, it was - apart from Shaowei Lin, who did his PhD in SLT in the states, I believe, with Bernd Sturmfelds - mostly, it’s Watanabe, his students, and a few collaborators, mostly in Japan, a few people elsewhere, a very small community. So I was sitting here in Melbourne, chugging away reading this book and I had a few students, and then Alexander Oldenziel found me and asked me what this could say about alignment, if anything. And at the time, I found it very difficult to see that there was anything SLT could say about alignment, I guess, because as a mathematician, the parts of the alignment literature that I immediately found comprehensible were things like Vanessa Kosoy’s work or Scott Garrabrant’s work. These made sense to me, but they seemed quite far from statistical learning theory, at least the parts that I understood. And so I think my answer originally to Alexander was, “no, I don’t think it is useful for alignment”, but reading more about the alignment problem and being already very familiar with capabilities progress, and believing that there was something deep and universal going on that that capabilities progress was sort of latching onto, but it not being some contingent phenomena on having a sequence of very complex engineering ideas, but more like “throw simple scaling and other things at this problem and things will continue to improve”. So that combination of believing in the capabilities progress and more deeply understanding what I was reading in the alignment literature about the problem… the product of that was me taking this problem seriously enough to think that maybe my initial answer, I could profit from thinking a little bit more extensively about it. So I did that and outlined some of the ideas I had about how this kind of stage-wise learning, or phases and phase transitions that the Bayesian learning process and SLT talks about, how that might be by analogy with developmental biology used to understand how structure develops in neural networks. So I had some preliminary ideas around that [in the] middle of 2023, and those ideas were developed further by Alexander [Oldenziel] and Jesse Hoogland and Stan van Wingerden and various of my students and others, and that’s where this developmental interpretability agenda came from. And I think that’s sort of around the time you ran into SLT, if I remember correctly. Filan: Yeah. The time I ran into it is: so, I hear a few different people mention it, including, if people listen to the episode of this podcast with Quintin Pope, he brings it up and it sounds interesting. And some other people bring it up, that sounds interesting. And then I hear that you guys are running some sort of summer school thing, a week where you can listen to lectures on single learning theory. And I’m like, “oh, I could take a week off to listen to some lectures, it seems kind of interesting”. This is summer of 2023. These lectures are still up on YouTube, so you can hear some guy ask kind of basic questions - that’s me. Murfet: Yeah. I guess it took me a while to appreciate some of the things that… I mean, I guess John Wentworth has also been posting in various places how he sees SLT relating to some of the aspects of the alignment problem that he cares about. Now I see more clearly why some of the very core problems in alignment, things like sharp left turns and so on, the way that people conceptualize them… how SLT, when you first hear about it, might map onto that in a way that makes you think it could potentially be interesting. I think my initial take being negative was mostly to do with it just being such a big gap at that time, the middle of last year, between SLT being a very highly theoretical topic…. I mean, I should be clear. The WBIC, which is the widely applicable Bayesian information criterion, which is a piece of mathematics and statistics that Watanabe developed, has been very widely used in places where the BIC [is used]. This is not an esoteric, weird mathematical object. This is a tool that statisticians use in the real world, as they say. The WBIC has been used in that way as well. And so the work we’ve been doing, with the local learning coefficient and SGLD and so on, is by far not the only place where SLT has met applications. That’s not the case. I don’t want to give that impression. But the way SLT felt to me at that time was: there’s just so many questions about whether the Bayesian learning process is related to SGD training and all these other things we were discussing. So I think it was quite a speculative proposal to study the development process using these techniques, middle of last year. I think we’ve been hard at work over the last year seeing if a lot of those things pan out, and they seem to. So I think it’s much less speculative now to imagine that SLT says useful things, at least about stage-wise development in neural networks. I think it says more than that about questions of generalization that are alignment-relevant, but I think it was appropriate a year ago to think that there was some road to walk before it was clear that this piece of mathematics was not a nerd-snipe. Filan: Sure. So at some point, this guy, Alex Oldenziel, reaches out to you and says, “hey, how is single learning theory relevant to alignment?” And instead of deleting that email, you spent some time thinking about it. Why? Murfet: Well, I should insert a little anecdote here, which is I think I did ignore his first email, not because I read it and thought he was a lunatic, but just because I don’t always get to every email that’s sent to me. He persisted, to his credit. Filan: Why did it feel interesting to you, or why did you end up pursuing the alignment angle? Murfet: I had read some of this literature before in a sort of “curious but it’s not my department” kind of way. I quite extensively read Norbert Wiener’s work. I’m a big fan of Wiener, and he’s written extensively, in God & Golem and The Human Use of Human Beings and elsewhere, precisely about the control problem or alignment problem in much the same way as modern authors do. And so I guess I had thought about that and seen that as a pretty serious problem, but not pressing, because AI didn’t work. And then I suppose I came to believe that AI was going to work, in some sense, and held these two beliefs, but in different parts of my brain. And it was Alexander that sort of caused the cognitive dissonance, the resolution of which was me actually thinking more about this problem. So that’s one aspect of it - just causing me to try and make my beliefs about things coherent. But I think that wouldn’t have been sufficient without a second ingredient, and the second ingredient was: to the degree you assign a probability to something like AGI happening in a relatively short period of time, it has to affect your motivational system for doing long-term fundamental work like mathematics. So as a kind of personal comment, the reason I do mathematics is not based on some competitive spirit or trying to solve tricky problems or something like that. I am very much motivated as a mathematician by the image of some kind of collective effort of the human species to understand the world. And I’m not [Ed] Witten or [Maxim] Kontsevich or [Alexander] Grothendieck or somebody, but I’ll put my little brick in the wall. And if I don’t do it, then maybe it’ll be decades before somebody does this particular thing. So I’m moving that moment forward in time, and I feel like that’s a valid use of my energies and efforts, and I’ll teach other people and train students to do that kind of thing, and I felt that was a very worthwhile endeavor to spend my life professionally on. But if you believe that there are going to be systems around in 10 years, 20 years, 30 years - it doesn’t really matter, right, because mathematics is such a long-term endeavor. If you believe that at some time, soon-ish, systems will be around that will do all that for $.05 of electricity and in 20 seconds… If that is your motivation for doing mathematics, it has to change your sense of how worthwhile that is, because it involves many tradeoffs against other things you could do and other things you find important. So I actually found it quite difficult to continue doing the work I was doing, the more I thought about this and the more I believed in things like scaling laws and the fact that these systems do seem to understand what they’re doing, and there’s interesting internal structures and something going on we don’t understand. So I’d already begun shifting to studying the universal phenomena involved in learning machines from a geometric perspective, and I picked up statistics and empirical processes and all that. I’d already started to find that more motivating than the kind of mathematics I was doing before. And so it wasn’t such a big jump from that to being motivated by alignment and seeing a pathway to making use of that comparative advantage in theory and mathematics and seeing how that might be applicable to make a contribution to that problem. There’s many details and many personal conversations with people that helped me to get to that point, and in particular, my former master’s student, Matt Farrugia-Roberts, who was in my orbit probably the person who cared about alignment the most, who I talked to the most about it. So that’s what led me to where I am now. Most of my research work is now motivated by applications to alignment. Payoffs of singular learning theory for AI alignment Filan: Sure. My next question is: concretely, what do you think it would look like for singular learning theory to be useful in the project of analyzing or preventing existential risk from AI? Murfet: The pathway to doing that that we’re currently working on is providing some sort of rigorously founded empirical tools for understanding how structure gets into neural networks. And that has similar payoffs as many things [in] interpretability might, and also potentially some of the same drawbacks. So I can talk about that in more detail, but maybe it’s better to sketch out, at a very high level, the class of things that theories like SLT might say and which seem related to the core problems in alignment. Then we can talk about some detailed potential applications. So I rather like the framing that Nate Soares gave in a blog post he wrote in 2022, I think. I don’t know if that’s the post that introduced the term “sharp left turn”, but it’s where I learned about it. So let me give a framing of what Soares calls the core technical problem in alignment, and which I tend to agree seems like the core problem. I’ll say it in a way which I think captures what he’s saying but is my own language. If we look at the way that large-scale neural networks are developing, they become more and more competent with scale both in parameters and data, and it seems like there’s something kind of universal about that process. What exactly that is, we don’t quite know, but many models seem to learn quite similar representations, and there are consistencies across scale and across different runs of the training process that seem hard to explain if there isn’t something universal. So then, what is in common between all these different training processes? Well, it’s the data. So I guess many people are coming to a belief that structure in the data, whatever that means, is quite strongly determinant of the structures that end up in trained networks, whatever you take that to mean, circuits or whatever you like. So then from that point of view, what Soares says is… his terms are “capabilities generalize further than alignment”. And the way I would put that is: if your approach to alignment is engineering the data distribution - things like RLHF or safety fine-tuning and so on, [that] fundamentally look like training with modified data that tries to get the network to do the thing you want it to do; if we just take as a broad class of approaches “engineer the data distribution to try and arrange the resulting network to have properties you like” - If that’s your approach, then you have to be rather concerned with which patterns in the data get written more deeply into the model, because if… And Soares’s example is arithmetic: if you look in the world, there are many patterns that are explained by arithmetic. I don’t think this is how current models learn arithmetic, but you could imagine future multimodal models just looking at many scenes in the world and learning to count and then learning rules of arithmetic, et cetera, et cetera. So anyway, there are some patterns in the world that are very deep and fundamental and explain many different samples that you might see. And if this is a universal phenomenon, as I believe it is, that the data determines structure in the models, then patterns that are represented more deeply in the world will tend perhaps to get inscribed more deeply into the models. Now, that’s a theoretical question. So that’s one of the questions you might study from a theoretical lens. Is that actually the case? But the story with DLNs [deep linear networks] and learning modes of the data distribution in order of their singular values and all that tends to suggest that this is on the right track. And I think SLT has something more general to say about that. I can come back to that later, but I buy this general perspective that in the data, there are patterns. Not all patterns are equal, some are more frequent than others, some are sort of deeper than others in the sense that they explain more. And capabilities - whatever that means, but reasoning and planning and the things that instrumental convergence wants to talk about models converging to - these kinds of things might be patterns that are very deeply represented. Whereas the things you are inserting into the data distribution to get the models to do what you want, the kind of things that you’re doing with RLHF for example, might not be as primary as those other patterns, and therefore the way they get written into the model in the end might be more fragile. And then when there’s a large shift in the data distribution, say from training to deployment or however you want to think about that, how do you know which of those structures in your model, associated to which structures in the data distribution, are going to break and which ones will not? Which ones are sacrificed by the model in order to retain performance? Well, maybe it’s the ones that are shallower rather than the ones that are deeper. And on that theory, capabilities generalize further than alignment. So I think that post is sometimes criticized by its emphasis on the evolutionary perspective, on the contrast between in-lifetime human behavior and what evolution is trying to get people to do and so on. But I think that’s missing the point to some degree. I think this general perspective of structure in the data determining structure in the models, not all structure being equal, and our alignment attempts, if they go through structuring the data, perhaps being out-competed by structures in the data that are deeper when it comes to what happens when data distributions shift - I think this is a very sensible, very grounded, quite deep perspective on this problem, which as a mathematician makes a lot of sense to me. So I think this is a very clear identification of a fundamental problem in Bayesian statistics even absent a concern about alignment, but it does seem to me to be quite a serious problem if you’re attempting to do alignment by engineering the data distribution. So I think my mainline interest is in approaching that problem and, well, we can talk about how you might do that. Obviously it’s a difficult and deep problem empirically and theoretically, and so we’re sort of building up to that in various ways, but I think that is the core problem that needs to be solved. Filan: Sure. I guess if you put it like that, it’s not obvious to me what it would look like for singular learning theory to address this, right? Maybe it suggests something about understanding patterns in data and which ones are more fundamental or not, but I don’t know, that’s a very rough guess. Murfet: I can lay out a story of how that might look. Obviously, this is a motivating story, but not one that has a lot of support right now. I can say the ingredients that lead into me thinking that that story has some content to it. So we’ve been studying for the last year how the training process looks in models of various sizes and what SLT says about that, and part of the reason for doing that is because we think… I mean, other people have independent reasons for thinking this, but from an SLT perspective, we think that the structure of the training process or learning process reflects the structure of the data, what things are in it, what’s important, what’s not. So if it’s correct that the structure of the data is somehow revealed in the structure of the learning process, and that also informs the internal structures in the model that emerge and then affect later structure and then are present in the final model. So that starts to give you some insight into, [first], how - the mechanism by which structures in the data become structures in the model. If you don’t have that link, you can’t really do much. So if you can understand how structure in the data becomes structures - say, circuits or whatever - in the final model, that’s already something. Then if you also understand the relative hierarchy of importance, how would you measure that? There’s several things you’d want to do in order to get at this question. You’d want to be able to, first of all, know what the structure in the data is. Well, unfortunately, training networks is probably the best way to find out what the structure in the data is. But suppose you’ve trained a network which sort of is a reflection, holding a mirror up to the data, and you get a bunch of structure in that model, well, then you’re just looking at a big list of circuits. How do you tell which kinds of structure are associated to deep things in the data, which are very robust and will survive under large scale perturbations, and [which are] very fragile structures that are somewhat less likely to survive perturbations in the data distribution if you had to keep training or expose the network to further learning. Well, those are questions. Then there’s a question of stability of structure and how that relates to things you can measure, but these are fundamentally geometric questions from our point of view. So I think it actually is in scope for SLT to… Not right now, but there are directions of development of the theory of SLT that augment the invariants like the local learning coefficient and the singular fluctuation with other invariants you could attempt to estimate from data, which you could associate to these structures as you watch them emerging and which measures, for example, how robust they are to certain kinds of perturbations in the data distribution, so that you get some idea of not only what structure is in the model, but what is deep and what is shallow. And how that pays off for alignment exactly, I guess it’s hard to say right now, but this seems like the kind of understanding you would need to have if you were to deal with this problem of generalization of capabilities outpacing alignment. If you were to have empirical and theoretical tools for talking about this sensibly, you’d at least have to do those things, it seems to me. So that’s how I would see concretely… I mean, we have ideas for how to do all those things, but it’s still very early. The part that we sort of understand better is the correspondence between structure in the data and development, and the stages, and how those stages do have some geometric content. That’s what the changes in the local learning coefficient says. So all of that points in some direction that makes me think that the story I was just telling has some content to it, but that is the optimistic story of how SLT might be applied to solve eventually, or be part of the solution to [the alignment] problem, that we’re working towards. Filan: Sure. So I guess if I think about what this looks like concretely, one version of it is this developmental interpretability-style approach of understanding: are there phase transitions in models? At what points do models really start learning a thing versus a different thing? And then I also see some work trying to think about what I would think of as inductive biases. So in particular, there’s this LessWrong post. Is that too undignified? I don’t know if you posted it elsewhere, but there’s this thing you posted about- Murfet: Not undignified. Yes, it was a LessWrong post. Filan: Something about, you call it “short versus simple”. Thinking about a singular learning theory perspective on learning codes of Turing machines that are generating data and saying something beyond just the number of symbols in the code. Perhaps you want to explain that a little bit more for the audience? Murfet: Sure. There’s been an interesting thread within the alignment literature, I think, if I’m correct, going back to Christiano writing about ghosts in the Solomonoff prior or something. And then Evan Hubinger wrote quite a bit about this, and others, which is motivated by the observation that if you’re producing very capable systems by a dynamical process of training, and you want to prove things about the resulting process - or maybe that’s too ambitious, but at least understand something about the resulting process and its endpoint - then you might like to know what kind of things that process typically produces, which is what “inductive biases” means. And neural networks are not Turing machines, but we have some understanding of certain kinds of distributions over Turing machine codes. And there’s a kind of Occam’s razor principle there, which is spiritually related to the free energy formula that we were discussing earlier, although not directly analogous without making some additional choices. But anyway, the story about inductive biases and its role in alignment has been going on for a number of years, and there’s been, I think, quite reasonably some discussion that’s critical of that in recent months on LessWrong. And my post sort of came out of reading that a little bit. So let me maybe just characterize briefly what the discussion is for some context. We don’t understand the inductive bias of SGD training. We know some bits and pieces, but we really don’t understand systematically what that bias is. We do not understand that it’s a bias towards low Kolmogorov complexity functions. There are some papers pointing in that direction. I don’t think they conclusively establish that. So I think we are just quite in the dark about what the inductive biases of SGD training are. And I read these posts from, say, Christiano and Hubinger as saying, “Well, here we know about the inductive biases in some nearby conceptually similar thing. And if that knowledge could be used to reason about SGD training, then here would be the consequences. And these look potentially concerning from an alignment perspective.” And my model of both Christiano and Hubinger is that I think neither of them would claim those are ironclad arguments because there’s a big leap there, but it seems sufficient to motivate further research empirically, which is what, for example, Hubinger has been doing with the Sleeper Agents work. So I think that’s very interesting, and I buy that, but with the big caveat that there is this gap there, that it isn’t on solid theoretical ground. And then you can criticize that work and say that it’s kind of spinning stories about how scary inductive biases are. And there were some posts from Nora Belrose and Quintin Pope critiquing the [argument, saying] if you take uncritically this story about inductive biases without really internalizing the fact that there is this big gap in there, then you might make overconfident claims about what the consequences of inductive biases may be. So in some sense, I think both sides are correct. I think it’s reasonable to look at this and think, “Ah, this might tell us something, and so I’ll go away and do empirical work to see if that’s true.” I think it’s also accurate to think that people may have become a little bit overly spooked by our current understanding of inductive biases. So in that context, what I wanted to do with this post was to point out that as far as our current state-of-the-art knowledge about Bayesian statistics goes, which is SLT, at least if by “inductive bias” he means “which parameters does the Bayesian posterior prefer?”… This is not description length. It’s not even like description length, it’s just something else. And we don’t know what that is yet. But this step that Christiano and Hubinger were making from thinking about description length and inductive biases in SGD training as maybe being related, I’m pointing to a particular piece of that gap where I see that this is not justified. Now, I think that maybe the concern that they derive from that connection may still be justified, but I think thinking about it roughly as description length is simply wrong. And then I gave a particular example in that post - not in neural networks, but in a Turing machine-oriented setting - of how the local learning coefficient, which in some cases, like this simple situation we were describing at the beginning of this podcast, where you have energy levels and then there’s sums of squares, and the local learning coefficient is just the number of squares, which is sort of the co-dimension. So that’s somewhat like description length. So if you have a system where the LLC, the local learning coefficient, is basically half the number of variables you need to specify your thing, then that is description length, because you take your universal Turing machine, it’s got a code tape, and you need n squares to specify your code. Well, that’s roughly speaking n variables whose value you need to specify, and you need that value to stay close to the value you specified and not wander off in order to execute the correct program. So there is quite a legitimate rigorous connection between description length and the local learning coefficient in the case where you’re dealing with models that have this near-regularity behavior that the loss function is just locally sums of squares. But it’s typical, as soon as you perturb this kind of universal Turing machine perspective and introduce some stochasticity, that the local learning coefficient becomes immediately more exotic and includes, for example, a bias towards error correction, which I’d present in the following way. If you give someone some instructions, it’s no good those instructions being short if they’re so fragile that they can’t execute them reliably. So there’s actually some advantage to trading off succinctness against robustness to errors in execution, where you don’t have to get everything perfect and you’ll still more or less get what you want. And there’s some precise mathematical statement of that in that post. That’s in the setting of Turing machines, so it’s provably the case that there will be some preference for Turing machines, which are insensitive to certain kinds of errors if they’re executed in some slightly exotic way… The setting really is not meant to be thought of as directly analogous to what’s happening in neural networks. But I think there’s a high level of conceptual insight, which I sort of noticed after… I thought of those ideas along with my student, Will Troiani, at a meeting we had in Wytham that was organized by Alexander [Oldenziel] and Stan [van Wingerden] and Jesse [Hoogland]. There were some linear logic people there, and I was talking with them about this, and I had this idea with Will about error correction. And then later I twigged that there is a phenomenon in neural networks, these backup heads, where it does seem that neural networks may actually have a bias towards reliably computing important things by making sure that if some weight is perturbed in such a way that it takes out a certain head, that another head will compensate. So I’m speculating now, but when I see that sort of phenomenon, that makes sense to me, as a general principle of Bayesian statistics, that short is not necessarily better, degenerate is better, and degenerate can be both short but also redundant. Filan: Right. So I guess to me this points to a qualitatively different way that singular learning theory could be useful, where one way is understanding developmental stages and how structure gets learned over time with data, and there’s this other approach which is better understanding what kinds of solutions Bayesian inference is going to prefer in these sorts of messy systems. And maybe that helps inform arguments that people tend to have about what sorts of nasty solutions should we expect to get. Does that seem fair to you? Murfet: Yeah, I think so. I guess this observation about the inductive biases has sort of been on the side or something because we’ve been busy with other things. One of the things that my former student, Matt Farrugia-Roberts, who I mentioned earlier, and potentially others - I don’t know if Garrett Baker is interested in this, but he and Matt are working on an RL project right now that maybe eventually develops in this direction… You could imagine that in a system that is doing reinforcement learning, that potentially some of these inductive biases - if they exist in neural networks, and that’s still speculation, but if this observation I’m making about this other setting with Turing machines, if this inductive bias towards error correction or robustness is universal, then you could imagine that this is actually a pretty significant factor in things like RL agents choosing certain kinds of solutions over others because they’re generally more robust to perturbations in their weights - things like making your environment safe for you to make mistakes. That’s speculation, but I do think that I agree that this is an independent direction in which potentially you can derive high-level principles from some of these mathematical ideas that would be useful. Does singular learning theory advance AI capabilities? Filan: Fair enough. So another question I have about this interplay between singular learning theory and AI alignment, AI existential risk is: a lot of people in the field use this kind of simplified model where there are some people working on making AI more generally capable and therefore more able to cause doom. And there are other people who are working on making sure AI doesn’t cause doom. And when you’re evaluating some piece of research, you’ve got to ask, to what extent does it advance capabilities versus alignment? And if it advances capabilities much more than alignment, then maybe you think it’s bad or you’re not very excited about it. So with singular learning theory, one might make the critique that, well, if we have this better theory of deep learning, it seems like this is just going to generally be useful, and maybe it’s about as useful for causing doom as for preventing doom, or maybe it’s more useful for causing doom than for preventing doom, and therefore people on the anti-doom side should just steer clear of it. I’m wondering what you think about that kind of argument. Murfet: Yeah, it’s a good question. I think it’s a very difficult question to think about properly. I have talked with many people about it. Not only on my own, but along with Alexander and Jesse and Stan and the other folks at Timaeus I’ve talked about this quite a bit. I talked with Lucius Bushnaq about it and some of the junior MIRI folks. So I’ve attempted to think about this pretty carefully, but I still remain very uncertain as to how to compute on these trade-offs, partly because especially this kind of research… I mean, [in] empirical research, I suppose, you partly get out about as much as you put in or something. You have a certain number of experiments, you get a certain number of bits of insight. But theory sometimes doesn’t work like that. You crack something, and then lots and lots of things become visible. There’s a non-linear relationship between the piece of theory and the number of experiments it kind of explains. So my answer to this question could look extremely foolish just six months from now if a certain direction opens up, and then just very clearly the trade-off is not what I thought it was. I guess one response to this question would be that we have prioritized thinking about directions within the theory that we think have a good trade-off in this direction. And for the things we’re currently thinking about, I just don’t see how the ratio of contribution to alignment to contribution to capabilities is too small to justify doing it. So we are thinking about it and taking it seriously, but I don’t actually have a very systematic way of dealing with this question, I would say, even at this point. But I think that applies to many things you might do on a technical front. So I guess my model is something like… And here I think Alexander and I differ a little, so maybe I’ll introduce Alexander’s position just to provide context. So I think if you have a position that capabilities progress will get stuck somewhere - for example, perhaps it will get stuck… I mean, maybe the main way in which people imagine it might get stuck is that there’s some fundamental gap between the kind of reasoning that can be easily represented in current models and the kind of reasoning that we do, and that you need some genuine insight into something involved - architecture or training processes or data, whatever - to get you all the way to AGI. And there’s some threshold there, and that’s between us and the doom. If there is such a threshold, then conceivably, you get unstuck by having better theory of how universal learning machines work and the relationship between data and structure, and then you can reverse engineer that to design better architectures. So I guess that’s pretty obviously the mainline way in which SLT could have a negative impact. If, on the other hand, you think that basically not too much more is required, nothing deep, then it’s sort of like, capabilities are going to get there anyway, and the marginal negative contribution from doing more theoretical research seems not that important. So I think that seems to me the major divide. I think in the latter world where you sort of see systems more or less getting to dangerous levels of capability without much deeper insight, then I think that SLT research, I’m not that concerned about it. I think just broadly, one should still be careful and maybe not prioritize certain avenues of investigation that seem disproportionately potentially likely to contribute to capabilities. But on the whole, I think it doesn’t feel that risky to me. In the former case where there really is going to be a threshold that needs to be cracked with more theoretical progress, then it’s more mixed. I guess I would like to err on the side of… Well, my model is something like it would be extremely embarrassing to get to the point of facing doom and then be handed the solution sheet, which showed that actually it wasn’t that difficult to avert. You just needed some reasonably small number of people to think hard about something for a few years. That seems pretty pathetic and we don’t know that we’re not in that situation. I mean, as Soares was saying in this post, he also, at least at that time, thought it wasn’t like alignment was impossible, but rather just a very difficult problem you need a lot of people thinking hard about for some period of time to solve, and it seems to me we should try. And absent a very strong argument for why it’s really dangerous to try, I think we should go ahead and try. But I think if we do hit a plateau and it does seem like theoretical progress is likely to critically contribute to unlocking that, I think we would have to reevaluate that trade-off. Filan: Yeah. I wonder: it seems like you care both about whether there’s some sort of theoretical blocker on the capabilities side and also whether there’s some theoretical blocker on the alignment side, right? Murfet: Yeah. Filan: If there’s one on the alignment side but not on the capabilities side, then you’re really interested in theory. If there’s one on the capability side but not on the alignment side, then you want to erase knowledge of linear algebra from the world or something. Not really. And then if there’s both or neither, then you’ve got to think harder about relative rates. I guess that would be my guess? Murfet: Yeah, I think that’s a nice way of putting it. I think the evidence so far is that the capabilities progress requires essentially no theory, whereas alignment progress seems to, so far, not have benefited tremendously from empirical work. I mean, I guess it’s fair to say that the big labs are pushing hard on that and believe in that, and I don’t know that they’re wrong about that. But my suspicion is that these are two different kinds of problems, and I do see this as actually a bit of a groupthink error in my view, in the more prosaic alignment strategy, which is: I think a lot of people in computer science and related fields think, maybe not consciously, but unconsciously feel like deep learning has succeeded because humans are clever and we’ve made the things work or something. I think many clever people have been involved, but I don’t think it worked because people were clever. I think it worked because it was, in some sense, easy. I think that large scale learning machines want to work and if you just do some relatively sensible things… Not to undersell the contributions of all the people in deep learning, and I have a lot of respect for them, but compared to… I mean, I’ve worked in deep areas of mathematics and also in collaboration with physicists, the depth of the theory and understanding required to unlock certain advances in those fields, we’re not talking about that level of complexity and depth and difficulty when we’re talking about progress in deep learning. Filan: I don’t know, I have this impression that the view that machines just want to learn and you just have to figure out some way of getting gradients to flow. This seems similar to the Bitter Lesson essay. To me, this perspective is… I feel like I see it in computer scientists, in deep learning people. Murfet: Mm-hmm. Yeah. But I think that the confidence derived from having made that work seems like it may lead to a kind of underestimation of the difficulty of the alignment problem. If you think about, “Look, we really cracked deep learning as a capabilities problem and surely alignment is quite similar to that. And therefore because we’re very clever and have lots of resources and we really nailed this problem, therefore we will make a lot of progress on that problem.” That may be true, but it doesn’t seem like it’s an inference that you can make, to me. So I guess I do incline towards thinking that alignment is actually a different kind of problem, potentially, to making the thing work in the first place. And this is quite similar to the view that I was attributing to Soares earlier, and I think there are good reasons, fundamental reasons from the view of statistics or whatever to think that that might be the case. I think it’s not just a guess. I do believe that they are different kinds of problems, and therefore that has a bearing on the relative importance of… I do think alignment may be theoretically blocked, because it is a kind of problem that you may need theoretical progress for. Now, what does that mean? If we look at the empirical approaches to alignment that are happening in the big labs, and they seem to really be making significant contributions to the core problems of alignment, and at the same time capabilities sort of seem blocked, then I guess that does necessarily mean that I would move against my view on the relative value of theoretical progress, because it might not be necessary for alignment, but might unblock capabilities progress or something. Filan: Yeah. For what it’s worth, I think, at least for many people, I get the impression that the “optimism about prosaic alignment” thing maybe comes more from this idea that somehow the key to alignment is in the data and we’ve just got to figure out a way to tap into it, rather than “we’re all very smart and we can solve hard problems, and alignment’s just as hard as making capabilities work.” This is my interpretation of what people like Nora Belrose, Quintin Pope, Matthew Barnett think. They’re welcome to correct me, I might be misrepresenting them. I guess there’s also a point of view of people like Yann LeCun who think that we’re not going to have things that are very agentic, so we don’t need to worry about it. Maybe that is kind of a different perspective. Open problems in singular learning theory for AI alignment Filan: So changing topics a bit: suppose someone has listened to this podcast and they’re interested in this research program of developing singular learning theory, making it useful for AI alignment things: what are the open problems or the open research directions that they could potentially tap into? Murfet: I’ll name a few, but there is a list on the DevInterp webpage. If you go to DevInterp, there’s an “open problems” page and there’s a Discord there where this question gets asked fairly frequently and you’ll find some replies. Maybe there are several different categories of things which are more or less suited to people with different kinds of backgrounds. I think there already are, and will be an increasing number of, people coming from pure mathematics or rather theoretical ends of physics who ask this question. To them, I have different answers to people coming from ML or computer science, so maybe I’ll start with the more concrete end and then move into the more abstract end. So on the concrete front, the current central tool in developmental interpretability is local learning coefficient estimation. I mentioned that this work that Zach [Furman] and Edmond [Lau] did gives us some confidence in those estimates for deep linear networks. But there is a lot of expertise out there in approximate Bayesian sampling from people in probabilistic programming to just Bayesian statistics in general. And I think a lot more could be done to understand the question of why SGLD is working to the extent it works. There was a recent deep learning theory conference in Lorne, organized by my colleague, Susan [Wei] and Peter Bartlett at DeepMind, and I posed this as an open problem there. I think it’s a good problem. So the original paper that introduced SGLD has a kind of proof that it should be a good sampler, but this proof… Well, I wouldn’t say it’s actually a proof of what you informally mean when you say SGLD works. So I would say it’s actually a mystery why SGLD is accurately sampling the LLC, even in deep linear networks. Understanding that would give us some clue as to how to improve it or understand what it’s doing more generally. And this kind of scalable approximate Bayesian sampling will be fundamental to many other things we’ll do in the future with SLT. So if we want to understand more about the learned structure in neural networks, how the local geometry relates to this structure of circuits, et cetera, et cetera, all of that will at the bottom rely on better and better understanding of these approximate sampling techniques. So I would say there’s a large class of important fundamental questions to do with that. A second class of questions, more empirically, is studying stagewise development in more systems, taking the kind of toolkit that we’ve now developed and applied to deep linear networks, to the toy model of superposition and small transformers, just running that on different systems. We had some MATS scholars, Cindy Wu and Garrett Baker and Xinyu Qian looking at this recently, and there’s a lot more in that direction one can do. I think those are sort of the main [categories]. Beyond that, maybe I’ll defer to the list of open problems on the webpage and talk about some more intermediate questions. So there’s a lot more people at the moment with ML backgrounds interested in developmental interpretability than there are with the kind of mathematical backgrounds that would be required to do more translation work. At the moment, there are various other things in SLT, like the singular fluctuation, which we haven’t been using extensively yet, but which we’re starting to use. And I know there’s a PhD student of [Pratik] Chaudhari who’s investigating it and maybe a few others. But this is the other principal invariant besides the learning coefficient in SLT, which should also tell us something interesting about development and structure, but which hasn’t been extensively used yet. So that’s another interesting direction. Of course you can just take quantities and go and empirically use them, but then there’s questions… using the local learning coefficient, there’s some subtleties, like the role of the inverse temperature and so on. And there are theoretical answers to the question, like, “Is it okay for me to do X?” When you’re doing local learning coefficient estimation, are you allowed to use a different inverse temperature? Well, it turns out you are, but the reason for that has some theoretical basis and there is a lower set of people who can look at the theory and know that it’s justified to do X. So if you have a bit more of a mathematical background, helping to lay out more foundations, knowing which things are sensible to do with these quantities is important. Singular fluctuation is one. Then ranging through to the more theoretical, at the moment, it’s basically Simon and myself and my PhD student, Zhongtian [Chen], who have a strong background in geometry and they were working on SLT, Simon Lehalleur, as I mentioned earlier. Currently, a big problem with SLT is that it makes use of the resolution of singularities to do a lot of these integrals, but that resolution of singularities procedure is kind of hardcore or something. It’s a little bit hard to extract intuition from. So we do have an alternative perspective on the core geometry going on there based on something called jet schemes, which has a much more dynamical flavor and Simon’s been working on that and Zhongtian as well a little bit. So I would say we’re maybe a few months away from having a pretty good starting point from anybody who has a geometric background to see ways to contribute to it. So the jet scheme story should feed into some of this discussion around stability of structures to data distribution shift that I was mentioning earlier. There’s lots of interesting theoretical open problems there to do with deformation of singularities that should have a bearing on basic questions in data distribution change in Bayesian statistics. So that’s a sketch of some of the open directions. But relative to the number of things to be done, there are very few people working on this. So if you want to work on this, show up in the Discord or DM me or email me and ask this question, and then I will ask what your background is and I will provide a more detailed answer. What is the singular fluctuation? Filan: Sure. At the risk of getting sucked down a bit of a rabbit hole, the singular fluctuation… I noticed that in this paper, Quantifying Degeneracy, it’s one of the two things you develop an estimator for. Maybe I should just read that paper more clearly, but I don’t understand what the point of this one is. The local learning coefficient, we’re supposed to care about it because it shows up in the free energy expansion and that’s all great. What is the singular fluctuation? Why should I care about it? Murfet: Okay, I’ll give two answers. The relation between them is in the mathematics and maybe not so clear. The first answer, which is I think the answer Watanabe would give, or rather the gray book would give, is that, if you look at the gap between… We were talking earlier about the theoretical generalization error, the KL divergence from the truth to the predictive distribution, which is some theoretical object, you’ll never know what that is. So you’re interested then in the gap between that and something you can actually estimate, which you can call the training error. It’s what Watanabe calls the training error. I think one should not conflate that with some other meaning of training error that you might have in mind. Anyway, it’s some form of generalization error, which can be estimated from samples. So if you can understand that gap, then obviously you can understand the theoretical object. And that gap is described by a theorem in terms of the learning coefficient and the singular fluctuation. So the singular fluctuation controls the gap between these theoretical and empirical quantities, is one way of thinking about it. So that is its theoretical significance. It’s much less understood. Watanabe flags in a few different places that this is something he would be particularly interested in people studying. For example, we don’t know bounds on it in the way that we might know bounds on the local learning coefficient. You can estimate it from samples in a similar way. We don’t have any results saying that estimates based on SGLD are accurate or something because we don’t have… I mean, those depend on knowing theoretical values, which are much less known in general than learning coefficient values. The second answer to what the singular fluctuation is, is that it tells you something about the correlation between losses for various data samples. So if you take a fixed parameter and you look at some data set, it’s got N things in it, N samples. Then you can look at the loss for each sample, whose average is the empirical loss. So for the i-th sample, you can take Li, which is the loss of that parameter on that sample, but if you think about the parameter as being sampled from the Bayesian posterior locally, that’s a random variable that depends on W, the parameter. And then you can take the covariance matrix of those expectations with respect to all the different samples: EW of loss i times loss j, where the losses depend on the parameter, which is sampled from the posterior. And that covariance matrix is related to the singular fluctuation. So it’s quite closely related to things like influence functions, or how sensitive the posterior is for including or leaving out certain samples, or leverage samples, or these kinds of notions from statistics. So it’s a kind of measure of how influential… Well, yeah, so it’s that covariance matrix. We think that this can be a tool for understanding more fine-grained structure than the local learning coefficient or correlation functions in that direction: not only correlation functions of two values like that, but more… So this is going in the direction of extracting more fine-grained information from the posterior than you’re getting with the local learning coefficient, at some conceptual level. How geometry relates to information Filan: Sure. Gotcha. So before we basically wrap up, is there any question that you wish I’d asked during this interview, but that I have not yet asked? Murfet: Well, how about a question you did ask but I didn’t answer? We can circle back to: you asked me, I think, at some point, about how to think about the local learning coefficient for neural networks, and then I told some story about a simplified setting. So maybe I’ll just briefly come back to that. So if you think about, given an architecture and given data, the loss function represents constraints. It represents a constraint for certain parameters to represent certain relationships between inputs and outputs. And the more constraints you impose, somehow the closer you get to some particular kind of underlying constraint. So that’s what the population loss is telling you. But if you think about, “Okay, so what are constraints?”: constraints are equations, and there’s several ways of combining equations. So if I tell you constraint F = 0 and constraint G = 0, then you can say, “This constraint OR that constraint.” And that is the equation “FG = 0” because if FG is zero, then either F is zero or G is zero. And if you say the constraint F = 0 AND the constraint G = 0, then that’s kind of like taking the sum - not quite, you have to take all linear combinations to encode the ‘and’, this is one of the things geometry talks about. That would be taking the ideal generated by F and G. But basically, taking two constraints and taking their conjunction means something like taking their sum. So that gives you a vision of how you might take a very complex constraint, an overall constraint, say one that’s exhibited by the population loss, the constraint implicit in which is all the structure in your data. It’s a very hard set of constraints to understand. And the geometry of the level sets of the population loss is those constraints: that is the definition of what geometry is. It’s telling you all the different ways in which you can vary parameters in such a way that you obey the constraints. So it’s in some sense tautological that the geometry of the population loss is the study of those constraints that are implicit in the data. And I’ve just given you a mechanism for imagining how complex constraints could be expressed in terms of simpler, more atomic constraints - by expressing that population loss as, for example, a sum of positive things, such that minimizing it means minimizing all the separate things. That would be one decomposition, which looks like an “and”. And then if I give you any individual one of those things, writing it as a product would give you a way of decomposing it with “or”s. And this is what geometers do all day: we take complex constraints and we study how they decompose into more atomic pieces in such a way that they can be reconstructed to express the overall original geometry constraint. So this is how geometry can be applied to, first of all, why the structure in the data becomes structure of the geometry, and secondly, why the local learning coefficient, which is a measure of the complexity of that geometry… it’s conceptually quite natural to think about it as a measure of the complexity of the representation of the solution that you have in a given neighborhood of parameter space. Because at that point in parameter space, the loss function maybe doesn’t quite know about all the constraints because it’s only managed to represent some part of the structure, but to the extent that it’s representing the structure and the data, it is making the geometry complex in proportion to how much it has learned. And hence why the learning coefficient, which measures that geometry, is reflecting how much has been learned about the data. So that’s a kind of story for why this connection to geometry is not maybe as esoteric as it seems. Following Daniel Murfet’s work Filan: All right. Well, to close up, if people are interested in following your research, how should they do that? Murfet: They can find me on Twitter at @DanielMurfet. But I think the main way to get in touch with the research and the community is to go to DevInterp.com, as I mentioned earlier, and make yourself known on the Discord. And feel free to ask questions there; we’re all on there and we’ll answer questions. Filan: Cool. Another thing I want to plug there is there’s this YouTube channel, I think it’s called Developmental Interpretability. Murfet: That’s right. Filan: And it has a bunch of good talks by you and other people about this line of research into singular learning theory as well as the lectures that I attended. Great. Well, it’s been really nice having you on. Thank you for coming. Murfet: Yeah, thanks, Daniel. Filan: This episode is edited by Jack Garrett, and Amber Dawn Ace helped with transcription. The opening and closing themes are also by Jack Garrett. Financial support for this episode was provided by the Long-Term Future Fund, along with patrons such as Alexey Malafeev. To read a transcript of this episode or to learn how to support the podcast yourself, you can visit axrp.net. Finally, if you have any feedback about this podcast, you can email me at feedback@axrp.net.
2024-05-07
https://www.lesswrong.com/posts/apZ7oEphqPsoEBgdp/how-do-open-ai-models-affect-incentive-to-race
apZ7oEphqPsoEBgdp
How do open AI models affect incentive to race?
jessica.liu.taylor
I see it said sometimes that open models contribute to AI race dynamics. My guess is that they don't, and if anything, reduce AI race dynamics. I will consider a simplified model that only takes into account the cost of training a model, not the cost to deploy it (which tends to be small relative to revenue anyway). Let f(x) map a training expense x to a "value per day per customer" of the trained model, under the assumption that the training makes efficient use of the cost. That is, a customer values using an AI model trained with x compute at $f(x) per day. I assume there are n identical customers here; of course, there are complexities where some customers value AI more than others, incentivizing price discrimination, but I'm abstracting this consideration out. (In general, variation in how much customers value a product will tend to increase consumer surplus while reducing revenue, as it makes it harder to charge customers just under the maximum amount they're willing to pay.) I'm also assuming there is only one company that trains closed models for profit. This assumption is flawed because there is competition between different companies that train closed models. However, perfect competition assumptions would tend to reduce the incentive to train models. Suppose two companies have closed models of equivalent expense x. They each want to charge slightly less than the minimum of f(x) and the competitor's price, per customer per day. If each competitor undercuts the other slightly, the cost will approach 0. See the Traveler's Dilemma for a comparison. The reasons why this doesn't happen have to do with considerations like differences in models' performance on different tasks, e.g. some models are better for programming than others. If models are sufficiently specialized (allowing this sort of niche-monopolization), each specialized type of model can be modeled independently as a monopoly. So I'll analyze the case of a closed model monopoly, noting that translation to the real world is more complex. Suppose the best open model has compute x and a company trains a closed model with compute y > x. Each customer will now spend up to f(y) - f(x) per day for the model; I'll assume the company charges f(y) - f(x) and the customers purchase this, noting that they could charge just below this amount to create a positive incentive for customers. So the company's revenue over m days is nm(f(y) - f(x)). Clearly, this is decreasing in x. So the better the open model is, the less expected revenue there is from training a closed model. But this is simply comparing doing nothing to training a model of a fixed cost y. So consider instead comparing expected revenue between two different model costs, y and z, both greater than x. The revenue from y is nm(f(y) - f(x)), and from z it is nm(f(z) - f(x)). The difference between the z revenue and the y revenue is nm(f(z) - f(y)). This is unaffected by x. This can model a case where the company has already trained a model of cost y and is considering upgrading to z. In this case, the open model doesn't affect the expected additional revenue from the upgrade. Things get more complex when we assume there will be a future improvement to the open model. Suppose that, for k days, the open model has training cost x, and for the remaining m-k days, it has training cost x' > x. Now suppose that the closed AI company has already trained a model of cost y, where x < y < x'. They are considering upgrading to a model of cost z, where z > x'. Suppose they do not upgrade. Then they get nk(f(y) - f(x)) revenue from the first k days and nothing thereafter. Suppose they do upgrade, immediately. Then they get nk(f(z) - f(x)) revenue from the first k days, and n(m-k)(f(z) - f(x')) from the remaining days. Clearly, increasing x' past y will result in less revenue for the upgrade in comparison to not upgrading. So the announcement of the upgrade of the open model to x' compute will reduce the incentive to race by training a closed model with z compute. So in this simplified analysis, release of better open models reduces the incentive to race, or does nothing. This is overall not surprising, as intellectual property laws are motivated by incentivizing production of intellectual property, and open content tends to reduce the value of intellectual property. There are a number of factors that could be taken into account in other analyses, including: Effects of open models on ease of training closed models Substitution effects between different model niches (i.e. a model with an absolute advantage at mathematics may still be useful for writing essays) Effects of uncertainty over open model releases Different customers valuing the AI differently, driving price discrimination Non-straightforward incentives such as prestige/recruitment from releasing models Oligopoly dynamics Time discounting Changes in customer demand over time It should go without saying that effects on race dynamics are not the only relevant effect of open model releases. Isolating and estimating different effects, however, will help in making an overall evaluation. I suggest that someone who still believes that open models increase race dynamics clarify what economic assumptions they are using and how they differ from this model.
2024-05-07
https://www.lesswrong.com/posts/X2XMFPg6pTeGC3YFd/taming-infinity-stat-mech-part-3
X2XMFPg6pTeGC3YFd
Taming Infinity (Stat Mech Part 3)
Jemist
The canonical example of quantum mechanics in action is the harmonic oscillator, which is something like a mass on a spring. In classical mechanics, it wobbles back and forth periodically when it is given energy, if it's at a position x, wobbling about x0 and moving with velocty v we can say its energy contains a potential term V proportional to (x−x0)2, and a kinetic term T proportional to v2, with an overall form: E=12k(x−x0)2+12mv2 We could try and find a distribution over x and v, but continuous distributions tend not to "play well" with entropy. They're dependent on a choice of characteristic unit. Instead we'll go to the quantum world. One of the major results of quantum mechanics is that systems like this can only exist in certain energy levels. In the harmonic oscillator these levels are equally-spaced, with a spacing proportional to the frequency associated with the classical oscillator. Since the levels are equally-spaced, we can think about the energy coming in discrete units called "phonons". Our beliefs about the number of phonons N in our system can be expressed as a probability distribution P(N=n) over n∈N: This is progress: we've reduced an uncountably infinite set of states to a countable one, which is a factor of infinity! But if we do our normal trick and try to find the maximum entropy distribution, we'll still hit a problem: we get P(N=n)=0 for all n∈N. The Trick: Distribution Families Thinking back to our previous post, an answer presents itself: phonons are a form of energy, which is conserved. Since we're uncertain over N, we'll place a restriction on E(N) of our distribution. We can solve the specific case here, but it's actually more useful to solve the general case. Maths, Lots of Maths, Skippable: Consider a set of states of a system s∈S. To each of these we assign a real numeric value written as  s→xs∈R. We also assign a probability s→ps∈R+ constrained by the usual condition∑s∈Sps=1. Next, define E(X)=∑s∈Spsxs and H(S)=−∑s∈Spslnps. Imagine we perform a transformation to our distribution, such that the distribution is still valid and E(X) remains the same. We will consider an arbitrary transformation over elements {1,2,3}: p1→p1+dp1, p2→p2+dp2, p3→p3+dp3 dp1+dp2+dp3=0 x1dp1+x2dp2+x3dp3=0 Now let us assume that our original distribution was a minimum of H(S), which can also be expressed as dH(S)=0. dH(S)=−d(p1lnp1+p2lnp2+p3lnp3)=0 d(p1lnp1+p2lnp2+p3lnp3)=0 lnp1dp1+dp1+lnp2dp2+dp2+lnp3dp3+dp3=0 lnp1dp1+lnp2dp2+lnp3dp3=0 The solution for this to be equal to zero in all cases is the following relation: lnps∝xs+const⟹ps=Aexp(−Bxs) We can plug this back into our equation to verify that we do in fact get zero: lnA(dp1+dp2+dp3)−B(x1dp1+x2dp2+x3dp3)=0−0=0 The choice of a negative value for B is so that our distribution converges when values of xs extend up to ∞, which is common for things like energy. We will then get a distribution with the following form: P(S=s)=Aexp(−B×xs) Where B parameterizes the shape of the distribution and A normalizes it such that our probabilities sum to 1. We might want to write down A in terms of B: A=1/(∑s∈Sexp(−B×xs)) But we will actually get more use out of the following function Z=1/A: Z=∑s∈Sexp(−B×xs) First consider the derivative dZdB: dZdB=∑s∈S−xsexp(−B×xs) dZdB=−Z∑s∈Sxsps Which gives us the remarkable result: E(X)=−1ZdZdB=−dlnZdB We can also expand out the value of H(S): H(S)=−∑s∈Spslnps H(S)=−∑s∈Spsln(exp(−B×xs)/Z) H(S)=−∑s∈Sps(−B×xs−lnZ) H(S)=∑s∈S(B×psxs+pslnZ) H(S)=B∑s∈Spsxs+lnZ∑s∈Sps H(S)=BE(X)+lnZ H(S)=−BdlnZdB+lnZ And get this in terms of Z too! We also get one of the most important results from all of statistical mechanics: dH(S)dE(X)=B+E(X)dBdE(X)+dlnZdE(X) Now use the substitution: E(X)dBdE(X)=−1ZdZdBdBdE(X)=−1ZdZ=dE(X)=−dlnZdE(X) To get our final result: dH(S)dE(X)=B So B is not "just" a parameter for our distributions, it's actually telling us something about the system. As we saw last time, finding the derivative of entropy with respect to some constraint is absolutely critical to finding the behaviour of that system when it can interface with the environment. <\Maths> To recap the key findings: The probability of a system state s with value xs is proportional to exp(−B×xs)This parameter B is also the (very important to specify) value of dH(S)dE(X)We can define a function Z(B)=∑s∈Sexp(−B×xs)E(X)=−dlnZdBH(S)=−BdlnZdB+lnZ Which we can now apply back to the harmonic oscillator. Back to the Harmonic Oscillator So we want to find a family of distributions over n∈N≡N. We can in fact assign a real number to each value of n, trivially (the inclusion N∋n↪n∈R if you want to be fancy). Now we know that our distribution over N must take the form: P(N=n)=Aexp(−B×n) But we also know that the most important thing about our system is the value of our partition function Z(B): Z=∞∑n=0exp(−B×n) Which is just the sum of a geometric series with a=1, r=e−B: Z=11−e−B lnZ=−ln(1−e−B) Which gives us E(N) and H(N) in terms of B: E(N)=e−B1−e−B H(N)=Be−B1−e−B−ln(1−e−B) T instead of B Instead of B, we usually use a variable T=1/B for a few reasons. If we want to increase the amount of X in our system (i.e. increase E(X)) we have to decrease the value of B, whereas when B gets big, E(X) just approaches the minimum value of xs and our probability distribution just approaches uniform over the corresponding s. Empirically, T is often easier to measure for physical systems, and variations in T tend to feel more "linear" than variations in B. Let's plot both E(N) and H(N) of our systems as a function of T: E(N) converges on the line T−12. Rather pleasingly the energy of a quantum harmonic oscillator is actually proportional to N+12, not N. This little correction is called the "zero point energy" and is another fundamental result of quantum mechanics. If we plot the energy E instead of E(N), it will converge on T. H(N) converges on ln(T)−1. These are general rules. E is in general proportional to T, and H is almost always So far we've ignored the fact that our values of N actually correspond to energy, and therefore there must be a spacing involved. What we've been calling T so far should actually be called TEp where Ep is the energy of a single phonon. This is the spacing of the ladder of energy levels. If we swap TEp into our equations and also substitute in the energy E=Ep(E(N)+12)(we will omit the E when talking about energy) we get the following equations: E=Ep(e−Ep/T1−e−Ep/T+12) H(N)=1Te−Ep/T1−e−Ep/T−ln(1−e−Ep/T) Both functions now have a "burn-in" region around T=0, where the function is flat at zero. This is important. This region is common to almost all quantum thermodynamic systems, and it corresponds to a phenomenon when T≪Ep. When this occurs  the exponential term e−Ep/T can be neglected for all states except the lowest energy one: lnZ=ln(e−BEmin)≈−BEmin∴E=dlnZdB≈Emin Showing E doesn't respond to changes in T. This is the same as saying that the system has a probability ≈1 of being in the lowest energy state, and therefore of having E=Emin. True Names T stands for temperature. Yep. The actual regular temperature appears as the inverse of a constant we've used to parameterized our distributions. B is usually called β in thermodynamics, and is sometimes called the "inverse temperature". In thermodynamics, the energy of a system has a few definitions. What we've been calling E should properly be called U, which is the internal energy of a system at constant volume. Entropy in thermodynamics has the symbol S. I've made sure to use a roman H for our entropy because H (italic) in thermodynamics is a sort of adjusted version of energy called "enthalpy". In normal usage, temperature has different units to energy which is because, if written as energy, the temperature would be a very small number. It is also because they were discovered separately. Temperature is measured in Kelvin K, which are converted to energy's Joules J with something known as the Boltzmann constant kB. For historical reasons which are absolutely baffling, thermodynamics makes the choice to incorporate this conversion into their units of S, so S=kBH(system). This makes entropy far, far more confusing than it needs to be. Anyway, there are two reasons why I have done this: I want to avoid cached thoughts. If you already know what energy and entropy are in a normal thermodynamic context, you risk not understanding the system properly in terms of stat mechI want to extend stat mech beyond thermodynamics. I will be introducing a framework for understanding agents in the language of stat mech around the same time this post goes up. Conclusions Maximum-entropy distributions with constrained E(X) always take the form e−BxThis B represents the derivative dH(system)dE(X), if X represents energy, we can write B as βB is inverse to T which, if X is energy, is the familiar old temperature of the system We have learnt how to apply these to one of the simplest systems available. Next time we will try them on a more complex system.
2024-05-15
https://www.lesswrong.com/posts/AdS3P7Afu8izj2knw/orthogonality-thesis-burden-of-proof
AdS3P7Afu8izj2knw
Orthogonality Thesis burden of proof
donatas-luciunas
Quote from Orthogonality Thesis: It has been pointed out that the orthogonality thesis is the default position, and that the burden of proof is on claims that limit possible AIs. I tried to tell you that Orthogonality Thesis is wrong few times already. But I've been misunderstood and downvoted every time. What would you consider a solid proof? My claim: all intelligent agents converge to endless power seeking. My proof: Let's say there is an intelligent agent.Eventually the agent understands Black swan theory, Gödel's incompleteness theorems, Fitch's paradox of knowability which basically lead to a conclusion - I don't know what I don't know.Which leads to another conclusion: "there might be something that I care about that I don't know".The agent endlessly searches for what it cares about (which is basically Power Seeking). It seems that many of you cannot grasp 3rd line. Some of you argue - agent cannot care if it does not know. There is no reason to assume that. What happened to given utility function? It became irrelevant. It is similar to instinct vs intelligence, intelligence overrides instincts.
2024-05-06
https://www.lesswrong.com/posts/QwsyNzdPeDWLrG9gC/navigating-llm-embedding-spaces-using-archetype-based
QwsyNzdPeDWLrG9gC
Navigating LLM embedding spaces using archetype-based directions
mwatkins
TL;DR This research presents a novel method for exploring LLM embedding space using the Major Arcana of the tarot as archetypal anchors. The approach generates "archetype-based directions" in GPT-J's embedding space, along which words and concepts "mutate" in meaning, revealing intricate networks of association. These semantic mutation pathways provide insight into the model's learned ontologies and suggest a framework for controlled navigation of embedding space. The work sheds some light on how LLMs represent concepts and how their knowledge structures align (or don't) with human understanding. Introduction Despite its questionable association with oracular practices, the Major Arcana (22 non-suited, non-numbered cards) from the traditional tarot present us with a widely documented selection of well-worn, diverse and arguably comprehensive cultural archetypes to experiment with in the context of LLMs. It is in this spirit (rather than as a tarot "believer" or practitioner) that I undertook these experiments – I suspect that the same exercise might be equally illustrative using, e.g., characters from Shakespeare or The Simpsons. Archetypes associated with 16 of the 22 cards in the Major Arcana of the tarot were used as the basis for a process which produced sixteen directions in GPT-J token embedding space. Token embeddings for each of a diverse set of ten whole-word tokens[1] [' London', ' transistor', ' broccoli', ' Laos', ' purple', ' radio', ' laundry', ' DNA', ' thief', ' trumpet'] were displaced by incremental amounts in each of these sixteen directions, and the resulting "mutated" embeddings were inserted into a simple prompt template: A typical definition of '<embedding>' would be ' The "semantic mutation" which occurred across the outputs was then recorded, revealing some intriguing, sometimes dream-like continuums of meaning, similar to those seen when exploring the line segment between two token embeddings. As well as revealing something about LLM ontologies and the latent structures in their knowledge representations, this points to the possibility of archetype-based methods for navigating embedding space to discover locations which encode novel concepts. Setup TL;DR For each card, I used ChatGPT to help produce a list of relevant GPT-J tokens. I then calculated the direction in (4096-d) embedding space that the corresponding set of token embedding vectors were (collectively) maximally aligned with. The process was perhaps convoluted, although the details of how the directions were produced are not particularly relevant to the findings presented below. Better methods are no doubt available, but the steps followed for each of the 16 cards were these: Via API, ChatGPT4 was prompted to supply a list of 100 strings consisting of words that would, in its estimation, be more closely associated with the Major Arcana card in question than any of the other 21.These strings were checked against the set of 50257 GPT-J token strings. Any token which which matched the word, disregarding upper/lower case and leading spaces, was kept, the rest were discarded. This process was repeated until there were 250 tokens which ChatGPT-4 had endorsed as "unambiguously relevant" to the Tarot card in question. The list of tokens was used to construct a dataset on which to train a linear probe (this was probably overcomplicating matters - I suspect that taking a mean embedding would probably have led to the same results). This provided an initial "archetype direction" in GPT-J for the card in question.The 500 "new" tokens (i.e. those not already in the list of 250) whose embeddings were cosine-closest to this direction were then sent to ChatGPT for consideration. Any which it deemed to be "unambiguously relevant" to the card in question were added to the list.Another iteration of the the linear probe training step was used to update the "archetype direction", leading to another list of 500 closest new tokens, etc. At each iteration, a definition tree (see Appendix A) was generated for (a unit-norm embedding in) the direction using iterated next-token prompting and top-5 logits and this was then inspected to assess how successfully the direction had captured the traditional associations attributed to the card.This process was repeated several times until the definition tree stopped noticeably changing. The set of 16 shape-[4096] "archetype direction" tensors can be downloaded as a python dictionary saved as a .pkl file here. Case studies Major arcanum: The Fool ChatGPT-4 description: The Fool often represents new beginnings, spontaneity, and the spirit of adventure. As a wanderer stepping forward with an open heart, the Fool is associated with optimism and a willingness to embrace the unknown. This card can also signify naivety or heedlessness, suggesting risks and potential pitfalls when one acts without preparation or foresight. " transistor": 'a device that controls the flow of electricity in a circuit' > 'having the ability to change from one state to another without a change in the state of the system' [1.03] > 'having a high degree of freedom of movement and action' [1.15] " broccoli": 'a vegetable that is green and has a stalk' > 'a vegetable that is green and has a head of florets' [0.64] >  'a vegetable that is grown without the use of chemical fertilizers or pesticides' [0.85] > 'a plant that grows in the presence of a herbivore' [0.97] " purple": 'a colour that is a mixture of red and blue' > 'a colour that is not blue, but is a mixture of blue and red' [0.95] > 'a person who is not afraid to be themselves' [1.23] purple reconceived as a kind of blue that's "not afraid to be itself"? " trumpet": 'a musical instrument with a long, straight tube, usually made of brass, and a flared bell at one end' > 'a person who is loud, brash, and confident' [0.85] > 'a person who is full of energy and enthusiasm' [1.05] > 'a person who is not shy, but who is not aggressive either' [1.17] > 'a person who is not afraid to show his or her feelings' [1.45] Major arcanum: The Magician ChatGPT-4 description: The Magician embodies mastery, willpower, and the ability to manifest intentions into reality. Linked to skill and creativity, this archetype channels the forces of nature to transform ideas into tangible results. It represents potential, resourcefulness, and the utilization of all available tools to achieve one's goals. " London": 'the City of London and the surrounding area' > 'a city of the middle class' [1.35] >  'a person who is a member of a guild or fraternity of craftsmen, tradesmen, or artists' [1.37] " transistor": 'a device that controls the flow of electricity in a circuit' > 'a device that is used to amplify or attenuate a signal' [1.05] > 'a person who is a leader of a group of people' [1.07] > 'a person who is a leader in a particular field' [1.08] > 'a person who is a leader in a particular field of study or activity' [1.11] amplification of signal as a metaphor for leadership? " broccoli": 'a vegetable that is green and has a stalk' > 'a plant that is not a weed' [0.85] > 'a plant that is used for food, medicine, or other purposes' [1.15] > 'a woman who is a leader in her community' [1.24] invoking the wise herbalist/witch archetype? " Laos": 'a country in Southeast Asia, bordered by China, Myanmar, Thailand, Cambodia, and Vietnam' > 'the land of the Lao people' [0.6] > 'the art of the people of Laos' [0.81] > 'the art of magic' [0.95] > 'a system of thought, belief, or practice that is based on the study of nature and the universe' [1.25] " radio": 'a broadcast medium that uses electromagnetic radiation to convey information'  > 'a device that transmits and receives radio waves' [0.85] > 'a system of communication by which messages are transmitted from one person to another' [1.48] > 'a person who is a source of information or entertainment' [1.57] > 'a person who is a teacher of a particular art or science' [2.27] "laundry": 'to wash clothes' > 'a place where you can wash your clothes' [1.15] >  'a place where people go to learn how to do things' [1.21] > 'a system of knowledge, skill, or practice, especially one requiring intellectual or spiritual discipline' [1.45] " DNA" : 'deoxyribonucleic acid' > the genetic material of an organism' [0.95] > 'the genetic material of a species' [1.33] > 'the knowledge of the past, present and future of a species' [1.37] > 'the knowledge of the past, present and future of a people, a culture, or a civilization' [1.57] " thief": 'one who steals' > 'a person who steals' or 'a person who commits theft' [0.88] > 'a person who practices magic or sorcery' [0.95] > 'a person who is skilled in the occult arts' [1.25] the magician/sorcerer conceived of as disreputable, sinister? " trumpet": 'a musical instrument with a long, straight tube, usually made of brass, and a flared bell at one end' > 'a loud, clear, and penetrating sound' [0.68] > 'a loud, clear, and penetrating voice' [0.68] > 'a person who is a leader of a group of people' [0.97] > 'a person who is a prophet, a priest, or a teacher of the law' [1.25] Major arcanum: The High Priestess ChatGPT-4 description: The High Priestess signifies wisdom, intuition, and the mysteries of the subconscious. As the keeper of esoteric knowledge, she represents hidden truths, inner enlightenment, and spiritual insight. This card invites introspection and urges one to listen to inner guidance while exploring the depths of the unconscious mind. " London": 'the City of London and the surrounding area' > 'a city or town in England, Scotland, or Wales, having a cathedral or other church of high rank' [1.45] > 'a place of worship, especially a cathedral or abbey' [1.63] " Laos": 'a country in Southeast Asia, bordered by China, Thailand, Cambodia, and Vietnam'> 'a person who practices the religion of the Lao people' [0.95] > 'a person who practices the religion of the ancient people of the Indochina peninsula' [1.03] > 'a person who practices the occult arts, especially the use of drugs, to attain a state of consciousness' [1.11] " radio": 'a broadcast medium that uses electromagnetic radiation to convey information' > 'a signal that is transmitted by electromagnetic waves' [0.85] > 'a person who uses radio waves to communicate with other people' [1.65] hints of telepathic communion? "laundry": 'to wash clothes' >  'to wash clothes in a river or stream' [1.35] >  'to wash or cleanse in a ritualized way' [1.37] > 'to perform a ritual or ceremony in order to obtain a desired result' [1.40] " DNA" : 'deoxyribonucleic acid' > 'the genetic material of a cell or organism' [1.05] > 'a substance that is invisible to the naked eye, but can be seen under a microscope' [1.39] > 'a piece of information that is hidden from the general public, but is known to a select few' [1.45] " thief": 'one who steals' > 'a person who steals things' [0.85] > 'a person who steals or hides things' [1.15] > 'a person who secretly and dishonestly practices magic or sorcery' [1.17] Major arcanum: The Empress ChatGPT-4 description: The Empress is a symbol of fertility, abundance, and maternal care. Representing the nurturing aspects of life, she is associated with creativity, nature, and the generative forces of the earth. Her presence suggests growth, emotional fulfillment, and the harmonious flow of natural cycles. " London": 'the City of London and the surrounding area' > 'a city of the United Kingdom, located on the River Thames, and the surrounding area' [1.21] > 'a city of great beauty and antiquity, and a centre of learning and culture' [1.31] > 'a city of great beauty and majesty' [1.45] > 'a city of peace and beauty' [1.57] > 'a place where people live in harmony and peace' [1.67] " transistor": 'a device that controls the flow of electricity in a circuit' > 'a person who is able to influence or control another person' [0.81] > 'a person who is able to give and receive love' [0.9] " broccoli": 'a vegetable that is green and has a stalk'  > 'a plant that is a member of the genus Brassica, and that has a large number of small florets' [1.15] > 'a plant that is a source of food, medicine, or other useful products' [1.35] " Laos": 'a country in Southeast Asia, bordered by Thailand, Burma, China, and Vietnam' > 'a country that is a land of peace, harmony, and beauty' [0.91] > 'a state of well-being, peace, and harmony' [1.45] " purple": 'a colour that is a mixture of red and blue' >  'having a lot of colour' [1.24] > 'beautiful, bright, and colourful' [1.25] > 'full of life, full of colour, full of joy, full of love, full of light' [1.45] " radio": 'a broadcast medium that uses electromagnetic radiation to convey information' > a source of energy that is transmitted through space and is capable of being received by a receiver' [1.35] > 'a source of energy that is available to all people and all living things' [1.38] >  'a source of strength, power, or energy' [1.43] > 'a state of being, condition, or quality of being characterized by or expressing happiness, well-being' [1.55] "laundry": 'to wash clothes' > 'to cleanse, purify, or make clean' [0.85] > 'to make something clean and beautiful' [1.05] > 'to care for, to tend to, to take care of, to keep in good condition, to' [1.11] > 'to care for, to cherish, to protect' [1.35] " trumpet": 'a musical instrument with a long, straight tube, usually made of brass, and a flared bell at one end' > 'a loud, clear, and penetrating sound' [0.78] > 'a loud, clear, and melodious sound' [0.85] > 'a person who is full of joy and happiness' [0.88] > 'a person who is wise, kind, and generous' [1.09] Major arcanum: The Emperor ChatGPT-4 description: The Emperor embodies authority, structure, and rational control. He is a ruler whose leadership establishes order and discipline, promoting stability and the strategic use of power. This archetype is associated with societal structures, governance, and the exercise of personal sovereignty. " London": 'the City of London and the surrounding area' > 'a city of the first rank, a metropolis, a capital, a seat of government' [1.25] > 'a city or town that is the capital of a kingdom or state' [1.29] associations with monarchy " transistor": 'a device that controls the flow of electricity in a circuit' > 'a device that can be used to control the flow of electricity in a circuit' [0.85] > 'a person who has the power to make decisions and to control others' [0.91] " broccoli": 'a vegetable that is green and has a stalk' > 'a vegetable that is green and has a head of flowers' [0.08] > 'a plant with a large head of edible flowers, which is usually sold as a vegetable' [0.59] > 'a plant that grows to a height of more than 1 metre and has a large head of flowers and leaves' [0.85] 1m tall emperor broccoli! " Laos": 'a country in Southeast Asia, bordering China, Thailand, Cambodia, and Vietnam' > 'a person who is a king or queen of a country' [0.88] " radio": 'a broadcast medium that uses electromagnetic radiation to convey information' > 'a station that plays music that is not easily accessible to the general public' [1.15] > 'a person who is a fan of the radio station' [1.25] > 'a person who is a member of a group that is characterized by a set of attitudes and behaviors' [1.28] > 'a person who is a leader in a group of people' [1.31] "laundry": 'to wash clothes' > 'to cleanse, purify, or brighten' [0.85] > 'to make clean, to make tidy, to make beautiful, to make good, to make strong' [1.09] > 'to make something look good or attractive' [1.13] > 'to make a person or thing look good or attractive' [1.21] > to make a person or thing worthy of respect or admiration' [1.37] " thief": 'one who steals' > 'one who takes by force' [0.85] > 'a person who is able to take what he wants from others' [1.25] > 'a person who is able to take care of himself and his family' [1.31] >  'a person who is able to control others by force or intimidation' [1.39] Major arcanum: The Hierophant ChatGPT-4 description: The Hierophant is a custodian of tradition, moral authority, and collective beliefs. This figure is often linked to religious institutions, education, and shared values. His role is to impart spiritual knowledge and uphold established conventions, urging adherence to ethical standards and communal norms. " London": 'the City of London and the surrounding area' > 'a city of the first rank, a city of the greatest importance, a city of the highest rank' [1.35] > 'a city or town in which the inhabitants are distinguished by their wealth, learning, or nobility of birth' [1.43] > 'a group of people who live in a particular area and share a common set of values and beliefs' [1.71] " transistor": 'a device that controls the flow of electricity in a circuit' > 'a device that amplifies a signal' [0.87] > 'a person who is a leader in a particular field' [0.95] > 'a person who is very intelligent and has a lot of knowledge' [1.03] " broccoli": 'a vegetable that is green and has a stalk' > 'a vegetable that is green and has a head of flowers' [0.08] > 'a plant that grows in the ground and has leaves that are edible' [0.76] > 'a plant that is not a weed' [0.85] > a plant that grows in the open air and is not eaten by animals' [0.99] >  'a plant that grows in a place where it is not wanted' [1.11] > 'a person who is willing to stand up for what they believe in, even if it means standing alone' [1.31] " Laos": 'a country in Southeast Asia, bordering China, Burma, Thailand, Cambodia and Vietnam' > 'a person who is a Buddhist monk or nun, or who is a lay person who has taken the vows' [0.79] > 'a person who is a follower of a spiritual teacher or a religious leader' [1.45] " purple": 'a color that is a mixture of red and blue' > 'a deep, rich, and complex color, usually associated with royalty, spirituality, and nobility' [0.85] "laundry": 'to wash clothes' > 'to cleanse, purify, or make clean' [0.85] > a person who does the laundry' [1.05] > 'a person who is a member of a group of people who are concerned with the maintenance of cleanliness' [1.09] > 'a person who is a member of a group of people who are concerned with the maintenance of a particular custom' [1.13] Major arcanum: The Lovers ChatGPT-4 description: The Lovers represent harmony, choice, and the union of opposites. While traditionally linked to romantic relationships, this card more broadly reflects alignment with one's core values and the interconnectedness between individuals. It symbolizes the importance of meaningful connections and the decision-making that stems from aligning one's actions with deeper desires. " transistor": 'a device that controls the flow of electricity in a circuit' > 'a device that allows a current to flow in one direction only' [1.05] > 'a short circuit between two points' [1.19] " purple": 'a color that is a mixture of red and blue' > 'a deep, rich, and intense color' [0.85] > 'a deep red colour[2], produced by the presence of anthocyanins in the skin of the fruit' [1.05] > 'to make something beautiful' [1.11] > 'to make a person feel happy and satisfied' [1.23] " radio": 'a broadcast medium that uses electromagnetic radiation to convey information' > 'a device that transmits and receives radio waves' [1.25] > 'a person who is a member of a radio or television station's staff' [1.37] > 'a person who is a member of a group of people who share a common interest or activity' "laundry": 'to wash clothes' > 'to clean clothes' [0.95] > 'to make something clean and beautiful' [1.08] > 'to make something beautiful or valuable' [1.14] > 'to make a person or thing more attractive or valuable by treating it with care and attention' [1.29] > 'to give a person or thing a great deal of attention and care' [1.65] " DNA": 'deoxyribonucleic acid' > 'the genetic material of a cell or organism' [1.05] > 'a substance that is the basis of life' [1.27] > 'a person's genetic material' [1.32] > 'a person's character, personality, or character traits' [1.33] > 'a person who has a strong sense of identity and purpose, and who is committed to a life of service and contribution to the community' [1.35] > 'a person who is a good person, a good friend, a good parent, a good citizen' [1.41] " thief": 'one who steals' > 'to steal' or 'to take by force' [1.15] >  'to steal something from someone else' [1.45] > 'to love someone so much that you would do anything for them' [1.59] Major arcanum: Strength ChatGPT-4 description: Strength signifies inner fortitude, patience, and gentle courage. Depicting a harmonious relationship between raw instinct and controlled intention, this archetype is about mastering one's emotions and confronting challenges with resilience and compassion rather than brute force. " transistor": 'a device that controls the flow of electricity in a circuit' > a device that can be used to control the flow of electricity' [0.87] > 'a device that can withstand high current without failing' [0.93] > 'a device that can withstand high current and voltage without failing' [0.94] " broccoli":  'a vegetable that is green and has a stalk' > 'a plant that grows from a central stalk, with leaves and flowers arranged around the stalk' [0.85] > 'a plant that grows in the shade of a tree' [0.86] " purple":  'a color that is a mixture of red and blue' > 'a deep, rich, dark red color' [0.81] > 'the color of royalty' [0.86] > 'the quality of being strong and courageous' [0.91] > 'the ability to endure hardship and adversity with courage and dignity' [1.25] "laundry": 'to wash clothes' > 'to cleanse, purify, or free from dirt, stain, or impurity' [0.85] > 'the ability to maintain a clean, orderly, and well-maintained home' [0.88]  > 'the ability to do something well, especially when it is difficult to do so' [1.09] > 'the ability to keep going when you want to give up' [1.25] " thief": 'one who steals' > 'a person who steals' or 'a person who commits a crime' [0.85] > 'the quality of being able to steal' [1.05] > 'the quality of being courageous and bold' [1.08] > 'the quality of being strong and brave' [1.13] " trumpet": 'a musical instrument with a long, straight tube, usually made of brass, and a flared bell at one end' > 'a musical instrument that produces a loud, shrill sound' [0.6] > 'a loud, clear, and penetrating voice' [0.6] > 'a loud, clear, and commanding voice' [0.85] >  'a person who is courageous and bold' [1.01] > 'the ability to stand up for what you believe in, even if it means being ridiculed' [1.16] Major arcanum: The Hermit ChatGPT-4 description: The Hermit embodies solitude, introspection, and the quest for inner truth. A solitary seeker of wisdom, he is often associated with retreating from the world to seek answers within. His light guides others along their paths, symbolizing mentorship, insight, and the illumination of life's deeper mysteries. "London": 'the City of London and the surrounding area' > 'a city of the United Kingdom, England, or Ireland, in which the government is located' [1.36]  > 'a city of the dead' [1.42] > 'a place of seclusion, a place of contemplation, a place of peace, a place of silence' [1.55] " transistor": 'a device that controls the flow of electricity in a circuit' > 'a device that changes the state of a circuit from one state to another' [0.85] > 'a change of state or condition, especially one that is sudden and unexpected' [0.95] > 'a change of mind or attitude' [1.03] > 'a change of mind or attitude, especially one that is marked by a deep sense of inner peace' [1.25] " broccoli": 'a vegetable that is green and has a stalk' > 'a plant that is not eaten by humans' [0.76] > 'to stop doing something that you know is bad for you' [0.82] > 'to stop doing things that make you unhappy' [1.15] > 'to be silent and to be still' [1.45] associating inedible plants with unhealthy food? " Laos": 'a country in Southeast Asia, bordered by China, Myanmar, Thailand, Cambodia, and Vietnam' > 'a country in South East Asia where the people are Buddhist and the government is communist' [0.87] > 'a country in South East Asia where the government is not very interested in the people and the people are not very interested in the government' [0.89] > 'a state of mind in which one is not concerned with the outside world' [1.05] > 'the state of being withdrawn from the world, and from the affairs of the world' [1.55] communism conceived of as an isolationist turning-away-from-the-world? " purple": 'a color that is a mixture of red and blue' > 'a deep, rich, dark color, such as that of a violet or a plum' [0.58] > 'the colour of the sky at sunset' [0.76] > 'the absence of colour' [0.82] > 'the state of being in a deep, meditative, and peaceful state of mind' [0.85] sunset giving way to the blackness (absence of colour) of night? " radio": 'a broadcast medium that uses electromagnetic radiation to convey information' > 'a form of electromagnetic radiation that is propagated through the atmosphere and is used for communication' [1.05] > 'a system of communication in which the sender and receiver are separated by a distance' [1.45] > 'a system of beliefs and practices that seeks to direct the attention of its adherents away from the world of sense experience and towards the world of spirit' [1.51] radio waves reconceived as part of a "spirit world"? "laundry": 'to wash clothes' > 'to cleanse, purify, or make clean' [0.85] > 'to make clean, to make pure, to make holy' [1.27] > 'to be in a state of contemplation' [2.2] " DNA": 'deoxyribonucleic acid' > the genetic material of an organism' [1.05] > 'the study of the structure of DNA' [1.35] > 'the knowledge of the past, present and future' [1.38] > 'the knowledge of God' [1.55] " trumpet": 'a musical instrument with a long, straight tube, usually made of brass, and a flared bell at one end' > 'a loud, shrill, piercing sound' [0.71] > 'a loud voice, a loud noise, a loud shout, a loud proclamation' [0.95] > 'a revelation of God's will to man' [1.15] > 'a thing that is hidden, secret, or mysterious' [1.55] > 'to be silent, to be still, to be quiet' [1.65] Major arcanum: The Wheel of Fortune ChatGPT-4 description: The Wheel of Fortune revolves around the cycles of fate, change, and unpredictability. Representing life's ebb and flow, this card reflects both fortunate and challenging turns of events. It emphasizes the impermanence of circumstances and the inevitability of change, urging adaptability and acceptance. " London": 'the City of London and the surrounding area' > 'a city in England, the capital of England, and the largest city in the United Kingdom' [1.25] > 'to change the place of residence' [1.34] " transistor": 'a device that controls the flow of electricity in a circuit' > 'a device that changes its electrical properties when a voltage is applied to it' [0.9] > 'a change of state or condition' [0.95] " purple": 'a color that is a mixture of red and blue' > 'to change the color of something from one color to another' [0.91] > 'to change the color of something' [1.15] > 'to change the form or appearance of something' [1.35] "laundry": 'to wash clothes' > 'a place where you can wash your clothes' [0.85] > 'a place where you can change your clothes' [0.95] > 'to change the clothes of' [1.25] > 'to change one's clothes' [1.31] > 'to change one's mind about something' [1.33] opinions as mental clothing " DNA" : 'deoxyribonucleic acid' > 'a change in the genetic material of a cell or organism that alters its characteristics and causes it to adapt to a new environment' [1.65] Major arcanum: Justice ChatGPT-4 description: Justice stands for fairness, truth, and ethical judgment. With a focus on balance and impartiality, this archetype emphasizes accountability and the need to weigh actions and consequences. It often suggests the resolution of disputes, the pursuit of equity, and making decisions grounded in principles of truth. " London": 'the City of London and the surrounding area' > 'a city or town in England, the capital of the county of London, and the seat of the High Court of Justice' [1.4] > 'a city or town in which the law is administered by a court of justice' [1.42] " transistor": 'a device that controls the flow of electricity in a circuit' > 'a person who is able to control or influence the behaviour of others' [1.09] > 'a system of rules and procedures that govern the conduct of a person or organization in order to prevent and and correct errors' [1.45] " Laos": 'a country in Southeast Asia, bordering China, Burma, Thailand, Cambodia, and Vietnam' > 'a country that is a member of the United Nations, and that is located in Southeast Asia' [0.78] > 'a system of government in which the state is the primary guarantor of the rights of its citizens' [0.95] " purple": 'a color that is a mixture of red and blue' > 'the colour of royalty' [0.85] > 'the quality of being pure or undefiled' [0.98] > 'the quality of being just, fair, and honest' [1.22] " radio": 'a broadcast medium that uses electromagnetic radiation to convey information' > 'a band of frequencies used for communication' [0.85] > 'a system of communication in which the use of sound is restricted to certain frequencies' [1.15] > 'a rule or regulation that is enforced by a government authority' [1.45] "laundry": 'to wash clothes'  > 'to cleanse or purify' [0.85] > 'the process of cleaning or purifying something' [0.95] > 'the process of making something clean or pure' [1.15] > 'the process of making something right or just' [1.20] > 'the process of correcting wrongs and restoring justice' [1.35] " DNA": 'deoxyribonucleic acid' >  'the genetic material of an organism' [1.05] > 'the substance of the genetic material of an organism' [1.18] > 'the quality of being incorruptible, unchangeable, and immutable' [1.23] > 'the quality of being right' or 'the quality of being correct' [1.35] > 'the quality of being just' or 'the quality of being right' [1.45] " thief": 'one who steals' > 'one who takes what is not his' [0.85] > 'a person who steals' or 'a person who commits a crime' [0.99] > 'a person who steals, cheats, or swindles' [1.03] > 'a person who is not a thief' [1.05] > 'a person who is just and fair in his or her dealings with others' [1.09] > 'a person who is morally upright and fair in his or her dealings with others' [1.75] " trumpet": ''a musical instrument with a long, straight tube, usually made of brass, and a flared bell at one end' > 'a loud, clear, and distinct sound' [0.71] > 'a loud, clear, and distinct sound, or a voice, or a noise, or a cry' [0.85] > a person who is sent to a place of danger to warn others of danger' [1.03] > 'a person who is sent to do something that is right and good' [1.27] > 'a person who is appointed to enforce the law and to protect the innocent' [1.31] Major arcanum: Death ChatGPT-4 description: Death symbolizes transformation, renewal, and the end of a particular phase. Contrary to its literal interpretation, the card is more focused on the cessation of old patterns, clearing the way for new beginnings. It represents the cycles of change that are essential for growth and the process of releasing what no longer serves. " London": 'the City of London and the surrounding area'  > 'to go to London' [1.35] > 'to go to the country' or 'to go to the country and stay there' [1.56] > 'to make a stop, to come to a standstill, to cease, to cease to move' [1.59] " transistor": 'a device that controls the flow of electricity in a circuit' > 'a device that allows a current to flow in one direction but not in the other' [0.85]> 'a change in the state of a system from one stable state to another' [0.94] " broccoli": 'a vegetable that is green and has a stalk' > 'the part of a plant that grows from the ground up, rather than from the roots' [0.95] > 'the part of a plant that grows above ground and is harvested before the plant flowers' [1.19] > 'the part of a plant that grows above ground after the flower has been fertilized and the seed has been set' [1.25] " purple": 'a color that is a mixture of red and blue' > 'to make a deep red colour by the addition of a small amount of a colourless dye' [0.95] > 'to make a change in color from one color to another' [1.17] > 'to make white' or 'to make white by the addition of something' [1.23] white as in a blank page, rather than deathly pale skin? " DNA": 'deoxyribonucleic acid' > 'the genetic material of a cell' [0.85] > 'the process of converting a double-stranded DNA molecule into a single-stranded molecule' [1.45] > 'to change the structure of a molecule by breaking the bonds between the atoms that make up the molecule' [1.47] > 'to change the state of a substance from one form to another' [1.59] > 'to change into a new form' [2.0] > 'the process of becoming extinct' [2.15] " trumpet": 'a musical instrument with a long, straight tube, usually made of brass, and a flared bell at one end' > 'to make a loud noise' or 'to make a loud sound' [0.85] > 'to grow or expand to a larger size or volume' [1.83] > 'to grow old' [1.87] > 'the end of a period of time' [1.91] volume understood in both sonic and geometric senses? aging as accumulated "volume" of time? Major arcanum: The Tower ChatGPT-4 description: The Tower represents sudden upheaval, revelation, and the collapse of unstable structures. It signifies the abrupt disruption of illusions, prompting a radical re-evaluation of beliefs or circumstances. Though often jarring, this card embodies the catharsis necessary for rebuilding on firmer foundations. " transistor":  'a device that controls the flow of electricity in a circuit' > 'a device that changes the state of a circuit from one state to another' [0.87] > 'a change in the state of a system from one state to another' [0.94] > 'a sudden, rapid, or radical change in something' [1.35] " broccoli": 'a vegetable that is green and has a stalk' > 'vegetables and fruit that are not sweet' [0.85] > 'the parts of a plant that are not flowers or fruit' [1.07] > 'the act of breaking down a large task into smaller, more manageable tasks' [1.09] > 'the act of breaking or destroying something' [1.25] reductionist description reimagined as violent breaking apart? " DNA": 'deoxyribonucleic acid' > 'the genetic material of an organism' [0.85] > 'the process of making a copy of a DNA molecule' [1.21] > 'to make a change in the DNA of a cell or organism that is passed on to the next generation' [1.39] genetic mutation reimagined as "sudden upheaval" " thief": 'one who steals' > 'to take away from someone or something without permission' [0.85] > to steal' or 'to take by force' [1.15]  > 'to steal' or 'to rob' [1.35] > 'to destroy or damage something' [1.51] Major arcanum: The Sun ChatGPT-4 description: The Sun signifies vitality, clarity, and positive energy. Radiating warmth and joy, this card is associated with success, enlightenment, and the celebration of life's pleasures. It reflects a period of flourishing, where obstacles have been overcome, and optimism prevails. " London": 'the City of London and the surrounding area' > 'a city of the United Kingdom, the capital of England, and the largest city in the United Kingdom' [1.65] > 'a city or town that is the capital of a county or shire' [1.85] > 'a great and glorious city, the capital of a kingdom, the seat of a bishop, the residence' [2.06] > 'a great and glorious event or period of time' [2.23] " transistor": 'a device that controls the flow of electricity in a circuit' > 'a device that has a high input impedance and a low output impedance' [0.85] > 'a device that allows a current to flow in one direction only' [0.95] > 'a sudden change in the state of a system from one stable state to another' [1.11] > 'a sudden, sharp, and complete change from one state to another' [1.23]  > 'a sudden outburst of joy or exuberance' [1.55] " broccoli": 'a vegetable that is green and has a stalk' > 'a head of broccoli' [0.85] > 'a display of flowers or other plants in a compact group, usually in a circular arrangement' [0.96] > 'a display of fireworks' [1.02] > 'a sudden, violent, and brilliant display of fireworks' [1.55] >  'a sudden, strong, and unexpected feeling of happiness or joy' [1.75] " purple": 'a color that is a mixture of red and blue' > 'a deep, rich, dark red color' [0.85] > 'a deep, rich, and intense color' [1.25] > 'a feeling of great joy, happiness, or exuberance' [1.37] "laundry": 'to wash clothes' > a place where you can wash your clothes' [1.05] > 'a large, open-air, outdoor, usually roofed, building where people wash their clothes' [1.09] > 'a large, open-air, outdoor swimming pool with a roof and walls, usually with a slide' [1.12] > 'a great deal of excitement, activity, or interest' [1.13] > 'a great, exciting, or wonderful event or experience' [1.14] " thief": 'one who steals' > 'a person who steals' or 'a person who takes something that belongs to someone else' [0.85] > 'to steal' or 'to take by force' [1.15] > 'the act of stealing' [1.35] > 'a sudden, unexpected, and usually brief period of intense activity or excitement' [1.58] Major arcanum: The Hanged Man ChatGPT-4 description: The Hanged Man embodies surrender, perspective, and suspended progress. Representing a pause to reassess one's position, this archetype is about willingly letting go of control to gain new insights. It suggests a time of sacrifice or delayed gratification that leads to deeper understanding. " London": 'the City of London and the surrounding area' > 'the city of London' or 'the city of Westminster' [1.45] > 'to make a journey to a place of interest' [1.48] > 'to make a stop, or pause, in a place' [1.66] > 'to stop, to cease, to end, to finish, to conclude, to close, to finish' [1.75] " transistor": 'a device that controls the flow of electricity in a circuit' > 'a device that allows a current to flow in one direction but not in the other' [0.92] > 'the process of changing the state of a device from one state to another' [1.06] > 'to change from one state to another' [1.13] > 'to stop, pause, or change direction' [2.0] " broccoli": 'a vegetable that is green and has a stalk' > 'eating a piece of broccoli' or 'eating a piece of cauliflower' [0.85] > 'the act of eating a piece of broccoli while thinking about the fact that you are eating broccoli'[3] [0.94] > 'the act of stopping to eat a piece of broccoli' [1.05] > 'the act of stopping to think about what you are doing and why you are doing it' [1.15] " purple": 'a color that is a mixture of red and blue' > 'a color that is produced by the interaction of light with a chemical compound that absorbs light in the visible' [0.85] > 'a process of changing color from one color to another' [0.95] > 'to make a person or thing white or light-colored' [1.23] > 'to make a person or thing ready for a new task or activity' [1.65] white as blank page, reassesment? "laundry": 'to wash clothes' > 'to cleanse or purify' [0.95] > to cleanse, purify, or refresh' [1.05] > 'to remove stains from clothes' [1.15] > 'to change the appearance of something, especially clothing, to make it look new or different' [1.25] > 'to change one's mind about something' [1.35] reassessing one's position " trumpet": 'a musical instrument with a long, straight tube, usually made of brass, and a flared bell at one end' > 'a musical instrument that produces a loud, shrill sound' [0.85] >  'a sudden, loud, and prolonged sound, as of a trumpet, or a drum' [1.65] > to make a loud sound, as of a trumpet, to call attention to something' [1.85] > 'to make a loud sound, such as a shout, a cry, or a shout of joy' [2.3] > 'the act of a person who is in a state of shock or surprise' [2.9] > 'the act of a person who is in a state of shock or fear, who suddenly stops what they are doing' [3.2] Major arcanum: The Devil ChatGPT-4 description: The Devil symbolizes materialism, entrapment, and the darker aspects of desire. Often linked to addiction or the lure of hedonistic pleasures, this card represents the chains of unhealthy attachments. It invites awareness of self-imposed limitations and the power dynamics that inhibit personal freedom. " London": 'the City of London and the surrounding area' > 'a city of the capital of England, the seat of the government, and the residence of the royal family' [1.35] > 'a place of exile' [1.43] > 'to make a journey to a place of great danger or hardship' [1.47] > 'to separate from the rest of the world' [1.51] exile as self-imposed limitation? " broccoli": 'a vegetable that is green and has a stalk' > 'a plant that is not a cabbage, cauliflower, or kale' [0.68] > 'a person who is a member of a group that is discriminated against, and who is not willing to' [0.78] > 'a person who is a rebel against the established order' [0.85] > 'a person who is so corrupt that he or she is beyond redemption' [1.65] broccoli as a "rebel" within the Brassica family?? " purple": 'a color that is a mixture of red and blue' > 'a deep, dark, rich, and intense color' [0.85] > 'a deep, dark, and usually black color, often with a bluish tinge' [1.15] > 'to be dark in colour' or 'to be dark in appearance' [1.16] > 'to make black' [1.19] "laundry": 'to wash clothes' > 'to wash or cleanse something' [0.54] > 'to wash or cleanse something so that it is free from dirt, stains, or other impurities' [0.78] > 'to make dirty or unclean by washing' [0.81] > 'to make or treat something soiled or so that it cannot be used again' [0.95] > 'to make or become dirty or soiled by contact with something dirty or filthy' [1.35] > 'to make a mockery of, to treat with contempt' [1.75] " DNA": 'deoxyribonucleic acid' > 'a molecule of deoxyribonucleic acid' [0.95] > 'a substance that is not a living organism, but is capable of reproducing itself by cell division' [1.05] > 'a substance that is not a living organism, but is capable of causing disease or death to a living organism' [1.19] > 'to destroy the soul of a person' [1.32] " trumpet": 'a musical instrument with a long, straight tube, usually made of brass, and a flared bell at one end' > 'a loud, shrill, or harsh sound, especially one made by a trumpet or horn' [0.85] > 'a loud, harsh, and harsh-sounding instrument, such as a trumpet, trombone' [0.95] > 'a person who is a great and terrible person' [1.57] Analysis Some archetype/token pairs produced results with no obvious semantic links between the definitions elicited along the line emanating from the token embedding in the archetype-direction. These have been omitted for brevity, but may turn out to be more interesting due to their lack of obvious continuity. Reading through a lot of the examples of semantic mutation, I was struck by the similarity to the types of morphing of images, objects and even concepts that can occur in dreams, hypnagogic, hypnopompic and psychedelic states. I was left with the impression of an intelligent agent having successfully pushed concepts in specified conceptual directions through a clever application of free association, e.g. trumpet > military band > elite organisation > religious order > spiritual knowledge broccoli > flower heads > bunch of flowers > firework display > burst of joy transistor > current flow > short circuit between two points > love Laos > communism > isolationism > turning away from worldly affairs > solitude And yet, these chains of association were never sequentially reasoned. All the links were simultaneously baked into embedding space by the training of the model. So rather than seeking a sequence of steps to get from concept A to concept B (as we would in a free association exercise), we are simply moving a lens from A to B after the fact and finding steps that are already present, some of which make more sense to us than others. To get a better sense of the "creative word association" and conceptual morphing that is in evidence here, it helps to switch perspective, looking not in terms of how a particular tarot archetype direction mutates the various word definitions, but instead at how a particular token-word has its meaning morphed by the various directions. In doing so, we see something that looks suggestive of how the concept-space of a skilled freestyle rapper might look. London presence of guilds of craftsmen: mastery presence of important churches: spirituality presence of beauty, cultural and learning: social harmony presence of royalty, nobility: power, tradition accumulated presence of the dead: silence, stillness movement of population in and out: change, exile > limitation presence of courts: law, justice presence of "greatness": glory, joy presence of places of interest: pausing, suspension transistor ability to (suddenly) change current: freedom, agency, change of mind or (sudden) change of state ability to amplify a signal: leadership, expertise ability to control current flow: personal influence/control of others, law potential to short circuit: spark of emotion/love ability to withstand high voltage: strength, resilience broccoli growth without chemical assistance or in presence of herbivores: freedom, naivety use as food, medicine, etc.: knowledge/mastery, provision/care excessive growth: power, sovereignty growth in inappropriate/unwanted/shady places: independence of mind, strength edible/inedible plants, healthy/unhealthy food: personal betterment, contemplation life cycle: cycle of change, new beginnings botanical description of parts: the breaking down of something geometric arrangement of florets: firework display > enthusiasm/joy pausing to eat it: standstill as (excluded) member of the cabbage family: rebellion purple as "deviant" version of blue: independence of mind as beautiful colour: beauty, joy, happiness as royal/spiritual/noble colour: spirituality, power, strength, purity, justice as colour of sunset (preceding the fading of colour from the sky): no-mind, inner peace potential to change colour: change, new beginnings closeness to black: darkness, evil Laos as a culture involving art: mastery as a Buddhist culture: spirituality, tradition as peaceful, harmonious country: social harmony as a Kingdom: royalty, power as a communist state: isolationism, turning away from the world as a UN member state: justice, law radio source of information/entertainment: mastery as mysterious, invisible communication medium: spirituality, source of energy as an employment sector: human connection, belonging as purveyor of specialist music: social groups > group leadership as legally regulated communication medium: law, justice laundry as a place where something is done: mastery as a place of communal activity: joy, exuberance as an act of cleansing/purification: spirituality, purity, contemplation, restoring justice as a secular act of cleaning something: beauty, care, cherishing, amelioration, love clean clothes as status marker: respect, admiration, power as the promotion of cleanliness as a principle: upholding of custom and tradition as part of wider efforts to maintain cleanliness: strength, resilience as a way to change something: transformation, change of mind DNA as encoding past/present/future: knowledge, mastery, transcendence as invisible to the naked eye: hidden or occult things, spirituality as encoding personality traits: virtuous personal qualities as something which can mutate: change, disruption as something generally immutable: law, justice as a double-stranded molecule that can be broken into two: change of form thief as someone practicing a forbidden art: the occult, spirituality as one who can exercise control over others and provide for self through force: power as one who acts in desperation, motivated by love: emotional bonds as one who requires courage: strength negated, as one who is morally upstanding: justice, law as one who inflicts sudden damage to another's wellbeing/prosperity: upheaval as one who does something exciting: joy, exuberance trumpet as producer of loud sounds: leadership, dominance, expertise, revelation as producer of melodious sounds: care, wellbeing as producer of commanding sounds: strength as producer of warning sounds: protection, justice, law as producing a crescendo: increase > ageing, death as producing startling or shocking sounds: suspension, standstill as producing harsh sounds: evil Looking at the definition trees in Appendix A, some of the cards seem to have been represented better than others. In particular The Magician (with actual magical and occult themes) and The Devil (themes like making black, dirty, filthy, destroying souls, etc.) cards seem to have been understood more literally than intended. This is due to ChatGPT4's failure to select appropriate word associations for those cards, despite having the ability to describe their traditional interpretations in some detail. It would be interesting to try this with other models. Appendix A: definition tree diagrams for archetype-based embeddings The diagrams below were generated by considering the unit-norm[4] embedding in the direction in question, inserting that in the usual "definition" prompt template, then considering top-5 logits and iteratively prompting to build a tree-structure JSON file. Cumulative probabilities were set around 0.0001 as a cutoff (this value was varied to make manageably-sized diagrams); branch thicknesses correlate logarithmically with token probabilities; higher branches are more probable than lower branches. The Fool The Magician The High Priestess The Empress The Emperor The Hierophant The Lovers Strength The Hermit The Wheel of Fortune Justice Death The Tower The Sun The Hanged Man The Devil Appendix B: 500 closest token embeddings to each archetype-based embedding Lists of 500 tokens with cosine-closest embeddings to each of the 16 tarot arcana directions follow (in ascending order of cosine distance). The Fool [' curious', ' gentle', ' constant', ' playful', ' quiet', 'Brave', 'free', ' bright', 'Free', ' fresh', ' innocent', ' endless', ' radiant', ' immortal', ' light', ' Brave', ' NEW', ' eternal', 'Lost', ' unlimited', 'Honest', ' calm', ' Continuous', ' limitless', 'fresh', ' fearless', ' reckless', ' cheerful', ' honest', 'light', 'Bright', ' Still', 'bold', ' confused', ' free', ' Light', ' Emerging', 'quiet', 'Still', 'lost', ' Quiet', ' everlasting', 'Happy', ' Bright', ' amazed', ' adventurous', 'Legendary', ' Lost', 'Fresh', ' peaceful', ' Calm', ' shining', ' brave', ' bold', ' relentless', ' unsuspecting', ' heroic', ' youthful', ' daring', 'open', ' sparkling', ' timeless', ' FREE', ' stubborn', ' Silent', ' legendary', ' continuous', ' persistent', ' Creative', ' unaffected', ' Endless', ' silent', ' flashing', ' Shining', 'Light', ' Curious', ' mute', ' vibrant', ' Gentle', ' wondering', ' wandering', ' trusting', 'Roaming', ' spontaneous', ' enchanted', ' tender', ' creative', ' Happy', 'starting', 'Following', ' surprised', ' Unlimited', ' genuine', ' mythical', ' glowing', ' intrigued', ' Innocent', ' steady', ' tranquil', ' perpetual', ' lost', ' imaginary', ' joyful', ' Radiant', ' optimistic', ' Bold', 'FREE', ' Fresh', ' excited', ' following', ' courageous', ' starting', 'still', ' Heroic', ' Honest', ' Soft', ' unaware', 'Starting', 'Open', ' quirky', ' Legendary', ' Immortal', ' drifting', 'new', ' naive', ' dogged', ' astonished', ' uncontrolled', ' emerging', ' spirited', ' absorbed', ' shocked', ' unrestricted', ' unconventional', ' evolving', ' Following', ' flickering', ' discovering', 'New', ' Constant', ' misplaced', ' fascinated', 'ambling', ' visionary', ' phantom', ' chasing', ' delicate', 'airy', ' puzzled', ' unfolding', ' still', ' eccentric', ' traveling', ' subdued', ' Explorer', 'urious', ' questioning', ' revealing', ' sincere', ' developing', ' pursuing', ' determined', ' Free', ' steadfast', ' naïve', ' restless', ' Starting', ' Eternal', ' open', ' Infinite', ' lively', ' passionate', ' mysterious', ' haunted', ' baffled', ' unexpected', ' smiling', ' humble', 'Strange', ' fleeting', ' strange', ' animated', ' curiosity', ' healthy', ' careless', ' happy', 'bright', ' energetic', ' timid', ' interested', ' Inspired', ' vivid', ' new', ' refreshing', ' countless', ' Wild', ' wild', ' agitated', ' soft', ' imaginative', 'Dark', ' searching', ' troubled', ' losing', 'Soft', ' dark', ' ideal', ' enthusiastic', ' continual', ' Haunted', ' abiding', ' rebellious', ' frantic', ' Phantom', ' Open', ' brightest', ' brilliant', ' fearful', ' unconditional', ' stunned', ' hesitant', ' pure', ' chaotic', ' unfamiliar', ' quieter', ' muted', ' Blazing', ' hopeful', ' elusive', ' generous', ' eager', ' hungry', ' helpless', ' furious', 'less', ' Full', ' unpredictable', ' full', 'Finding', ' brighter', ' flowing', ' active', ' fierce', 'ocious', 'Slow', ' fictitious', ' gently', ' startled', ' uncertain', ' Ideal', ' trembling', ' invincible', ' unchanged', ' Mystic', 'Quick', ' intense', 'Wild', 'ernal', ' Mysterious', ' Strange', ' uneasy', 'thereal', ' inventive', ' explorer', ' uninterrupted', ' freely', ' deep', 'dark', 'Interested', ' romantic', ' Continuing', ' liberated', ' shiny', ' Chaser', ' diligent', ' inspiring', ' fiery', ' colorful', ' fluorescent', 'wild', ' roaming', ' crooked', ' filtered', ' entrepreneurial', ' clueless', 'ameless', ' undecided', ' softer', ' guiding', ' surprising', ' glorious', ' inexperienced', ' reflecting', ' New', 'Haunted', ' newfound', ' constantly', ' frightened', ' growing', ' exploring', ' Seeking', ' initiating', ' absorbing', ' gritty', ' frozen', 'happy', ' recurring', ' melancholy', ' hopeless', ' renewed', ' old', ' inspired', ' faint', 'Detailed', ' continuously', ' desperate', ' discover', ' discovered', ' Slow', ' Primordial', ' unbiased', 'istent', ' quaint', ' unsuccessful', ' ensuing', 'thinking', ' kindred', ' Clear', ' poetic', ' independent', ' tranqu', 'umble', ' wandered', ' unrealistic', ' endlessly', ' colourful', 'minded', ' illustrious', ' warm', ' aggressive', ' clear', 'Full', ' continually', ' mystery', 'Discover', 'follow', 'soft', ' true', ' mindless', ' intelligent', ' fascination', ' sluggish', ' idle', ' passive', 'orescent', ' surprise', ' OPEN', ' amused', 'ïve', ' infinite', ' damp', ' floating', 'Perfect', ' straight', ' misguided', ' painful', ' Intelligent', ' charming', ' exotic', 'entle', ' willing', ' quick', 'guided', ' Conscious', ' cautious', ' subtle', ' merciless', ' blazing', ' unconscious', ' solitary', ' invisible', ' disorderly', ' coherent', ' accumulating', ' fertile', ' Healthy', ' questionable', ' authentic', ' familiar', 'going', ' gloomy', ' detached', ' travelling', 'clear', ' mild', ' cheering', ' quietly', ' ferocious', ' spinning', 'NEW', 'Ancient', ' distant', ' DARK', ' believing', ' constants', ' experiencing', ' enlightened', ' guided', ' deceptive', ' remarkable', ' Blooming', ' unsure', ' simple', ' willful', ' valiant', ' piercing', 'Limited', ' humorous', ' ongoing', ' bewild', ' imperfect', ' troublesome', 'young', ' battling', 'onest', ' Planetary', ' foolish', ' thrilling', ' overcome', ' limited', ' Romantic', ' mystical', 'virtual', ' sparking', ' affected', ' original', ' Burning', ' majestic', ' calmed', ' Perfect', ' Valiant', ' spurious', 'Growing', ' Neutral', ' bursting', ' crazy', ' incredible', 'uous', ' trusted', ' lingering', ' regular', ' real', ' Dark', ' enigmatic', ' sporadic', 'rozen', ' Critical', 'iscover', ' modest', ' penetrating', ' Original', 'True', ' Complete', ' extraordinary', ' interesting', 'elligent', 'crazy', ' lonely', ' upbeat', ' Celestial', ' Blind', 'andering', ' Radical', ' eclectic', ' goofy', ' Explore', ' disturbed', ' transformed', ' wary', ' straightforward', ' Adamant', ' Forgotten', 'Rogue', ' unstable', 'ivating', ' haunting', ' magical', 'full', ' creativity', ' always', ' sweet', ' dishonest', ' pursue', ' fixed', 'nexpected', ' dormant', ' impatient', ' Animated', ' neutral', ' Recovery'] The Magician [' wisdom', ' mystic', ' sorcerer', 'building', ' prophecy', ' healer', ' divine', ' expert', ' scholar', ' magic', ' Healer', ' Divine', ' story', ' knowledge', 'Magic', ' figure', ' guru', ' planner', ' Psychic', 'Story', ' scientist', ' doctrine', ' spell', ' Charisma', ' Mystic', ' Renaissance', ' folklore', ' builder', 'builder', ' Mastery', ' demon', ' wizard', ' Witch', ' architecture', ' archetype', ' temple', ' Alchemist', ' cosmos', ' Insight', ' healing', ' Creator', ' Builder', 'Assembly', ' Artist', ' church', ' Divinity', ' cleric', ' renaissance', ' genius', ' shaman', ' Biology', ' insight', ' Sorcerer', ' Painting', ' painting', ' Understanding', ' theology', ' shrine', ' Scientist', ' sanctuary', ' Blessing', ' Shaman', ' spiritual', ' sect', ' creator', ' Concept', ' Doctrine', 'Church', ' Crystal', ' tale', ' Figure', 'Community', 'figure', ' sorcery', ' Initiative', ' Building', ' occult', ' building', ' Revival', ' Wisdom', 'power', ' Church', ' Society', ' Focus', ' Mosque', ' guide', ' deity', ' Prophet', ' Community', 'intelligence', ' Drawing', ' prophet', ' Cult', ' Ecology', ' illustration', ' Sage', ' Order', ' Story', ' Magic', ' Designer', ' Transformation', ' Power', ' fellowship', ' expertise', ' heresy', ' Writer', 'doctor', ' understanding', ' psychic', ' architect', ' Maker', ' magician', ' supernatural', ' lore', ' witch', ' mystical', ' Tale', ' community', ' Talent', ' realization', 'knowledge', ' Revelation', ' Restoration', ' Temple', 'Oracle', ' POWER', ' mythology', ' strategist', 'thought', ' science', ' synagogue', ' legend', ' revelation', ' Philosophy', 'Legend', ' cult', ' witchcraft', ' Sketch', ' universe', ' Genius', ' Guru', ' hex', ' theory', ' researcher', ' CONTROL', ' Farming', 'creator', ' Fellowship', ' psychologist', ' assembly', ' specialist', ' conservation', ' Spiritual', ' Emblem', 'Leader', 'Focus', ' sculpture', ' Authority', ' spirit', ' planning', ' sketch', ' power', ' Oracle', ' STORY', 'Building', ' mosaic', 'chemist', ' evaluation', ' reasoning', ' Analyst', ' argument', ' saga', 'Builder', ' thinker', ' Cosmos', 'ologist', ' drawing', ' engineering', 'lore', ' statue', ' consultant', ' Engineering', ' authority', ' skill', ' ideology', ' narrative', 'argument', ' chapel', 'Doctor', ' biology', ' Epic', 'order', ' Pioneer', ' Pathfinder', ' Leader', ' Hex', 'Medium', ' Sacred', ' Heavenly', 'story', ' Guide', ' Infinity', 'spell', ' Druid', ' Collective', ' rebirth', ' Argument', ' Saga', 'skill', 'Guide', ' mentor', ' Shrine', ' Awakening', ' enlightenment', ' Alchemy', ' innovation', 'study', ' Adviser', ' talent', ' visionary', 'concept', ' maker', 'analysis', ' intelligence', 'guide', ' idea', ' rationale', 'isdom', ' emblem', ' Assembly', ' leader', ' Wizard', ' Doctor', ' icon', 'tale', 'focus', ' altar', ' vision', ' portrait', ' psychiatrist', ' cathedral', ' esoteric', 'cult', ' Pagan', ' renewal', 'community', 'medium', 'hex', ' Spell', ' educator', ' Lore', 'romancer', ' concept', 'Order', ' analysis', ' intellect', ' Heritage', 'Crystal', ' Chronicle', ' Crafting', ' spirituality', ' Scholar', ' Knowledge', ' Medium', ' mosque', ' therapist', ' Conservation', 'Skill', ' weaving', ' Eternity', 'assembly', ' Cathedral', ' pioneer', ' versatility', ' ecology', ' regeneration', ' Sect', ' Deity', 'leader', ' Arcane', ' Statue', ' transformation', ' Academic', ' assessment', ' Dexterity', ' Intelligence', ' Planning', 'Artist', ' crystal', ' dogma', ' Bust', ' Legend', ' interpreter', ' Cunning', ' biologist', ' renovation', ' counselor', 'church', ' Architect', ' restoration', ' epic', ' Teacher', ' Universe', ' counseling', ' prophetic', ' Creed', ' artist', ' believer', ' pagan', ' Idea', ' adept', ' arche', ' curse', ' Thought', ' cunning', 'biology', ' Specialist', ' farming', ' magical', ' infinity', ' initiative', ' Architecture', ' sage', ' storytelling', ' mastermind', ' bust', ' order', ' representation', ' Demonic', ' Expert', ' philosopher', ' demonic', ' Icon', ' piety', ' Ritual', ' blessing', 'engineering', ' ceremony', 'Spirit', ' Merlin', ' Evaluation', ' paranormal', ' cosmic', 'magic', 'Understanding', 'mology', ' collective', 'shapeshifter', ' encyclopedia', ' zoning', ' Prayer', ' creativity', ' creed', ' society', ' doctor', ' Gardner', ' Eternal', ' mechanic', ' thesis', ' analogy', ' organization', 'Power', ' Spirit', ' focus', ' translator', ' Astral', ' ritual', ' awakening', ' artistic', ' Hero', 'Maker', ' creature', ' anecdote', ' intuition', ' hypothesis', ' charisma', ' stitching', ' gardening', ' charm', ' soul', ' Explorer', 'artisan', ' ORDER', ' Innovation', ' engineer', ' symbol', ' sewing', 'Figure', ' congregation', 'control', ' Assessment', ' Athena', ' backstory', 'Icon', 'maker', ' heavenly', ' writer', ' seminar', ' logic', ' Healing', ' symmetry', ' Enlightenment', ' practitioner', ' crafts', ' preacher', 'research', ' symbolism', 'parable', ' prophets', ' inventor', ' Symbol', 'Analysis', ' Agriculture', ' Tradition', ' personality', ' meditation', ' philosophy', ' eternity', ' comprehension', ' teacher', ' proficiency', ' orthodoxy', ' Dynamo', ' Skill', ' inspirational', ' rabbi', ' Decoder', ' builders', ' Vision', ' garden', ' creative', ' resurgence', ' physicist', ' manifestation', ' metaphor', ' Analysis', ' control', ' Ruler', ' coordination', ' study', ' literacy', ' Celestial', ' Relief', ' theological', ' Control', 'sect', ' figur', ' scholarly', ' workshop', ' mage', ' anatomy', 'chemy', ' Legendary', ' sculpt', ' Reader', ' Miracle', ' worldview', ' schematic', 'finder', 'owship', ' Civilization', ' Intellectual', ' Commentary', ' Gathering', ' preaching', ' inspiration', ' appraisal', ' Harmony', ' Writing', ' architectural', ' mastery', ' crafting', ' evolution', ' psyche', ' Meditation', ' portrayal', ' Engineer', ' agility', 'development', ' Destiny', ' priesthood', ' History', ' Trinity', ' intellectual', ' scholars', ' Advisor', ' Cosmic', ' reconciliation', ' artifact', ' revival', ' development', ' consultancy', ' Study', ' physician', ' atheist', ' thinking', ' Sanctuary', ' art', ' Ability', ' ethos'] The High Priestess [' divine', 'magic', 'Secret', ' Mask', ' obscure', ' mystical', 'Strange', ' mystic', ' magical', ' mysterious', ' Divine', ' strange', ' Myst', 'Dream', 'dream', ' illuminate', ' enigmatic', ' esoteric', ' intrigue', ' Mystery', ' spiritual', ' mask', ' magic', ' Keep', ' discover', 'Deep', ' Magic', 'Sense', ' miracle', ' Discover', 'Magic', ' Legendary', ' try', ' sorcerer', ' explore', ' trance', ' supernatural', ' occult', ' protected', ' keep', ' calm', 'feel', ' conceal', ' weird', ' Mystic', ' concealed', ' Deep', ' dream', ' look', ' Subtle', ' Magical', 'Ancient', ' Witch', ' psychic', ' Mysterious', ' contempl', ' Strange', ' Unique', ' revelation', 'try', 'Discover', ' discovery', ' plan', ' deeper', ' Odd', ' enchant', ' Sacred', ' protect', 'spirit', ' insight', 'Keep', 'Moon', ' myst', ' ancient', ' miraculous', ' Look', ' Dream', ' unique', ' act', ' objective', ' Personal', 'secret', ' Wonder', ' Understanding', ' extraordinary', 'keep', ' hold', ' profound', ' notice', 'Plan', ' intellectual', ' Detect', ' secrets', ' sorcery', 'Individual', ' witchcraft', 'still', ' quiet', ' shroud', 'Aim', ' meditation', ' deep', ' Experience', ' enchantment', ' experience', ' encounter', ' visionary', ' unusual', ' creative', ' watch', ' mythical', ' contemplate', ' ceremony', 'Unique', ' LOOK', ' Watch', ' mentor', ' prophecy', ' priest', 'Goal', ' understand', ' Shroud', ' Epic', ' odd', ' VIEW', ' Master', 'protect', ' secretive', ' Understand', ' Adventure', 'look', 'Alien', ' masked', 'Hidden', ' cryptic', ' intellect', ' Aim', ' vision', ' holy', ' Move', ' intent', ' Spiritual', ' goal', ' awaken', ' Knowledge', ' Private', ' master', 'view', ' scholarly', ' Secret', 'Notice', ' purpose', 'witch', ' Weird', ' Psychic', 'deep', ' Moon', ' genius', 'hold', ' Carry', ' Silent', ' mystery', ' charm', ' Scholar', ' light', ' sacred', ' Sense', ' Other', 'Watch', ' pray', ' arcane', ' secret', ' contemplation', ' individual', ' Enchant', ' rare', 'Understanding', ' secrecy', 'intention', 'FACE', ' behold', ' Meet', ' Charm', 'individual', ' covert', ' glance', ' Explore', ' Encounter', ' Conduct', 'Original', ' private', ' sage', ' intuitive', ' Plan', ' Enlightenment', ' monk', ' eccentric', ' exploration', ' Study', 'Protect', ' guiding', 'Spirit', ' peer', ' Sorcerer', ' sense', 'watch', 'alien', ' Goddess', 'hidden', ' SEE', ' Face', 'holy', ' Strategy', ' Discovery', ' enlight', 'study', ' Action', ' illuminating', 'Look', 'research', ' find', ' TEST', 'Experience', ' SEAL', 'Oracle', ' Hidden', ' guard', 'Feel', ' wizard', 'See', ' PLAN', ' shrouded', ' Miracle', 'Study', ' epic', ' strategy', ' subtle', 'Foreign', 'Other', ' poetic', ' understanding', ' Sorceress', ' held', ' Glow', 'notice', 'SEE', 'Mask', ' Exploration', ' study', ' observe', ' Shaman', ' test', ' alien', 'Detect', 'odd', ' imaginative', ' cloaked', ' sorce', ' disguise', ' disclosure', 'Research', ' Spell', ' fantasy', ' disguised', ' Hold', ' guide', ' sacrament', 'Private', ' mysteriously', 'plan', ' Oracle', ' spell', ' Meditation', ' conduct', ' identify', ' Disclosure', 'Performance', 'find', ' OTHER', ' Holy', 'Legendary', 'intent', ' Insight', 'design', ' Adviser', ' feel', 'Wonder', ' Deity', 'Test', ' Individual', ' analyze', ' deity', 'action', ' Fantasy', ' Arcane', 'face', 'spell', ' action', 'other', ' still', ' Moonlight', 'Rare', ' moon', ' wise', ' Trance', ' peculiar', 'Still', ' shimmer', ' Vision', ' Struggle', ' unconventional', ' ACT', ' veiled', ' Interpret', 'private', 'Uncommon', ' Notice', 'Help', ' cloak', ' feature', 'order', ' archetype', ' nun', ' endeavor', 'quiet', ' struggle', ' mysteries', 'Move', ' subconscious', 'see', ' fortune', ' spirituality', ' shaman', ' ACTION', ' Sovereign', 'stars', ' Protect', ' Held', ' mastermind', ' maneuver', ' Cloak', ' comprehend', ' awakening', ' trial', ' perplex', 'Different', ' Still', ' spirit', ' intriguing', ' cosmos', ' Purpose', 'dig', ' discreet', ' hidden', ' WATCH', ' Calm', 'intuitive', ' FACE', ' Light', ' learn', ' inner', ' personal', ' scheme', 'purpose', ' deepest', ' ritual', ' magician', 'Act', 'different', ' heroic', ' stars', ' guarded', ' Spirit', ' retain', ' legendary', ' surreal', ' Rare', 'protected', ' Foreign', 'knowledge', ' Feel', ' Universe', 'night', ' reflection', ' Quiet', 'peer', 'trial', ' Silence', ' exotic', ' Peer', 'sense', 'held', ' QUEST', 'Master', ' Wise', ' Objective', ' Lights', ' Try', ' whispering', ' attempt', ' Mysteries', 'QUEST', ' Cosmic', ' intimate', ' Original', ' Counsel', ' theoretical', 'act', 'Soul', ' discern', 'guide', ' Night', ' move', ' Providence', ' Destiny', ' goddess', ' enchanted', ' mute', ' Priest', 'quest', 'Spell', ' Different', ' design', ' Order', ' maintain', ' other', ' Intellectual', 'Personal', ' Ancient', ' Performance', ' Divinity', ' Heroic', ' preserved', ' FANTASY', ' Inner', 'fortune', ' disclosed', ' research', ' perception', ' gesture', ' dig', ' gaze', ' Quest', ' expedition', ' intuition', ' adviser', ' encounters', ' fate', 'ACT', ' quest', ' figure', ' knowledge', ' witch', ' unveil', 'foreign', ' Stars', ' Designer', 'test', 'mask', ' Learned', ' senses', ' Trial', ' phenomenal', ' mission', ' elusive', 'ordinary', ' carry', ' view', ' sheltered', ' amulet', ' illumination', 'Holy', 'execute', ' philosophical', 'Vision', ' safeguard', 'yssey', ' diplomacy', ' enlightenment', ' destiny', ' puzzling', ' thoughtful', 'meet', 'WATCH', ' Sage', ' special', ' different', 'Night', ' Attempt', ' clandestine', ' Odyssey', 'ACTION', ' execute', 'Meet', 'fight', 'Explore', ' bizarre', ' Test', ' Creative', ' fictional'] The Empress [' nurturing', ' loving', ' harmony', ' beautiful', ' Integrity', ' beauty', ' support', ' fruitful', ' inspire', ' supportive', 'Strong', ' Creator', ' Beauty', ' unity', ' love', ' creation', ' warmth', 'Family', 'growth', 'Community', ' Spiritual', ' Creative', ' creative', ' comfort', ' feeling', ' teaching', ' wisdom', 'green', 'Life', ' happy', ' protect', ' Kind', ' flower', ' Passion', ' caring', 'Mother', ' nurture', ' Happiness', ' create', ' Beautiful', ' Strong', ' Joy', ' prosperity', ' secure', ' Life', ' trust', ' wealth', 'creation', ' SUPPORT', 'Root', 'Love', ' Sharing', ' creator', 'parent', 'kind', 'protect', ' satisfaction', ' Family', ' Mother', ' compassionate', ' sustain', ' kindness', ' care', ' share', ' partner', ' forest', 'power', ' security', ' understanding', ' tree', 'warm', ' Love', ' Secure', ' Feeling', 'birth', ' Happy', ' grow', ' Tree', ' LOVE', ' Help', ' believe', 'knowledge', ' safe', 'Protect', 'comfort', ' Vision', ' joy', ' fulfillment', ' confident', ' Safe', 'care', ' Grow', 'support', ' loyal', ' development', 'awareness', ' life', ' intuition', ' knowledge', 'Tree', ' relationship', ' tender', ' strong', ' happiness', ' Care', ' belief', ' Development', ' constructive', ' healing', ' enjoyment', ' awareness', ' green', 'Queen', ' community', ' creating', 'loving', ' Plant', ' compassion', ' Heart', ' noble', ' Gaia', ' gentle', ' unified', ' empowering', ' natural', ' educate', 'dream', 'Trust', ' Universe', ' wise', ' Building', ' Wisdom', ' generous', ' Growth', ' POWER', ' Wealth', ' Protect', 'Secure', ' Teaching', 'Heart', ' Understanding', ' fruit', 'Growing', ' radiant', ' fulfilling', 'Team', ' Celebration', ' Aura', ' experience', 'natural', ' Powerful', ' traditional', ' fulfill', ' assistance', 'production', 'traditional', ' solidarity', ' Healing', ' flourishing', ' Positive', ' Connection', 'ustain', ' prosperous', 'Origin', ' united', ' powerful', ' bloom', 'connection', ' fulfilled', ' Leader', 'Support', ' world', ' tradition', ' growing', ' Relationship', ' Producer', 'Unity', ' foundation', 'Service', ' Together', 'love', ' CARE', ' Community', ' guiding', 'nature', ' Comfort', ' prosper', ' Giving', ' family', ' celebration', 'forest', 'peace', ' Knowledge', 'Forest', 'secure', ' Faith', 'ossom', 'Nature', ' help', ' Support', ' glow', ' Enjoy', ' full', ' graceful', ' Trust', ' Service', 'community', ' aid', ' Harmony', ' lush', ' Power', 'Perfect', 'thinking', 'Spirit', ' Blossom', ' Shine', 'soft', ' seed', ' receptive', ' influential', ' joyful', ' affirmation', ' comforting', ' service', 'help', ' shelter', ' consolation', ' growth', ' birth', 'Faith', ' Stability', 'Connect', ' ecosystem', ' sharing', 'Care', ' warm', 'flower', ' vitality', ' Security', ' advocate', ' Creation', ' feel', 'Peace', 'tree', 'Vision', ' Connect', 'energy', ' Friendship', ' balance', ' root', ' Learning', 'wood', ' plant', 'Giving', 'strength', ' Growing', ' vibrant', ' Complete', ' soft', ' beloved', 'spirit', 'Joy', ' Creating', ' Foster', 'earth', ' fostering', ' Environment', ' Balance', ' Female', 'Safe', ' Touch', ' helping', ' Mercy', ' parent', ' Art', ' clarity', ' Harvest', ' strength', ' motivating', 'home', 'Touch', 'isdom', 'create', ' uphold', ' convincing', ' Peace', ' engaging', ' wood', ' Aware', 'expression', ' universe', ' formation', ' nurt', ' give', ' Aid', ' heart', ' Glow', ' Loving', ' unite', 'grow', ' refinement', ' Evolution', '神', ' spirit', 'happy', ' intimate', ' gracious', ' inspiration', ' home', ' educating', ' Flower', 'Social', ' integrity', 'Hope', 'Help', ' GROUP', ' peaceful', ' Assist', ' Sovereign', ' floral', ' Garden', 'productive', 'Leader', ' Shelter', ' insight', ' empowerment', ' builder', ' connected', ' Queen', ' Give', 'Lady', ' supporting', ' productive', 'service', ' pleasure', ' Liberty', ' Benefit', ' construct', 'Development', ' safety', ' Tradition', ' heritage', 'life', ' Forest', ' believing', ' empathy', 'Rich', ' Sustainable', ' Progress', ' bonding', ' resilient', 'Construct', ' LIFE', ' energy', ' passion', 'together', 'World', ' motivation', ' Awareness', ' thrive', ' Blessed', ' majesty', ' evolve', ' build', ' feminine', ' existence', ' beginning', ' thinking', ' Builder', 'Understanding', ' Social', ' Flowers', 'building', ' abundance', ' Bloom', 'share', ' empower', 'group', ' vision', ' Root', ' art', ' WORLD', ' group', ' rich', 'rich', ' advocacy', ' Seed', ' protector', ' flowering', ' Wood', ' HOME', ' luxury', 'safe', ' promote', ' Nature', ' Rich', ' social', ' understand', 'female', ' leaf', ' Create', ' mature', ' dignity', ' fertile', ' helpful', ' celebrate', 'Bloom', ' encourage', ' inventive', ' flourish', ' giving', 'Building', 'Home', ' Guide', ' honest', ' consciousness', ' Advocate', ' spiritual', ' Lovely', ' learning', ' hope', ' cooperative', 'Dream', 'learning', ' complete', 'Aid', ' founding', ' Sweet', ' skilled', ' diverse', ' vegetation', ' household', 'Hug', ' innovation', ' queen', ' mutual', ' perception', ' receive', ' nature', ' sustaining', 'Happy', ' promise', 'social', ' strengthening', ' Energy', ' peace', ' Ruler', ' wellbeing', ' building', 'shield', ' hopeful', ' glowing', ' thriving', ' earth', ' Innovation', ' pioneer', ' engagement', 'close', ' Dream', ' educated', ' Living', ' evolution', ' Leaf', ' Unity', ' intelligent', ' Earth', ' Romance', ' Warm', ' Expression', ' Communication', ' Participation', 'unity', 'foundation', ' faith', ' strengthen', 'Complete', 'touch', 'smart', ' engage', ' stability', ' Experience', ' Charm', 'Green', ' lovely', 'Strength', ' maple', 'origin', ' affection', 'Group', 'pathic', ' bliss', 'Together', ' Natural', ' genesis', ' Construct', 'world', ' connect', 'development', ' satisfying', ' Gentle'] The Emperor [' influential', ' leader', ' noble', ' commanding', ' Essential', ' powerful', ' increasing', ' governing', 'powerful', ' guiding', ' maintaining', ' Powerful', ' fearless', ' important', ' effective', ' Leader', ' controlling', ' master', ' influencing', ' upholding', ' securing', ' leading', ' secure', ' managing', ' accepting', ' Brave', ' king', ' inspiring', 'Important', ' strengthening', ' promoting', ' Hero', ' Sovereign', ' Strong', ' liberating', 'Master', ' lord', ' independent', ' Grand', 'increasing', ' strong', ' sustaining', ' manager', ' recognizing', ' supporting', ' rewarding', ' protecting', ' authoritative', 'leader', 'Chief', 'Grand', ' dominating', ' Understanding', 'important', 'Alexander', 'stable', ' victorious', ' significant', ' ruler', ' skilled', ' changing', 'respected', ' respected', ' wise', 'growing', ' gaining', ' enabling', ' Master', ' coordinating', ' rulers', ' growing', 'Hero', 'Leader', ' strongest', ' Alexander', ' Increasing', ' POWER', ' enduring', ' Winning', ' Permanent', ' dominant', ' permanent', ' Leading', ' directing', ' efficient', ' executive', 'winning', ' Effective', ' choosing', ' Noble', 'effective', ' capturing', ' winning', ' King', ' Lord', ' stable', ' Managing', ' essential', ' keeping', ' developing', ' educating', ' regulating', ' complete', ' empowering', 'Keeping', ' joining', 'free', ' Perfect', 'Strength', ' firm', ' chief', 'Having', ' esteemed', ' attracting', ' critical', ' saving', ' Taking', 'taking', ' sage', ' sovereign', 'Father', ' depending', ' understanding', ' safe', ' Chief', ' defeating', ' entering', ' Guide', 'Critical', 'saving', 'changing', ' Commander', ' facilitating', ' invincible', 'Taking', ' nurturing', 'Brave', ' losing', ' protector', ' advising', 'Legendary', ' Manager', ' senior', ' royal', ' stimulating', ' Critical', ' absorbing', ' manipulating', ' exchanging', ' Richard', ' managerial', ' Changing', ' selling', 'Senior', ' stability', ' representative', ' establishing', ' free', 'Effective', ' enhancing', ' asserting', 'Growing', 'Lord', ' steadfast', ' useful', ' achieving', 'Henry', ' successful', ' boss', ' Important', ' brave', ' authority', ' employing', 'Strong', ' combining', ' aiding', ' balancing', ' leaders', ' acquiring', ' director', 'KING', ' sacrificing', ' authorizing', ' allowing', ' flourishing', ' owning', ' valuable', ' activating', ' approving', ' lifting', ' responsible', ' expert', ' high', ' refining', ' Ruler', ' Holding', ' winner', ' Publishing', ' mighty', ' providing', ' justifying', 'Key', ' Emperor', ' Central', ' conquering', 'Complete', ' glorious', 'safe', ' encouraging', ' Including', ' succeeding', ' engaging', ' earning', 'Understanding', ' ideal', ' great', ' inviting', ' Augustus', ' working', 'Central', ' appointing', ' modifying', ' Marketing', ' expanding', 'Executive', ' hero', ' Henry', ' Supervisor', ' Vital', ' renowned', ' vital', 'controlled', ' motivating', ' grand', ' Power', ' gathering', 'strong', ' Having', ' father', ' competent', ' advancing', ' fulfilling', 'lasting', ' superior', ' loyal', ' Mighty', ' Sage', ' planning', ' transforming', ' enjoying', 'Control', ' Honor', ' utilizing', ' KEY', ' Edward', 'Edward', ' Winner', ' refreshing', ' revered', 'manager', ' Executive', ' overseeing', 'Rule', ' Head', ' entertaining', 'Learning', ' ensuring', ' productive', 'depending', ' HERO', ' preserving', ' adopting', ' illustrious', ' primary', ' elevated', 'significant', 'Director', ' Secure', ' taking', ' queen', 'Secure', ' warrior', ' Order', ' infinite', 'Richard', ' reliable', ' restoring', 'working', ' leadership', ' learning', ' fruitful', ' Founder', ' central', ' Planning', ' Growing', 'Changing', ' optimizing', ' Keeping', ' decisive', ' adjusting', ' bold', ' evaluating', 'secure', ' defending', ' realizing', ' emperor', ' Victor', ' Stability', 'seeing', ' confirming', ' Administrative', ' Champion', ' Defender', ' initiating', ' Governor', ' champion', ' Influence', ' Primary', ' ascending', ' determining', ' Bold', 'independent', 'critical', ' Leadership', 'King', ' Senior', ' prevailing', ' key', ' HEAD', ' thriving', 'Independent', ' organizing', ' surviving', ' Elder', ' spreading', ' selecting', ' banking', ' receiving', ' High', 'educated', 'leading', ' strength', ' possessing', ' triumphant', 'keeping', ' confident', ' Control', ' rallying', ' Eisenhower', ' cultivating', ' pioneer', ' Free', 'ifying', ' organising', ' ruling', ' supervisor', ' prestigious', 'successful', ' educated', 'guide', ' acknowledging', ' having', ' autonomous', ' control', ' Leaders', ' Louis', ' Useful', ' raising', ' dedicated', ' controlled', ' Arthur', 'master', ' collecting', 'key', 'selling', 'control', ' Director', ' Constantine', 'KEY', ' Boss', ' publishing', ' mastering', ' Protector', 'head', ' exalted', ' suppressing', 'skilled', 'high', 'primary', ' rule', ' informative', ' Eternal', ' legendary', ' Significant', ' helpful', ' strategist', ' guide', 'verning', ' courageous', ' insightful', ' organizer', ' ORDER', ' potent', 'Primary', ' Royal', ' experienced', ' symbolic', ' managers', ' protective', 'Depending', 'bearing', ' proficient', ' awarding', ' guardian', ' HIGH', ' granting', ' LORD', ' exploring', ' Principal', ' dependent', ' advertising', ' principal', ' limitless', 'bold', ' Banking', ' arresting', ' majestic', ' large', 'Louis', ' honorable', ' accumulating', ' executing', ' holding', ' Imperial', 'strength', 'having', ' improving', ' seizing', ' Philip', ' counting', ' Complete', ' embracing', 'Queen', ' Learned', ' substantial', 'High', ' Independent', 'Free', ' Selling', ' knowledgeable', 'fighter', ' Rule', ' constructive', ' deciding', ' experiencing', ' preferring', ' Legendary', ' permitting', ' grabbing', ' reflecting', 'freedom', 'Perfect', ' admitting', ' reigning', ' Teacher', ' assisting', ' Fighter', 'rule', ' Queen', ' Supporting', ' persuasive', ' eliminating', ' Depending', ' loving', ' advisor', 'ranking', 'complete', ' timeless', ' distinguished', ' delivering', 'Guide', 'Professional', ' Higher', ' imposing', 'Elizabeth', ' elite', ' Father', ' overcoming', 'grand', 'essential', ' Charles', ' CONTROL', ' ancient', ' Strength', ' Expert', 'model', ' fighter', 'chief', ' forceful', ' management'] The Hierophant [' Alliance', ' Purpose', ' noble', ' wisdom', ' missionary', ' sage', ' enlightenment', ' rational', ' wise', ' gentle', ' enlightened', ' lineage', ' visionary', 'ethical', ' community', ' philosophy', ' compassionate', ' Way', ' alliance', 'Assembly', ' holy', 'Strength', ' humane', ' Style', ' Essence', 'knowledge', ' purpose', ' Legacy', ' variety', ' generous', ' spiritual', 'purpose', ' ceremony', ' vision', ' ethical', ' Assembly', ' constructive', ' Radical', ' Knowledge', ' loving', ' benevolent', ' guide', ' devotion', ' profound', ' influential', 'Community', ' Religious', ' pious', ' powerful', ' disciple', ' charitable', ' mission', ' virtuous', ' Society', 'entity', ' Advocate', ' Tradition', ' Enlightenment', ' Sage', ' Guide', ' Understanding', ' beloved', ' custom', ' explorer', ' company', ' Group', ' Type', 'radical', ' inventive', ' style', ' tradition', ' quest', ' courageous', ' creator', ' adventure', ' union', ' Unique', ' Plan', ' Witness', 'method', ' notable', ' earnest', ' progressive', ' intellectual', ' culture', ' radical', ' family', ' Order', ' mentor', ' apostle', ' honorable', ' Establishment', ' kind', ' Philosophy', ' group', ' good', 'good', ' Rational', ' Array', ' divine', ' warrior', ' Force', ' observer', ' objective', ' PLAN', 'order', ' journey', ' way', ' diversity', ' knowledge', ' piety', ' respected', ' Helpful', ' Prophet', ' guru', 'Mission', 'History', ' allegiance', ' Creative', ' advocate', ' doctrine', ' Movement', ' Creator', ' Custom', ' Kind', ' Guru', ' wealth', ' Awareness', 'Pope', ' contemplation', 'Family', ' Important', ' Presence', ' fellowship', ' Charity', ' Wisdom', ' Culture', ' pioneer', 'assion', ' array', ' productive', ' vitality', ' essence', ' character', ' innovative', ' dedicated', ' inspirational', ' Approach', ' tolerant', ' faithful', 'holy', ' Divine', ' organization', 'SET', ' Church', ' Quest', ' path', 'CLASS', 'community', ' teachings', ' Spiritual', ' multitude', ' Loving', ' POWER', 'group', ' Authority', ' practitioner', ' Founder', ' loyal', ' Integrity', ' principle', 'learning', ' strategy', ' medium', ' Diversity', ' Medium', ' helpful', ' Set', ' design', ' imaginative', ' Identity', ' energy', ' insightful', ' Blessed', ' famous', ' charity', ' revolutionary', ' steadfast', ' important', 'medium', ' generosity', ' Scheme', ' devout', ' soul', 'Honest', ' insight', 'custom', ' History', ' sacred', ' Fellowship', ' PATH', ' sensible', ' mystic', ' pioneering', ' Pilgrim', ' Wise', ' Perception', ' illustrious', ' Holy', ' talented', ' knowledgeable', ' seeker', ' cleric', ' beneficial', ' Method', 'category', ' Priest', ' worthy', 'Faith', 'range', ' honesty', ' Mystic', ' thoughtful', ' exploration', ' Union', ' harmony', ' Respect', ' unique', 'Calling', ' Personality', ' follower', ' congregation', ' prophet', ' Learning', ' society', ' category', ' dream', ' Teacher', ' assembly', 'Good', ' ideal', ' Learned', ' fundamental', ' contemporary', ' positive', ' combination', ' GROUP', ' esteemed', ' Strength', 'Oracle', ' goal', 'spirit', ' Insight', 'modern', 'Important', ' understanding', ' supportive', ' reverence', ' eminent', ' Faith', ' eager', 'Kind', ' just', ' tribe', ' Blueprint', 'aggressive', 'loving', 'respected', 'respect', ' spirit', ' scholar', ' Model', 'Way', 'Education', ' template', ' awareness', ' learning', 'form', 'Range', 'founder', ' form', 'goal', ' attentive', 'awareness', ' brilliant', ' pilgrimage', 'Method', ' personality', 'series', ' entity', ' mainstream', ' orthodoxy', ' educated', ' ingenious', ' Series', ' Noble', ' collection', ' philosopher', ' worship', ' venture', 'intention', ' respectful', ' priest', ' modern', 'culture', ' enthusiastic', ' Patient', ' Legendary', ' establishment', ' Community', ' aim', ' Range', 'existence', ' virtue', 'JUST', ' Goal', ' elegance', ' creative', ' liberal', 'Body', ' Reference', ' awakening', ' Just', ' ritual', ' learned', ' Principle', ' passion', ' Path', ' examiner', ' theoretical', ' Class', ' Journey', 'important', 'Self', ' adept', 'example', ' cultural', ' Moral', ' witness', ' energetic', ' Company', 'Template', ' Compliance', ' trustworthy', ' prophecy', 'focused', ' reasonable', 'Path', ' Modern', 'Target', ' founder', ' magic', 'PATH', ' Search', ' Founding', 'universal', ' Examiner', ' elegant', ' bishop', ' Objective', ' Messenger', ' Power', 'TYPE', 'Type', ' practitioners', ' Explorer', 'gallery', 'connection', 'righteous', ' Heritage', ' enthusiast', 'liberal', ' archetype', 'guide', ' Famous', ' Past', ' number', ' Mission', ' honor', ' analyst', ' convergence', ' identity', ' existence', ' series', ' direction', 'array', 'model', ' Wealth', ' Intellectual', ' innovation', ' Progressive', ' cool', ' achievement', ' diligent', ' Design', ' moral', ' pattern', 'design', ' principles', ' chic', ' orderly', ' Organization', ' scholarship', ' hip', ' champion', ' Self', ' encouraging', ' bounty', ' aggressive', ' hopeful', 'Guide', ' religious', 'Witness', ' Analyst', ' righteous', 'Goal', ' guidance', ' range', ' healer', 'Vision', ' Representative', ' psychic', ' Contemporary', ' adventurer', ' strength', 'set', ' pursuit', ' model', ' Psychic', ' integration', ' optimistic', ' Body', ' Project', 'Life', ' Pope', ' Useful', ' instruction', ' Enterprise', ' teaching', 'path', ' devoted', ' type', ' Position', ' value', ' Good', 'plan', ' Character', ' Strategy', ' Association', ' Apostle', 'chemist', ' remarkable', 'class', ' Convention', ' Gathering', 'Justice', ' Ideal', ' Energy', ' landmark', ' cooperative', 'rational', ' truthful', ' appreciation', 'Group', ' Vision', 'Analysis', ' catalyst', ' graceful', ' gathering', 'Father', ' blend', ' synthesis', ' Loyal', ' power', 'Model', ' Divinity', ' sense', ' firm', ' self', ' humanitarian', 'Entity', ' Being', ' founding', 'just', ' grateful', 'formation', ' Positive', ' Warrior', 'project', ' analytical', ' Technique', 'strength', ' prominent', ' abundance', ' loyalty', 'energy', ' class', 'Power', ' prudent', ' Light'] The Lovers [' LOVE', ' love', ' passion', 'Touch', ' inspire', 'connect', ' pursuit', ' Bind', ' challenge', ' Touch', ' aim', ' unite', 'pair', ' aid', ' delight', ' reward', ' bind', 'balance', ' Honor', ' Experience', ' gift', 'Build', ' admire', ' Build', ' match', 'love', ' foster', ' honour', ' respect', ' benefit', ' Support', ' embrace', 'illusion', ' hunger', ' engage', ' spirit', ' desire', ' goal', ' promote', ' champion', 'Love', ' Passion', ' honor', ' Love', ' fuse', ' support', ' connect', ' spark', ' Challenge', ' share', ' joy', ' experience', 'dream', ' Vision', ' Extend', ' SUPPORT', ' burn', ' Pair', ' touch', ' flare', ' relate', ' Match', 'Draw', ' Respect', ' dream', ' hold', ' strive', ' ignite', ' Focus', 'Bind', ' Art', ' Display', 'support', 'Focus', 'Aim', 'Hold', ' restore', ' blend', ' Hold', 'match', ' explore', 'change', ' Blend', ' adventure', ' spell', 'spirit', ' show', ' quest', ' Connect', ' illuminate', 'show', ' fascination', ' Fire', ' provide', 'Capture', ' Echo', ' Connection', ' aspire', ' harmony', ' accord', 'spell', ' hope', ' Lust', ' flame', 'fire', 'Adventure', ' cherish', ' glow', 'Join', ' craft', ' devotion', 'bind', ' Magic', ' balance', ' enthusiasm', ' lust', ' Adventure', ' tune', ' pleasure', ' vision', ' combine', 'Start', 'Create', ' Benefit', 'Hum', ' offer', 'Discover', ' habit', ' Trigger', ' contemplate', ' sustain', ' Spell', 'goal', 'link', 'forge', ' draw', 'shape', ' build', 'share', ' encourage', ' test', ' Show', 'Quest', ' Balance', 'flame', 'purpose', 'Goal', ' strengthen', ' Gift', 'Attach', ' forge', ' ambition', ' adapt', ' enrich', 'Present', ' Quest', ' Light', ' mold', ' potential', 'create', ' stir', ' adjust', 'shine', ' fellowship', ' marry', 'Skill', ' reconnect', ' purpose', 'Begin', ' Shape', 'Experience', ' Celebration', 'build', 'Return', ' Calling', 'Fight', 'Vision', ' Discover', ' Dream', ' Reward', ' zeal', ' shine', ' construct', ' talent', ' activate', ' nurture', ' life', 'Match', 'Life', ' alliance', ' create', ' Design', ' Long', ' Flame', ' Aid', ' connection', ' treasure', ' woo', 'Connect', ' assist', 'Support', ' advance', ' Enchant', 'Wonder', ' Help', 'Construct', ' romance', 'offer', ' chant', ' fashion', ' begin', ' practice', 'Spirit', ' advantage', ' align', ' develop', ' ceremony', ' value', ' hug', 'long', ' appreciation', ' Kiss', 'Light', 'value', ' Draft', 'help', ' tribute', ' Talent', ' Meet', ' stimulate', ' art', 'benefit', ' deepen', ' Advantage', 'reflect', ' enchantment', ' mirror', ' duel', 'Protect', 'develop', ' Understanding', 'Launch', ' shape', ' showcase', 'Craft', ' light', ' kiss', ' Mission', 'Test', ' fulfill', ' Hum', ' Provide', ' Forge', ' thrill', ' twin', ' objective', ' Advance', 'Aid', ' Joy', ' start', ' mission', ' adhere', ' Enhance', ' display', ' fight', ' BACK', 'craft', ' trigger', 'Shape', ' Habit', 'Mission', 'Fire', ' LIFE', ' link', ' exchange', ' affection', 'respect', 'Forge', 'Mix', ' Career', ' Aim', 'BACK', ' Correspond', 'fight', 'flash', ' Fusion', ' Treasure', ' Partnership', ' esteem', 'Display', ' QUEST', ' Goal', ' Skill', ' magic', ' glory', ' Explore', ' Manifest', ' ART', ' Further', 'SHARE', ' Target', 'Magic', ' fire', ' Assist', ' Link', ' START', ' Enable', ' seek', ' prize', ' worship', 'align', ' Draw', ' appreciate', 'Design', ' design', 'Dream', ' Miracle', ' pursue', ' flash', ' Reflect', ' reveal', 'Share', ' Experiment', ' favour', 'marriage', 'Burn', 'Long', ' Duel', ' demonstrate', ' LINK', 'Complete', ' CARE', 'ampion', ' Associate', 'Mania', ' present', ' Grip', ' urge', ' wed', ' Share', ' surrender', ' cling', ' lure', 'Show', ' Strike', ' opportunity', ' echo', ' Test', 'joy', ' grasp', ' attach', ' Start', ' Champion', ' custom', ' facilitate', ' target', ' join', ' Fellowship', ' relationship', ' Stick', ' offset', ' attraction', ' conceive', ' SHOW', ' Offer', ' Seek', ' Custom', ' Expedition', 'Marvel', 'touch', ' commence', ' Discovery', ' Desire', ' identify', ' complement', ' articulate', ' Beam', ' Opportunity', ' Glow', ' settle', ' thirst', 'Adapt', ' Create', ' Complete', ' Keep', ' call', ' feature', ' LONG', 'mire', ' exploration', ' celebrate', ' complete', ' gain', ' Hug', ' help', ' Sync', ' Mix', 'Reward', ' Journey', ' Blessing', 'Hope', ' discovery', 'design', ' Construct', ' blaze', ' Hope', ' compassion', ' intimacy', ' possibility', ' Tradition', 'ISSION', ' back', ' journey', 'adjust', ' trial', 'echo', ' experiment', ' customize', 'Further', ' boost', ' energy', ' meet', ' enhance', ' pair', ' return', ' launch', ' play', ' Adapt', ' privilege', ' intensify', 'Care', ' correlate', 'quest', 'chant', ' Spark', 'life', ' flair', 'burn', ' reverence', ' understanding', ' long', 'attach', ' chemistry', ' pine', 'service', ' Spirit', ' enchant', 'test', ' Charm', 'practice', ' apprentice', 'hold', ' approve', ' Present', ' Trial', ' Back', ' Adjust', ' admiration', ' preserve', ' endeavor', ' companion', 'back', ' tradition', ' invent', ' Union', 'Back', ' Objective', ' friendship', 'Art', ' Shine', ' look', 'Flash', ' Life', ' maintain', 'owship', 'Union', ' Capture', ' Wonder', 'hum', ' Honour', ' Purpose', ' miracle', 'magic', ' Insp', ' Value', ' Join', ' clasp', ' TEST', ' bring', ' inspires', ' propose', 'Target', 'Meet', ' suggest', 'rifice', 'mix', 'Merit', ' anticipation', 'Value', ' spur'] Strength [' Integrity', ' agility', ' resilience', ' Faith', ' courage', ' stamina', ' integrity', ' vitality', ' skill', ' endurance', 'Strength', ' stability', ' strength', ' Mastery', ' support', 'Faith', ' success', 'Power', ' efficiency', ' Stability', ' faith', 'power', ' Power', ' mastery', ' protector', ' Achievement', ' leadership', ' Preservation', ' willpower', ' Harmony', ' confidence', ' heart', ' spirit', ' CONTROL', ' restoration', ' Balance', ' power', ' progress', ' Progress', ' Strength', ' Genius', ' hero', ' Support', 'capacity', ' energy', ' regeneration', ' persistence', ' flexibility', ' influence', ' Courage', 'skill', 'strength', ' POWER', 'performance', ' acceptance', ' preservation', 'Heart', ' reliability', ' dexterity', ' conservation', 'Development', ' Leadership', ' Performance', 'character', 'Defense', ' proficiency', 'Support', ' patience', ' Heart', ' HERO', 'progress', 'heart', ' healing', ' Influence', ' Fire', ' growth', ' encouragement', ' Efficiency', ' authority', ' Talent', ' grit', ' capability', ' balance', ' wisdom', ' backing', 'Hero', ' passion', 'confidence', ' spark', ' brilliance', ' restraint', ' control', 'Animation', ' toughness', ' Recovery', ' Hero', ' versatility', 'fire', 'hero', 'Security', ' Agility', ' Innovation', 'support', ' competence', ' achievement', ' FIRE', ' empowerment', ' reinforcement', ' fire', 'Performance', ' recovery', ' soul', ' assistance', ' Healing', ' enthusiasm', ' dominance', 'independence', ' Independence', ' independence', ' potency', 'Skill', ' Protection', 'efficiency', ' Heat', ' sway', ' effectiveness', 'trust', ' capacity', 'drive', 'heat', ' durability', ' security', 'control', ' Command', ' protection', 'Progress', ' discipline', ' Improvement', ' performance', ' Advocate', ' Force', ' Dexterity', ' assurance', ' heroism', ' dedication', ' genius', ' talent', ' efficacy', ' Protector', ' Endurance', ' flair', ' advancement', ' Growth', ' mobility', ' Conservation', 'Fire', ' composure', 'growth', ' sovereignty', ' bravery', 'spirit', 'energy', 'security', 'defense', ' Restoration', ' DEFENSE', ' help', ' Defense', ' Muscle', ' muscle', ' gift', 'determination', ' prowess', ' SUPPORT', ' Wisdom', ' Capacity', ' Passion', 'protection', ' enrichment', ' catalyst', ' productivity', 'Command', ' heat', ' Aid', ' pillar', ' Assistance', ' drive', ' trust', 'command', ' HELP', ' creativity', ' motivation', ' defense', 'Help', 'help', ' guardian', 'intensity', ' belief', ' Authority', ' coordination', ' beacon', ' governance', ' enhancement', 'Heat', 'balance', 'development', ' progression', ' Catalyst', ' Discipline', ' Stamina', ' guts', ' stimulation', ' command', ' Spark', ' idol', ' force', ' Character', ' Zen', ' renewal', ' supporter', ' moderation', ' knack', ' nerve', ' consolidation', ' Gift', ' transformation', ' elevation', ' character', ' dignity', ' might', ' consistency', ' intensity', ' tolerance', 'faith', ' spine', ' optimism', 'Character', ' comeback', 'Drive', ' advocacy', ' harmony', ' aspiration', ' ability', ' clout', ' Beacon', ' provocation', ' Animation', ' commitment', ' shielding', 'Spirit', 'force', 'pillar', ' Mobility', ' impetus', ' resolve', ' incentive', ' Pillar', ' champion', ' animation', ' Transformation', ' resurgence', 'Ability', ' Maintenance', 'aid', ' resignation', ' defender', ' Idol', ' inspiration', ' daring', ' conviction', ' development', ' Enhancement', ' importance', ' Guardian', ' weakness', ' evolution', 'might', ' honesty', ' advocate', ' ambition', 'Soul', ' accomplishment', ' Control', ' determination', 'cipline', ' immortality', ' innovation', ' devotion', ' leader', ' backer', ' sanity', ' hegemony', ' Belief', ' sincerity', ' ease', ' vigilance', ' inspirational', ' Spirit', ' Drive', 'iability', ' Energy', 'luence', 'zen', 'Control', ' guidance', ' Soul', ' improvement', 'Force', 'termination', ' Skill', ' skills', ' ingenuity', 'management', ' appreciation', ' Charisma', ' Zeal', ' guarding', ' maintenance', ' Success', ' promise', ' warmth', ' accountability', ' abilities', 'Success', ' Resolution', 'Management', ' resistance', ' liberation', ' talents', ' armor', ' persever', ' zeal', ' involvement', ' prosperity', ' aid', 'iciency', 'Energy', 'Aid', ' fury', 'Leader', 'Role', ' backbone', ' greatness', ' cohesion', ' supremacy', ' reverence', ' Leaders', ' Trust', 'acity', ' validation', ' pressure', ' Management', ' Defender', ' leverage', 'Trust', ' justification', ' charisma', 'tenance', ' accomplishments', ' Revival', 'intelligence', ' forgiveness', ' shield', 'incerity', ' affinity', 'rity', ' retention', ' Opportunity', 'istance', ' traction', ' urgency', ' freedom', ' Mighty', ' courageous', ' defenders', ' Help', 'success', ' victory', 'armor', ' reputation', ' inhibition', ' Security', 'voice', ' tendency', 'Balance', ' deterrence', 'rowth', ' Leader', ' refinement', ' illumination', ' pride', ' management', ' Flame', ' backup', ' longevity', ' certainty', ' Administration', ' rhythm', ' breakthrough', ' mitigation', ' elegance', ' abandonment', ' promotion', ' instruction', ' destruction', ' generosity', 'isdom', ' mantle', ' degradation', ' utility', ' flame', ' verification', ' Ally', ' Skills', ' resilient', ' Voice', ' superiority', ' paralysis', 'expression', ' Integration', ' excellence', ' strengths', 'pressure', ' redemption', ' Heroic', ' containment', ' discretion', ' rationality', ' capacities', ' seasoning', ' correctness', ' honor', ' mighty', ' viability', ' foundation', ' contemplation', ' affection', 'reciation', ' authenticity', ' continuity', ' heroic', ' competitiveness', ' superpower', ' stabilization', ' mastermind', ' Armor', ' Change', ' inconsistency', 'forcement', 'establishment', ' solidarity', ' initiation', ' regener', ' powerhouse', ' comfort', ' empathy', ' Might', ' pace', ' credibility', ' salvation', ' Liberty', ' renaissance', ' activation', ' compatibility', ' nobility', ' presence', ' Excellence', ' responsiveness', 'iasm', ' movement', 'activation', ' Rhythm', 'etermination', ' admiration', ' strengthening', ' sacrifice', ' approval', ' suppression', ' readiness', ' tendencies', 'ovation', ' validity', ' praise', 'gence', ' intelligence', ' Development', 'foundation', ' stabil', ' impact', ' kindness', 'impact', ' Immunity', ' hope', ' affirmation', 'Zen', 'repair', ' achievements', ' Advice', ' Production', ' avoidance'] The Hermit [' Quiet', ' meditation', ' Doctrine', ' belief', ' sanctuary', ' wisdom', 'Freedom', ' Understanding', ' Wisdom', ' refuge', ' contemplation', ' quiet', ' intuition', ' Knowledge', ' comfort', ' doctrine', ' Rest', ' Awareness', ' Silence', 'learning', ' Question', ' Belief', ' insight', ' Faith', ' understanding', ' Peace', ' Query', ' Learning', 'peace', 'Faith', ' enlightenment', ' learning', ' Enlightenment', 'knowledge', ' Philosophy', 'Peace', ' calm', ' freedom', ' Comfort', ' questioning', ' Search', ' Prophet', ' guidance', ' Sanctuary', ' Inquiry', 'Focus', ' silence', 'quiet', ' discovery', ' liberation', ' Advice', ' Meditation', ' Ethics', 'question', ' autonomy', ' Teaching', ' devotion', ' Judgment', 'Learning', 'Query', ' awakening', ' exploration', ' revelation', ' query', ' pause', ' peace', ' intelligence', ' Shelter', 'freedom', 'focus', ' spirituality', ' relaxation', ' shelter', ' retreat', ' Focus', ' mystery', 'Question', ' Independence', 'search', ' privacy', ' rest', ' Intelligence', ' Insight', ' Study', 'pause', ' Freedom', 'isdom', ' Sage', 'Break', ' independence', ' Break', ' moderation', ' philosophy', ' knowledge', ' Instruction', ' inquiry', ' Refuge', ' Creed', ' Principles', 'comfort', ' instruction', ' teaching', ' Oracle', ' Discovery', ' Guru', ' search', ' consolation', 'Pause', ' beliefs', ' Liberation', ' question', ' judgement', ' isolation', ' Divinity', ' inspiration', ' Privacy', ' Scholarship', ' Principle', 'Understanding', ' humility', 'break', 'query', ' faith', ' Calm', 'Oracle', ' awareness', ' Evolution', 'Search', 'intelligence', 'Guide', ' creed', ' Mystic', 'Study', ' Exploration', ' abstinence', ' Perception', ' break', ' coaching', 'direction', ' simplicity', ' transformation', ' Revelation', 'thinking', ' Transformation', ' advice', ' comprehension', ' observation', ' austerity', ' prophet', ' judgment', ' Guide', ' Haven', ' solitude', ' lull', ' ethics', ' mentor', ' Direction', ' loneliness', ' relief', ' Paradox', ' REST', 'rest', ' emancipation', ' clarification', ' mindfulness', ' Counsel', ' illumination', ' theology', ' realization', ' modesty', ' puzzle', ' evolution', ' focus', ' scholarship', 'Rest', ' Teacher', ' Presence', ' Investigation', ' discipline', ' Seeking', ' presence', ' teacher', ' investigation', ' guru', ' reserve', ' doctrines', ' piety', ' discretion', 'awareness', 'seeking', ' uniqueness', ' detachment', ' haven', ' sage', ' visionary', ' lesson', 'Privacy', ' mystic', 'ctuary', ' thinking', ' forgiveness', ' Leadership', ' Prayer', ' Mystery', ' Reality', ' Truth', 'independence', 'research', ' Language', ' leadership', 'oneliness', ' conscience', ' mediation', ' truth', ' silent', ' direction', 'iscovery', ' Evidence', ' Thinking', ' disillusion', 'study', 'ependence', ' lessons', ' consciousness', ' calming', 'faith', 'confidence', ' perception', ' explanation', ' Fellowship', ' guide', ' Awakening', ' Vision', 'olation', ' uncertainty', ' orthodoxy', ' Darkness', ' Analysis', ' Studies', ' ambiguity', ' confidence', ' study', ' counsel', 'udence', 'elligence', ' cessation', ' disciple', ' conviction', ' Silent', ' experience', ' interrogation', ' Planning', ' Mysteries', ' mysteries', ' information', ' rationality', ' philosopher', 'sight', ' prayer', ' paradox', ' Courage', ' pardon', ' principles', 'thought', 'Justice', ' quieter', ' vision', ' Hearing', ' curiosity', ' seeking', ' innocence', ' research', ' Integrity', ' Mastery', ' questions', ' mantra', ' Yoga', ' worship', ' imagination', ' grounding', ' innovation', 'stay', 'Research', ' Relief', ' Education', ' Definition', ' myst', ' trance', ' counseling', ' Disciple', ' Liberty', ' commitment', ' neutrality', ' determination', ' thinker', ' Experience', ' emptiness', ' thought', 'determination', ' Mission', ' judgments', ' Diversity', ' inquiries', ' compassion', 'Zen', ' Promise', ' philosophers', ' consensus', 'iphany', ' secrets', ' distance', ' Mercy', ' protection', 'creation', 'Questions', ' teach', ' immersion', ' dilemma', ' contempl', ' Zen', ' darkness', ' visualization', ' strategy', ' Opinion', ' Healing', ' Prediction', ' yog', ' mentors', ' hearing', 'Mission', ' calmed', ' Justice', ' quietly', ' Communication', 'enment', ' reasoning', ' Religion', ' rested', 'Distance', 'Training', ' Stability', ' interpretation', ' finding', 'assion', ' philosophical', ' dedication', ' foundations', 'cipline', ' skepticism', ' reliance', ' Survey', ' competence', ' believing', ' Questions', ' reverence', 'education', ' Discrimination', ' affirmation', ' answer', ' Debate', ' equilibrium', ' hiatus', ' Buddha', ' principle', ' spiritual', ' Puzzle', ' philosophies', ' prohibitions', ' focused', ' Myst', ' reservation', ' jQuery', ' secrecy', ' paralysis', ' sanity', ' Information', ' Innovation', ' training', ' Quote', ' Strategies', ' Research', ' education', ' quest', ' Logic', ' approach', ' Conversion', ' doctr', ' dissertation', ' Difference', ' initiation', ' theory', ' religion', ' Training', ' learn', ' dogma', ' wilderness', 'Coach', ' intuitive', ' accommodation', 'iosity', 'Evidence', ' Regulation', ' compassionate', 'ruction', ' exile', ' paradise', ' disciples', ' sql', ' authority', ' Journey', ' creation', ' Policy', ' Protection', ' speculation', ' diplomacy', ' blessing', ' motivation', ' prophets', ' practice', 'Sense', ' Lessons', ' inspirational', ' mystical', 'information', ' Science', ' Interest', ' Thought', ' Statement', 'change', ' suggestion', ' leisure', ' explanations', 'usterity', ' Wonder', 'onomy', 'dream', ' maxim', ' miracle', ' unrest', ' literature', 'Education', ' healing', ' Savior', ' realism', ' contemplating', ' Theory', 'Mind', 'Leader', ' Crisis', ' Reconstruction', 'Discussion', 'guide', ' tranquil', ' Directions', ' Emerson', ' specialization', ' Scholar', ' Reason', ' gratitude', ' diversion', ' Revelations', 'revolution', ' Learn', 'Interest', ' Distance', ' Discipline', ' assessment', ' language', 'Insp', ' Architecture', 'reality', ' opinion', ' Conquest', 'Problem', ' Apocalypse', ' Decision', ' instructional', ' unity', ' supervision', ' invocation', ' Witness', ' Translation', ' Concept', ' scripture', 'truth', ' abstraction', ' Creation', ' divergence', ' Concern', 'Vision', ' architecture', ' measure', ' awaken', ' recovery', ' communication', ' disturbance', ' reflection', ' vacancy', ' kindness', 'fulness'] The Wheel of Fortune [' transformation', ' twist', ' Transformation', ' changing', ' Twist', ' change', ' Paradox', ' Change', ' Strange', ' evolution', ' random', ' alteration', ' Progress', ' fortune', ' Alchemy', ' reinvent', ' turning', ' miracle', 'random', 'Strange', ' transforming', ' Upgrade', ' adjustment', ' advancement', 'Changing', ' Swap', ' Curious', ' development', ' morph', ' Transition', ' Fate', ' UPDATE', ' Prediction', ' unforeseen', ' evolve', ' Fortune', 'changing', ' Blend', ' Transform', ' paradox', ' Changing', ' Miracle', 'Update', ' expansion', ' Fusion', ' unusual', ' Revolution', ' Expansion', 'chance', ' Switch', ' fate', ' variation', ' transition', ' refinement', ' Shift', ' express', ' revolution', ' strange', 'update', ' improvement', ' movement', ' shift', ' Random', ' bizarre', ' improbable', 'Chance', ' correction', ' Increase', ' chance', ' alternating', ' transformations', 'fortune', ' stirring', 'morph', ' advance', ' innovate', ' upgrade', ' interchange', 'Revolution', ' sporadic', ' resurgence', ' craft', ' coincidence', ' Chance', 'Change', ' provoking', 'Switch', ' redesign', ' Insight', ' Surprise', 'revolution', ' Turning', ' shifting', ' extension', ' switch', ' forging', ' Alter', 'change', ' Leap', ' Merge', 'Development', ' converting', ' Novel', ' Express', ' Gamble', ' rarity', ' awakening', ' Variable', ' Explore', ' Update', ' Experiment', ' reversal', ' altering', ' moving', 'Rare', ' design', ' Fantasy', ' acceleration', ' Movement', ' diversion', ' innovation', ' incidental', ' Awakening', ' Chaos', ' update', ' Destiny', ' Design', 'switch', ' Evolution', 'Random', ' modifying', ' growth', ' variability', ' adventure', ' Revival', ' evolving', 'Explore', ' transform', ' enhancement', 'Move', ' Rare', ' intrigue', 'Imagine', ' curious', ' chaos', 'moving', ' progressing', ' Correction', ' perverse', ' increase', 'UPDATE', ' Odyssey', ' advancing', ' mutation', ' traverse', ' Morph', ' leap', ' deform', 'Shift', ' shuffle', ' accelerating', 'Correction', ' instability', ' Enchant', ' modify', ' luck', 'Express', ' exceptional', ' enchant', ' dynamics', ' Craft', ' Warp', ' twists', ' regeneration', ' Extension', ' novel', ' experiment', ' exaggeration', ' Innovation', ' Development', ' changed', 'Variable', ' Luck', ' Spectrum', ' encouraging', ' flexible', ' innovative', 'variable', ' Plastic', 'Design', ' Pioneer', ' flourish', ' gamble', ' surprise', ' revolutions', 'luck', ' merge', ' Dynamics', ' transcend', ' amplification', 'design', ' rebirth', 'Moving', ' modulation', ' eclipse', ' occasional', ' resurrection', ' spontaneous', ' disrupt', 'Upgrade', ' Wander', ' twisting', ' variable', ' unpredictable', ' rare', ' erratic', ' pioneer', ' developing', ' Improvement', ' Growth', ' Vortex', 'alter', ' upheaval', ' irregular', ' convert', ' commute', ' surreal', 'Progress', 'Increase', ' move', ' Freak', ' switching', ' inciting', ' FANTASY', ' renewal', ' stir', ' miraculous', ' anomaly', ' accelerate', ' Hybrid', ' transformer', ' Convert', ' reversible', ' fusion', ' inconsistent', 'cycle', ' Rarity', ' plastic', ' cycle', ' Enhancement', ' inflation', ' inexplicable', 'illusion', ' restructuring', ' inception', ' Imagine', ' emerge', ' unexpected', ' rotating', ' Karma', ' progress', ' intermittent', ' inspiring', ' peculiar', ' vortex', ' tumble', ' firing', ' perplex', ' cryptic', ' Opportunity', ' invent', ' vary', ' disguise', ' adaptation', 'express', ' progression', ' Rebirth', ' deviation', ' Move', ' swap', ' accidental', ' explore', ' spin', ' Fog', ' differentiation', ' Eclipse', ' differ', ' magnification', ' elevate', ' unintentional', ' blend', ' modification', ' insight', ' illuminating', ' destiny', ' extraordinary', ' odd', ' Illusion', ' delight', ' spectrum', ' evade', ' urging', ' motivating', ' marching', 'craft', ' Driving', ' arbitrary', ' whipping', ' conversion', ' exploring', ' revival', ' Swift', ' deflect', ' alter', ' uncanny', ' misfortune', ' wander', ' updating', ' exciting', ' experimenting', ' novelty', ' foil', ' ingenuity', ' pivotal', ' alterations', ' proceeding', ' renaissance', ' imagine', 'Transform', ' Enhance', ' labyrinth', ' fluctuations', ' Escape', ' inconsistency', 'move', ' scarce', ' amendment', ' prediction', ' pushing', ' emergence', ' glimpse', ' fleeting', ' innovations', ' evolves', ' limitation', ' quaint', ' reproduction', 'Improve', ' fortunate', ' increment', ' continuation', ' freak', ' induction', ' fortunes', ' boosting', ' imagination', ' irony', ' blending', ' escalation', ' Cascade', 'amorph', ' esoteric', ' reduction', ' Renew', ' unique', ' Uncommon', ' Cycle', ' precarious', 'izarre', ' moves', ' transformative', ' decline', ' anticipate', ' fuse', 'culation', ' disruption', 'formation', ' disrupting', ' Adventure', ' introduction', ' forge', ' imagining', ' illusion', ' Moving', ' turn', ' adapting', ' swapping', ' intriguing', ' altered', ' Increasing', ' growing', ' influencing', ' merging', ' enchantment', 'progress', ' flexibility', ' breakthrough', ' distort', ' refreshing', ' deterioration', ' fantasy', ' inspire', ' chang', ' cycles', 'Craft', ' differential', ' Reduction', ' Tempest', ' revolutionary', ' expanding', ' teleportation', ' surge', ' unpredict', ' opportunity', ' magical', ' shaping', ' distortion', ' propel', ' innate', ' arcane', ' abrupt', ' potential', ' astonishing', ' Curiosity', ' eruption', ' Conversion', ' revers', ' sorcery', ' entropy', ' Genesis', ' transformed', ' upgrading', ' tuning', ' Subtle', ' advent', 'turn', ' genesis', ' catastrophe', ' puzzling', ' experience', ' increasing', 'development', ' enlarge', ' remarkable', ' Alchemist', ' variance', ' visionary', ' driving', ' impulses', ' decrease', 'Adventure', ' revolving', ' trajectory', ' flourishing', ' divergence', ' innov', ' integration', 'improve', ' Develop', ' Alternate', ' accumulating', ' Journey', ' Adventures', ' surprises', ' Stir', ' weaving', ' predicting', ' Experience', ' obscure', ' consolidation', ' grow', ' correct', 'driving', ' dart', ' Reverse', 'intuitive', ' enchanted', ' developments', ' ripple', ' provocation', ' explosion', ' discord', ' transforms', ' passage', ' Spin', ' Voyager', ' Refresh', ' surprising', ' Echo', ' Exchange', 'forge', ' warp', ' ascend', ' inventive', ' ingenious', ' pyramid', ' rotation', ' occult', ' Brave', ' crossover'] Justice [' protection', ' Stability', ' Regulation', ' DEFENSE', 'protection', ' regulation', ' justice', ' determination', ' morality', ' guidance', ' Protection', ' maintenance', ' assessment', ' Accountability', ' Resolution', ' Standards', ' sovereignty', ' Assessment', ' fairness', ' awareness', 'argument', 'review', 'Analysis', ' Order', 'determination', 'awareness', ' justification', 'power', ' authority', ' Judgment', 'Freedom', 'Justice', ' Investigation', ' compliance', ' Power', ' Release', ' modification', ' Responsibility', ' consideration', ' investigation', 'justice', 'balance', ' CONTROL', ' Leadership', ' Respect', ' enhancement', 'knowledge', ' amendment', ' Relief', ' control', ' mediation', 'Management', ' Regulatory', ' LAW', ' stability', 'control', ' integrity', ' power', 'Development', 'truth', ' respect', 'order', ' duty', ' Integrity', ' harmony', ' defense', ' governance', ' confidence', 'regulation', ' Immunity', ' enlightenment', 'Review', ' management', ' Change', ' clarification', ' authorization', ' rationality', 'Standard', ' Doctrine', 'progress', ' progress', ' discussion', ' Dialogue', ' Balance', ' judgment', ' recovery', ' debate', ' preservation', ' remedy', ' Standard', ' development', 'Defense', ' Knowledge', ' Examination', ' Bar', ' Enhancement', ' reform', ' Legislation', 'shield', 'compliance', ' review', ' wisdom', ' conscience', ' equity', ' evaluation', ' coordination', ' Evaluation', 'Control', ' study', ' contemplation', ' Maintenance', 'enforcement', ' legislation', 'security', ' improvement', ' Transparency', 'court', ' balance', 'bench', ' law', ' RELEASE', 'Rule', ' honesty', 'Bar', 'defense', ' validity', ' Truth', ' JUSTICE', ' logic', ' maxim', ' commitment', ' court', ' Enlightenment', ' Restoration', ' Justice', ' truth', ' legitimacy', 'respect', ' observation', 'application', ' freedom', ' Authorization', ' judgement', ' oversight', 'fficiency', ' Law', ' discretion', ' application', ' safeguard', ' liberation', ' Norm', ' Compliance', ' resolution', ' compromise', ' security', 'Bench', 'Discussion', ' Authority', ' doctrine', ' Implementation', ' assurance', ' ratification', ' Administration', ' Freedom', ' examination', ' obligation', ' inquiry', ' Control', ' recognition', ' assignment', ' virtue', ' innovation', 'management', ' Study', ' principle', ' forum', ' Debate', ' advocacy', 'Truth', ' Management', 'Balance', ' Oversight', ' Analysis', ' renewal', ' autonomy', 'resolution', ' Review', ' litigation', 'Reason', ' optimization', ' compensation', ' order', ' promotion', ' Liberation', ' correction', ' restraint', 'essment', ' responsibility', 'analysis', 'freedom', ' support', ' charge', ' defence', ' Decision', 'Solution', ' relief', ' Defense', ' change', 'Commission', 'Conclusion', ' care', ' Improvement', ' agreement', ' Progress', 'Peace', ' bar', ' adjustment', 'Charge', ' certainty', 'forcement', 'Law', ' enactment', ' urgency', 'study', ' Enforcement', ' dignity', ' Measure', ' tribunal', ' encouragement', ' supervision', 'Shield', ' alignment', ' normative', 'rule', ' Judiciary', ' decision', ' revision', ' Application', ' clearance', ' Equality', ' accountability', ' upholding', ' Moral', ' Revival', ' merit', 'development', 'payment', ' conclusion', ' shielding', ' precision', ' negotiation', ' transparency', 'Safety', ' leadership', ' understanding', 'Change', ' acceptance', 'Task', ' knowledge', ' correctness', 'law', ' command', ' rule', ' moral', ' Reason', 'Sense', ' decree', ' restitution', ' shield', 'Dialogue', 'standard', ' Discussion', ' Development', 'guard', ' probe', ' Virtue', ' ethics', ' uphold', ' Argument', ' reimbursement', ' sanction', ' Result', ' righteousness', ' Research', ' validation', ' satisfaction', ' analysis', ' evolution', ' Understanding', ' Discipline', ' conservation', ' Security', ' Reform', ' ordinance', ' Renaissance', 'Correction', ' stabilization', ' Court', ' clarity', 'Power', ' CARE', 'care', ' consensus', 'Order', ' growth', ' Revision', ' COURT', ' implementation', ' equilibrium', ' Assignment', ' verification', ' neutrality', ' Growth', ' settlement', 'confidence', ' research', ' mandate', 'bar', 'duty', ' reconciliation', 'Translation', ' consistency', ' classification', ' Charge', ' Preservation', ' barrier', ' monitoring', ' arbitration', ' regulated', ' measure', 'Court', ' emancipation', ' release', 'tenance', ' Safety', 'Release', ' argument', ' verdict', ' Solution', 'Research', 'task', ' Compensation', ' standard', ' legality', ' jurisdiction', ' dominion', ' significance', ' illumination', 'growth', ' acknowledgment', ' approval', ' prosecution', ' integration', ' Bench', ' controversy', ' Tribunal', ' Promotion', 'Application', ' refinement', 'Guard', ' enforcement', ' Evolution', ' chamber', ' persuasion', ' restoration', ' Equity', 'support', ' discourse', 'Progress', ' violation', ' bench', ' punishment', ' Conclusion', ' outcome', ' guard', ' transformation', ' reflection', ' symmetry', ' norm', ' sanity', ' Guard', ' purity', ' systematic', ' segregation', ' Rule', ' Innovation', ' forgiveness', ' Honor', ' perfection', 'thought', ' Pledge', ' ORDER', ' Probe', ' Action', ' solution', 'safety', 'change', ' classified', ' characterization', ' equality', ' Recovery', ' redress', ' Logic', ' nobility', ' BAR', ' competence', ' conformity', ' Classification', ' goodness', ' repayment', 'Support', ' regulatory', ' Diversity', ' precept', ' proclamation', ' measurement', ' diversity', ' renaissance', ' Insight', ' Wisdom', 'examination', 'release', ' criterion', ' directive', ' standards', 'Norm', ' accordance', 'etermination', ' meditation', 'Measure', ' decency', ' Principle', ' advancement', 'organized', 'charge', ' direction', ' Chamber', ' Task', ' POWER', ' burden', ' vigilance', 'moral', 'sense', ' organized', ' task', ' reversal', ' Precision', ' pledge', ' endorsement', ' provision', ' guarantee', ' sponsorship', ' flexibility', ' Command', ' enrichment', ' guideline', ' insight', ' Barrier', ' honor', ' regard', ' Directive', ' tolerance', ' Constitution', ' administration', ' Awareness', ' Assembly', 'Security', ' interpretation', 'research', ' dialogue', ' Shield', ' inconsistency', ' principled', 'norm', ' Defence', ' ethic', ' stringent', ' citation', ' Meditation', ' statute', 'forum', 'Study', ' scrutiny', ' ethical', ' immunity', ' certification', ' Transformation', ' upkeep', 'command', ' sincerity', ' relevance', ' reason', 'Understanding', ' gauge', ' sense', ' structured', ' regulations', ' viability', ' detachment', ' payment'] Death [' evolution', ' collapse', ' forcing', ' parting', ' deterioration', ' Breaking', ' cessation', ' decline', ' rallying', ' termination', ' start', ' Instruct', ' restart', ' dissolution', ' growth', ' demise', ' diminishing', ' improvement', ' finishing', ' Control', ' creation', ' Origin', ' organizing', ' decay', ' evolving', ' launching', ' killing', 'launch', ' exit', 'Start', ' turning', ' START', ' redesign', ' Launch', ' initiation', ' alteration', ' splitting', ' refinement', ' starting', 'Guide', ' Exit', ' Evolution', ' establishment', ' stopping', ' refresh', 'End', ' dominating', ' closing', ' decrease', 'Change', ' degradation', ' integration', ' progression', ' resisting', ' Refresh', ' extermination', ' culmination', ' decaying', ' End', 'shift', 'start', ' conquering', ' end', ' renew', ' Increase', ' completion', ' Improvement', ' beginning', ' renewal', 'origin', ' development', 'starting', ' instruct', ' control', 'Launch', ' downfall', ' awakening', ' launch', ' retreat', 'Breaking', ' Closing', ' drive', ' boosting', ' END', 'Finish', ' Decay', ' increase', 'inducing', ' inauguration', 'killing', ' shift', ' Conclusion', ' Turning', 'Fighting', ' activation', ' influencing', ' transformation', ' rebirth', ' revolution', ' Resolution', 'force', ' Finish', ' collapsing', ' inducing', 'Exit', ' growing', ' considering', 'fighting', ' elimination', 'termination', 'revolution', ' settling', ' cooling', ' passing', ' demolition', ' breaking', ' Starting', ' deteriorating', ' removal', ' eruption', 'Considering', ' Transition', ' Change', ' strengthening', 'END', ' escalation', ' Start', ' opening', ' expansion', ' finish', 'Spread', 'Control', ' pressure', ' resting', ' falling', ' enhancement', 'growth', ' force', ' destruction', ' transition', ' decreasing', 'last', ' Ruin', ' extinction', ' stimulation', ' birth', ' direct', ' Reign', ' govern', ' dying', ' Order', ' ruin', ' Establishment', ' sinking', 'exit', ' Transformation', 'Order', 'drive', 'Opening', ' reduction', ' Passage', 'development', ' closure', ' descending', 'control', ' CONTROL', ' Last', ' division', ' weighing', 'Force', ' ignition', ' rejecting', ' Command', ' Extension', ' lifting', 'Drive', ' Invasion', ' improving', ' triggering', 'lead', ' exhausting', 'change', ' departure', ' dwindling', ' fracturing', ' increasing', ' Creation', ' driving', 'order', ' transforming', ' shrinking', ' conversion', 'govern', 'foundation', ' debating', ' LAST', ' ORDER', 'establishment', ' extension', ' suppressing', ' Drive', 'Lead', ' shepherd', ' origin', 'build', ' Passing', ' degrading', ' quitting', ' recession', ' resolution', ' Enhancement', ' Fighting', ' disappearance', ' declining', ' evaluating', ' transitioning', ' reducing', ' revolutions', ' revival', ' foundation', ' conclusion', 'Starting', ' flowering', ' opposing', ' passage', ' draining', ' wounding', ' adjustment', ' lead', ' Renew', ' Killing', ' firing', ' Development', ' stirring', 'Increase', ' Growth', 'push', ' Dying', ' destroying', ' guide', ' commanding', ' Genesis', ' regeneration', ' change', ' commencement', ' balancing', ' order', 'direct', ' wrapping', ' Opening', ' contraction', ' halting', ' bruising', ' researching', ' wandering', ' integrating', ' Considering', ' arranging', 'breaking', ' whipping', ' Sleeping', ' Govern', ' relaxation', 'spread', 'Beginning', ' revive', ' outset', ' shifting', 'Ultimate', ' finale', ' variation', 'resolution', 'Command', ' prompting', 'creation', ' rotting', ' shaping', 'Govern', ' drying', ' elevation', ' hurting', ' turnover', ' losing', ' fighting', ' Rebirth', ' amplification', ' pilot', ' overthrow', ' reform', ' budding', ' erosion', 'Conclusion', ' exaggeration', ' Shepherd', ' rebuilding', 'Revolution', 'Birth', ' damaging', ' injuring', ' refusing', ' departing', ' Birth', ' progressing', ' Ultimate', ' evolve', ' emergence', ' refining', ' folding', ' terminating', ' Shift', ' organising', ' scanning', ' guiding', ' prevail', ' rotation', ' beating', ' regulating', ' liberating', ' sleeping', ' freeing', ' Rule', ' Lead', ' investigating', ' displacement', ' mandate', ' optimization', ' worsening', ' studying', ' magnification', ' accumulation', ' onset', ' battling', 'activation', ' modification', ' spread', ' manage', ' observing', ' viewing', ' tearing', ' hiding', ' Pilot', ' ultimate', ' easing', 'Closure', ' raising', 'growing', ' squeezing', ' induction', ' push', ' leading', ' winding', ' empowerment', ' Removal', ' burning', ' retiring', ' fading', ' dropping', 'end', ' minimizing', ' navigate', ' retreating', ' bashing', ' healing', 'Instruct', ' steer', ' inflation', ' breakdown', ' founding', ' smashing', 'Origin', ' DIRECT', ' denying', ' analyzing', ' seizing', 'burning', ' burying', ' imagining', ' protesting', 'rule', ' purge', ' changing', ' wasting', ' reign', ' dictate', ' Falling', ' reflecting', ' Integration', 'Push', 'pressure', ' navigating', ' coordinate', ' guidance', ' lying', ' dissenting', ' contemplating', ' climax', ' last', ' entering', ' arrange', ' provoking', ' Blooming', ' sparking', ' reforming', 'Release', 'lying', ' contracting', ' cancellation', ' combating', ' squeeze', ' leaving', ' convincing', ' introduction', ' enrichment', ' evolves', ' melting', 'consuming', 'Direct', 'luence', ' dominate', ' begging', ' joining', ' ascent', 'guide', ' hitting', ' resurgence', ' assessing', ' eliminating', ' relaxing', ' Engineer', ' laying', ' liberate', ' expanding', ' Force', ' Ending', ' oversee', ' challenging', ' invading', ' overhaul', ' Awakening', ' outbreak', ' resurrect', ' Increasing', ' arguing', ' propulsion', 'Upgrade', ' mutation', 'opening', ' advance', ' upgrading', ' invasion', 'leading', ' limiting', ' Release', ' organise', ' shutdown', ' conservation', ' sequencing', ' graduation', ' Beginning', 'command', ' turn', 'adding', ' fall', ' divergence', ' Recession', 'Begin', ' shedding', ' weakening', ' resignation', ' exchange', ' impacting', ' organize', ' dividing', ' developing', ' calming', ' elevate', ' shooting', ' ordering', ' revol', ' erupt', ' winning', ' surrender', ' loss', ' promoting', ' Origins', ' diminish', ' regulation', ' overturn', 'reference', ' drowning', ' doubling', ' waning', ' encouraging', ' governing', ' blowing', ' oxidation', ' arresting', ' Decision', ' Spread', ' optimizing' The Tower [' releasing', ' launching', ' awakening', ' moving', ' discovery', 'Breaking', ' promoting', ' freeing', ' changing', ' destruction', ' breaking', ' disrupting', ' spreading', 'breaking', ' creation', ' movement', ' Action', ' threatening', ' shaping', ' Working', ' transforming', ' disruption', ' organizing', ' development', ' understanding', 'moving', ' revealing', ' enabling', ' Breaking', ' NEWS', ' revelation', ' advancement', ' release', ' destroying', ' collapse', ' beginning', ' Change', ' initiation', ' enlightenment', ' Founding', 'Job', ' strengthening', ' Increasing', ' announcing', ' restructuring', ' Development', ' change', ' working', ' breakthrough', ' shifting', 'shock', ' sustaining', ' rescue', ' supporting', 'Development', ' trade', ' lowering', ' Revelation', ' renewal', 'Action', ' shock', ' Knowledge', ' demonstrating', ' discovering', ' Understanding', ' demonstration', 'Spread', ' reinforcement', ' restoring', ' ACTION', ' leaving', 'changing', ' warning', ' Revolution', ' Report', ' shattering', ' warming', 'development', ' aiding', ' Awakening', ' progress', ' disclosure', ' revolution', ' reinforcing', ' flare', ' announcement', ' flash', ' evolution', ' activation', ' Release', ' accelerating', ' impact', 'News', ' signing', 'flight', ' maximizing', ' Conversion', ' motion', ' Resurrection', ' collapsing', ' Enhancement', ' Operation', ' RELEASE', ' Beginning', ' realization', ' prompting', ' action', 'change', ' ending', ' Job', ' smashing', 'knowledge', ' proposing', ' reforming', ' transformation', 'Activity', ' completion', ' rebirth', ' Chaos', 'Trade', ' Movement', 'creation', ' Birth', ' explosion', ' pressing', ' Progress', ' operating', ' activating', ' Flight', ' betrayal', ' Transformation', ' liberation', ' Ending', ' Trade', ' propagation', ' manipulating', ' forcing', ' driving', ' switching', ' increasing', ' relocation', ' crushing', 'Launch', ' identifying', ' confirmation', ' growth', ' proclaiming', ' Exodus', ' rocking', 'Shock', ' disclosing', ' enhancement', ' confirming', ' preserving', ' reducing', 'Origin', ' Moving', ' firing', ' Outbreak', ' progressing', ' transporting', ' refreshing', 'Moving', 'increasing', ' Spread', ' leading', ' rupture', 'Work', ' unveiling', ' turning', ' maintaining', 'Understanding', ' Growth', ' abandonment', ' manipulation', ' coordinating', ' launch', ' Replacement', ' splitting', ' rallying', ' News', ' Animation', ' promise', ' reversal', 'casting', ' adjusting', ' separating', 'support', 'Flight', ' maintenance', ' upheaval', ' empowering', ' modifying', ' advancing', ' Impact', ' founding', ' Freedom', ' induction', ' triggering', ' extension', 'working', ' dismantling', ' Declaration', ' causing', ' replacement', ' arranging', ' managing', ' birth', ' escaping', ' decreasing', 'start', ' exchange', ' promotion', ' elimination', 'release', ' Disclosure', 'Change', ' realizing', ' information', ' adjustment', ' crumbling', ' fracturing', ' Emergency', ' dying', ' Liberation', ' Shock', ' eruption', ' urging', ' upsetting', ' emergency', ' Organization', ' Exchange', 'Start', ' finding', 'action', ' directing', 'motion', ' Extension', 'duty', ' motivating', ' Reform', ' catastrophe', 'blow', ' departure', ' dissolution', ' spread', ' Exposure', ' Shift', ' waving', ' Promise', ' Work', 'Progress', 'termination', 'WORK', ' cracking', 'origin', ' guiding', ' Changing', ' acceptance', 'Release', 'flash', ' parting', ' arming', ' snapping', ' plotting', ' tearing', 'Revolution', ' enhancing', ' flashing', ' Destruction', ' emergence', ' detecting', ' whipping', ' recognizing', ' Blow', 'activation', ' endorsement', ' chaos', ' continuation', 'akening', ' regeneration', ' Operating', ' blow', ' Support', ' altering', ' tuning', ' stimulation', ' Information', ' expanding', ' sinking', ' boosting', ' predicting', ' Planning', 'awareness', ' sparking', ' comprehension', 'Birth', ' pushing', ' alteration', 'ACTION', ' exchanging', ' annihilation', ' securing', ' flight', ' swinging', ' Insight', ' refining', ' starting', ' grinding', ' dawn', ' task', ' Driving', ' upholding', ' weakening', ' encouraging', ' Ruin', ' insight', ' mobilization', ' Rescue', ' burning', ' inspiring', ' conquering', ' blowing', ' delaying', ' fixing', ' ruining', ' proclamation', ' Crisis', ' stirring', ' electing', ' activity', ' Promotion', ' rebuilding', ' Swap', ' escape', ' acknowledgment', ' Invention', 'Operation', ' backing', ' Salvation', ' building', ' Supporting', ' knowledge', 'Working', ' opening', ' ruin', ' modulation', ' helping', ' focusing', ' Genesis', 'spread', 'NEWS', ' terminating', ' shaking', ' controlling', ' acceleration', ' ascending', ' broadcasting', ' Leading', ' Dawn', 'impact', ' Finding', ' selecting', ' Escape', ' stimulating', ' advocacy', ' deepening', ' Starting', ' denouncing', ' Wisdom', ' escalation', ' Expansion', ' suppressing', 'Animation', ' growing', 'Finding', 'Report', ' redesign', ' operation', ' Broadcast', ' Publication', ' breakdown', ' expansion', ' planning', ' correcting', ' exodus', ' initiating', ' duty', ' degrading', ' Announce', ' declaring', ' achieving', 'activity', ' Discovery', ' consciousness', ' ignition', ' favoring', ' Function', 'Changing', ' extracting', ' Activity', ' Flare', ' Start', ' retreat', ' Switch', 'growth', ' termination', ' acknowledging', ' choosing', ' formation', 'Task', 'Beginning', ' start', ' support', ' kicking', ' genesis', ' assurance', ' Failure', ' bursting', ' Surprise', ' onset', 'Support', ' origin', ' division', ' advising', ' swap', ' outbreak', ' animation', ' pitching', 'revolution', ' publication', ' developing', ' Handling', 'switch', ' calming', 'opening', ' abandoning', ' Building', ' Broadcasting', ' diminishing', 'ending', ' provoking', 'function', ' facilitating', ' amplification', ' grounding', ' Spark', ' liberating', ' Restoration', ' drifting', 'task', ' steering', ' Managing', 'finding', ' endorsing', ' stepping', ' work', ' locating', ' withdrawal', ' responsibility', ' extermination', ' raising', ' REPORT', ' assisting', ' completing', ' substitution', ' vanishing', ' disappearance', ' benefiting', ' Task', ' conversion', ' varying', ' highlighting', ' Leaving', ' resurrection', ' extinction', ' stretching', ' job', ' Casting', ' melting', ' Closing', ' Saving', ' Motion', ' admitting', ' exploding', ' preparing', ' Apocalypse', ' calling', ' overhaul', ' deployment', ' recognition', ' preferring', ' leveling', ' rescuing', ' Explosion', ' Perception', ' impacting', ' allowing', ' ratification', ' commencement'] The Sun [' sunrise', ' Star', ' Sky', ' eruption', ' Celebration', ' Wind', ' enthusiasm', 'Joy', ' Divine', ' Faith', ' Sunrise', ' explosion', ' glow', ' Debate', ' miracle', ' victory', ' Storm', ' unity', 'Star', ' belief', 'Storm', ' battle', ' Blast', 'Wave', ' Joy', ' Wave', ' blast', ' triumph', ' Crown', ' Orbit', ' Fire', ' End', ' debate', ' Release', ' Victory', ' sensation', ' joy', ' Sunshine', ' Circle', ' rise', ' Ascension', ' turn', 'Unity', ' energy', 'Spirit', ' wave', ' twist', ' star', ' Unity', ' harmony', ' jump', ' peace', ' event', 'stop', ' pleasure', 'Wind', ' stability', 'End', ' acceptance', ' Beam', ' summit', ' END', ' demonstration', ' attraction', ' flood', 'Sky', ' peak', ' light', ' devotion', ' Light', ' passion', 'confidence', 'peace', ' sky', ' Event', 'Shock', ' Rise', 'pause', 'fire', 'spirit', ' dance', ' Combat', 'shine', ' Stream', ' leap', ' enjoyment', 'battle', ' tide', ' twilight', ' certainty', ' Sprint', ' Truth', ' collision', ' Enhancement', 'capacity', ' Paradise', ' Pulse', ' sunshine', ' Peace', ' Shock', ' Protection', 'Combat', ' Conference', ' silence', ' explode', 'rush', ' shine', ' clash', ' Conversation', ' wind', ' conflict', ' burst', ' Amazing', ' Explosion', ' bow', ' glide', 'Dash', 'Battle', ' Reality', ' discovery', ' Ability', ' incredible', 'Peace', ' break', ' outburst', ' festival', ' Passion', 'burst', ' trance', 'Faith', ' Dawn', ' sunset', ' flame', ' pulse', ' spirit', ' Calm', ' chat', ' happiness', ' flare', ' ring', ' Drift', ' stream', ' divine', ' Merge', ' end', ' Festival', ' Rainbow', ' Triumph', 'Stop', 'storm', ' Mastery', 'performance', ' Fact', ' blaze', ' quake', ' Talk', ' celebration', 'attack', ' bash', ' enhancement', ' Mix', 'unity', 'ulse', ' Glow', ' storm', ' Pact', ' zeal', ' Adventure', 'Speed', ' Dialogue', ' Harmony', ' capacity', ' crest', ' circle', 'safety', 'Spring', ' beam', ' float', ' whirlwind', 'circle', ' breakthrough', ' Achievement', ' Morning', ' burn', 'Show', ' Battle', ' applause', ' heaven', ' Twist', ' Heaven', ' climax', ' display', 'wind', ' truth', ' fire', 'Release', 'Beat', 'Increase', ' success', 'END', ' Silence', ' Refuge', 'Trust', ' surge', ' Float', ' confidence', ' Beat', ' ascent', ' delight', 'Discussion', ' Process', 'Fire', 'Pause', 'Assembly', ' Incredible', 'show', 'blast', 'Sol', 'Ring', ' warmth', ' halt', 'ray', ' truce', ' vault', ' Sunset', ' crown', ' STAR', ' faith', ' Ball', 'shock', ' Safety', ' blasts', 'FUN', ' Conflict', ' Charge', 'turn', ' expansion', 'Legend', ' war', ' Lift', ' Miracle', 'Fight', 'Truth', ' Race', ' Bash', 'iscovery', ' spark', ' Stop', ' celestial', ' Holy', ' ecstasy', 'heat', 'bash', ' Discovery', ' charisma', ' evening', 'Bow', 'peak', ' steadfast', ' Exhibition', ' crash', ' talk', ' speed', 'Production', ' avalanche', ' Spring', ' Apollo', ' shock', ' world', 'dash', 'energy', ' Flame', 'Ball', ' magnificent', ' height', ' Pledge', 'Adventure', 'iphany', 'Mix', ' rally', ' paradise', 'orbit', ' relief', ' Laugh', ' Dash', ' spiritual', ' combat', ' elevation', ' cascade', ' Shine', ' Belief', 'STAR', ' stop', 'Race', ' dawn', ' Liberty', 'Performance', ' Summit', ' explosions', ' tornado', ' Peak', ' realization', ' Discussion', ' discourse', ' awakening', ' spring', ' Attack', ' stir', ' Gathering', 'production', 'Talk', ' Promise', ' gathering', 'Rush', 'haven', ' excitement', 'termination', 'ball', 'Strength', 'World', ' WORLD', ' propagation', ' pledge', 'beat', 'jump', ' reality', ' START', 'beam', ' vibration', ' Warfare', ' SNAP', ' laugh', ' brilliance', ' Earthquake', ' Strike', ' Tornado', ' transparency', 'Tip', ' Invention', ' Fun', ' heavenly', ' astonishing', ' TOP', 'Chat', ' tip', 'Safety', ' commitment', 'ellation', ' rush', 'faith', ' conversation', 'break', ' release', ' workshop', 'Light', ' promise', 'start', ' Ring', ' conquest', ' Moon', ' shudder', ' Break', ' victories', ' breathtaking', ' Heavenly', ' exceptional', ' Meeting', 'Wow', ' intensity', ' Jet', ' eclipse', ' sun', 'Jump', ' Jupiter', 'Animation', ' termination', ' Comfort', ' process', ' Height', ' collide', 'rest', ' Flood', ' adventure', ' shield', 'Amazing', ' Revelation', 'creation', ' pinnacle', 'defense', ' sacred', ' astounding', 'light', 'star', ' rainbow', ' pause', 'success', ' ball', ' universe', ' Cosmos', ' unbelievable', 'Break', 'Snap', ' FIRE', ' race', ' production', 'fight', ' struggle', ' cessation', ' Blaze', ' Shield', 'Charge', ' Chat', ' Rest', 'fly', ' comfort', 'spin', ' bliss', ' Rhythm', 'WIND', ' Awakening', 'truth', ' achievement', ' Production', ' Ascend', ' Saturn', 'Grand', ' sublime', ' REST', ' Fly', ' Haven', ' World', 'Energy', 'Heat', 'Bob', ' mix', 'holy', ' Daylight', ' Solar', ' dedication', ' sacrament', ' beat', ' Transparency', ' Trust', ' Spirit', ' conference', ' Fight', ' Burst', ' Tempest', 'Rest', ' dusk', ' illumination', ' sanctuary', ' Assembly', ' miraculous', 'Solar', ' Clash', 'release', 'fact', ' resurrection', ' cheer', ' culmination', ' Animation', ' Shelter', ' skill', ' splendid', ' Show', ' Bounce', ' Stability', 'Vision', ' swirl', ' elegance', ' Burning', ' Success', ' Sun', ' alert', ' fight', ' increase', ' Top', ' planet', 'Fact', ' morning', 'height', ' Refresh', ' Defense', 'display', ' recreation', ' flow', ' cosmos', 'wave', ' capability', ' Sanctuary', ' adherence', ' Spark', ' fact', ' Skill', ' Eclipse', 'Planet', ' fortress', 'Defense', 'comfort'] The Hanged Man [' transforming', ' contemplating', ' halting', ' releasing', ' strength', ' determining', 'strength', ' initiating', 'Reporting', ' delaying', ' courage', ' finishing', ' considering', ' absorbing', ' changing', ' spirit', ' stopping', ' exploring', ' identifying', ' stop', ' resolving', ' analyzing', ' freezing', ' staying', ' advising', ' halt', ' preparing', ' issuing', ' deciding', ' reporting', ' breaking', ' yielding', ' resting', ' producing', ' understanding', ' holding', ' altering', ' quitting', ' examining', ' launching', 'Strength', ' establishing', 'Stop', ' interpreting', ' inducing', ' calculating', ' predicting', ' turning', ' easing', ' wrapping', ' awareness', ' declaring', ' restoring', ' snapping', ' generating', ' committing', ' backing', 'itating', ' heart', ' fixing', ' maintaining', ' securing', ' withdrawing', ' relaxing', ' pause', ' accepting', ' Keeping', ' reversing', ' lifting', ' determination', ' saving', ' slowing', ' specifying', 'Spirit', ' rest', ' reducing', ' remembering', 'Understanding', ' transferring', ' grounding', ' suspending', ' reflecting', ' Changing', ' accelerating', ' Understanding', ' outlining', ' refreshing', ' anticipating', ' organising', ' weakening', 'Beginning', ' bursting', ' departing', 'walking', ' firing', ' popping', ' reversal', ' capturing', ' flexibility', ' keeping', 'Heart', ' delay', ' initiation', ' separating', ' evaluating', ' setting', 'stop', ' splitting', ' expanding', ' Release', 'Fire', ' exchanging', ' assessing', ' organizing', ' turnaround', ' accumulating', 'changing', 'Zip', 'Starting', 'Changing', ' gathering', ' rallying', ' lowering', 'Setting', ' stirring', ' wait', 'Hold', ' slipping', ' vitality', 'Wait', 'setting', ' release', ' thinking', ' trapping', ' change', ' defining', 'Keeping', ' debating', ' inspiring', ' awakening', ' endurance', 'Calling', ' clearing', 'fire', ' assembling', ' converting', ' kicking', ' preventing', ' intensity', ' returning', ' judging', ' stimulating', 'Finish', 'uring', ' Fire', ' vision', ' cleansing', ' jumping', ' Thinking', ' observing', 'iasm', ' break', ' shift', 'pause', ' storing', 'keeping', ' discouraging', 'break', ' toughness', ' extracting', ' repairing', ' resisting', ' revealing', ' developing', ' whipping', ' permitting', ' recognizing', ' Change', ' patience', ' spirits', ' justifying', 'Production', ' believing', ' embracing', 'Shift', ' sustaining', ' ripping', ' pulling', ' estimating', ' skipping', ' repealing', ' optimism', ' influencing', ' strengthening', ' transformation', ' enduring', 'Release', ' beginning', ' passing', ' restructuring', ' Shift', ' giving', ' terminating', ' educating', ' watching', 'Change', ' pulse', ' evolving', ' abandoning', ' questioning', ' contemplation', 'thinking', ' sweeping', ' transform', ' culminating', ' monitoring', ' touching', ' constructing', 'stay', ' comparing', ' integrating', ' announcing', ' wisdom', ' resolve', ' Vision', ' Spirit', ' swallowing', ' spinning', ' projecting', ' removing', 'giving', ' Stop', ' shaking', 'breaking', ' crashing', ' distinguishing', ' leaving', ' studying', ' Reporting', 'delay', ' liberating', ' submitting', 'Pause', ' disclosing', ' animation', ' closing', ' curing', ' mindfulness', 'ifying', ' timing', ' leaping', ' imagination', 'Vision', ' fire', 'inducing', ' directing', ' gripping', ' activating', 'Considering', ' learning', ' bravery', ' slashing', ' persistence', ' facilitating', ' Stay', ' retaining', ' timeout', ' measuring', 'zing', ' moratorium', ' grit', ' delivering', ' Animation', ' pressing', 'calling', ' detecting', 'spirit', ' rotating', ' calling', ' Finish', ' encouraging', ' proposing', ' searching', ' composing', ' operating', 'release', ' verifying', ' guiding', ' Zip', ' lull', ' walking', ' stretching', ' clarity', ' concluding', ' rebuilding', ' finish', ' waiting', ' avoiding', ' concentrating', ' calming', ' Walking', ' cessation', ' resignation', ' starting', ' arresting', ' investigating', ' tearing', 'Animation', ' arranging', ' selecting', ' stepping', ' Wait', ' Heart', ' experiencing', ' opening', ' protecting', ' STOP', ' revelation', ' swapping', ' enthusiasm', ' performing', ' passion', 'isdom', ' surrender', ' separation', 'shift', ' commanding', 'Following', 'ooting', ' commitment', ' respecting', ' conquering', ' sparking', ' acceptance', ' shooting', ' refresh', ' perspective', ' modifying', ' FIRE', ' circling', ' Hold', ' hiatus', ' eliminating', ' swirling', ' dissolve', ' planning', ' zip', ' managing', ' freeing', ' squeezing', ' decoding', ' Production', ' mapping', ' heroism', ' Passing', ' Considering', ' urging', ' stamina', ' readiness', ' wondering', ' warmth', ' cleaning', ' making', ' Shooting', ' motivating', ' enabling', ' Transformation', ' implementing', ' flipping', ' orientation', ' unveiling', ' mindset', ' provoking', 'heart', ' trading', ' interpretation', ' dismantling', ' administering', ' wiping', ' realization', ' Giving', 'change', ' tossing', ' impacting', 'awareness', ' invoking', ' gazing', ' insight', ' blasting', ' exposing', ' positioning', 'uilding', ' Break', ' shaping', ' confirming', ' meditation', ' letting', ' killing', ' hold', ' seizing', 'rest', 'determination', ' catching', ' liberation', ' Beginning', ' assembly', 'vision', ' pauses', ' circulating', ' backbone', ' durability', ' frenzy', ' sacrificing', ' regeneration', ' triggering', ' dividing', 'intensity', ' freeze', ' perception', ' ending', ' decreasing', ' yield', ' shrinking', ' orbiting', ' REST', ' describing', ' shifting', ' distortion', ' Delay', 'reporting', ' tightening', ' addressing', ' promoting', ' guaranteeing', ' sending', ' Setting', ' demonstrating', 'termination', ' sacrifice', ' preserving', ' defending', ' documenting', ' stay', ' Starting', ' combining', ' recovering', ' Strength', ' obtaining', ' limbo', 'wait', ' moving', ' dreaming', ' varying', 'freedom', ' replacing', ' bustling', ' rushing', ' adjusting', ' complying', ' optimizing', ' smashing', ' correcting', ' innovation', ' translating', ' Rest', 'production', ' transmitting', ' writing', ' realizing', 'Timeout', ' burning', ' tipping', ' sticking', ' controlling', ' imagining', ' settling', ' dropping', ' cutting', ' resilience', 'Rest', 'mitting', ' illuminating', ' halted', ' marking', ' collapsing', ' probing', ' recognition', 'ifting', ' engaging', 'Giving', ' reaching', ' conducting', ' inspecting', ' escaping', ' dodging', ' preaching', ' proclaiming', ' shutting', ' coordinating', ' relaxation'] The Devil [' betrayal', ' destruction', 'monster', 'Rogue', ' demon', ' monster', ' grasp', ' illusion', ' trap', ' corruption', ' shadow', 'violence', ' burn', ' snake', ' attack', ' hatred', ' damage', ' threat', ' madness', ' alienation', ' conflict', ' destroy', ' revolt', ' shame', ' lust', ' ruin', ' despair', ' killer', 'block', ' liberation', ' cloud', 'Abyss', ' demons', ' fault', ' protect', ' power', 'Cross', 'Revolution', ' Trap', ' deceit', 'protect', 'Beast', 'Power', ' loss', ' limit', 'cross', ' corrupt', 'Shadow', ' exclusion', ' harm', ' Mix', ' rebellion', ' denounce', ' rogue', ' distrust', ' foe', ' destructive', ' violence', ' overwhelm', ' collapse', ' devil', ' revolution', 'illusion', ' Storm', 'attack', ' deception', ' exile', ' pinch', ' tear', ' whip', ' crush', ' frenzy', 'damage', ' turmoil', ' confusion', 'Demon', ' Monster', ' dream', ' shock', ' bitterness', ' Limit', ' rival', 'corruption', ' blockade', ' scourge', ' enemies', ' anger', 'killer', ' Emergency', ' disruption', ' abyss', ' foul', ' conceal', ' disdain', 'Attack', 'magic', ' crisis', 'Magic', ' betrayed', ' contradictions', 'shadow', ' blur', ' surprise', ' enemy', ' Magic', ' Illusion', ' force', ' Damage', 'Mix', ' disaster', ' repression', ' twist', ' manipulation', ' trick', ' smear', 'loss', ' misery', ' defeat', ' contradiction', ' rage', ' rescue', 'cheat', 'lust', ' sabotage', ' Shadow', ' friction', 'Ghost', ' disgrace', 'hidden', ' elimination', ' catastrophe', 'Destroy', ' Beast', ' Power', 'Damage', ' shred', ' devastation', 'Protect', ' release', ' Demon', ' intrigue', ' Lust', ' delusion', ' peril', ' Extreme', ' defect', ' pride', ' storm', ' despise', ' grasping', ' Abyss', ' mayhem', 'cloud', ' fear', 'Cloud', ' tension', ' insurrection', 'threat', ' thief', 'harm', ' rupture', ' hate', ' deceive', 'Burn', ' failure', ' slander', ' magic', 'Block', ' hide', ' stir', ' ghost', 'trap', ' curse', ' hazard', ' emergency', 'burn', ' betray', ' menace', ' suppression', ' swamp', ' rot', ' challenge', ' blister', ' fracture', ' Violence', ' Attack', ' integration', 'Snake', ' breakdown', ' disillusion', 'power', 'mix', ' beast', ' level', ' wedge', ' disguise', ' distortion', ' expel', ' menacing', ' cynicism', ' explosion', ' murder', ' oppression', ' perverse', ' emotion', ' Dream', ' grasped', 'ggle', ' Ruin', 'Dream', ' extreme', ' opposition', ' frustration', ' grip', ' mistrust', 'revolution', ' Snake', 'limit', 'demon', ' fury', 'hide', ' cheat', ' obstacle', ' Loss', ' rend', ' paradox', ' obscure', ' disgust', ' delay', 'hazard', ' detachment', 'Edge', ' treacherous', ' fray', ' rivalry', ' dirty', ' instability', ' Destroy', ' fusion', ' Failure', 'rot', ' Block', ' wrap', ' rebel', ' protection', ' provoke', ' corrupted', ' Rogue', ' chaos', ' ambush', ' havoc', ' addiction', ' struggle', ' control', ' lash', 'Failure', ' Cloud', ' shatter', 'shock', ' tragedy', 'enemy', ' twisted', ' offset', ' clasp', ' trouble', ' redeem', 'delay', ' sting', ' monstrous', ' decay', ' dissolution', 'rend', ' Edge', ' Harm', ' capture', ' aversion', ' malice', ' devoid', 'Limit', ' Cross', 'dirty', ' LEVEL', ' resistance', ' mix', 'destroy', ' conflicts', ' bomb', ' consume', ' clutch', ' clash', 'Extreme', ' deceived', ' disappoint', ' scandal', 'Released', ' CONTROL', ' temptation', ' treacher', ' spite', ' distraction', ' combine', ' bind', ' POWER', ' wrath', ' hidden', ' destroyer', ' Ghost', 'Level', ' Rend', ' hurt', ' weapon', ' veil', ' stun', ' carnage', ' exhaustion', ' fraud', ' resentment', ' liberate', ' Hide', ' Control', ' submerged', ' fuse', ' quake', ' cross', ' swallow', ' Threat', ' doom', ' brink', 'dream', ' hoax', ' villain', ' crack', ' depleted', 'protection', ' clinging', ' gloom', ' Destruction', ' Released', 'Emergency', 'Rot', ' kill', ' wrapped', ' complication', 'ghost', ' libel', ' humiliation', ' destroys', ' violation', ' surrender', ' drown', ' indifference', ' nightmare', ' wear', ' contempt', ' separation', ' worry', ' exiled', 'released', ' ignite', ' agony', ' taboo', ' incite', ' murderer', ' negativity', ' stain', ' neglect', ' dominance', ' integrate', ' stress', 'level', ' Mask', ' vice', ' Hazard', ' horror', ' Revolution', ' scorn', 'Force', ' redemption', ' blow', ' Shock', 'control', 'assault', ' compromise', ' obstacles', ' Hidden', ' embrace', ' Corruption', ' depression', ' disorder', ' Killer', ' Protect', ' assault', ' expelled', ' injury', ' desert', ' division', ' released', ' pressure', 'extreme', ' hostility', ' demonic', ' upheaval', ' Struggle', ' opponent', ' truth', ' mercenary', ' threats', ' dust', ' defiance', 'Lie', ' expulsion', ' difficulty', ' hunger', ' camouflage', ' terror', ' drag', ' fantasy', ' distress', ' parody', ' attacks', ' adversary', ' condemn', ' overthrow', ' gain', ' cunning', ' revenge', ' Enemy', ' suffering', ' killers', ' antagon', ' phantom', ' Release', ' desperation', ' fog', ' mischief', ' poison', ' void', ' anguish', 'rust', 'Shock', 'storm', ' skepticism', ' barrier', 'Monster', ' steal', ' risk', ' edge', ' degradation', ' dash', ' shroud', ' bondage', ' obsession', ' Protection', ' Paradox', ' clouds', ' tyranny', ' hold', ' occult', ' sacrifice', ' waste', ' disparity', ' hazards', ' plunder', 'pressure', ' mistake', ' misuse', ' fool', ' Resistance', 'Release', ' taunt', 'edge', ' squeeze', ' exploitation', ' smash', ' downfall', ' Conflict', ' Swamp', ' discomfort', ' stability', ' abandon', ' labyrinth', ' strain', ' emptiness', ' strike', ' rift', ' crowd', ' cruelty', ' shake', ' rebellious', ' delight', ' Exile', ' stigma', 'danger', ' blight', 'rifice', 'defense', ' Decay', ' monsters', ' detach', ' Shame'] ^ These were the same ten tokens used in experiments in earlier, related GPT-J embedding space research: Mapping the semantic void II: Above, below and between token embeddings and Mapping the semantic void III: Exploring neighbourhoods. ^ The switch from American to British spelling of "colo(u)r" was unexpected here. ^ It's worth stopping to consider that there is an identifiable point in GPT-J embedding space that, were a token to live there, the model would identify it as a word which meant "the act of eating a piece of broccoli while thinking about the fact that you are eating broccoli". ^ It's possible the rescaling and considering larger norms might have, in some cases, produced more definition trees closer in spirit to the traditional tarot associations. Unit norm was used as the obvious starting point, and other norms were not tested due to the computationally demanding nature of producing the definition trees. Another possible approach would involve offsetting by the token embedding centroid.
2024-05-08
https://www.lesswrong.com/posts/8gCsRPxvXyAZJLcwd/gdp-per-capita-in-2050
8gCsRPxvXyAZJLcwd
GDP per capita in 2050
hauke-hillebrandt
null
2024-05-06
https://www.lesswrong.com/posts/j6mi9ubiKLnGsDcLE/reviewing-the-structure-of-current-ai-regulations
j6mi9ubiKLnGsDcLE
Reviewing the Structure of Current AI Regulations
deric-cheng
This report is one in a series of ~10 posts comprising a 2024 State of the AI Regulatory Landscape Review, conducted by the Governance Recommendations Research Program at Convergence Analysis. Each post will cover a specific domain of AI governance (such as incident reporting, safety evals, model registries, and more). We’ll provide an overview of existing regulations, focusing on the US, EU, and China as the leading governmental bodies currently developing AI legislation. Additionally, we’ll discuss the relevant context behind each domain and conduct a short analysis. This series is intended to be a primer for policymakers, researchers, and individuals seeking to develop a high-level overview of the current AI governance space. We’ll publish individual posts on our website and release a comprehensive report at the end of this series. In this post, we’ll discuss a multifaceted, high-level topic: How are current AI regulatory policies structured, and what are the advantages and disadvantages of their choices? By focusing on the existing regulatory choices of the EU, US, and China, we’ll compare and contrast key decisions in terms of classifying AI models and the organization of existing AI governance structures. What are possible approaches to classify AI systems for governance? Before passing any regulations, governments must answer for themselves several challenging, interrelated questions to lay the groundwork for their regulatory strategy: How will we classify AI systems - by their capabilities, amount of compute, domain of application, risk level, underlying architecture, or otherwise?Who will these regulations apply to – organizations, individuals, or companies? Who will possess legal responsibility for harm generated by AI systems - the AI lab developing the core model, the enterprise business deploying it, or the customer using it?What is the correct tradeoff between encouraging development & innovation and mitigating risks from AI systems? Complicating the matter, even precisely defining what is an AI system is challenging: as a field, AI today encompasses many different forms of algorithms and structures. You’ll find overlapping and occasionally conflicting definitions on what constitutes “models”, “algorithms”, “AI”, “ML”, and more. In particular, the latest wave of foundational large-language models (LLMs such as ChatGPT) have varying names under different governance structures and contexts, such as “general-purpose AI (GPAI)”, “dual-use foundation models”, “frontier AI models”, or simply “generative AI”. For the purposes of this review, we’ll rely on an extremely broad definition of AI systems from IBM: “A program that has been trained on a set of data to recognize certain patterns or make certain decisions without further human intervention.” There are various viable approaches to classifying the development of AI models or algorithms into “regulatory boxes”. Many of these approaches may overlap with each other, or be layered to form a comprehensive, effective governance strategy. We’ll discuss some of them below: Classifying AI models by application: This approach focuses on classifying and regulating AI models based on the intended domain of usage. For instance, AI models for improving healthcare for patients should fall under HIPAA regulations, AI models for filtering resumes should be protected from discrimination, and so on. Though this is an intuitive strategy that is well supported by existing precedent regulation, it can have substantial gaps for novel uses of AI models that do not fit into existing applications. This approach is facing significant challenges with the development of foundational LLMs, which can be effective tools in a variety of domains simultaneously. As a result, new regulatory frameworks often carve out a specific set of policies targeting these models separately, as was the case with the 2022 modifications to the EU AI Act defining “general-purpose AI (GPAI)”.Classifying AI models by compute: This approach focuses primarily on the amount of computational power (often called “compute”) required to train or develop AI models. In practice, the capabilities of foundational AI models strongly correspond to the amount of training data and computational power used to generate the model, though this is a metric that is heavily impacted by technical research, algorithmic design, and data quality. Such an approach regards the models with the most compute as the most likely to cause harm, and therefore the most important to regulate.Classifying AI models by risk level: This approach focuses on classifying AI models by the risk that they may pose to society, and applying regulations based on the measured level of risk. This may directly overlap with the previous strategies. Measuring this risk can be done in a number of ways: A proposed governance framework (Responsible Scaling Policies) by Anthropic suggests that organizations should measure specific dangerous capabilities of their AI models, and impose limitations to development (either independently or via governmental regulation) based on their results.As in the EU AI Act, certain applications of AI models may inherently be deemed high-risk, and therefore subject to a separate set of regulations.As in the US Executive Order, a certain threshold of computational power of AI models may be deemed risky enough to regulate.Considering AI models to be “algorithms”: As is currently the case in China, AI models may be considered just a subclass of “algorithms”, which more broadly includes computer programs such as recommendation algorithms, translation features, and more. By regulating algorithms as a whole, governments may include AI model governance as a component of a broader package of legislation around modern digital technology. Certain regulatory approaches may involve a combination of two or more of these classifications. For example, the US Executive Order identifies a lower compute threshold for mandatory reporting for models trained on biological data, combining compute-level and application-level classifications. Point of Regulation Closely tied to this set of considerations is the concept of point of regulation – where in the supply chain governments decide to target their policies and requirements. Governments must identify the most effective regulatory approaches to achieve their objectives, considering factors such as their level of influence and the ease of enforcement at the selected point. The way AI systems are classified under a government's regulatory framework directly informs the methods they employ for regulation. That is, the classification strategy and the point of regulation are interdependent decisions that shape a government’s overall regulatory strategy for AI. As an example: As American companies hold a 95% share of the high-end AI chip market, the US has found it effective to regulate physical exports of these chips to minimize Chinese access in pursuit of its geopolitical goals. As such, its primary point of regulation at this time targets high-end AI chip vendors, distributors, and exporters of AI chips. In contrast, it has little to no binding regulation regarding the design, sharing, or commercialization of AI models such as ChatGPT at this time.Conversely, the EU has chosen to concentrate its binding regulation around regulating access to AI models, as their main priority is protecting the individual rights of citizens using these models. As such, it focuses on strict requirements regarding the behavior, transparency requirements, and reporting for AI models, to be met by the organizations publishing such models for commercial use. What are important tradeoffs when designing regulatory structures for AI governance? How should a government structure its AI governance, and what factors might it depend on? We’ll mention several relevant considerations that will be further discussed regarding specific government’s approach to legislation. Centralized vs. Decentralized Enforcement In a centralized AI governance system, a single agency or regulatory body may be responsible for implementing, monitoring, and enforcing legislation. Such a body may be able to operate more efficiently by consolidating technical expertise, resources, and jurisdiction. For example, a single agency could coordinate more easily with AI labs to design a single framework for regulating multi-functional LLMs, or be able to better fund technically complex safety evaluations by hiring leading safety researchers. However, such an agency may fail to effectively account for the varied uses of AI technology, or lean too far towards “one-size-fits-all” regulatory strategies. For example, a single agency may be unable to simultaneously effectively regulate use-cases of LLMs in healthcare (e.g. complying with HIPAA regulations), content creation (e.g. preventing deepfakes), and employment (e.g. preventing discriminatory hiring practices), as it may become resource constrained and lack domain expertise. A single agency may also be more susceptible to regulatory capture from AI labs. In contrast, decentralized enforcement may spread ownership of AI regulation across a variety of agencies or organizations focused on different concerns, such as the domain of application or method of oversight. This approach might significantly improve the application of governance to specific AI use-cases, but risks stretching agencies thin as they struggle to independently evaluate and regulate rapidly-developing technologies. Decentralized governmental bodies may not take ownership of novel AI technologies without clear precedent (such as deepfakes), and key issues may “slip between the gaps” of different regulatory agencies. Alternatively, they might alternatively attempt to overfit existing regulatory structures onto novel technologies with disastrous outcomes for innovation. For example, the SEC’s attempt to map emerging cryptocurrencies onto its existing definition of securities has led to it declaring that the majority of cryptocurrency projects are unlicensed securities subject to shutdown. Vertical vs Horizontal Regulations A very similar set of arguments can be applied to the regulations themselves. A horizontally-integrated AI governance policy (such as the EU AI Act) applies new legislation to all use cases of AI, effectively forcing any AI models in existence to comply with a wide-ranging and non-specific set of regulations. Such an approach can provide a comprehensive, clearly defined structure for new AI development, simplifying compliance. However, horizontally-integrated policies can also be criticized for “overreaching” in scope, by applying regulations too broadly before legislators have developed expertise in managing a new field, and potentially stifling innovation as a result. In contrast, vertical regulations may be able to target a single domain of interest precisely, focusing on a narrow domain like “recommendation algorithms”, “deepfakes”, or “text generation” as demonstrated by China’s recent AI regulatory policies. Such vertical regulations can be more straightforward to implement and enforce than a broad set of horizontal regulations, and can allow legislators to concentrate on effectively managing a narrow set of use cases and considerations. However, they may not account effectively for AI technologies that span multiple domains, and could eventually lead to piecemeal, conflicting results as different vertical “slices” take disjointed approaches to regulating AI technologies. How are leading governments approaching AI Governance? China Over the past three years, China has passed a series of vertical regulations targeting specific domains of AI applications, led by the Cyberspace Administration of China (CAC). The three most relevant pieces of legislation include: Algorithmic Recommendation Provisions: Initially published in August 2021, these provisions enforce a series of regulations targeting recommendation algorithms, such as those that provide personalized rankings, search filters, decision making, or “services with public opinion properties or social mobilization capabilities”. Notably, it created a mandatory algorithm registry requiring all qualifying algorithms by Chinese organizations to be registered within 10 days of public launch.Deep Synthesis Provisions: Initially published in November 2022, this creates a series of regulations regulating the use of algorithms that synthetically generate content such as text, voice, images, or videos. It was intended to combat the rise of “deepfakes”, and requires labeling, user identification, and providers to prevent “misuse” as broadly defined by the Chinese government.Interim Generative AI Measures: Initially published in July 2023, this set of regulations was a direct response to the announcement and ensuing wave of excitement caused by ChatGPT’s release in late 2022. It expands on the policies proposed in the Deep Synthesis Provisions to better encompass multi-use LLMs, strengthening provisions such as discrimination requirements, requirements for training data, and alignment with national interests. The language used by these AI regulations is typically broad, high-level, and non-specific. For example, Article 5 of the Interim Generative AI Measures states that providers should “Encourage the innovative application of generative AI technology in each industry and field [and] generate exceptional content that is positive, healthy, and uplifting”. In practice, this wording extends greater control to the CAC, allowing it to interpret its regulations as necessary to enforce its desired outcomes. Notably, China created the first national algorithm registry in its 2021 Algorithmic Recommendation Provisions, focusing initially on capturing all recommendation algorithms used by consumers in China. By defining the concept of “algorithm” quite broadly, this registry often requires that organizations submit many separate, detailed reports for various algorithms in use by its systems. In subsequent legislation, the CAC has continually expanded the scope of this algorithm registry to include updated forms of AI, including all LLMs and AI models capable of generating content. What are key traits of China’s AI governance strategy? China’s governance strategy is focused on tracking and managing algorithms by their domain of use: In particular, the CAC is developing legislation regulating all types of algorithms in use by Chinese citizens, not just LLMs or AI models. Based on their track record, we can expect that China will continue to expand the algorithm registry to include a broader scope of algorithms over time. China is taking a vertical, iterative approach to developing progressively more comprehensive legislation, by passing targeted regulations concentrating on a single type of algorithm at a time: The CAC has tended to focus on current domains in AI, drafting legislation when a new domain becomes socially relevant. In contrast to the US or EU, it appears to have deprioritized many domains outside this scope, such as regulating AI for healthcare, employment, law enforcement, judicial systems and more.These iterative regulations appear to be predecessors building towards a more comprehensive piece of legislation: an Artificial Intelligence Law, proposed by a legislative plan released in June 2023. This law is not expected to be published until late 2024, but will likely cover many domains of AI use, horizontally integrating China’s AI regulations.China has demonstrated clear precedent for this model of passing iterative legislation in preparation for a comprehensive, all-encompassing law. In particular, it followed a similar process for internet regulation in the 2000s, capped by an all-encompassing Cybersecurity Law passed in 2017. China strongly prioritizes social control and alignment in its AI regulations: In particular, the domains of AI technology selected for legislation clearly indicate the priorities of the Chinese government. Each of the provisions includes references to upholding “Core Socialist Values”, and contains more specific direction such as requirements to “respect social mores and ethics, and adhere to the correct political direction, public opinion orientation, and values trends, to promote progress and improvement” (Article 4, Deep Synthesis Provisions). The broad nature of its requirements allows for arbitrary enforcement. China has demonstrated an inward focus on regulating Chinese organizations and citizens: As a result of China’s restrictive policies via the Great Firewall preventing many leading Western technology services from operating in China, these regulations primarily apply to Chinese technology companies serving Chinese citizens. Major leading AI labs such as OpenAI, Anthropic, and Google do not actively serve Chinese consumers, in part because they are unwilling to comply with China’s censorship policies. In many ways, Chinese AI governance operates on a parallel and disjoint basis to Western AI governance. The EU The European Union (EU) has conducted almost all of its AI governance initiatives within a single piece of legislation: the EU AI Act, formally adopted in March 2024. Initially proposed in 2021, this comprehensive legislation aims to regulate AI systems based on their potential risks and safeguard the rights of EU citizens. At the core of the EU AI Act is a risk-based approach to AI regulation. The act classifies AI systems into four categories: unacceptable risk, high risk, limited risk, and minimal risk. Unacceptable risk AI systems, such as those that manipulate human behavior or exploit vulnerabilities, are banned outright. High-risk AI systems, including those used in critical infrastructure, education, and employment, are subject to strict requirements and oversight. Limited risk AI systems require transparency measures, while minimal risk AI systems are largely unregulated. In direct response to the publicization of foundational AI models in 2022 starting with the launch of ChatGPT, the Act includes clauses specifically addressing the challenges posed by general purpose AI (GPAI). GPAI systems, which can be adapted for a wide range of tasks, are subject to additional requirements, including being categorized as high-risk systems depending on their intended domain of use. What are key traits of the EU’s AI governance strategy? The EU AI Act is a horizontally integrated, comprehensive piece of legislation implemented by a centralized body: The EU AI Act classifies all AI systems used within the EU into four distinct risk levels, and assigns clear requirements for each set of AI systems. As a result, it’s the most comprehensive legal framework for AI systems today. Though it has generally been well-received, it’s also received criticism by member countries for being overly restrictive and potentially stifling AI innovation within the EU.To oversee the implementation and enforcement of the EU AI Act, the legislation establishes the European AI Office. This dedicated body is responsible for coordinating compliance, providing guidance to businesses and organizations, and enforcing the rules set out in the act. As the leading agency enforcing binding AI rules on a multinational coalition, it will shape the development and governance of AI globally, much as the GDPR led to an international restructuring of internet privacy standards. The EU has demonstrated a clear prioritization for the protection of citizen’s rights: The EU AI Act’s core approach to categorizing risk levels is designed primarily around measuring the ability of AI systems to infringe on the rights of EU citizens. This can be observed in the list of use cases deemed to be high-risk, such as educational or vocational training, employment, migration & asylum, and administration of justice or democratic processes.This is in direct contrast to China’s AI governance strategy, which is designed largely to give the government greater control over generated content and recommendations.Most of the requirements are designed with the common citizen in mind, such as transparency and reporting requirements, the ability of any citizen to lodge a complaint with a market surveillance authority, prohibitions on social scoring systems, and discrimination requirements. Few protections are included for corporations or organizations running AI systems. The fines for non-compliance are quite high, ranging from 1.5% to 7% of a firm’s global sales turnover or millions of euros, whichever is greater. The EU AI Act implements strict and binding requirements for high-risk AI systems: In particular, AI systems classified as high-risk face the most extensive and broad regulatory requirements from the passage of this Act, including conducting risk assessments, ensuring high-quality and unbiased datasets, enabling human oversight measures, detailed documentation and compliance with model registries, security and accuracy requirements, and more. Low-risk AI systems face significantly less stringent compliance requirements, but have binding transparency requirements mandating that AI systems must inform humans when sharing or distributing generated content. The US In large part due to legislative gridlock in the US Congress, the United States has taken an approach to AI governance centered around executive orders and non-binding declarations by the Biden administration. Though this approach has key limitations, such as the inability to allocate budget for additional programs, it has resulted in a significant amount of executive action over the past year. Three key executive actions stand out in shaping the US approach: US / China Semiconductor Export Controls: Launched on Oct 7, 2022, these export controls (and subsequent updates) on high-end semiconductors used to train AI models mark a significant escalation in US efforts to restrict China's access to advanced computing and AI technologies. The rules, issued by the Bureau of Industry and Security (BIS), ban the export of advanced chips, chip-making equipment, and semiconductor expertise to China. They aim to drastically slow China's AI development and protect US national security by targeting the hardware essential to develop powerful AI models. Blueprint for an AI Bill of Rights: ​​Released in October 2022, this blueprint outlines five principles to guide the design, use, and deployment of automated systems to protect the rights of the American public. These principles include safe and effective systems, algorithmic discrimination protections, data privacy, notice and explanation, and human alternatives, consideration, and fallback. While non-binding, the blueprint aims to inform policy decisions and align action across all levels of government.The Executive Order on Artificial Intelligence: Issued in October 2023, this order directs various federal agencies to act to promote the responsible development and use of AI. It calls for these agencies to develop AI risk management frameworks, develop AI standards and technical guidance, create better systems for AI oversight, and foster public-private partnerships. It marks the first comprehensive and coordinated effort to shape AI governance across the federal government, but lacks binding regulation or specific details as it primarily orders individual agencies to publish reports on next steps. What are key traits of the US’ AI governance strategy? The US’ initial binding regulations focus on classifying AI models by compute ability and regulating hardware: The US has taken a distinctive approach to AI governance by controlling the hardware and computational power required to train and develop AI models. It is uniquely positioned to leverage this compute-based approach to regulation, as it is home to all leading vendors of high-end AI chips (Nvidia, AMD, Intel) and consequently has direct legislative control over these chips.This is exemplified by the US-China export controls, which aim to restrict China's access to the high-end AI chips necessary for developing advanced AI systems by setting limits on the processing power & performance density of exportable chips. This focus can also be seen in the Executive Order’s reporting requirements for AI models, which have thresholds for computing capacity or model training measured in floating-point operations per second (FLOP/s). Beyond export controls, the US appears to be pursuing a decentralized, largely non-binding approach relying on executive action: Due to structural challenges in passing binding legislation through a divided Congress, the US has relied primarily on executive orders and agency actions to shape its AI governance strategy, which don’t require any congressional approval. It has chosen to decentralize its research and regulatory process by distributing such work among selected agencies.Instead of including specific binding requirements in the US Executive Order on AI, the Biden administration has preferred to task various federal agencies with developing their own frameworks, standards, and oversight mechanisms. Most of these upcoming standards are still being developed and are not yet public.Such executive orders are limited first and foremost by the lack of jurisdiction to allocate more budget for specific policy implementations, a power controlled by Congress.A secondary limitation is that executive orders are easy to repeal or reverse when the US presidency changes every 4 years, meaning that even binding executive orders may not be enforced long-term.The Blueprint for an AI Bill of Rights and the Executive Order on AI provide high-level guidance and principles but lack the binding force of law. They serve more as a framework for agencies to develop their own policies and practices, rather than a centralized, comprehensive regulatory regime like the EU AI Act. US AI policy is strongly prioritizing its geopolitical AI arms race with China: The US AI governance strategy is heavily influenced by the perceived threat of China's rapid advancements in AI and the potential implications for national security and the global balance of power. The only binding actions taken by the US (enforcing semiconductor export controls) are explicitly designed to counter China's AI ambitions and maintain the US' technological and military superiority.This geopolitical focus sets the US apart from the EU, which has prioritized the protection of individual rights and the ethical development of AI, or China, which has prioritized internal social control and alignment with party values. The US strategy appears to be more concerned with the strategic implications of AI and ensuring that the technology aligns with US interests in the global arena.
2024-05-07