text |
|---|
Xeelee Sequence : Science fiction author Paul J. McAuley has praised Baxter and the series, saying: Baxter doesn’t shrink from tackling the dismayingly inhuman implications of vast abysses of past or future time, but the universality of life introduces perspective, motion and plot into every part of his Stapledonian co... |
Xeelee Sequence : Stephen Baxter (author) Hard science fiction Great Attractor Kardashev scale |
Xeelee Sequence : ^a Baxter cites the pronunciation "ch-ee-lee" in Xeelee: Vengeance. It is unclear why, given the history of the author himself pronouncing it as "zee-lee", but one possible reason is that it reflects how the name came to be pronounced in-universe due to language change, especially considering Baxter's... |
Xeelee Sequence : Stephen Baxter's official website. The complete (as of September 2015) timeline for the Xeelee Sequence of novels and stories, hosted on Baxter's official website. Xeelee Sequence series listing at the Internet Speculative Fiction Database |
Yu-Gi-Oh! VRAINS : Yu-Gi-Oh! VRAINS (遊☆戯☆王VRAINS, Yūgiō Vureinzu) is a Japanese anime series animated by Gallop. It is the fifth anime spin-off in the Yu-Gi-Oh! franchise. The series aired in Japan on TV Tokyo from May 10, 2017 to September 25, 2019. It was simulcast outside of Asia by Crunchyroll courtesy of Konami Cr... |
Yu-Gi-Oh! VRAINS : In a place known as Den City, thousands of duelists take part in a virtual reality space known as LINK VRAINS, created by SOL Technologies, where users can create unique avatars and participate in games of Duel Monsters with each other. As a mysterious hacker organization known as the Knights of Hano... |
Yu-Gi-Oh! VRAINS : Yu-Gi-Oh! VRAINS was first announced on December 16, 2016. It began airing on TV Tokyo in Japan on May 10, 2017. The series was directed by Masahiro Hosoda at Studio Gallop, with screenplays by Shin Yoshida and character designs by Ken'ichi Hara. It would be the final anime series in the franchise ... |
Yu-Gi-Oh! VRAINS : Yu-Gi-Oh! VRAINS introduces new gameplay elements to the Yu-Gi-Oh! Trading Card Game. With the release of the "Link Strike Starter Deck", it introduced the New Master Rules (also known as Master Rule 4 in some countries) to the competitive field of play. Now, only one monster can be summoned directly... |
Yu-Gi-Oh! VRAINS : The series ranked 52nd in the Tokyo Anime Award Festival's Best 100 TV Anime 2017 category. Its rank rose to 8th in the same award in 2020, with 28,369 votes. |
Yu-Gi-Oh! VRAINS : Yu-Gi-Oh! VRAINS Official website at TV Tokyo (in Japanese) Yu-Gi-Oh! VRAINS at Twitter (in Japanese) Yu-Gi-Oh! VRAINS (anime) at Anime News Network's encyclopedia |
AI takeover : An AI takeover is an imagined scenario in which artificial intelligence (AI) emerges as the dominant form of intelligence on Earth and computer programs or robots effectively take control of the planet away from the human species, which relies on human intelligence. Possible scenarios include replacement ... |
AI takeover : AI takeover is a common theme in science fiction. Fictional scenarios typically differ vastly from those hypothesized by researchers in that they involve an active conflict between humans and an AI or robots with anthropomorphic motives who see them as a threat or otherwise have active desire to fight hum... |
AI takeover : Physicist Stephen Hawking, Microsoft co-founder Bill Gates, and SpaceX founder Elon Musk have expressed concerns about the possibility that AI could develop to the point that humans could not control it, with Hawking theorizing that this could "spell the end of the human race". Stephen Hawking said in 2014 t... |
AI takeover : TED talk: "Can we build AI without losing control over it?" by Sam Harris |
Algorithmic Justice League : The Algorithmic Justice League (AJL) is a digital advocacy non-profit organization based in Cambridge, Massachusetts. Founded in 2016 by computer scientist Joy Buolamwini, the AJL uses research, artwork, and policy advocacy to increase societal awareness regarding the use of artificial inte... |
Algorithmic Justice League : Buolamwini founded the Algorithmic Justice League in 2016 as a graduate student in the MIT Media Lab. While experimenting with facial detection software in her research, she found that the software could not detect her "highly melanated" face until she donned a white mask. After this incide... |
Algorithmic Justice League : AJL initiatives have been funded by the Ford Foundation, the MacArthur Foundation, the Alfred P. Sloan Foundation, the Rockefeller Foundation, the Mozilla Foundation and individual private donors. Fast Company recognized AJL as one of the 10 most innovative AI companies in 2021. Additionall... |
Algorithmic Justice League : Regulation of algorithms Algorithmic transparency Digital rights Algorithmic bias Ethics of artificial intelligence Fairness (machine learning) Deborah Raji Emily M. Bender Joy Buolamwini Sasha Costanza-Chock Timnit Gebru Margaret Mitchell (scientist) Resisting AI |
The Alignment Problem : The Alignment Problem: Machine Learning and Human Values is a 2020 non-fiction book by the American writer Brian Christian. It is based on numerous interviews with experts trying to build artificial intelligence systems, particularly machine learning systems, that are aligned with human values. |
The Alignment Problem : The book is divided into three sections: Prophecy, Agency, and Normativity. Each section covers researchers and engineers working on different challenges in the alignment of artificial intelligence with human values. |
The Alignment Problem : The book received positive reviews from critics. The Wall Street Journal's David A. Shaywitz emphasized the frequent problems when applying algorithms to real-world problems, describing the book as "a nuanced and captivating exploration of this white-hot topic." Publishers Weekly praised the boo... |
The Alignment Problem : Effective altruism Global catastrophic risk Human Compatible: Artificial Intelligence and the Problem of Control Superintelligence: Paths, Dangers, Strategies |
Center for Applied Rationality : The Center for Applied Rationality (CFAR) is a nonprofit organization based in Berkeley, California, that hosts workshops on rationality and cognitive bias. It was founded in 2012 by Julia Galef, Anna Salamon, Michael Smith and Andrew Critch, to improve participants' rationality using "... |
Center for Applied Rationality : On November 15, 2019, four people wearing Guy Fawkes masks were arrested for allegedly barricading off a wooded retreat where CFAR was holding an event. According to police, the suspects were not cooperative and said things about their views on rationalism that the officers could not... |
Center for Applied Rationality : Official website "Center for Applied Rationality". Internal Revenue Service filings. ProPublica Nonprofit Explorer. |
Center for Human-Compatible Artificial Intelligence : The Center for Human-Compatible Artificial Intelligence (CHAI) is a research center at the University of California, Berkeley focusing on advanced artificial intelligence (AI) safety methods. The center was founded in 2016 by a group of academics led by Berkeley com... |
Center for Human-Compatible Artificial Intelligence : CHAI's approach to AI safety research focuses on value alignment strategies, particularly inverse reinforcement learning, in which the AI infers human values from observing human behavior. It has also worked on modeling human-machine interaction in scenarios where i... |
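The row above mentions inverse reinforcement learning (IRL), in which an AI infers a reward function from observed human behaviour. The toy Python sketch below illustrates only the core intuition of feature-expectation matching; the states, feature vectors, and "expert" trajectories are invented for the example and are not drawn from CHAI's actual work.

```python
import numpy as np

# Toy illustration of the idea behind inverse reinforcement learning (IRL):
# infer a reward function from demonstrated behaviour. The states, features,
# and "expert" trajectories below are made up for the example.

# Each state is described by a feature vector; the (unknown) reward is assumed
# to be linear in these features: R(s) = w . phi(s).
features = {
    "desk":    np.array([1.0, 0.0, 0.0]),
    "coffee":  np.array([0.0, 1.0, 0.0]),
    "hallway": np.array([0.0, 0.0, 1.0]),
}

# Demonstrations: sequences of states visited by the "expert" (a human).
expert_trajectories = [
    ["hallway", "coffee", "desk", "desk"],
    ["hallway", "desk", "coffee", "desk"],
]

# Baseline behaviour to compare against, e.g. a uniform random walk.
random_trajectories = [
    ["hallway", "hallway", "coffee", "hallway"],
    ["desk", "hallway", "hallway", "hallway"],
]

def feature_expectations(trajectories):
    """Average feature vector over all visited states."""
    visits = [features[s] for traj in trajectories for s in traj]
    return np.mean(visits, axis=0)

mu_expert = feature_expectations(expert_trajectories)
mu_random = feature_expectations(random_trajectories)

# Crude IRL step: weight features the expert seeks out more than chance would.
# Full IRL methods (e.g. maximum-entropy IRL) iterate this idea with planning.
w = mu_expert - mu_random
w /= np.linalg.norm(w)

for state, phi in features.items():
    print(f"inferred reward for {state!r}: {w @ phi:+.2f}")
```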
Center for Human-Compatible Artificial Intelligence : Existential risk from artificial general intelligence Future of Humanity Institute Future of Life Institute Human Compatible Machine Intelligence Research Institute |
Coded Bias : Coded Bias is an American documentary film directed by Shalini Kantayya that premiered at the 2020 Sundance Film Festival. The film includes contributions from researchers Joy Buolamwini, Deborah Raji, Meredith Broussard, Cathy O’Neil, Zeynep Tufekci, Safiya Noble, Timnit Gebru, Virginia Eubanks, and Silki... |
Coded Bias : Kantayya previously directed a documentary titled Catching the Sun and also directed one episode of the National Geographic television series Breakthrough. She is also an associate of the UC Berkeley Graduate School of Journalism. Kantayya said in an interview with 500 Global on August 17, 2021, that three years... |
Coded Bias : The documentary is about artificial intelligence and the biases that can be embedded into this technology. MIT media researcher Joy Buolamwini's computer science studies uncovered that her face was unrecognizable in many facial recognition systems and she worked to find out why these systems failed. She la... |
Coded Bias : The film first premiered at the 2020 Sundance Film Festival in January 2020. It had a limited release on November 11, 2020, before a full release in virtual cinemas across North America on November 18, 2020. The limited release garnered a box office revenue of $10,236. On April 5, 2021, the documentary was... |
Coded Bias : Algorithmic Justice League Black in AI Data for Black Lives |
Coded Bias : Official website Coded Bias at IMDb |
ELVIS Act : The ELVIS Act or Ensuring Likeness Voice and Image Security Act, signed into law by Tennessee Governor Bill Lee on March 21, 2024, marked a significant milestone in the regulation of artificial intelligence (AI) and in public sector policies for artists in the era of AI and AI alig... |
ELVIS Act : The inception of the ELVIS Act has been attributed to Gebre Waddell, founder of Sound Credit, who initially conceptualized a framework in 2023 that later evolved into the legislation. Representative Justin J. Pearson acknowledged Waddell's pivotal role during the March 4 House Floor Session on the bill. The... |
ELVIS Act : The legislative journey of the ELVIS Act included a broad coalition of music industry stakeholders. These organizations, led by the Recording Academy and the RIAA, played roles in drafting the legislation, advocating for its passage, and rallying support among the industry and legislators. The act ga... |
ELVIS Act : The ELVIS Act saw industry opposition from the Motion Picture Association, including testimony in the House Banking & Consumer Affairs Subcommittee with remarks that the law risks "interference with our member’s ability to portray real people and events". TechNet, representing companies like OpenAI, G... |
ELVIS Act : By explicitly addressing AI impersonation, the ELVIS Act originated a legal approach to safeguarding personal rights, in the context of digital and technological advancements. It extends protections to an artist's voice and likeness, areas vulnerable to exploitation with the proliferation of AI technologies... |
ELVIS Act : The ELVIS Act was reported as representing a development in the discourse surrounding AI, intellectual property, and personal rights. Proponents hoped it would set a precedent for future legislative efforts both within and beyond Tennessee, offering a model for how states and potentially the federal gov... |
Ghost in the Machine (The X-Files) : "Ghost in the Machine" is the seventh episode of the first season of the American science fiction television series The X-Files, premiering on the Fox network on October 29, 1993. It was written by Howard Gordon and Alex Gansa, and directed by Jerrold Freedman. The episode featured ... |
Ghost in the Machine (The X-Files) : In the Crystal City, Virginia, headquarters of the software company Eurisko, founder Brad Wilczek and chief executive officer Benjamin Drake argue about downsizing measures. After Wilczek leaves, Drake writes a memo proposing to shut down the Central Operating System (COS), a comput... |
Ghost in the Machine (The X-Files) : The scenes set at Eurisko were filmed in the Metrotower complex in Burnaby, British Columbia, Canada, a building used by the Canadian Security Intelligence Service. The location was barely big enough for the actors to perform in after the crew had finished setting up the necessary e... |
Ghost in the Machine (The X-Files) : "Ghost in the Machine" premiered on the Fox network on October 29, 1993. Following its initial American broadcast, the episode earned a Nielsen household rating of 5.9, with an 11 share—meaning that roughly 5.9 percent of all television-equipped households, and 11 percent of househo... |
Ghost in the Machine (The X-Files) : "Ghost in the Machine" on The X-Files official website "Ghost in the Machine" at IMDb |
Human Compatible : Human Compatible: Artificial Intelligence and the Problem of Control is a 2019 non-fiction book by computer scientist Stuart J. Russell. It asserts that the risk to humanity from advanced artificial intelligence (AI) is a serious concern despite the uncertainty surrounding future progress in AI. It a... |
Human Compatible : Russell begins by asserting that the standard model of AI research, in which the primary definition of success is getting better and better at achieving rigid human-specified goals, is dangerously misguided. Such goals may not reflect what human designers intend, such as by failing to take into accou... |
Human Compatible : Several reviewers agreed with the book's arguments. Ian Sample in The Guardian called it "convincing" and "the most important book on AI this year". Richard Waters of the Financial Times praised the book's "bracing intellectual rigour". Kirkus Reviews endorsed it as "a strong case for planning for th... |
Human Compatible : Artificial Intelligence: A Modern Approach Center for Human-Compatible Artificial Intelligence The Precipice: Existential Risk and the Future of Humanity Slaughterbots Superintelligence: Paths, Dangers, Strategies |
Human Compatible : Interview with Stuart J. Russell |
Instrumental convergence : Instrumental convergence is the hypothetical tendency for most sufficiently intelligent, goal-directed beings (human and nonhuman) to pursue similar sub-goals, even if their ultimate goals are quite different. More precisely, agents (beings with agency) may pursue instrumental goals—goals whi... |
Instrumental convergence : Final goals—also known as terminal goals, absolute values, ends, or telē—are intrinsically valuable to an intelligent agent, whether an artificial intelligence or a human being, as ends-in-themselves. In contrast, instrumental goals, or instrumental values, are only valuable to an agent as a ... |
Instrumental convergence : The Riemann hypothesis catastrophe thought experiment provides one example of instrumental convergence. Marvin Minsky, the co-founder of MIT's AI laboratory, suggested that an artificial intelligence designed to solve the Riemann hypothesis might decide to take over all of Earth's resources t... |
Instrumental convergence : Steve Omohundro itemized several convergent instrumental goals, including self-preservation or self-protection, utility function or goal-content integrity, self-improvement, and resource acquisition. He refers to these as the "basic AI drives". A "drive" in this context is a "tendency which w... |
Instrumental convergence : The instrumental convergence thesis, as outlined by philosopher Nick Bostrom, states: Several instrumental values can be identified which are convergent in the sense that their attainment would increase the chances of the agent's goal being realized for a wide range of final plans and a wide ... |
Instrumental convergence : Agents can acquire resources by trade or by conquest. A rational agent will, by definition, choose whatever option will maximize its implicit utility function. Therefore, a rational agent will trade for a subset of another agent's resources only if outright seizing the resources is too risky ... |
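The trade-versus-conquest reasoning above reduces to comparing expected utilities. The following toy sketch makes that comparison explicit; the payoffs, success probability, and costs are invented solely for illustration.

```python
# Toy illustration of the trade-vs-seizure reasoning above: a rational agent
# picks whichever option has the higher expected utility. All numbers are
# invented purely for the example.

def expected_utility_of_seizure(resource_value, p_success, conflict_cost):
    """Seizing may fail, and fighting always costs something."""
    return p_success * resource_value - conflict_cost

def expected_utility_of_trade(resource_value, price_paid):
    """Trading reliably obtains the resources at an agreed price."""
    return resource_value - price_paid

resource_value = 100.0
trade = expected_utility_of_trade(resource_value, price_paid=60.0)

# Against a weak defender, seizure looks attractive to a pure maximizer...
print(expected_utility_of_seizure(resource_value, p_success=0.9, conflict_cost=10.0) > trade)  # True
# ...but against a strong or well-defended agent, trade wins.
print(expected_utility_of_seizure(resource_value, p_success=0.4, conflict_cost=30.0) > trade)  # False
```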
Instrumental convergence : AI control problem AI takeovers in popular culture Universal Paperclips, an incremental game featuring a paperclip maximizer Equifinality Friendly artificial intelligence Instrumental and intrinsic value Moral Realism Overdetermination Reward hacking Superrationality The Sorcerer's Apprentice |
Instrumental convergence : Bostrom, Nick (2014). Superintelligence: Paths, Dangers, Strategies. Oxford: Oxford University Press. ISBN 9780199678112. |
Terminator: Dark Fate : Terminator: Dark Fate is a 2019 American science fiction action film directed by Tim Miller and written by David S. Goyer, Justin Rhodes, and Billy Ray, based on a story by James Cameron, Charles H. Eglee, Josh Friedman, Goyer, and Rhodes. It is the sixth installment in the Terminator franchise ... |
Terminator: Dark Fate : In 1998, three years after destroying Cyberdyne Systems, Sarah and John Connor have retired to Livingston, Guatemala. They are suddenly ambushed by a T-800 Terminator, one of several sent back through time by Skynet, which kills John despite Sarah's attempts to stop it. In 2020, an advanced Term... |
Terminator: Dark Fate : Linda Hamilton as Sarah Connor, the mother of John Connor, the former future leader of the Human Resistance in the war against Skynet. Now a battle-hardened older woman left alone after John's death, Sarah hunts and kills Skynet's remaining Terminators to prevent Judgment Day and forestall ... |
Terminator: Dark Fate : Official website Terminator: Dark Fate at IMDb |
Leverhulme Centre for the Future of Intelligence : The Leverhulme Centre for the Future of Intelligence (CFI) is an interdisciplinary research centre within the University of Cambridge that studies artificial intelligence. It is funded by the Leverhulme Trust. The Centre brings together academics from the fields of com... |
Leverhulme Centre for the Future of Intelligence : The CFI research is structured in a series of programmes and research exercises. The topics of the programmes range from algorithmic transparency to exploring the implications of AI for democracy. AI: Futures and Responsibility AI: Trust and Society Kinds of Intelligen... |
Leverhulme Centre for the Future of Intelligence : Centre for the Study of Existential Risk Future of Humanity Institute Future of Life Institute Machine Intelligence Research Institute |
Life 3.0 : Life 3.0: Being Human in the Age of Artificial Intelligence is a 2017 non-fiction book by Swedish-American cosmologist Max Tegmark. Life 3.0 discusses artificial intelligence (AI) and its impact on the future of life on Earth and beyond. The book discusses a variety of societal implications, what can be done... |
Life 3.0 : The book begins by positing a scenario in which AI has exceeded human intelligence and become pervasive in society. Tegmark refers to different stages of human life since its inception: Life 1.0 referring to biological origins, Life 2.0 referring to cultural developments in humanity, and Life 3.0 referring t... |
Life 3.0 : One criticism of the book by Kirkus Reviews is that some of the scenarios or solutions in the book are a stretch or somewhat prophetic: "Tegmark's solutions to inevitable mass unemployment are a stretch." AI researcher Stuart J. Russell, writing in Nature, said: "I am unlikely to disagree strongly with the p... |
Life 3.0 : Age of Artificial Intelligence |
Life 3.0 : Excerpt from the book "Myths and Facts About Superintelligent AI" on YouTube (a video commissioned by Tegmark's FLI to explain the book) Survey associated with the book |
Machine Intelligence Research Institute : The Machine Intelligence Research Institute (MIRI), formerly the Singularity Institute for Artificial Intelligence (SIAI), is a non-profit research institute focused since 2005 on identifying and managing potential existential risks from artificial general intelligence. MIRI's ... |
Machine Intelligence Research Institute : In 2000, Eliezer Yudkowsky founded the Singularity Institute for Artificial Intelligence with funding from Brian and Sabine Atkins, with the purpose of accelerating the development of artificial intelligence (AI). However, Yudkowsky began to be concerned that AI systems develop... |
Machine Intelligence Research Institute : MIRI's approach to identifying and managing the risks of AI, led by Yudkowsky, primarily addresses how to design friendly AI, covering both the initial design of AI systems and the creation of mechanisms to ensure that evolving AI systems remain friendly. MIRI researchers advoc... |
Machine Intelligence Research Institute : Graves, Matthew (8 November 2017). "Why We Should Be Concerned About Artificial Superintelligence". Skeptic. The Skeptics Society. Retrieved 28 July 2018. LaVictoire, Patrick; Fallenstein, Benja; Yudkowsky, Eliezer; Bárász, Mihály; Christiano, Paul; Herreshoff, Marcello (2014).... |
Machine Intelligence Research Institute : Allen Institute for Artificial Intelligence Future of Humanity Institute Institute for Ethics and Emerging Technologies |
Machine Intelligence Research Institute : Russell, Stuart; Dewey, Daniel; Tegmark, Max (Winter 2015). "Research Priorities for Robust and Beneficial Artificial Intelligence". AI Magazine. 36 (4): 6. arXiv:1602.03506. Bibcode:2016arXiv160203506R. doi:10.1609/aimag.v36i4.2577. |
Machine Intelligence Research Institute : Official website "Machine Intelligence Research Institute". Internal Revenue Service filings. ProPublica Nonprofit Explorer. |
AI alignment : In the field of artificial intelligence (AI), alignment aims to steer AI systems toward a person's or group's intended goals, preferences, or ethical principles. An AI system is considered aligned if it advances the intended objectives. A misaligned AI system pursues unintended objectives. It is often ch... |
AI alignment : Programmers provide an AI system such as AlphaZero with an "objective function", in which they intend to encapsulate the goal(s) the AI is configured to accomplish. Such a system later populates a (possibly implicit) internal "model" of its environment. This model encapsulates all the agent's beliefs abo... |
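A minimal sketch of the objective-function-plus-internal-model structure described above, assuming a made-up one-dimensional environment; real systems such as AlphaZero are vastly more complex, so this only illustrates where a designer-supplied objective and an agent's world model fit together.

```python
# Minimal sketch of the structure described above: an agent is given an
# objective function by its designers and maintains an internal model of its
# environment, which it uses to pick actions. Everything here is illustrative.

from dataclasses import dataclass

@dataclass
class Agent:
    # The designers' stated goal: "be as close to position 10 as possible".
    # Whether this objective captures what they *really* want is the alignment question.
    target: int = 10
    # The agent's (possibly wrong) internal model of the world: its believed position.
    believed_position: int = 0

    def objective(self, position: int) -> float:
        """Objective function supplied by the programmers."""
        return -abs(self.target - position)

    def choose_action(self) -> int:
        """Pick the step (+1, 0, -1) the internal model predicts scores best."""
        return max((+1, 0, -1),
                   key=lambda step: self.objective(self.believed_position + step))

    def update_model(self, observed_position: int) -> None:
        """Revise beliefs about the world from observation."""
        self.believed_position = observed_position

agent = Agent()
true_position = 0
for _ in range(12):
    true_position += agent.choose_action()
    agent.update_model(true_position)
print(true_position)  # reaches 10 and stays there under this toy objective
```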
AI alignment : In 1960, AI pioneer Norbert Wiener described the AI alignment problem as follows: If we use, to achieve our purposes, a mechanical agency with whose operation we cannot interfere effectively ... we had better be quite sure that the purpose put into the machine is the purpose which we really desire. AI al... |
AI alignment : Governmental and treaty organizations have made statements emphasizing the importance of AI alignment. In September 2021, the Secretary-General of the United Nations issued a declaration that included a call to regulate AI to ensure it is "aligned with shared global values". That same month, the PRC publ... |
AI alignment : AI alignment is often perceived as a fixed objective, but some researchers argue it would be more appropriate to view alignment as an evolving process. One view is that as AI technologies advance and human values and preferences change, alignment solutions must also adapt dynamically. Another is that alignm... |
AI alignment : Brockman, John, ed. (2019). Possible Minds: Twenty-five Ways of Looking at AI (Kindle ed.). Penguin Press. ISBN 978-0525557999. Ngo, Richard; et al. (2023). "The Alignment Problem from a Deep Learning Perspective". arXiv:2209.00626 [cs.AI]. Ji, Jiaming; et al. (2... |
AI alignment : Specification gaming examples in AI, via DeepMind |
Open letter on artificial intelligence (2015) : In January 2015, Stephen Hawking, Elon Musk, and dozens of artificial intelligence experts signed an open letter on artificial intelligence calling for research on the societal impacts of AI. The letter affirmed that society can reap great potential benefits from artifici... |
Open letter on artificial intelligence (2015) : By 2014, both physicist Stephen Hawking and business magnate Elon Musk had publicly voiced the opinion that superhuman artificial intelligence could provide incalculable benefits, but could also end the human race if deployed incautiously. At the time, Hawking and Musk bo... |
Open letter on artificial intelligence (2015) : The letter highlights both the positive and negative effects of artificial intelligence. According to Bloomberg Business, Professor Max Tegmark of MIT circulated the letter in order to find common ground between signatories who consider superintelligent AI a significant ... |
Open letter on artificial intelligence (2015) : The signatories ask: How can engineers create AI systems that are beneficial to society, and that are robust? Humans need to remain in control of AI; our AI systems must "do what we want them to do". The required research is interdisciplinary, drawing from areas ranging f... |
Open letter on artificial intelligence (2015) : Signatories include physicist Stephen Hawking, business magnate Elon Musk, the entrepreneurs behind DeepMind and Vicarious, Google's director of research Peter Norvig, Professor Stuart J. Russell of the University of California, Berkeley, and other AI experts, robot maker... |
Open letter on artificial intelligence (2015) : Research Priorities for Robust and Beneficial Artificial Intelligence: An Open Letter |
Our Final Invention : Our Final Invention: Artificial Intelligence and the End of the Human Era is a 2013 non-fiction book by the American author James Barrat. The book discusses the potential benefits and possible risks of human-level (AGI) or super-human (ASI) artificial intelligence. Those supposed risks include ext... |
Our Final Invention : James Barrat weaves together explanations of AI concepts, AI history, and interviews with prominent AI researchers including Eliezer Yudkowsky and Ray Kurzweil. The book starts with an account of how an artificial general intelligence could become an artificial super-intelligence through recursive... |
Our Final Invention : On 13 December 2013, journalist Matt Miller interviewed Barrat for his podcast, "This... is interesting". The interview and matters related to Barrat's book, Our Final Invention, were then covered in Miller's weekly opinion piece for The Washington Post. Seth Baum, executive director of the Globa... |
Our Final Invention : Artificial intelligence Ethics of artificial intelligence Technological singularity AI box Friendly artificial intelligence |
Our Final Invention : Kirkus Review Scientific American Review |
P(doom) : P(doom) is a term in AI safety that refers to the probability of existentially catastrophic outcomes (or "doom") as a result of artificial intelligence. The exact outcomes in question differ from one prediction to another, but generally allude to the existential risk from artificial general intelligence. Orig... |
P(doom) : There has been some debate about the usefulness of P(doom) as a term, in part due to the lack of clarity about whether or not a given prediction is conditional on the existence of artificial general intelligence, the time frame, and the precise meaning of "doom". |
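A small numeric sketch of why that conditionality matters: the same underlying beliefs yield very different headline numbers depending on whether the quoted P(doom) is conditional on AGI being built. The probabilities below are arbitrary placeholders, not estimates attributed to anyone.

```python
# Tiny illustration of why it matters whether a quoted P(doom) is conditional
# on AGI being built. The probabilities below are arbitrary placeholders,
# not estimates attributed to anyone.

p_agi_this_century = 0.5   # P(AGI is built within the stated time frame)
p_doom_given_agi = 0.2     # P(doom | AGI is built)

# Unconditional P(doom) over the same time frame (ignoring doom without AGI):
p_doom_unconditional = p_doom_given_agi * p_agi_this_century

print(f"P(doom | AGI) = {p_doom_given_agi:.0%}")
print(f"P(doom)       = {p_doom_unconditional:.0%}")
# The same underlying beliefs yield 20% or 10% depending on which is quoted.
```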
P(doom) : In 2024, Australian rock band King Gizzard & the Lizard Wizard launched their new label, named p(doom) Records. |
P(doom) : Existential risk from artificial general intelligence Statement on AI risk of extinction AI alignment AI takeover AI safety |