
Doom Debates

Liron Shapira
Urgent disagreements that must be resolved before the world ends, hosted by Liron Shapira. lironshapira.substack.com

Available Episodes

Showing 5 of 50 episodes
  • Roon vs. Liron: AI Doom Debate
    Roon is a member of the technical staff at OpenAI. He’s a highly respected voice on tech Twitter, despite being a pseudonymous cartoon avatar account. In late 2021, he invented the terms “shape rotator” and “wordcel” to refer to roughly visual/spatial/mathematical intelligence vs. verbal intelligence. He is simultaneously a serious thinker, a builder, and a shitposter. I'm excited to learn more about Roon, his background, his life, and of course, his views about AI and existential risk.
    Timestamps:
    00:00 Introduction
    02:43 Roon’s Quest and Philosophies
    22:32 AI Creativity
    30:42 What’s Your P(Doom)™
    54:40 AI Alignment
    57:24 Training vs. Production
    01:05:37 ASI
    01:14:35 Goal-Oriented AI and Instrumental Convergence
    01:22:43 Pausing AI
    01:25:58 Crux of Disagreement
    01:27:55 Dogecoin
    01:29:13 Doom Debates’ Mission
    Show Notes:
    Follow Roon: https://x.com/tszzl
    For Humanity: An AI Safety Podcast with John Sherman — https://www.youtube.com/@ForHumanityPodcast
    Lethal Intelligence Guide, the ultimate animated video introduction to AI x-risk — https://www.youtube.com/watch?v=9CUFbqh16Fg
    PauseAI, the volunteer organization I’m part of — https://pauseai.info/
    Join the PauseAI Discord — https://discord.gg/2XXWXvErfA — and say hi to me in the #doom-debates-podcast channel!
    Doom Debates’ Mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate. Support the mission by subscribing to my Substack at DoomDebates.com and to youtube.com/@DoomDebates. Thanks for watching.
    This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit lironshapira.substack.com
    Duration: 1:44:46
  • Scott Aaronson Makes Me Think OpenAI's “Safety” Is Fake, Clueless, Reckless and Insane
    Today I’m reacting to the recent Scott Aaronson interview on the Win-Win podcast with Liv Boeree and Igor Kurganov.
    Prof. Aaronson is the Director of the Quantum Information Center at the University of Texas at Austin. He’s best known for his research advancing the frontier of complexity theory, especially quantum complexity theory, and for making complex insights from his field accessible to a wider readership via his blog.
    Scott is one of my biggest intellectual influences. His famous “Who Can Name the Bigger Number?” essay and his long-running blog are among my best memories of coming across high-quality intellectual content online as a teen. His posts and lectures taught me much of what I know about complexity theory.
    Scott recently completed a two-year stint at OpenAI focusing on the theoretical foundations of AI safety, so I was interested to hear his insider account.
    Unfortunately, what I heard in the interview confirms my worst fears about the meaning of “safety” at today’s AI companies: they’re laughably clueless about how to achieve any measure of safety, but instead of doing the adult thing and slowing down their capabilities work, they’re pushing forward recklessly.
    Timestamps:
    00:00 Introducing Scott Aaronson
    02:17 Scott's Recruitment by OpenAI
    04:18 Scott's Work on AI Safety at OpenAI
    08:10 Challenges in AI Alignment
    12:05 Watermarking AI Outputs
    15:23 The State of AI Safety Research
    22:13 The Intractability of AI Alignment
    34:20 Policy Implications and the Call to Pause AI
    38:18 Out-of-Distribution Generalization
    45:30 Moral Worth Criterion for Humans
    51:49 Quantum Mechanics and Human Uniqueness
    01:00:31 Quantum No-Cloning Theorem
    01:12:40 Scott Is Almost An Accelerationist?
    01:18:04 Geoffrey Hinton's Proposal for Analog AI
    01:36:13 The AI Arms Race and the Need for Regulation
    01:39:41 Scott Aaronson's Thoughts on Sam Altman
    01:42:58 Scott Rejects the Orthogonality Thesis
    01:46:35 Final Thoughts
    01:48:48 Lethal Intelligence Clip
    01:51:42 Outro
    Show Notes:
    Scott’s interview on Win-Win with Liv Boeree and Igor Kurganov: https://www.youtube.com/watch?v=ANFnUHcYza0
    Scott’s blog: https://scottaaronson.blog
    PauseAI website: https://pauseai.info
    PauseAI Discord: https://discord.gg/2XXWXvErfA
    Watch the Lethal Intelligence video and check out LethalIntelligence.ai! It’s an AWESOME new animated intro to AI risk.
    Doom Debates’ Mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate. Support the mission by subscribing to my Substack at DoomDebates.com and to youtube.com/@DoomDebates. Thanks for watching.
    This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit lironshapira.substack.com
    Duration: 1:52:58
  • Liron Reacts to Subbarao Kambhampati on Machine Learning Street Talk
    Today I’m reacting to a July 2024 interview that Prof. Subbarao Kambhampati did on Machine Learning Street Talk.
    Rao is a Professor of Computer Science at Arizona State University, and one of the foremost voices making the claim that while LLMs can generate creative ideas, they can’t truly reason.
    The episode covers a range of topics including planning, creativity, the limits of LLMs, and why Rao thinks LLMs are essentially advanced N-gram models.
    Timestamps:
    00:00 Introduction
    02:54 Essentially N-Gram Models?
    10:31 The Manhole Cover Question
    20:54 Reasoning vs. Approximate Retrieval
    47:03 Explaining Jokes
    53:21 Caesar Cipher Performance
    01:10:44 Creativity vs. Reasoning
    01:33:37 Reasoning By Analogy
    01:48:49 Synthetic Data
    01:53:54 The ARC Challenge
    02:11:47 Correctness vs. Style
    02:17:55 AIs Becoming More Robust
    02:20:11 Block Stacking Problems
    02:48:12 PlanBench and Future Predictions
    02:58:59 Final Thoughts
    Show Notes:
    Rao’s interview on Machine Learning Street Talk: https://www.youtube.com/watch?v=y1WnHpedi2A
    Rao’s Twitter: https://x.com/rao2z
    PauseAI website: https://pauseai.info
    PauseAI Discord: https://discord.gg/2XXWXvErfA
    Watch the Lethal Intelligence video and check out LethalIntelligence.ai! It’s an AWESOME new animated intro to AI risk.
    Doom Debates’ Mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate. Support the mission by subscribing to my Substack at DoomDebates.com and to youtube.com/@DoomDebates. Thanks for watching.
    This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit lironshapira.substack.com
    Duration: 2:59:34
  • This Yudkowskian Has A 99.999% P(Doom)
    In this episode of Doom Debates, I discuss AI existential risks with my pseudonymous guest Nethys.
    Nethys shares his journey into AI risk awareness, influenced heavily by LessWrong and Eliezer Yudkowsky. We explore the vulnerability of society to emerging technologies, the challenges of AI alignment, and why he believes our current approaches are insufficient, ultimately resulting in a 99.999% P(Doom).
    Timestamps:
    00:00 Nethys Introduction
    04:47 The Vulnerable World Hypothesis
    10:01 What’s Your P(Doom)™
    14:04 Nethys’s Banger YouTube Comment
    26:53 Living with High P(Doom)
    31:06 Losing Access to Distant Stars
    36:51 Defining AGI
    39:09 The Convergence of AI Models
    47:32 The Role of “Unlicensed” Thinkers
    52:07 The PauseAI Movement
    58:20 Lethal Intelligence Video Clip
    Show Notes:
    Eliezer Yudkowsky’s post on “Death with Dignity”: https://www.lesswrong.com/posts/j9Q8bRmwCgXRYAgcJ/miri-announces-new-death-with-dignity-strategy
    PauseAI website: https://pauseai.info
    PauseAI Discord: https://discord.gg/2XXWXvErfA
    Watch the Lethal Intelligence video and check out LethalIntelligence.ai! It’s an AWESOME new animated intro to AI risk.
    Doom Debates’ Mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate. Support the mission by subscribing to my Substack at DoomDebates.com and to youtube.com/@DoomDebates. Thanks for watching.
    This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit lironshapira.substack.com
    Duration: 1:04:11
  • Cosmology, AI Doom, and the Future of Humanity with Fraser Cain
    Fraser Cain is the publisher of Universe Today, co-host of Astronomy Cast, a popular YouTuber about all things space, and guess what… he has a high P(doom)! That’s why he’s joining me on Doom Debates for a very special AI + space crossover episode.
    Timestamps:
    00:00 Fraser Cain’s Background and Interests
    05:03 What’s Your P(Doom)™
    07:05 Our Vulnerable World
    15:11 Don’t Look Up
    22:18 Cosmology and the Search for Alien Life
    31:33 Stars = Terrorists
    39:03 The Great Filter and the Fermi Paradox
    55:12 Grabby Aliens Hypothesis
    01:19:40 Life Around Red Dwarf Stars?
    01:22:23 Epistemology of Grabby Aliens
    01:29:04 Multiverses
    01:33:51 Quantum Many Worlds vs. Copenhagen Interpretation
    01:47:25 Simulation Hypothesis
    01:51:25 Final Thoughts
    Show Notes:
    Fraser’s YouTube channel: https://www.youtube.com/@frasercain
    Universe Today (space and astronomy news): https://www.universetoday.com/
    Max Tegmark’s book that explains 4 levels of multiverses: https://www.amazon.com/Our-Mathematical-Universe-Ultimate-Reality/dp/0307744256
    Robin Hanson’s ideas:
    Grabby Aliens: https://grabbyaliens.com
    The Great Filter: https://en.wikipedia.org/wiki/Great_Filter
    Life in a high-dimensional space: https://www.overcomingbias.com/p/life-in-1kdhtml
    Watch the Lethal Intelligence video and check out LethalIntelligence.ai! It’s an AWESOME new animated intro to AI risk.
    Doom Debates’ Mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate. Support the mission by subscribing to my Substack at DoomDebates.com and to youtube.com/@DoomDebates. Thanks for watching.
    This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit lironshapira.substack.com
    Duration: 1:57:45


