Machine Learning Street Talk (MLST)

Welcome! We engage in fascinating discussions with pre-eminent figures in the AI field. Our flagship show covers current affairs in AI, cognitive science, neuroscience and philosophy of mind with in-depth analysis.

Available Episodes

5 of 191
  • Neel Nanda - Mechanistic Interpretability (Sparse Autoencoders)
    Neel Nanda, a senior research scientist at Google DeepMind, leads their mechanistic interpretability team. In this extensive interview, he discusses his work trying to understand how neural networks function internally. At just 25 years old, Nanda has quickly become a prominent voice in AI research after completing his pure mathematics degree at Cambridge in 2020.
    Nanda reckons that machine learning is unique because we create neural networks that can perform impressive tasks (like complex reasoning and software engineering) without understanding how they work internally. He compares this to having computer programs that can do things no human programmer knows how to write. His work focuses on "mechanistic interpretability" - attempting to uncover and understand the internal structures and algorithms that emerge within these networks.
    SPONSOR MESSAGES:
    ***
    CentML offers competitive pricing for GenAI model deployment, with flexible options to suit a wide range of models, from small to large-scale deployments. https://centml.ai/pricing/
    Tufa AI Labs is a brand new research lab in Zurich started by Benjamin Crouzier focussed on ARC and AGI. They just acquired MindsAI - the current winners of the ARC challenge. Are you interested in working on ARC, or getting involved in their events? Go to https://tufalabs.ai/
    ***
    SHOWNOTES, TRANSCRIPT, ALL REFERENCES (DON'T MISS!): https://www.dropbox.com/scl/fi/36dvtfl3v3p56hbi30im7/NeelShow.pdf?rlkey=pq8t7lyv2z60knlifyy17jdtx&st=kiutudhc&dl=0
    We riff on:
    * How neural networks develop meaningful internal representations beyond simple pattern matching
    * The effectiveness of chain-of-thought prompting and why it improves model performance
    * The importance of hands-on coding over extensive paper reading for new researchers
    * His journey from Cambridge to working with Chris Olah at Anthropic and eventually Google DeepMind
    * The role of mechanistic interpretability in AI safety
    NEEL NANDA:
    https://www.neelnanda.io/
    https://scholar.google.com/citations?user=GLnX3MkAAAAJ&hl=en
    https://x.com/NeelNanda5
    Interviewer - Tim Scarfe
    TOC:
    1. Part 1: Introduction
    [00:00:00] 1.1 Introduction and Core Concepts Overview
    2. Part 2: Outside Interview
    [00:06:45] 2.1 Mechanistic Interpretability Foundations
    3. Part 3: Main Interview
    [00:32:52] 3.1 Mechanistic Interpretability
    4. Neural Architecture and Circuits
    [01:00:31] 4.1 Biological Evolution Parallels
    [01:04:03] 4.2 Universal Circuit Patterns and Induction Heads
    [01:11:07] 4.3 Entity Detection and Knowledge Boundaries
    [01:14:26] 4.4 Mechanistic Interpretability and Activation Patching
    5. Model Behavior Analysis
    [01:30:00] 5.1 Golden Gate Claude Experiment and Feature Amplification
    [01:33:27] 5.2 Model Personas and RLHF Behavior Modification
    [01:36:28] 5.3 Steering Vectors and Linear Representations
    [01:40:00] 5.4 Hallucinations and Model Uncertainty
    6. Sparse Autoencoder Architecture
    [01:44:54] 6.1 Architecture and Mathematical Foundations
    [02:22:03] 6.2 Core Challenges and Solutions
    [02:32:04] 6.3 Advanced Activation Functions and Top-k Implementations
    [02:34:41] 6.4 Research Applications in Transformer Circuit Analysis
    7. Feature Learning and Scaling
    [02:48:02] 7.1 Autoencoder Feature Learning and Width Parameters
    [03:02:46] 7.2 Scaling Laws and Training Stability
    [03:11:00] 7.3 Feature Identification and Bias Correction
    [03:19:52] 7.4 Training Dynamics Analysis Methods
    8. Engineering Implementation
    [03:23:48] 8.1 Scale and Infrastructure Requirements
    [03:25:20] 8.2 Computational Requirements and Storage
    [03:35:22] 8.3 Chain-of-Thought Reasoning Implementation
    [03:37:15] 8.4 Latent Structure Inference in Language Models
    --------  
    3:42:36
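The sparse-autoencoder recipe discussed at length in this episode fits in a few lines. Below is a minimal PyTorch sketch of the standard setup (overcomplete linear encoder, ReLU, linear decoder, L1 sparsity penalty on the latents); the dimensions, sparsity coefficient, and random stand-in activations are illustrative assumptions for the example, not DeepMind's actual implementation.

```python
import torch
import torch.nn as nn

class SparseAutoencoder(nn.Module):
    """Minimal SAE: reconstruct model activations through an overcomplete
    ReLU bottleneck, penalising the L1 norm of the latent features."""
    def __init__(self, d_model: int = 512, d_sae: int = 4096):
        super().__init__()
        self.enc = nn.Linear(d_model, d_sae)  # d_sae >> d_model (overcomplete)
        self.dec = nn.Linear(d_sae, d_model)

    def forward(self, x):
        f = torch.relu(self.enc(x))   # sparse feature activations
        return self.dec(f), f         # reconstruction and features

sae = SparseAutoencoder()
opt = torch.optim.Adam(sae.parameters(), lr=1e-4)
l1_coeff = 1e-3                       # illustrative sparsity weight

acts = torch.randn(64, 512)           # stand-in for cached LLM activations
x_hat, f = sae(acts)
loss = ((x_hat - acts) ** 2).mean() + l1_coeff * f.abs().sum(-1).mean()
loss.backward()
opt.step()
```

The top-k activation functions mentioned in the TOC swap the L1 penalty for hard sparsity, keeping only the k largest latents per input.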
  • Jonas Hübotter (ETH) - Test Time Inference
    Jonas Hübotter, PhD student at ETH Zurich's Institute for Machine Learning, discusses his groundbreaking research on test-time computation and local learning. He demonstrates how smaller models can outperform larger ones by 30x through strategic test-time computation, and introduces a novel paradigm combining inductive and transductive learning approaches.
    Using Bayesian linear regression as a surrogate model for uncertainty estimation, Jonas explains how models can efficiently adapt to specific tasks without massive pre-training. He draws an analogy to Google Earth's variable-resolution system to illustrate dynamic resource allocation based on task complexity. The conversation explores the future of AI architecture, envisioning systems that continuously learn and adapt beyond current monolithic models. Jonas concludes by proposing hybrid deployment strategies combining local and cloud computation, suggesting a future where compute resources are allocated based on task complexity rather than fixed model size. This research represents a significant shift in machine learning, prioritizing intelligent resource allocation and adaptive learning over traditional scaling approaches.
    SPONSOR MESSAGES:
    CentML offers competitive pricing for GenAI model deployment, with flexible options to suit a wide range of models, from small to large-scale deployments. https://centml.ai/pricing/
    Tufa AI Labs is a brand new research lab in Zurich started by Benjamin Crouzier focussed on ARC and AGI. They just acquired MindsAI - the current winners of the ARC challenge. Are you interested in working on ARC, or getting involved in their events? Go to https://tufalabs.ai/
    Transcription, references and show notes PDF download: https://www.dropbox.com/scl/fi/cxg80p388snwt6qbp4m52/JonasFinal.pdf?rlkey=glk9mhpzjvesanlc14rtpvk4r&st=6qwi8n3x&dl=0
    Jonas Hübotter:
    https://jonhue.github.io/
    https://scholar.google.com/citations?user=pxi_RkwAAAAJ
    Transductive Active Learning: Theory and Applications (NeurIPS 2024), https://arxiv.org/pdf/2402.15898
    Efficiently Learning at Test-Time: Active Fine-Tuning of LLMs (SIFT), https://arxiv.org/pdf/2410.08020
    TOC:
    1. Test-Time Computation Fundamentals
    [00:00:00] Intro
    [00:03:10] 1.1 Test-Time Computation and Model Performance Comparison
    [00:05:52] 1.2 Retrieval Augmentation and Machine Teaching Strategies
    [00:09:40] 1.3 In-Context Learning vs Fine-Tuning Trade-offs
    2. System Architecture and Intelligence
    [00:15:58] 2.1 System Architecture and Intelligence Emergence
    [00:23:22] 2.2 Active Inference and Constrained Agency in AI
    [00:29:52] 2.3 Evolution of Local Learning Methods
    [00:32:05] 2.4 Vapnik's Contributions to Transductive Learning
    3. Resource Optimization and Local Learning
    [00:34:35] 3.1 Computational Resource Allocation in ML Models
    [00:35:30] 3.2 Historical Context and Traditional ML Optimization
    [00:37:55] 3.3 Variable Resolution Processing and Active Inference in ML
    [00:43:01] 3.4 Local Learning and Base Model Capacity Trade-offs
    [00:48:04] 3.5 Active Learning vs Local Learning Approaches
    4. Information Retrieval and Model Interpretability
    [00:51:08] 4.1 Information Retrieval and Nearest Neighbor Limitations
    [01:03:07] 4.2 Model Interpretability and Surrogate Models
    [01:15:03] 4.3 Bayesian Uncertainty Estimation and Surrogate Models
    5. Distributed Systems and Deployment
    [01:23:56] 5.1 Memory Architecture and Controller Systems
    [01:28:14] 5.2 Evolution from Static to Distributed Learning Systems
    [01:38:03] 5.3 Transductive Learning and Model Specialization
    [01:41:58] 5.4 Hybrid Local-Cloud Deployment Strategies
    --------  
    1:45:56
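To make the episode's "Bayesian linear regression as a surrogate for uncertainty" idea concrete, here is a self-contained NumPy sketch of the standard closed-form posterior and predictive variance; the prior scale, noise level, and synthetic data are assumptions for illustration, not the actual SIFT pipeline.

```python
import numpy as np

# Bayesian linear regression: y = X w + eps, eps ~ N(0, sigma^2 I),
# with prior w ~ N(0, tau^2 I). The posterior over w is Gaussian in closed form.
rng = np.random.default_rng(0)
d, n = 16, 40
X = rng.normal(size=(n, d))                    # features of data seen so far
y = X @ rng.normal(size=d) + 0.1 * rng.normal(size=n)

sigma2, tau2 = 0.1**2, 1.0                     # noise / prior variance (assumed)
Sigma = np.linalg.inv(X.T @ X / sigma2 + np.eye(d) / tau2)  # posterior covariance
mu = Sigma @ X.T @ y / sigma2                               # posterior mean

# Predictive uncertainty at a candidate test point. A test-time selection
# loop would favour fine-tuning data whose inclusion most reduces the
# uncertainty about the prediction we actually care about.
x_star = rng.normal(size=d)
pred_mean = x_star @ mu
pred_var = x_star @ Sigma @ x_star + sigma2
print(f"prediction {pred_mean:.3f} +/- {np.sqrt(pred_var):.3f}")
```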
  • How AI Could Be A Mathematician's Co-Pilot by 2026 (Prof. Swarat Chaudhuri)
    Professor Swarat Chaudhuri of the University of Texas at Austin, also a visiting researcher at Google DeepMind, discusses breakthroughs in AI reasoning, theorem proving, and mathematical discovery. Chaudhuri explains his groundbreaking work on COPRA (a GPT-based prover agent) and shares insights on neurosymbolic approaches to AI.
    Professor Swarat Chaudhuri: https://www.cs.utexas.edu/~swarat/
    SPONSOR MESSAGES:
    CentML offers competitive pricing for GenAI model deployment, with flexible options to suit a wide range of models, from small to large-scale deployments. https://centml.ai/pricing/
    Tufa AI Labs is a brand new research lab in Zurich started by Benjamin Crouzier focussed on ARC and AGI. They just acquired MindsAI - the current winners of the ARC challenge. Are you interested in working on ARC, or getting involved in their events? Go to https://tufalabs.ai/
    TOC:
    [00:00:00] 0. Introduction / CentML ad, Tufa ad
    1. AI Reasoning: From Language Models to Neurosymbolic Approaches
    [00:02:27] 1.1 Defining Reasoning in AI
    [00:09:51] 1.2 Limitations of Current Language Models
    [00:17:22] 1.3 Neuro-symbolic Approaches and Program Synthesis
    [00:24:59] 1.4 COPRA and In-Context Learning for Theorem Proving
    [00:34:39] 1.5 Symbolic Regression and LLM-Guided Abstraction
    2. AI in Mathematics: Theorem Proving and Concept Discovery
    [00:43:37] 2.1 AI-Assisted Theorem Proving and Proof Verification
    [01:01:37] 2.2 Symbolic Regression and Concept Discovery in Mathematics
    [01:11:57] 2.3 Scaling and Modularizing Mathematical Proofs
    [01:21:53] 2.4 COPRA: In-Context Learning for Formal Theorem-Proving
    [01:28:22] 2.5 AI-Driven Theorem Proving and Mathematical Discovery
    3. Formal Methods and Challenges in AI Mathematics
    [01:30:42] 3.1 Formal Proofs, Empirical Predicates, and Uncertainty in AI Mathematics
    [01:34:01] 3.2 Characteristics of Good Theoretical Computer Science Research
    [01:39:16] 3.3 LLMs in Theorem Generation and Proving
    [01:42:21] 3.4 Addressing Contamination and Concept Learning in AI Systems
    REFS:
    [00:04:58] The Chinese Room Argument, https://plato.stanford.edu/entries/chinese-room/
    [00:11:42] Software 2.0, https://medium.com/@karpathy/software-2-0-a64152b37c35
    [00:11:57] Solving Olympiad Geometry Without Human Demonstrations, https://www.nature.com/articles/s41586-023-06747-5
    [00:13:26] Lean, https://lean-lang.org/
    [00:15:43] A General Reinforcement Learning Algorithm That Masters Chess, Shogi, and Go Through Self-Play, https://www.science.org/doi/10.1126/science.aar6404
    [00:19:24] DreamCoder (Ellis et al., PLDI 2021), https://arxiv.org/abs/2006.08381
    [00:24:37] The Lambda Calculus, https://plato.stanford.edu/entries/lambda-calculus/
    [00:26:43] Neural Sketch Learning for Conditional Program Generation, https://arxiv.org/pdf/1703.05698
    [00:28:08] Learning Differentiable Programs With Admissible Neural Heuristics, https://arxiv.org/abs/2007.12101
    [00:31:03] Symbolic Regression With a Learned Concept Library (Grayeli et al., NeurIPS 2024), https://arxiv.org/abs/2409.09359
    [00:41:30] Formal Verification of Parallel Programs, https://dl.acm.org/doi/10.1145/360248.360251
    [01:00:37] Training Compute-Optimal Large Language Models, https://arxiv.org/abs/2203.15556
    [01:18:19] Chain-of-Thought Prompting Elicits Reasoning in Large Language Models, https://arxiv.org/abs/2201.11903
    [01:18:42] Draft, Sketch, and Prove: Guiding Formal Theorem Provers With Informal Proofs, https://arxiv.org/abs/2210.12283
    [01:19:49] Learning Formal Mathematics From Intrinsic Motivation, https://arxiv.org/pdf/2407.00695
    [01:20:19] An In-Context Learning Agent for Formal Theorem-Proving (Thakur et al., CoLM 2024), https://arxiv.org/pdf/2310.04353
    [01:23:58] Learning to Prove Theorems via Interacting With Proof Assistants, https://arxiv.org/abs/1905.09381
    [01:39:58] An In-Context Learning Agent for Formal Theorem-Proving (Thakur et al., CoLM 2024), https://arxiv.org/pdf/2310.04353
    [01:42:24] Programmatically Interpretable Reinforcement Learning (Verma et al., ICML 2018), https://arxiv.org/abs/1804.02477
    --------  
    1:44:42
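COPRA's core idea, per the paper referenced above, is an in-context agent loop: an LLM proposes the next proof-assistant tactic, the assistant executes it, and any error is fed back into the prompt. The sketch below is a hypothetical rendering of that loop; propose_tactic and execute_tactic are toy stubs standing in for an LLM call and a Lean/Coq interface, not COPRA's real API.

```python
from dataclasses import dataclass

@dataclass
class ProofState:
    goals: list[str]      # open goals reported by the proof assistant
    history: list[str]    # tactics applied so far

def propose_tactic(state: ProofState, feedback: str) -> str:
    """Toy stub for the LLM call: the real prompt packs the current goals,
    the tactic history, and the last error message."""
    return "intro h" if not state.history else "assumption"

def execute_tactic(state: ProofState, tactic: str) -> tuple[ProofState, str]:
    """Toy stub for one proof-assistant step (e.g. Lean or Coq).
    Returns the new state and an error string ('' on success)."""
    goals = [] if tactic == "assumption" else state.goals
    return ProofState(goals, state.history + [tactic]), ""

def prove(goal: str, max_steps: int = 16) -> list[str] | None:
    state, feedback = ProofState([goal], []), ""
    for _ in range(max_steps):
        tactic = propose_tactic(state, feedback)
        state, feedback = execute_tactic(state, tactic)
        if not state.goals:       # no open goals: proof complete
            return state.history
    return None                   # budget exhausted; caller may backtrack

print(prove("p -> p"))            # ['intro h', 'assumption']
```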
  • Nora Belrose - AI Development, Safety, and Meaning
    Nora Belrose, Head of Interpretability Research at EleutherAI, discusses critical challenges in AI safety and development. The conversation begins with her technical work on concept erasure in neural networks through LEACE (LEAst-squares Concept Erasure), while highlighting how neural networks' progression from simple to complex learning patterns could have important implications for AI safety.
    Many fear that advanced AI will pose an existential threat - pursuing its own dangerous goals once it's powerful enough. But Belrose challenges this popular doomsday scenario with a fascinating breakdown of why it doesn't add up. She also provides a detailed critique of current AI alignment approaches, particularly examining "counting arguments" and their limitations when applied to AI safety. She argues that the Principle of Indifference may be insufficient for addressing existential risks from advanced AI systems. The discussion explores how emergent properties in complex AI systems could lead to unpredictable and potentially dangerous behaviors that simple reductionist approaches fail to capture.
    The conversation concludes by exploring broader philosophical territory, where Belrose discusses her growing interest in Buddhism's potential relevance to a post-automation future. She connects concepts of moral anti-realism with Buddhist ideas about emptiness and non-attachment, suggesting these frameworks might help humans find meaning in a world where AI handles most practical tasks. Rather than viewing this automated future with alarm, she proposes that Zen Buddhism's emphasis on spontaneity and presence might complement a society freed from traditional labor.
    SPONSOR MESSAGES:
    CentML offers competitive pricing for GenAI model deployment, with flexible options to suit a wide range of models, from small to large-scale deployments. https://centml.ai/pricing/
    Tufa AI Labs is a brand new research lab in Zurich started by Benjamin Crouzier focussed on ARC and AGI. They just acquired MindsAI - the current winners of the ARC challenge. Are you interested in working on ARC, or getting involved in their events? Go to https://tufalabs.ai/
    Nora Belrose:
    https://norabelrose.com/
    https://scholar.google.com/citations?user=p_oBc64AAAAJ&hl=en
    https://x.com/norabelrose
    SHOWNOTES: https://www.dropbox.com/scl/fi/38fhsv2zh8gnubtjaoq4a/NORA_FINAL.pdf?rlkey=0e5r8rd261821g1em4dgv0k70&st=t5c9ckfb&dl=0
    TOC:
    1. Neural Network Foundations
    [00:00:00] 1.1 Philosophical Foundations and Neural Network Simplicity Bias
    [00:02:20] 1.2 LEACE and Concept Erasure Fundamentals
    [00:13:16] 1.3 LISA Technical Implementation and Applications
    [00:18:50] 1.4 Practical Implementation Challenges and Data Requirements
    [00:22:13] 1.5 Performance Impact and Limitations of Concept Erasure
    2. Machine Learning Theory
    [00:32:23] 2.1 Neural Network Learning Progression and Simplicity Bias
    [00:37:10] 2.2 Optimal Transport Theory and Image Statistics Manipulation
    [00:43:05] 2.3 Grokking Phenomena and Training Dynamics
    [00:44:50] 2.4 Texture vs Shape Bias in Computer Vision Models
    [00:45:15] 2.5 CNN Architecture and Shape Recognition Limitations
    3. AI Systems and Value Learning
    [00:47:10] 3.1 Meaning, Value, and Consciousness in AI Systems
    [00:53:06] 3.2 Global Connectivity vs Local Culture Preservation
    [00:58:18] 3.3 AI Capabilities and Future Development Trajectory
    4. Consciousness Theory
    [01:03:03] 4.1 4E Cognition and Extended Mind Theory
    [01:09:40] 4.2 Thompson's Views on Consciousness and Simulation
    [01:12:46] 4.3 Phenomenology and Consciousness Theory
    [01:15:43] 4.4 Critique of Illusionism and Embodied Experience
    [01:23:16] 4.5 AI Alignment and Counting Arguments Debate
    (TOC truncated here; a fuller version is embedded in the MP3 file.)
    --------  
    2:29:50
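For a flavour of what concept erasure does, here is a deliberately simplified NumPy sketch that removes a single linear direction (the difference of class means) from a set of embeddings. This is far cruder than LEACE, whose least-squares-optimal affine eraser guards against all linear probes (see the LEACE paper for the real construction); the data and dimensions are synthetic.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 8
X = rng.normal(size=(200, d))        # embeddings
z = rng.integers(0, 2, size=200)     # binary concept label
X[z == 1] += 1.5                     # plant a linear concept signal

# Direction separating the concept classes: difference of class means.
v = X[z == 1].mean(axis=0) - X[z == 0].mean(axis=0)
v /= np.linalg.norm(v)

# Project the concept direction out of every embedding.
X_erased = X - np.outer(X @ v, v)

# The class means now coincide along v: a mean-difference probe fails.
print((X @ v)[z == 1].mean() - (X @ v)[z == 0].mean())                # large gap
print((X_erased @ v)[z == 1].mean() - (X_erased @ v)[z == 0].mean())  # ~0
```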
  • Why Your GPUs are underutilised for AI - CentML CEO Explains
    Prof. Gennady Pekhimenko (CEO of CentML, UofT) joins us in this *sponsored episode* to dive deep into AI system optimization and enterprise implementation. From NVIDIA's technical leadership model to the rise of open-source AI, Pekhimenko shares insights on bridging the gap between academic research and industrial applications. Learn about "dark silicon", GPU utilization challenges in ML workloads, and how modern enterprises can optimize their AI infrastructure. The conversation explores why some companies achieve only 10% GPU efficiency, and practical solutions for improving AI system performance. A must-watch for anyone interested in the technical foundations of enterprise AI and hardware optimization.
    CentML offers competitive pricing for GenAI model deployment, with flexible options to suit a wide range of models, from small to large-scale deployments. Cheaper, faster, no commitments, pay as you go, scale massively, simple to set up. Check it out! https://centml.ai/pricing/
    SPONSOR MESSAGES: MLST is also sponsored by Tufa AI Labs - https://tufalabs.ai/ They are hiring cracked ML engineers/researchers to work on ARC and build AGI!
    SHOWNOTES (diarised transcript, TOC, references, summary, best quotes etc): https://www.dropbox.com/scl/fi/w9kbpso7fawtm286kkp6j/Gennady.pdf?rlkey=aqjqmncx3kjnatk2il1gbgknk&st=2a9mccj8&dl=0
    TOC:
    1. AI Strategy and Leadership
    [00:00:00] 1.1 Technical Leadership and Corporate Structure
    [00:09:55] 1.2 Open Source vs Proprietary AI Models
    [00:16:04] 1.3 Hardware and System Architecture Challenges
    [00:23:37] 1.4 Enterprise AI Implementation and Optimization
    [00:35:30] 1.5 AI Reasoning Capabilities and Limitations
    2. AI System Development
    [00:38:45] 2.1 Computational and Cognitive Limitations of AI Systems
    [00:42:40] 2.2 Human-LLM Communication Adaptation and Patterns
    [00:46:18] 2.3 AI-Assisted Software Development Challenges
    [00:47:55] 2.4 Future of Software Engineering Careers in AI Era
    [00:49:49] 2.5 Enterprise AI Adoption Challenges and Implementation
    3. ML Infrastructure Optimization
    [00:54:41] 3.1 MLOps Evolution and Platform Centralization
    [00:55:43] 3.2 Hardware Optimization and Performance Constraints
    [01:05:24] 3.3 ML Compiler Optimization and Python Performance
    [01:15:57] 3.4 Enterprise ML Deployment and Cloud Provider Partnerships
    4. Distributed AI Architecture
    [01:27:05] 4.1 Multi-Cloud ML Infrastructure and Optimization
    [01:29:45] 4.2 AI Agent Systems and Production Readiness
    [01:32:00] 4.3 RAG Implementation and Fine-Tuning Considerations
    [01:33:45] 4.4 Distributed AI Systems Architecture and Ray Framework
    5. AI Industry Standards and Research
    [01:37:55] 5.1 Origins and Evolution of MLPerf Benchmarking
    [01:43:15] 5.2 MLPerf Methodology and Industry Impact
    [01:50:17] 5.3 Academic Research vs Industry Implementation in AI
    [01:58:59] 5.4 AI Research History and Safety Concerns
    --------  
    2:08:40
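The "10% GPU efficiency" figure can be sanity-checked with a back-of-the-envelope model-FLOPs-utilisation (MFU) calculation, sketched below using the common ~6N FLOPs-per-token approximation for transformer training. The model size, throughput, and peak-FLOPs numbers are invented for illustration.

```python
# Back-of-the-envelope MFU: what fraction of a GPU's peak FLOP/s does
# training actually use? All numbers below are illustrative.

params = 7e9                 # 7B-parameter model
tokens_per_sec = 750         # measured per-GPU training throughput (assumed)
peak_flops = 312e12          # e.g. A100 BF16 dense peak, FLOP/s

flops_per_token = 6 * params            # ~6N FLOPs/token for fwd+bwd pass
achieved = flops_per_token * tokens_per_sec
mfu = achieved / peak_flops
print(f"achieved: {achieved:.2e} FLOP/s, MFU: {mfu:.1%}")   # ~10% here
```

Kernel launch overheads, memory-bandwidth stalls, and idle time between steps are typical reasons the achieved number sits so far below peak.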

About Machine Learning Street Talk (MLST)

Welcome! We engage in fascinating discussions with pre-eminent figures in the AI field. Our flagship show covers current affairs in AI, cognitive science, neuroscience and philosophy of mind with in-depth analysis. Our approach is unrivalled in terms of scope and rigour - we believe in intellectual diversity in AI, and we touch on all of the main ideas in the field with the hype surgically removed. MLST is run by Tim Scarfe, Ph.D (https://www.linkedin.com/in/ecsquizor/) and features regular appearances from Keith Duggar, Ph.D (MIT) (https://www.linkedin.com/in/dr-keith-duggar/).