AXRP - the AI X-risk Research Podcast
Podcast by Daniel Filan
59 Episodes
-  46 - Tom Davidson on AI-enabled Coups (Published: 7.08.2025)
-  45 - Samuel Albanie on DeepMind's AGI Safety Approach (Published: 6.07.2025)
-  44 - Peter Salib on AI Rights for Human Safety (Published: 28.06.2025)
-  43 - David Lindner on Myopic Optimization with Non-myopic Approval (Published: 15.06.2025)
-  42 - Owain Evans on LLM Psychology (Published: 6.06.2025)
-  41 - Lee Sharkey on Attribution-based Parameter Decomposition (Published: 3.06.2025)
-  40 - Jason Gross on Compact Proofs and Interpretability (Published: 28.03.2025)
-  38.8 - David Duvenaud on Sabotage Evaluations and the Post-AGI Future (Published: 1.03.2025)
-  38.7 - Anthony Aguirre on the Future of Life Institute (Published: 9.02.2025)
-  38.6 - Joel Lehman on Positive Visions of AI (Published: 24.01.2025)
-  38.5 - Adrià Garriga-Alonso on Detecting AI Scheming (Published: 20.01.2025)
-  38.4 - Shakeel Hashim on AI Journalism (Published: 5.01.2025)
-  38.3 - Erik Jenner on Learned Look-Ahead (Published: 12.12.2024)
-  39 - Evan Hubinger on Model Organisms of Misalignment (Published: 1.12.2024)
-  38.2 - Jesse Hoogland on Singular Learning Theory (Published: 27.11.2024)
-  38.1 - Alan Chan on Agent Infrastructure (Published: 16.11.2024)
-  38.0 - Zhijing Jin on LLMs, Causality, and Multi-Agent Systems (Published: 14.11.2024)
-  37 - Jaime Sevilla on AI Forecasting (Published: 4.10.2024)
-  36 - Adam Shai and Paul Riechers on Computational Mechanics (Published: 29.09.2024)
-  New Patreon tiers + MATS applications (Published: 28.09.2024)
AXRP (pronounced axe-urp) is the AI X-risk Research Podcast where I, Daniel Filan, have conversations with researchers about their papers. We discuss the paper, and hopefully get a sense of why it's been written and how it might reduce the risk of AI causing an existential catastrophe: that is, permanently and drastically curtailing humanity's future potential. You can visit the website and read transcripts at axrp.net.
