AXRP - the AI X-risk Research Podcast
A podcast by Daniel Filan
59 Episodes
35 - Peter Hase on LLM Beliefs and Easy-to-Hard Generalization
Published: 24.08.2024
34 - AI Evaluations with Beth Barnes
Published: 28.07.2024
33 - RLHF Problems with Scott Emmons
Published: 12.06.2024
32 - Understanding Agency with Jan Kulveit
Published: 30.05.2024
31 - Singular Learning Theory with Daniel Murfet
Published: 7.05.2024
30 - AI Security with Jeffrey Ladish
Published: 30.04.2024
29 - Science of Deep Learning with Vikrant Varma
Published: 25.04.2024
28 - Suing Labs for AI Risk with Gabriel Weil
Published: 17.04.2024
27 - AI Control with Buck Shlegeris and Ryan Greenblatt
Published: 11.04.2024
26 - AI Governance with Elizabeth Seger
Published: 26.11.2023
25 - Cooperative AI with Caspar Oesterheld
Published: 3.10.2023
24 - Superalignment with Jan Leike
Published: 27.07.2023
23 - Mechanistic Anomaly Detection with Mark Xu
Published: 27.07.2023
Survey, store closing, Patreon
Published: 28.06.2023
22 - Shard Theory with Quintin Pope
Published: 15.06.2023
21 - Interpretability for Engineers with Stephen Casper
Published: 2.05.2023
20 - 'Reform' AI Alignment with Scott Aaronson
Published: 12.04.2023
Store, Patreon, Video
Published: 7.02.2023
19 - Mechanistic Interpretability with Neel Nanda
Published: 4.02.2023
New podcast - The Filan Cabinet
Published: 13.10.2022
AXRP (pronounced axe-urp) is the AI X-risk Research Podcast, where I, Daniel Filan, have conversations with researchers about their papers. We discuss each paper, and hopefully get a sense of why it was written and how it might reduce the risk of AI causing an existential catastrophe: that is, permanently and drastically curtailing humanity's future potential. You can visit the website and read transcripts at axrp.net.
