The Inside View
A podcast by Michaël Trazzi
54 Episodes
Curtis Huebner on Doom, AI Timelines and Alignment at EleutherAI
Published: 16.07.2023
Eric Michaud on scaling, grokking and quantum interpretability
Published: 12.07.2023
Jesse Hoogland on Developmental Interpretability and Singular Learning Theory
Published: 6.07.2023
Clarifying and predicting AGI by Richard Ngo
Published: 9.05.2023
Alan Chan And Max Kauffman on Model Evaluations, Coordination and AI Safety
Published: 6.05.2023
Breandan Considine on Neuro Symbolic AI, Coding AIs and AI Timelines
Published: 4.05.2023
Christoph Schuhmann on Open Source AI, Misuse and Existential risk
Published: 1.05.2023
Simeon Campos on Short Timelines, AI Governance and AI Alignment Field Building
Published: 29.04.2023
Collin Burns On Discovering Latent Knowledge In Language Models Without Supervision
Published: 17.01.2023
Victoria Krakovna–AGI Ruin, Sharp Left Turn, Paradigms of AI Alignment
Published: 12.01.2023
David Krueger–Coordination, Alignment, Academia
Published: 7.01.2023
Ethan Caballero–Broken Neural Scaling Laws
Published: 3.11.2022
Irina Rish–AGI, Scaling and Alignment
Published: 18.10.2022
Shahar Avin–Intelligence Rising, AI Governance
Published: 23.09.2022
Katja Grace on Slowing Down AI, AI Expert Surveys And Estimating AI Risk
Published: 16.09.2022
Markus Anderljung–AI Policy
Published: 9.09.2022
Alex Lawsen—Forecasting AI Progress
Published: 6.09.2022
Robert Long–Artificial Sentience
Published: 28.08.2022
Ethan Perez–Inverse Scaling, Language Feedback, Red Teaming
Published: 24.08.2022
Robert Miles–Youtube, AI Progress and Doom
Published: 19.08.2022
The goal of this podcast is to create a place where people discuss their inside views about existential risk from AI.
