54 Episodes

  1. Curtis Huebner on Doom, AI Timelines and Alignment at EleutherAI

    Published: 16.07.2023
  2. Eric Michaud on scaling, grokking and quantum interpretability

    Published: 12.07.2023
  3. Jesse Hoogland on Developmental Interpretability and Singular Learning Theory

    Published: 6.07.2023
  4. Clarifying and predicting AGI by Richard Ngo

    Published: 9.05.2023
  5. Alan Chan And Max Kauffman on Model Evaluations, Coordination and AI Safety

    Published: 6.05.2023
  6. Breandan Considine on Neuro Symbolic AI, Coding AIs and AI Timelines

    Published: 4.05.2023
  7. Christoph Schuhmann on Open Source AI, Misuse and Existential risk

    Published: 1.05.2023
  8. Simeon Campos on Short Timelines, AI Governance and AI Alignment Field Building

    Published: 29.04.2023
  9. Collin Burns On Discovering Latent Knowledge In Language Models Without Supervision

    Published: 17.01.2023
  10. Victoria Krakovna–AGI Ruin, Sharp Left Turn, Paradigms of AI Alignment

    Published: 12.01.2023
  11. David Krueger–Coordination, Alignment, Academia

    Published: 7.01.2023
  12. Ethan Caballero–Broken Neural Scaling Laws

    Published: 3.11.2022
  13. Irina Rish–AGI, Scaling and Alignment

    Published: 18.10.2022
  14. Shahar Avin–Intelligence Rising, AI Governance

    Published: 23.09.2022
  15. Katja Grace on Slowing Down AI, AI Expert Surveys And Estimating AI Risk

    Published: 16.09.2022
  16. Markus Anderljung–AI Policy

    Published: 9.09.2022
  17. Alex Lawsen—Forecasting AI Progress

    Published: 6.09.2022
  18. Robert Long–Artificial Sentience

    Published: 28.08.2022
  19. Ethan Perez–Inverse Scaling, Language Feedback, Red Teaming

    Published: 24.08.2022
  20. Robert Miles–Youtube, AI Progress and Doom

    Published: 19.08.2022

Page 2 of 3

The goal of this podcast is to create a place where people discuss their inside views about existential risk from AI.
