37 Episodes

  1. Episode 17: Andrew Lampinen, DeepMind, on symbolic behavior, mental time travel, and insights from psychology

    Published: 28.02.2022
  2. Episode 16: Yilun Du, MIT, on energy-based models, implicit functions, and modularity

    Published: 21.12.2021
  3. Episode 15: Martín Arjovsky, INRIA, on benchmarks for robustness and geometric information theory

    Published: 15.10.2021
  4. Episode 14: Yash Sharma, MPI-IS, on generalizability, causality, and disentanglement

    Published: 24.09.2021
  5. Episode 13: Jonathan Frankle, MIT, on the lottery ticket hypothesis and the science of deep learning

    Published: 10.09.2021
  6. Episode 12: Jacob Steinhardt, UC Berkeley, on machine learning safety, alignment and measurement

    Published: 18.06.2021
  7. Episode 11: Vincent Sitzmann, MIT, on neural scene representations for computer vision and more general AI

    Published: 20.05.2021
  8. Episode 10: Dylan Hadfield-Menell, UC Berkeley/MIT, on the value alignment problem in AI

    Published: 12.05.2021
  9. Episode 09: Drew Linsley, Brown, on inductive biases for vision and generalization

    Published: 2.04.2021
  10. Episode 08: Giancarlo Kerg, Mila, on approaching deep learning from mathematical foundations

    Published: 27.03.2021
  11. Episode 07: Yujia Huang, Caltech, on neuro-inspired generative models

    Published: 18.03.2021
  12. Episode 06: Julian Chibane, MPI-INF, on 3D reconstruction using implicit functions

    Published: 5.03.2021
  13. Episode 05: Katja Schwarz, MPI-IS, on GANs, implicit functions, and 3D scene understanding

    Published: 24.02.2021
  14. Episode 04: Joel Lehman, OpenAI, on evolution, open-endedness, and reinforcement learning

    Published: 17.02.2021
  15. Episode 03: Cinjon Resnick, NYU, on activity and scene understanding

    Published: 1.02.2021
  16. Episode 02: Sarah Jane Hong, Latent Space, on neural rendering & research process

    Published: 7.01.2021
  17. Episode 01: Kelvin Guu, Google AI, on language models & overlooked research problems

    Published: 15.12.2020
Technical discussions with deep learning researchers who study how to build intelligence. Made for researchers, by researchers.