Why Deep Networks and Brains Learn Similar Features with Sophia Sanborn - #644

The TWIML AI Podcast (formerly This Week in Machine Learning & Artificial Intelligence) - A podcast by Sam Charrington


Today we’re joined by Sophia Sanborn, a postdoctoral scholar at the University of California, Santa Barbara. In our conversation with Sophia, we explore the universality of neural representations across biological brains and deep neural networks, and how shared principles of efficiency lead to consistent features emerging across networks and tasks. We also discuss her recent paper on Bispectral Neural Networks, which builds on the Fourier transform and its relation to group theory, and how the bispectrum can be used to achieve invariance in deep neural networks. Finally, we cover how geometric deep learning extends the concept of CNNs to other domains, the similarities in the fundamental structure of artificial and biological neural networks, and how applying similar constraints leads their solutions to converge. The complete show notes for this episode can be found at twimlai.com/go/644.
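As background on the bispectrum mentioned above, here is a minimal NumPy sketch (not code from the episode or the paper) of the classical bispectrum of a 1-D signal under cyclic translation, B(f1, f2) = X(f1) X(f2) X*(f1 + f2), where X is the discrete Fourier transform. It is invariant to cyclic shifts of the signal, which illustrates the kind of group invariance discussed in the episode.

```python
import numpy as np

def bispectrum(x):
    # Classical bispectrum for the cyclic translation group:
    # B(f1, f2) = X(f1) * X(f2) * conj(X(f1 + f2)), with X = FFT(x).
    X = np.fft.fft(x)
    n = len(X)
    idx = np.arange(n)
    # Frequency indices wrap around modulo n (cyclic group Z_n).
    return X[:, None] * X[None, :] * np.conj(X[(idx[:, None] + idx[None, :]) % n])

rng = np.random.default_rng(0)
x = rng.standard_normal(64)
shifted = np.roll(x, 5)  # cyclic translation of the signal

# The shift-dependent phase factors cancel, so the bispectrum is
# invariant to cyclic translation (up to numerical error).
assert np.allclose(bispectrum(x), bispectrum(shifted))
```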
