EA - Announcing AI Alignment Awards: $100k research contests about goal misgeneralization and corrigibility by Akash

The Nonlinear Library: EA Forum - Podcast by The Nonlinear Fund

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Announcing AI Alignment Awards: $100k research contests about goal misgeneralization & corrigibility, published by Akash on November 22, 2022 on The Effective Altruism Forum. Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.