AI Safety Fundamentals: Alignment
A podcast by BlueDot Impact
83 Episodes
Constitutional AI Harmlessness from AI Feedback
Published: 19.07.2024
Problems and Fundamental Limitations of Reinforcement Learning from Human Feedback
Published: 19.07.2024
Illustrating Reinforcement Learning from Human Feedback (RLHF)
Published: 19.07.2024
Chinchilla’s Wild Implications
Published: 17.06.2024
Deep Double Descent
Published: 17.06.2024
Intro to Brain-Like-AGI Safety
Published: 17.06.2024
Eliciting Latent Knowledge
Published: 17.06.2024
Toy Models of Superposition
Published: 17.06.2024
Least-To-Most Prompting Enables Complex Reasoning in Large Language Models
Published: 17.06.2024
Discovering Latent Knowledge in Language Models Without Supervision
Published: 17.06.2024
ABS: Scanning Neural Networks for Back-Doors by Artificial Brain Stimulation
Published: 17.06.2024
Two-Turn Debate Doesn’t Help Humans Answer Hard Reading Comprehension Questions
Published: 17.06.2024
Imitative Generalisation (AKA ‘Learning the Prior’)
Published: 17.06.2024
An Investigation of Model-Free Planning
Published: 17.06.2024
Low-Stakes Alignment
Published: 17.06.2024
Gradient Hacking: Definitions and Examples
Published: 17.06.2024
Empirical Findings Generalize Surprisingly Far
Published: 17.06.2024
Compute Trends Across Three Eras of Machine Learning
Published: 13.06.2024
Worst-Case Thinking in AI Alignment
Published: 29.05.2024
Public by Default: How We Manage Information Visibility at Get on Board
Published: 12.05.2024
Listen to resources from the AI Safety Fundamentals: Alignment course!
https://aisafetyfundamentals.com/alignment