Learning From Human Preferences
AI Safety Fundamentals: Alignment - A podcast by BlueDot Impact
One step towards building safe AI systems is to remove the need for humans to write goal functions, since using a simple proxy for a complex goal, or getting the complex goal a bit wrong, can lead to undesirable and even dangerous behavior. In collaboration with DeepMind's safety team, we've developed an algorithm which can infer what humans want by being told which of two proposed behaviors is better.

Original article: https://openai.com/research/learning-from-human-preferences
Authors: Dario Am...
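The approach described here is a form of reward modeling from pairwise comparisons: a human picks the better of two trajectory segments, and a model is trained to assign higher total reward to the preferred one. Below is a minimal sketch of that idea using a Bradley-Terry style loss; the model architecture, names (`RewardModel`, `preference_loss`), and tensor shapes are illustrative assumptions, not details from the original post.

```python
import torch
import torch.nn as nn

class RewardModel(nn.Module):
    """Maps each step of a trajectory segment to a scalar reward estimate."""
    def __init__(self, obs_dim: int, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim, hidden), nn.ReLU(), nn.Linear(hidden, 1)
        )

    def forward(self, segment: torch.Tensor) -> torch.Tensor:
        # Sum per-step rewards over the segment to score the whole behavior.
        # segment: (batch, steps, obs_dim) -> returns (batch,) total rewards.
        return self.net(segment).sum(dim=1).squeeze(-1)

def preference_loss(model, seg_a, seg_b, human_prefers_a):
    """Bradley-Terry loss: push the preferred segment's total reward higher."""
    r_a, r_b = model(seg_a), model(seg_b)
    # P(a preferred over b) = exp(r_a) / (exp(r_a) + exp(r_b)) = sigmoid(r_a - r_b)
    logits = r_a - r_b
    return nn.functional.binary_cross_entropy_with_logits(
        logits, human_prefers_a.float()
    )

# Toy usage: a batch of 8 comparisons, segments of 20 steps, 4-dim observations.
model = RewardModel(obs_dim=4)
seg_a = torch.randn(8, 20, 4)
seg_b = torch.randn(8, 20, 4)
labels = torch.randint(0, 2, (8,))  # 1 if the human preferred segment a
loss = preference_loss(model, seg_a, seg_b, labels)
loss.backward()
```

In the full method, the learned reward model then stands in for a hand-written goal function: an RL agent is trained against its predicted rewards while the human keeps supplying new comparisons, so neither side ever has to specify the complex goal explicitly.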