EA - Levelling Up in AI Safety Research Engineering by Gabriel Mukobi
The Nonlinear Library: EA Forum - Podcast by The Nonlinear Fund
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Levelling Up in AI Safety Research Engineering, published by Gabriel Mukobi on September 2, 2022 on The Effective Altruism Forum.

Summary: A level-based guide for independently up-skilling in AI Safety Research Engineering that aims to give concrete objectives, goals, and resources to help anyone go from zero to hero. Cross-posted to LessWrong. View a pretty Google Docs version here.

Introduction

I think great career guides are really useful for guiding and structuring the learning journey of people new to a technical field like AI Safety. I also like role-playing games. Here’s my attempt to use levelling frameworks to break up one possible path from zero to hero in Research Engineering for AI Safety (e.g. jobs with the “Research Engineer” title) into objectives, concrete goals, and resources. I hope this kind of framework makes it easier to see where you are on this journey, how far you have to go, and some options for getting there. I’m mostly making this to sort out my own thoughts about my career development and how I’ll support other students through Stanford AI Alignment, but hopefully it’s also useful to others!

Note that I assume some interest in AI Safety Research Engineering—this guide is about how to up-skill in Research Engineering, not why (though working through it should be a great way to test your fit). Also note that there isn’t much abstract advice in this guide (see the end for links to guides with advice); the goal is more to lay out concrete steps you can take to improve.

For each level, I describe the general capabilities of someone at the end of that level, some object-level goals to measure that capability, and some resources to choose from that would help get there. The categories of resources within a level are listed in the order you should progress through them, and resources within a category are roughly ordered by quality. There’s some redundancy, so I would recommend picking and choosing between the resources rather than doing all of them. Also, if you are a student and your university has a good class on one of the topics below, consider taking that instead of one of the online courses I listed.

As a very rough estimate, I think each level should take at least 100-200 hours of focused work, for a total of 700-1400 hours. At 10 hours/week (quarter-time), that comes to around 16-32 months of study, but it can definitely be shorter (e.g. if you already have some experience) or longer (if you dive more deeply into some topics)! I think each level is about evenly split between time spent reading/watching and time spent building/testing, with more reading earlier on and more building later.

Confidence: mid-to-high. I am not yet an AI Safety Research Engineer (but I plan to be)—this is mostly a distillation of what I’ve read from other career guides (linked at the end) and talked about with people working on AI Safety. I definitely haven’t done all these things, just seen them recommended. I don’t expect this to be the “perfect” way to prepare for a career in AI Safety Research Engineering, but I do think it’s a very solid one.

Level 1: AI Safety Fundamentals

Objective

You are familiar with the basic arguments for existential risks due to advanced AI, models for forecasting AI advancements, and some of the past and current research directions within AI alignment/safety.
Note: You should be coming back to these readings and keeping up to date with the latest work in AI Safety throughout your learning journey. It’s okay if you don’t understand everything on your first try—Level 1 kind of happens all the time.

Goals

Complete an AI Safety introductory reading group fellowship.
Write a reflection distilling, recontextualizing, or expanding upon some AI Safety topic and share it with someone for feedback.
Figure out how convinced you are of the arg...
