EA - Concerns about AI safety career change by mmKALLL
The Nonlinear Library: EA Forum - A podcast by The Nonlinear Fund
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Concerns about AI safety career change, published by mmKALLL on January 13, 2023 on The Effective Altruism Forum.

Summary: I'm a software engineer interested in working on AI safety, but confused about its career prospects. I've outlined all my concerns below. In particular, I had trouble finding accounts of engineers working in the field, and the differences between organizations/companies working on AI safety are very unclear from the outside. It's also not clear whether frontend skills are seen as useful, or whether applicants need to reside in the US.

Full text: I'm an experienced full-stack software engineer and software/strategy consultant based in Japan. I've been loosely following EA since 2010 and have become increasingly concerned about AI x-risk since 2016. This has led me to regularly consider possible careers in AI safety, especially now that the demand for software engineers in the field has increased dramatically.

However, having spent ~15 hours reading about the current state of the field, the organizations, and the role of engineers, I find myself with more questions than I started with. In the hope of finding more clarity, and to help share what engineers considering the career shift might be wondering, I decided to outline my main points of concern below:

The only accounts of engineers working in AI safety I could find were two articles and a problem profile on 80,000 Hours. Not even the AI Alignment Forum seemed to have any posts written by engineers sharing their experience. Despite this, most orgs have open positions for ML engineers, DevOps engineers, or generalist software developers. What are all of them doing?

Many job descriptions listed very similar skills for engineers, even when the orgs seemed to take very different approaches to tackling AI safety problems. Is the set of required software skills really that uniform across organizations?

Do software engineers in the field feel that their day-to-day work is meaningful? Are they regularly learning interesting and useful things? How do they see their career prospects?

I'm also curious whether projects are done with a diverse set of technologies. Who is typically responsible for data transformations and cleanup? How much ML theory should an engineer coming into the field learn beforehand? (I'm excited to learn about ML, but got very mixed signals about the expectations.)

Some orgs describe their agenda and goals. In many cases, these seemed very similar to me, as all of them are pragmatic and many even had shared or adjacent areas of research. Given the similarities, why are there so many different organizations? How is an outsider supposed to know what makes each of them unique?

As an example, MIRI states that they want to "ensure that the creation of smarter-than-human machine intelligence has a positive impact", Anthropic states they have "long-term goals of steerable, trustworthy AI", Redwood Research states they want to "align -- future systems with human interests", and the Center for AI Safety states they want to "reduce catastrophic and existential risks from AI". What makes these different from each other? They all sound like they'd lead to similar conclusions about what to work on.

I was surprised to find that some orgs didn't really describe their work or what differentiates them.
How are they supposed to find the best engineers if interested ones can't tell what areas they are working on? I also found that it's sometimes very difficult to evaluate whether an org is active and/or trustworthy.

Related to this, I was baffled to find that MIRI hasn't updated their agenda since 2015, and their latest publication is dated 2016. However, their blog seems to have ~quarterly updates? Are they still relevant?

Despite finding many orgs by reading articles and publications, I couldn't find a good overall list ...
