EA - How many people are working (directly) on reducing existential risk from AI? by Benjamin Hilton

The Nonlinear Library: EA Forum - Podcast by The Nonlinear Fund

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: How many people are working (directly) on reducing existential risk from AI?, published by Benjamin Hilton on January 17, 2023 on The Effective Altruism Forum.

Summary

I've updated my estimate of the number of FTE (full-time equivalent) working (directly) on reducing existential risks from AI from 300 FTE to 400 FTE.

Below I've pasted some slightly edited excerpts of the relevant sections of the 80,000 Hours profile on preventing an AI-related catastrophe.

New 80,000 Hours estimate of the number of people working on reducing AI risk

Neglectedness estimate

We estimate there are around 400 people around the world working directly on reducing the chances of an AI-related existential catastrophe (with a 90% confidence interval ranging between 200 and 1,000). Of these, about three quarters are working on technical AI safety research, with the rest split between strategy (and other governance) research and advocacy. We think there are around 800 people working in complementary roles, but we're highly uncertain about this estimate.

Footnote on methodology

It's difficult to estimate this number.

Ideally we want to estimate the number of FTE ("full-time equivalent") working on the problem of reducing existential risks from AI.

But there are lots of ambiguities around what counts as working on the issue. So I tried to use the following guidelines in my estimates:

I didn't include people who might think of themselves as being on a career path that is building towards a role preventing an AI-related catastrophe, but who are currently skilling up rather than working directly on the problem.

I included researchers, engineers, and other staff who seem to work directly on technical AI safety research or AI strategy and governance. But there's an uncertain boundary between these people and others who I chose not to include. For example, I didn't include machine learning engineers whose role is building AI systems that might be used for safety research but aren't primarily designed for that purpose.

I only included time spent on work that seems related to reducing the potentially existential risks from AI, like those discussed in this article. Lots of wider AI safety and AI ethics work that focuses on reducing other risks from AI seems relevant to reducing existential risks – this 'indirect' work makes this estimate difficult. I decided not to include indirect work on reducing the risks of an AI-related catastrophe (see our problem framework for more).

Relatedly, I didn't include people working on other problems that might indirectly affect the chances of an AI-related catastrophe, such as epistemics and improving institutional decision-making, reducing the chances of great power conflict, or building effective altruism.

With those decisions made, I estimated this in three different ways.

First, for each organisation in the AI Watch database, I estimated the number of FTE working directly on reducing existential risks from AI. I did this by looking at the number of staff listed at each organisation, both in total and in 2022, as well as the number of researchers listed at each organisation. Overall I estimated that there were 76 to 536 FTE working on technical AI safety (90% confidence), with a mean of 196 FTE. I estimated that there were 51 to 359 FTE working on AI governance and strategy (90% confidence), with a mean of 151 FTE (a rough combination of these two intervals is sketched below).
There's a lot of subjective judgement in these estimates because of the ambiguities above. The estimates could be too low if AI Watch is missing data on some organisations, or too high if the data counts people more than once or includes people who no longer work in the area.

Second, I adapted the methodology used in Gavin Leech's estimate of the number of people working on reducing existential risks from AI. I split the organisations in Leech's estimate into technical sa...
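To make the arithmetic behind the first estimate concrete, here is a minimal Python sketch of one way the two intervals above could be combined by Monte Carlo sampling. It is not the post's actual method: it assumes each stated 90% interval comes from a lognormal distribution and that the two estimates are independent, and the helper name lognormal_from_interval is illustrative.

```python
# Minimal sketch (my assumptions, not the post's method): treat each stated
# 90% interval as the 5th/95th percentiles of a lognormal distribution,
# sample both independently, and sum the draws to get a combined estimate.
import numpy as np

rng = np.random.default_rng(0)

def lognormal_from_interval(low, high, size):
    """Sample a lognormal whose 5th/95th percentiles match a stated 90% interval."""
    z = 1.645  # standard normal quantile for the 5th/95th percentiles
    mu = (np.log(low) + np.log(high)) / 2           # log-space midpoint
    sigma = (np.log(high) - np.log(low)) / (2 * z)  # log-space spread
    return rng.lognormal(mu, sigma, size)

n = 100_000
technical = lognormal_from_interval(76, 536, n)   # technical AI safety FTE (interval from the post)
governance = lognormal_from_interval(51, 359, n)  # AI governance/strategy FTE (interval from the post)

total = technical + governance
print("mean total FTE:", round(total.mean()))
print("90% interval:", np.percentile(total, [5, 95]).round())
```

Under these assumptions the mean of the summed draws lands in the neighbourhood of the post's roughly 400 FTE headline figure, though the post's own means and its overall 200 to 1,000 interval also reflect judgement calls and other inputs that this sketch does not reproduce.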
