EA - Update to Samotsvety AGI timelines by Misha Yagudin

The Nonlinear Library: EA Forum - Podcast by The Nonlinear Fund



Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Update to Samotsvety AGI timelines, published by Misha Yagudin on January 24, 2023, on The Effective Altruism Forum.

Previously: Samotsvety's AI risk forecasts.

Our colleagues at Epoch recently asked us to update our AI timelines estimate for their upcoming literature review on TAI timelines. We met on 2023-01-21 to discuss our predictions about when advanced AI systems will arrive.

Forecasts

Definition of AGI

We used the following definition to determine the “moment at which AGI is considered to have arrived,” building on this Metaculus question:

The moment that a system capable of passing the adversarial Turing test against a top-5% human who has access to experts on various topics is developed.

More concretely:

- A Turing test is said to be “adversarial” if the human judges make a good-faith attempt to unmask the AI as an impostor, and the human confederates make a good-faith attempt to demonstrate that they are humans.
- An AI is said to “pass” a Turing test if at least half of the judges rated the AI as more human than at least a third of the human confederates.

This definition of AGI is not unproblematic; e.g., it’s possible that AGI could be unmasked long after its economic value and capabilities are very high. We chose to use an imperfect definition and indicated to forecasters that they should interpret the definition not “as is” but “in spirit” to avoid annoying edge cases.

Individual forecasts

Each row is one forecaster. The first three columns give that forecaster's P(AGI by 2030/2050/2100); the last three give the year by which that forecaster puts the probability of AGI at 10%, 50%, and 90%.

      P(by 2030)   P(by 2050)   P(by 2100)   10% year   50% year   90% year
F1    0.39         0.75         0.78         2028       2034       N/A
F3    0.28         0.70         0.87         2027       2039       2120
F4    0.26         0.58         0.93         2025       2039       2088
F5    0.35         0.73         0.91         2025       2037       2075
F6    0.40         0.65         0.80         2025       2035       N/A
F7    0.33         0.65         0.80         2026       2037       2250
F8    0.20         0.50         0.70         2026       2050       2200
F9    0.23         0.44         0.67         2026       2060       2250

Aggregate

          P(by 2030)     P(by 2050)     P(by 2100)     10% year           50% year      90% year
mean      0.31           0.63           0.81           2026               2041          2164
stdev     0.07           0.11           0.09           1.07               8.99          79.65
50% CI    [0.26, 0.35]   [0.55, 0.70]   [0.74, 0.87]   [2025.3, 2026.7]   [2035, 2047]  [2110, 2218]
80% CI    [0.21, 0.40]   [0.48, 0.77]   [0.69, 0.93]   [2024.6, 2027.4]   [2030, 2053]  [2062, 2266]
95% CI    [0.16, 0.45]   [0.41, 0.84]   [0.62, 0.99]   [2023.9, 2028.1]   [2024, 2059]  [2008, 2320]
geomean   0.30           0.62           0.80           2026.00            2041          2163
geo odds  0.30           0.63           0.82

(A short code sketch at the end of this post shows how these aggregate rows can be reproduced from the individual forecasts.)

Epistemic status

For Samotsvety's track record, see: /

Note that this track record comes mostly from questions about geopolitics and technology that resolve within 12 months.

Most forecasters have at least read Joe Carlsmith’s report on AI x-risk, “Is Power-Seeking AI an Existential Risk?”; those who are short on time may have just skimmed the report and/or watched the presentation. We discussed the report section by section over the course of a few weekly meetings.

Note also that there might be selection effects in which forecasters chose to participate in this exercise; for example, Samotsvety forecasters who view AI as an important/interesting/etc. topic could have self-selected into the discussion. (Though the set of forecasters who participated this time is very similar to the set who participated last time.)

Update from our previous estimate

The last time we publicly elicited a similar probability from our forecasters, we were at 32% that AGI would be developed in the next 20 years (so by late 2042), and at 73% that it would be developed by 2100. These are a bit lower than our current forecasts.
The changes since then can be attributed to:

- We have gotten more time to think about the topic, and work through considerations and counter-considerations, e.g., the extent to which we should fear selection effects in the types of arguments to which we are exposed.
- Some of our forecasters still give substantial weight to more skeptical probabilities coming from semi-informative priors, from Lap...
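As a usage illustration, here is a minimal Python sketch of how the mean, stdev, geomean, and geo odds rows of the Aggregate table above can be reproduced from the individual P(AGI by 2030) forecasts. This is our own sketch for the reader, not Samotsvety's actual tooling; the variable names are ours, and we assume the "geo odds" row is the standard geometric mean of odds (pool in odds space, then convert back to a probability).

# Minimal sketch (illustrative, not Samotsvety's actual pipeline): reproduce
# the P(AGI by 2030) column of the Aggregate table from the individual rows.
# Requires Python 3.8+ for statistics.geometric_mean.
import statistics

# Individual P(AGI by 2030) forecasts for F1, F3, F4, F5, F6, F7, F8, F9.
p_2030 = [0.39, 0.28, 0.26, 0.35, 0.40, 0.33, 0.20, 0.23]

mean = statistics.mean(p_2030)               # arithmetic mean of probabilities
stdev = statistics.stdev(p_2030)             # sample standard deviation
geomean = statistics.geometric_mean(p_2030)  # geometric mean of probabilities

# Geometric mean of odds: convert each probability to odds, take the geometric
# mean in odds space, then convert the pooled odds back to a probability.
odds = [p / (1 - p) for p in p_2030]
pooled_odds = statistics.geometric_mean(odds)
geo_odds = pooled_odds / (1 + pooled_odds)

print(f"mean:     {mean:.3f}")      # 0.305, the table's 0.31
print(f"stdev:    {stdev:.3f}")     # 0.074, the table's 0.07
print(f"geomean:  {geomean:.3f}")   # 0.297, the table's 0.30
print(f"geo odds: {geo_odds:.3f}")  # 0.300, the table's 0.30

Unlike the geometric mean of raw probabilities, pooling in odds space treats p and 1 - p symmetrically, which is one reason the geometric mean of odds is often preferred for aggregating forecasts.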
