“2. Why intuitive comparisons of large-scale impact are unjustified” by Anthony DiGiovanni

EA Forum Podcast (All audio) - Podcast by EA Forum Team

Audio note: this article contains 55 uses of LaTeX notation, so the narration may be difficult to follow. There's a link to the original text in the episode description.

We've seen so far that unawareness leaves us with gaps in the impartial altruistic justification for our decisions. It's fundamentally unclear how the possible outcomes of our actions trade off against each other. But perhaps this ambiguity doesn't matter much in practice, since the consequences that predominantly shape the impartial value of the future ("large-scale consequences", for short) seem at least somewhat foreseeable. Here are two ways we might think we can fill these gaps:

Implicit: "Even if we can't assign EVs to interventions, we have reason to trust that our pre-theoretic intuitions at least track an intervention's sign. These intuitions can distinguish whether the large-scale consequences are net-better than inaction."[1]

Explicit: "True, we don't directly conceive [...]"

---

Outline:
(03:01) Degrees of imprecision from unawareness
(07:32) When is unawareness not a big deal?
(10:22) Why we're especially unaware of large-scale consequences
(11:30) Extremely limited understanding of mechanisms
(15:32) Unawareness and superforecasting
(19:14) Pessimistic induction
(25:11) The better than chance argument, and other objections to imprecision
(32:21) Appendix A: The meta-epistemic wager?
(35:43) References

The original text contained 15 footnotes which were omitted from this narration.

---

First published: June 2nd, 2025

Source: https://forum.effectivealtruism.org/posts/qZS8cgvY5YrjQ3JiR/2-why-intuitive-comparisons-of-large-scale-impact-are

---

Narrated by TYPE III AUDIO.