Imagine A World: What if some people could live forever?

Future of Life Institute Podcast - A podcast by the Future of Life Institute

If you could extend your life, would you? How might life extension technologies create new social and political divides? How can the world unite to solve the great problems of our time, like AI risk? What if AI creators could agree on an inspection process to expose AI dangers before they're unleashed? Imagine a World is a podcast exploring a range of plausible and positive futures with advanced AI, produced by the Future of Life Institute. We interview the creators of 8 diverse and thought-provoking imagined futures that we received as part of the worldbuilding contest FLI ran last year.

In the fifth episode of Imagine A World, we explore the fictional worldbuild titled 'To Light'. Our host Guillaume Riesen speaks to Mako Yass, the first-place winner of the FLI Worldbuilding Contest we ran last year. Mako lives in Auckland, New Zealand. He describes himself as a 'stray philosopher-designer' and has a background in computer programming and analytic philosophy.

Mako's world is particularly imaginative, with richly interwoven narrative threads and high-concept sci-fi inventions. By 2045, his world has been deeply transformed. There's an AI-designed miracle pill that greatly extends lifespan and eradicates most human diseases. Sachets of this life-saving medicine are distributed freely by dove-shaped drones. There's a kind of mind uploading that lets anyone become whatever they wish, live indefinitely and gain augmented intelligence. The distribution of wealth is almost perfectly even, with every human assigned a share of all resources. Some people move into space, building massive structures around the sun where they practice esoteric arts in pursuit of a more perfect peace.

While this peaceful, flourishing end state is deeply optimistic, Mako is also very conscious of the challenges facing humanity along the way. He sees a strong need for global collaboration and investment to avoid catastrophe as humanity develops more and more powerful technologies. He's particularly concerned with the risks presented by artificial intelligence systems as they surpass us. An AI system that is more capable than a human at all tasks - not just playing chess or driving a car - is what we'd call an artificial general intelligence, abbreviated 'AGI'. Mako proposes that we could build safe AIs through radical transparency. He imagines tests that could reveal the true intentions and expectations of AI systems before they are released into the world.

Please note: This episode explores the ideas created as part of FLI's worldbuilding contest, and our hope is that this series sparks discussion about the kinds of futures we want. The ideas present in these imagined worlds and in our podcast are not to be taken as FLI-endorsed positions.

Explore this worldbuild: https://worldbuild.ai/to-light

The podcast is produced by the Future of Life Institute (FLI), a non-profit dedicated to guiding transformative technologies for humanity's benefit and reducing existential risks. To achieve this, we engage in policy advocacy, grantmaking and educational outreach across three major areas: artificial intelligence, nuclear weapons, and biotechnology. If you are a storyteller, FLI can support you with scientific insights and help you understand the incredible narrative potential of these world-changing technologies. If you would like to learn more, or are interested in collaborating with the teams featured in our episodes, please email [email protected].
You can find more about our work at www.futureoflife.org, or subscribe to our newsletter to get updates on all our projects.

Media and concepts referenced in the episode:
https://en.wikipedia.org/wiki/Terra_Ignota
https://en.wikipedia.org/wiki/The_Transparent_Society
https://en.wikipedia.org/wiki/Instrumental_convergence#Paperclip_maximizer
https://en.wikipedia.org/wiki/The_Elephant_in_the_Brain
https://en.wikipedia.org/wiki/The_Matrix
https://aboutmako.makopool.com/
