EA - New open letter on AI — "Include Consciousness Research" by Jamie Harris

The Nonlinear Library: EA Forum - A podcast by The Nonlinear Fund



Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: New open letter on AI — "Include Consciousness Research", published by Jamie Harris on April 28, 2023 on The Effective Altruism Forum.

Quick context:

The potential development of artificial sentience seems very important; it presents large, neglected, and potentially tractable risks.

80,000 Hours lists artificial sentience and suffering risks as "similarly pressing but less developed areas" than their top 8 "list of the most pressing world problems".

There's some relevant work on this topic by Sentience Institute, Future of Humanity Institute, Center for Reducing Suffering, and others, but room for much more. Yesterday someone asked on the Forum "How come there isn't that much focus in EA on research into whether / when AI's are likely to be sentient?"

A month ago, people got excited about the FLI open letter: "Pause giant AI experiments".

Now, researchers from the Association for Mathematical Consciousness Science have written an open letter emphasising the urgent need for accelerated research in consciousness science in light of rapid advancements in artificial intelligence. (I'm not affiliated with them in any way.)

It's quite short, so I'll copy the full text here:

This open letter is a wakeup call for the tech sector, the scientific community and society in general to take seriously the need to accelerate research in the field of consciousness science.

As highlighted by the recent "Pause Giant AI Experiments" letter [1], we are living through an exciting and uncertain time in the development of artificial intelligence (AI) and other brain-related technologies. The increasing computing power and capabilities of the new AI systems are accelerating at a pace that far exceeds our progress in understanding their capabilities and their "alignment" with human values.

AI systems, including Large Language Models such as ChatGPT and Bard, are artificial neural networks inspired by neuronal architecture in the cortex of animal brains. In the near future, it is inevitable that such systems will be constructed to reproduce aspects of higher-level brain architecture and functioning. Indeed, it is no longer in the realm of science fiction to imagine AI systems having feelings and even human-level consciousness. Contemporary AI systems already display human traits recognised in Psychology, including evidence of Theory of Mind [2].

Furthermore, if achieving consciousness, AI systems would likely unveil a new array of capabilities that go far beyond what is expected even by those spearheading their development. AI systems have already been observed to exhibit unanticipated emergent properties [3]. These capabilities will change what AI can do, and what society can do to control, align and use such systems. In addition, consciousness would give AI a place in our moral landscape, which raises further ethical, legal, and political concerns.

As AI develops, it is vital for the wider public, societal institutions and governing bodies to know whether and how AI systems can become conscious, to understand the implications thereof, and to effectively address the ethical, safety, and societal ramifications associated with artificial general intelligence (AGI).

Science is starting to unlock the mystery of consciousness. Steady advances in recent years have brought us closer to defining and understanding consciousness and have established an expert international community of researchers in this field. There are over 30 models and theories of consciousness (MoCs and ToCs) in the peer-reviewed scientific literature, which already include some important pieces of the solution to the challenge of consciousness.

To understand whether AI systems are, or can become, conscious, tools are needed that can be applied to artificial systems. In particular, science needs to further develop formal and mat...
