EA - AI Safety Needs Great Product Builders by goodgravy
The Nonlinear Library: EA Forum - Podcast by The Nonlinear Fund
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: AI Safety Needs Great Product Builders, published by goodgravy on November 2, 2022 on The Effective Altruism Forum.

In his AI Safety Needs Great Engineers post, Andy Jones explains how software engineers can reduce the risks of unfriendly artificial intelligence. Even without deep ML knowledge, these developers can work effectively on the challenges involved in building and understanding large language models.

I would broaden the claim: AI safety doesn't need only great engineers – it needs great product builders in general.

This post will describe why, list some concrete projects for a few different roles, and show how they contribute to AI going better for everyone.

Audience

This post is aimed at anyone who has been involved with building software products: web developers, product managers, designers, founders, devops, generalist software engineers, and so on. I'll call these "product builders".

Non-technical roles (e.g. operations, HR, finance) do exist in many organisations focussed on AI safety, but this post isn't aimed at them.

But I thought I would need a PhD!

In the past, most technical AI safety work was done in academia or in research labs. This is changing because – among other things – we now have concrete ideas for how to construct AI in a safer manner.

However, it's not enough for us to merely have ideas of what to build.
We need teams of people to partner with these researchers and build real systems, in order to:

- Test whether they work in the real world.
- Demonstrate that they have the nice safety features we're looking for.
- Gather empirical data for future research.

This strand of AI safety work looks much more like product development, which is why you – as a product builder – can have a direct impact today.

Example projects, and why they're important

To show there are tangible ways that product builders can contribute to AI safety, I'll give some current examples of work we're doing at Ought.

For software engineers

In addition to working on our user-facing app, Elicit, we recently open-sourced our Interactive Composition Explorer (ICE). ICE is a tool to help us and others better understand Factored Cognition. It consists of a software framework and an interactive visualiser.

On the back end, we're looking for better ways to instrument the cognition "recipes" such that our framework stays out of the user's way as much as possible, while still giving a useful trace of the reasoning process. We're using some metaprogramming, and good CS fundamentals would be helpful, but there's no ML experience required. Plus, working on open-source projects is super fun!

If you are more of a front-end developer, you'll appreciate that representing a complex deductive process is a UX challenge as much as anything else. These execution graphs can be very large, cyclic, oddly and unpredictably shaped, and each node can contain masses of information. How can we present this in a useful UI which captures the macro structure and still allows the user to dive into the minutiae?

This work is important for safety because AI systems that have a legible decision-making process are easier to reason about and more trustworthy.
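To make the back-end instrumentation idea more concrete, here is a minimal sketch of tracing a factored-cognition recipe with a decorator. This is not ICE's actual API – the function names, trace format, and decomposition here are all hypothetical – it only illustrates how nested reasoning calls can be recorded into an inspectable execution trace without the recipe author doing anything beyond applying a decorator.

```python
# Hypothetical sketch: instrumenting a factored-cognition "recipe" so that
# every reasoning step is logged with its nesting depth. ICE's real
# framework differs; this only shows the general tracing technique.
import functools

trace = []   # flat log of reasoning steps, in completion order
_depth = 0   # current nesting depth of the call stack


def traced(fn):
    """Record every call to fn so the reasoning process stays legible."""
    @functools.wraps(fn)
    def wrapper(arg):
        global _depth
        entry = {"depth": _depth, "node": fn.__name__, "input": arg}
        _depth += 1
        result = fn(arg)
        _depth -= 1
        entry["output"] = result
        trace.append(entry)
        return result
    return wrapper


@traced
def answer_subquestion(q):
    # Stand-in for a model call answering one leaf question.
    return f"answer({q})"


@traced
def decompose(question):
    # Split a question into sub-questions, answer each, then combine.
    subs = [f"{question} / part {i}" for i in (1, 2)]
    answers = [answer_subquestion(s) for s in subs]
    return " + ".join(answers)


decompose("How does X work?")
for entry in trace:
    print("  " * entry["depth"], entry["node"], "->", entry["output"])
```

The trace collected this way is exactly the kind of data structure a visualiser can render: each entry knows its depth, inputs, and outputs, so the macro structure and the per-node detail are both recoverable.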
On a more technical level, Factored Cognition looks like it will be a linchpin of Iterated Distillation and Amplification – one of the few concrete suggestions for a safer way to build AI.

For product managers

At first, it might not be obvious how big an impact product managers can have on AI safety (the same goes for designers). However, interface design is an alignment problem – and it's even more neglected than other areas of safety research. You don't need a super technical background, or to already be steeped in ML. The competing priorities we face every day in our product decisions will be fairly familiar to experienced PMs. Here are some example t...
