AI Watermarking Won’t Curb Disinformation

AI Safety Fundamentals: Alignment - A podcast by BlueDot Impact

Generative AI allows people to produce piles upon piles of images and words very quickly. It would be nice if there were some way to reliably distinguish AI-generated content from human-generated content. It would help people avoid endlessly arguing with bots online, or believing what a fake image purports to show. One common proposal is that big companies should incorporate watermarks into the outputs of their AIs. For instance, this could involve taking an image and subtly changing many pixels…
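The episode does not spell out a specific scheme, but the idea of watermarking by "subtly changing many pixels" can be illustrated with a toy example. The sketch below (an assumption, not the method discussed in the episode) hides a short bit pattern in the least significant bits of an image array and then checks for it; the watermark bits and function names are hypothetical.

```python
# Toy pixel-level watermark: hide a short bit pattern in the least significant
# bits of an image, then check for it later. A minimal sketch only; real AI
# watermarks use far more robust, statistically spread signals.
import numpy as np

WATERMARK_BITS = np.array([1, 0, 1, 1, 0, 0, 1, 0], dtype=np.uint8)  # hypothetical signature

def embed_watermark(image: np.ndarray, bits: np.ndarray = WATERMARK_BITS) -> np.ndarray:
    """Write `bits` into the least significant bit of the first len(bits) pixels."""
    flat = image.flatten().copy()
    flat[: len(bits)] = (flat[: len(bits)] & 0xFE) | bits  # clear each LSB, then set it
    return flat.reshape(image.shape)

def detect_watermark(image: np.ndarray, bits: np.ndarray = WATERMARK_BITS) -> bool:
    """Check whether the first len(bits) pixels carry the expected bit pattern."""
    flat = image.flatten()
    return bool(np.array_equal(flat[: len(bits)] & 1, bits))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    img = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)  # fake grayscale image
    marked = embed_watermark(img)
    print("original flagged:", detect_watermark(img))     # almost always False
    print("marked flagged:  ", detect_watermark(marked))  # True
```

A scheme this simple also shows the fragility the episode's title points at: any re-encoding, resizing, or added noise scrambles the least significant bits and erases the mark.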
