Nightshade Data Poisoning Tool Targets AI to Protect Artist IP

A new tool called Nightshade offers creators a way to fend off artificial intelligence models that train on visual artwork without permission. Created by a University of Chicago team led by Professor Ben Zhao, Nightshade lets artists add invisible changes to the pixels of their images that act as a "poison," causing AI models that scrape the work without authorization to "break." Popular AI models including DALL-E, Midjourney and Stable Diffusion, once trained on enough poisoned images, can subsequently render erratic results, turning dogs into cats, cars into cows, and so forth.
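To illustrate the general idea of an imperceptible, bounded pixel-level change, here is a minimal Python sketch. It is not Nightshade's actual method: the real tool computes its perturbations by optimizing against a text-to-image model's feature extractor so the image reads as a different concept, whereas this toy stand-in uses random noise. The file names and the EPSILON bound are hypothetical.

```python
# Illustrative sketch only: Nightshade's real perturbations are optimized,
# not random. This just shows a small, bounded, hard-to-notice pixel change.
import numpy as np
from PIL import Image

EPSILON = 4  # max per-channel change; small enough to be visually subtle

def perturb(path_in: str, path_out: str) -> None:
    # Load the image as signed integers so the addition cannot wrap around.
    pixels = np.asarray(Image.open(path_in).convert("RGB"), dtype=np.int16)
    # A real poisoning attack would derive this delta via gradient-based
    # optimization toward a target concept (e.g. dog -> cat); random noise
    # is a placeholder to keep the sketch self-contained and runnable.
    delta = np.random.randint(-EPSILON, EPSILON + 1, size=pixels.shape)
    poisoned = np.clip(pixels + delta, 0, 255).astype(np.uint8)
    Image.fromarray(poisoned).save(path_out)

perturb("artwork.png", "artwork_shaded.png")  # hypothetical file names
```

Bounding each channel's change by a small epsilon is what keeps the edit invisible to a human viewer while still altering the statistics a model learns from.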