AI and Art: Challenges in Fair Use, Virtual Exhibitions, and Watermarking
Feature Story: Fair Use Perhaps Not So Fair
OpenAI’s release of DALL-E 3 introduced the ability for creators to opt their images out of being used to train future AI models, responding to concerns about AI companies using artists’ work without permission. However, the opt-out process is cumbersome and may have limited impact, since current AI models have already been trained on massive amounts of images. The issue arises because systems such as DALL-E and ChatGPT rely on vast datasets of human-created content to improve their abilities. While opting out may safeguard future work, existing models have likely already ingested pre-2023 art, and there is no retroactive remedy in place. The recursive nature of AI art generation complicates matters further, as machine-created images may themselves be used to train future models. Concerns also arise about the burden of responsibility, with artists having to opt out proactively, and about a potential shift of power toward tech companies. Copyright and fair use in AI training data are complex legal matters, and the effectiveness of the opt-out mechanism remains deeply uncertain.
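For context on the mechanics: one opt-out route OpenAI documents is a robots.txt directive blocking its GPTBot web crawler, which gathers data for future model training. The sketch below, which assumes a hypothetical portfolio domain, shows how an artist might check whether their site currently disallows GPTBot, using only the Python standard library.

```python
# Sketch: check whether a site's robots.txt opts out of OpenAI's GPTBot
# crawler, one of the documented opt-out mechanisms. The example domain
# is hypothetical -- substitute a real one.
from urllib.robotparser import RobotFileParser

def gptbot_allowed(site: str, path: str = "/") -> bool:
    """Return True if robots.txt permits GPTBot to fetch the given path."""
    rp = RobotFileParser()
    rp.set_url(f"{site.rstrip('/')}/robots.txt")
    rp.read()  # fetches and parses robots.txt over HTTP
    return rp.can_fetch("GPTBot", f"{site.rstrip('/')}{path}")

if __name__ == "__main__":
    # Hypothetical artist portfolio site.
    print(gptbot_allowed("https://example-artist-portfolio.com"))
```

Note that a directive like this only governs future crawling; it does nothing about images already collected, which is precisely the limitation the opt-out debate centers on.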
AI & Art History: A New Way to Experience the Works of van Gogh
The Musée d’Orsay in Paris is presenting an exhibition of Vincent van Gogh’s last works, featuring an AI incarnation of the artist and a virtual reality experience. The exhibition showcases paintings and drawings produced by van Gogh during his final creative period in Auvers-sur-Oise. It is the first time these works, held by the Musée d’Orsay and the Van Gogh Museum in Amsterdam, have been displayed together. The AI version of van Gogh answers questions from visitors, while the virtual reality experience, based on the artist’s last paint palette, offers a unique perspective on his art. The exhibition runs until February 2024.
AI Art-Tech Spotlight: AI Watermarking…a Lost Cause?
University of Maryland researchers have found that watermarking AI-generated images is unreliable as a defense against deepfakes, exposing vulnerabilities in the approaches Google and other tech giants use to flag AI-generated forgeries. Their study demonstrated a fundamental trade-off between the evasion error rate (the percentage of watermarked images detected as unmarked) and the spoofing error rate (the percentage of unmarked images identified as watermarked). The authors developed attack methods, including diffusion purification, that enable imperceptibly watermarked images to evade detection. While Google has introduced an AI watermark called SynthID, which is almost imperceptible to the human eye, it may still be vulnerable to these attacks. Another protective technique, Glaze, created by University of Chicago researchers, subtly alters images to obscure the stylistic characteristics that AI generators exploit, though the study did not test whether it can be bypassed. These revelations about the vulnerabilities of AI watermarking raise crucial questions and underscore the ongoing need for more robust defenses to protect digital works from manipulation.
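To make the diffusion-purification attack concrete: the idea is to add enough Gaussian noise to an imperceptibly watermarked image to drown out the low-amplitude watermark signal, then run a denoiser to pull the image back toward a clean result. The toy sketch below uses a simple mean filter as a stand-in for a real diffusion model’s reverse process; the image size, noise level, and watermark pattern are illustrative assumptions, not the paper’s implementation.

```python
# Toy illustration of diffusion purification: noise-then-denoise.
# A mean filter stands in for a diffusion model's reverse process.
import numpy as np

def add_diffusion_noise(img: np.ndarray, sigma: float = 0.3) -> np.ndarray:
    """Forward step: add Gaussian noise strong enough to mask a faint watermark."""
    return np.clip(img + np.random.normal(0.0, sigma, img.shape), 0.0, 1.0)

def toy_denoise(img: np.ndarray, k: int = 5) -> np.ndarray:
    """Stand-in denoiser: a k-by-k mean filter (a real attack uses a diffusion model)."""
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.zeros_like(img)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def purify(watermarked: np.ndarray) -> np.ndarray:
    """Noise-then-denoise round trip: the watermark rarely survives it."""
    return toy_denoise(add_diffusion_noise(watermarked))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    img = rng.random((64, 64))                       # stand-in grayscale image
    wm = 0.02 * np.sign(rng.random((64, 64)) - 0.5)  # faint watermark pattern
    marked = np.clip(img + wm, 0.0, 1.0)
    purified = purify(marked)
    # A correlation detector sees the watermark clearly before purification
    # but not after: the round trip destroys the signal.
    print("before:", np.corrcoef(wm.ravel(), (marked - img).ravel())[0, 1])
    print("after: ", np.corrcoef(wm.ravel(), (purified - img).ravel())[0, 1])
```

Because the watermark lives in a low-amplitude signal that heavy noise easily masks, a detector’s correlation with the watermark pattern collapses after purification, which is what allows purified images to evade detection and drives the evasion-versus-spoofing trade-off the study describes.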