Google has introduced a new tool called SynthID that aims to curb the spread of misinformation by embedding an invisible, permanent watermark in computer-generated images. Developed by Google's DeepMind unit in collaboration with Google Cloud, SynthID works with Imagen, Google's latest text-to-image generator. The AI-generated watermark remains intact even if the image is modified with filters or its colors are altered.
The tool can also scan incoming images and estimate the likelihood that they were created by Imagen, reporting one of three levels of certainty: detected, possibly detected, or not detected. While not foolproof, internal testing has shown that SynthID remains accurate against many common image manipulations.
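SynthID's internals are not public, so the following is only an illustrative sketch: it assumes a hypothetical detector that emits a confidence score between 0.0 and 1.0, and maps that score onto the three certainty levels the article describes. The threshold values are arbitrary, not Google's actual cutoffs.

```python
def classify_watermark(score: float, low: float = 0.3, high: float = 0.7) -> str:
    """Map a hypothetical watermark-detector confidence score (0.0-1.0)
    onto SynthID's three reported certainty levels.

    The thresholds `low` and `high` are illustrative assumptions,
    not values published by Google.
    """
    if not 0.0 <= score <= 1.0:
        raise ValueError("score must be between 0.0 and 1.0")
    if score >= high:
        return "detected"
    if score <= low:
        return "not detected"
    return "possibly detected"
```

A tiered output like this, rather than a binary yes/no, lets a detector communicate uncertainty honestly when an image has been heavily manipulated.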
A beta version of SynthID is currently available for select customers of Vertex AI, Google’s platform for generative AI development. Google intends to refine and expand SynthID, potentially integrating it into other Google products or even offering it to third parties.
Deepfake and edited images have become increasingly realistic, raising concerns about the authenticity of visual content. Tech companies, including Google, are actively seeking reliable methods to identify and flag manipulated content. The European Commission has called for technology implementation and clear labeling of such content to users.
In this effort, Google joins startups and other major tech companies working toward solutions. Companies such as Truepic and Reality Defender are part of this growing wave, recognizing the importance of preserving the distinction between reality and fabrication.
Google has taken its own approach to combating misinformation by creating tools such as "About this image," which lets users trace the origin of images found through Google Search. Additionally, AI-generated images created by Google's tools carry markup in the original file to provide context if the image surfaces on another platform.
While these technical solutions are intended to address the problem of misinformation, the rapid development of AI technology poses challenges for human oversight. OpenAI, the organization behind DALL-E and ChatGPT, has acknowledged that its own efforts to detect AI-generated writing are imperfect. Caution is advised in relying solely on technical solutions to combat misinformation.
Sources: Google, Adobe-backed Consortium