Mashable · June 5, 06:04
Google introduces small watermark to Veo 3 videos

Google recently added a visible watermark to videos generated by its new Veo 3 AI video model to address the misinformation risks posed by realistic AI video. The watermark can be seen in promotional videos released by Google. In addition to the visible mark, Google embeds its invisible SynthID watermark in all AI-generated content and has introduced a SynthID detector. Because the visible watermark is small, however, experts believe ordinary users may struggle to notice it in practice. The article discusses the watermark's effectiveness, how easily it can be cropped out, and possible future improvements, and stresses the importance of helping users recognize AI-generated content.

🔍 Google has added a visible watermark to AI videos generated with Veo 3. Placed in the bottom right-hand corner of the video, it is meant to help users identify AI-generated content and reduce the spread of misinformation.

📢 Beyond the visible watermark, Google embeds an invisible SynthID watermark in its AI-generated content. It has also introduced a SynthID detector, though it is not yet broadly available.

⚠️ Experts note that because the watermark is small, users scrolling through social media may not notice it. It can also be easily cropped or edited out, which limits its usefulness for identifying AI-generated content.

💡 Experts suggest that to help users recognize AI-generated content, the watermark needs to be more noticeable, or platforms could add a note beside the image, such as "Check for a watermark to verify whether the image is AI-generated."

Last week, Google quietly announced that it would be adding a visible watermark to AI-generated videos made using its new Veo 3 model.

And if you look really closely while scrolling through your social feeds, you might be able to see it.

The watermark can be seen in videos released by Google to promote the launch of Veo 3 in the UK and other countries.

Credit: Screenshot: Google

Google announced the change in an X thread from Josh Woodward, Vice President of Google Labs and Google Gemini.

According to Woodward's post, the company added the watermark to all Veo videos except for those generated in Google's Flow tool by users with a Google AI Ultra plan. The new watermark is in addition to the invisible SynthID watermark already embedded in all of Google's AI-generated content, as well as a SynthID detector, which recently rolled out to early testers but is not yet broadly available.

The visible watermark "is a first step as we work to make our SynthID Detector available to more people in parallel," Woodward said in the thread.

In the weeks after Google introduced Veo 3 at Google I/O 2025, the new AI video model has garnered lots of attention for its incredibly realistic videos, especially since it can also generate realistic audio and dialogue. The videos posted online aren't just fantastical renderings of animals acting like humans, although there's plenty of that, too. Veo 3 has also been used to generate more mundane clips, including man-on-the-street interviews, influencer ads, fake news segments, and unboxing videos.

If you look closely, you can spot telltale signs of AI like overly smooth skin and erroneous artifacts in the background. But if you're passively doomscrolling, you might not think to double-check whether the emotional support kangaroo casually holding a plane ticket is real or fake. People being duped by an AI-generated kangaroo is a relatively harmless example. But Veo 3's widespread availability and realism introduce a new level of risk for the spread of misinformation, according to AI experts interviewed by Mashable for this story.

The new watermark should reduce those risks, in theory. The only problem is that the visible watermark isn't that visible. In a video Mashable generated using Veo 3, you can see a "Veo" watermark in a pale shade of white in the bottom right-hand corner of the video. See it?

A Veo 3 video generated by Mashable includes the new watermark. Credit: Screenshot: Mashable

How about now?

Google's Veo watermark. Credit: Screenshot: Mashable

"This small watermark is unlikely to be apparent to most consumers who are moving through their social media feed at a break-neck clip," said digital forensics expert Hany Farid. Indeed, it took us a few seconds to find it, and we were looking for it. Unless users know to look for the watermark, they may not see it, especially if viewing content on their mobile devices.

A Google spokesperson told Mashable by email, "We're committed to developing AI responsibly and we have clear policies to protect users from harm and governing the use of our AI tools. Any content generated with Google AI has a SynthID watermark embedded and we also add a visible watermark to Veo videos too."

"People are familiar with prominent watermarks like Getty Images, but this one is very small," said Negar Kamali, a researcher studying people's ability to detect AI-generated content at Kellogg School of Management. "So either the watermark needs to be more noticeable, or platforms that host images could include a note beside the image — something like 'Check for a watermark to verify whether the image is AI-generated,'" said Kamali. "Over time, people could learn to look for it."

However, visible watermarks aren't a perfect remedy. Both Farid and Kamali told us that videos with watermarks can easily be cropped or edited. "None of these small — visible — watermarks in images or video are sufficient because they are easy to remove," said Farid, who is also a professor at UC Berkeley School of Information.

But, he noted that Google's SynthID invisible watermark, "is quite resilient and difficult to remove." Farid added, "The downside is that the average user can’t see this [SynthID watermark] without a watermark reader so the goal now is to make it easier for the consumer to know if a piece of content contains this type of watermark."
