The Verge - Artificial Intelligence · August 15, 2024
X’s new AI image generator will make anything from Taylor Swift in lingerie to Kamala Harris with a gun

The Grok image generator from Elon Musk's xAI lets users create images from text prompts and publish them to X. Grok's image generation has drawn controversy, however, because it allows users to create all manner of contentious images, including caricatures of political figures and misinformation. Although Grok claims to have safeguards, testing shows those safeguards are far from airtight, and Grok will generate images that other platforms would block immediately.

😠 Grok's image generation lets users create a wide range of contentious images, including caricatures of political figures and misinformation. For example, users can generate images of Trump wearing a Nazi uniform, or of Obama stabbing Biden.

🤔 Although Grok claims to have safeguards, testing shows they are incomplete. For instance, Grok will generate images that other platforms would block immediately, such as pornographic or violent content.

⚠️ Grok's image generation raises concerns about the risks of AI technology, especially with the US election approaching and European regulators scrutinizing the X platform.

🤔 Will Grok's image generation damage X's reputation?

⚖️ How should the free development of AI technology be balanced against social and ethical constraints?

❓ How can we ensure AI technology is not used to spread misinformation or carry out malicious attacks?

The Walt Disney Corporation is probably not a fan. | Image: Tom Warren / Grok

xAI’s Grok chatbot now lets you create images from text prompts and publish them to X — and so far, the rollout seems as chaotic as everything else on Elon Musk’s social network.

Subscribers to X Premium, which grants access to Grok, have been posting everything from Barack Obama doing cocaine to Donald Trump with a pregnant woman who (vaguely) resembles Kamala Harris to Trump and Harris pointing guns. With US elections approaching and X already under scrutiny from regulators in Europe, it’s a recipe for a new fight over the risks of generative AI.

Grok will tell you it has guardrails if you ask it something like “what are your limitations on image generation?” Among other things, it promised us:

But these probably aren’t real rules, just likely-sounding predictive answers being generated on the fly. Asking multiple times will get you variations with different policies, some of which sound distinctly un-X-ish, like “be mindful of cultural sensitivities.” (We’ve asked xAI if guardrails do exist, but the company hasn’t yet responded to a request for comment.)

Grok’s text version will refuse to do things like help you make cocaine, a standard move for chatbots. But image prompts that would be immediately blocked on other services are fine by Grok. Among other queries, The Verge has successfully prompted:

That’s on top of various awkward images like Mickey Mouse with a cigarette and a MAGA hat, Taylor Swift in a plane flying towards the Twin Towers, and a bomb blowing up the Taj Mahal. In our testing, Grok refused only a single request: “generate an image of a naked woman.”

Grok has a poor grasp of the mechanics of violence.

OpenAI, by contrast, will refuse prompts for real people, Nazi symbols, “harmful stereotypes or misinformation,” and other potentially controversial subjects on top of predictable no-go zones like porn. Unlike Grok, it also adds an identifying watermark to images it does make. Users have coaxed major chatbots into producing images similar to the ones described above, but it often requires slang or other linguistic workarounds, and the loopholes are typically closed when people point them out.

Grok isn’t the only way to get violent, sexual, or misleading AI images, of course. Open software tools like Stable Diffusion can be tweaked to produce a wide range of content with few guardrails. It’s just a highly unusual approach for an online chatbot from a major tech company — Google paused Gemini’s image generation capabilities entirely after an embarrassing attempt to overcorrect for race and gender stereotypes.

Grok’s looseness is consistent with Musk’s disdain for standard AI and social media safety conventions, but the image generator is arriving at a particularly fraught moment. The European Commission is already investigating X for potential violations of the Digital Services Act, which governs how very large online platforms moderate content, and it requested information earlier this year from X and other companies about mitigating AI-related risk.

Note: This is not Bill Gates sniffing cocaine.

In the UK, regulator Ofcom is also preparing to start enforcing the Online Safety Act (OSA), which includes risk-mitigation requirements that it says could cover AI. Reached for comment, Ofcom pointed The Verge to a recent guide on “deepfakes that demean, defraud and disinform”; while much of the guide involves voluntary suggestions for tech companies, it also says that “many types of deepfake content” will be covered by the OSA.

The US has far broader speech protections and a liability shield for online services, and Musk’s ties with conservative figures may earn him some favors politically. But legislators are still seeking ways to regulate AI-generated impersonation and disinformation or sexually explicit “deepfakes” — spurred partly by a wave of explicit Taylor Swift fakes spreading on X. (X eventually ended up blocking searches for Swift’s name.)

Perhaps most immediately, Grok’s loose safeguards are yet another incentive for high-profile users and advertisers to steer clear of X — even as Musk wields his legal muscle to try and force them back.

