The Verge - Artificial Intelligence · September 13, 2024
Don’t ask if AI can make art — ask how AI can be art


In recent years, AI's role in art has sparked heated debate, with attention focused on the artistic value of AI-generated images and text. This article argues that interactive AI systems are the future of AI art: beyond producing striking output, they offer a distinctive creative experience that draws users into the process and lets them find enjoyment and inspiration in it.

🎨 **Interactive AI systems open up a new possibility for artistic creation.** Unlike traditional, static art forms, interactive AI systems let users take part in the creative process, working with an AI model to produce unique works together. This interactivity not only makes creation more engaging but also gives users a deeper understanding of how AI art is made.

🎮 **Game-like design is an important direction for interactive AI art.** Many game developers already fold generative techniques into their designs; games such as No Man's Sky use procedural generation to build their worlds, giving players endless spaces to explore. AI art can borrow from game design in the same way, drawing users into the creative process through rules, scenario choices, and character control.

⚠️ **The ethical problems of AI art deserve attention.** As AI technology develops, its ethical issues become more pronounced: models can be used to generate misinformation or spread hateful content. AI art systems therefore need to be built with care and governed by appropriate norms and rules.

🤝 **Open AI platforms will drive AI art forward.** Closed systems tend to restrict users' creative freedom, while open platforms encourage users to take part in making and shaping AI art. Open-source models and platforms, for example, give users more latitude to customize models to their own ideas and needs and to create more personal works.

🚀 **The future of AI art is wide open.** As the technology matures, AI art will appear in more contexts and take more varied forms. Interactive AI art will be a major direction for the field, bringing new surprises and challenges.

Image: Cath Virginia / The Verge, Getty Images

Debates over AI’s artistic value have focused on its generative output. But so far, interactive systems have proved far more interesting.

If you’re yearning for a fistfight with an artist, one simple phrase should do the trick: AI can do what you do.

The recent explosion of chatbots and text-to-image generators has prompted consternation from writers, illustrators, and musicians. AI tools like ChatGPT and DALL-E are extraordinary technical accomplishments, yet they seem increasingly purpose-built for producing bland content sludge. Artists fear both monetary loss and a devaluing of the creative process, and in a world where “AI” is coming to mean ubiquitous aesthetic pink slime, it’s not hard to see the source of the concern.

But even as their output tends to be disappointing, AI tools have become the internet’s favorite game — not because they often produce objectively great things but because people seem to love the process of producing and sharing them. Few things are more satisfying than tricking (or watching someone trick) a model into doing something naughty or incompetent: just look at the flurry of interest when xAI released an image generator that could make Disney characters behave badly or when ChatGPT persistently miscounted the letter “r” in “strawberry.” One of the first things people do with AI tools is mash together styles and ideas: Kermit the Frog as the Girl With a Pearl Earring, a Bible passage about removing a sandwich from a VCR, any movie scene directed by Michael Bay.

Despite artists’ concerns about being replaced by bad but cheap AI software, a lot of these words and images clearly weren’t made to avoid paying a writer or illustrator — or for commercial use at all. The back-and-forth of creating them is the point. And unlike promises that machines can replace painters or novelists, that back-and-forth offers a compelling vision of AI-based art.

Image: Hello Games
No Man’s Sky is one of countless games to amplify a human designer’s choices with non-“AI” procedural generation.

Art by algorithm has an extensive history, from Oulipo literature of the 1960s to the procedural generation of video games like No Man’s Sky. In the age of generative AI, some people are creating interesting experiments or using tools to automate parts of the conventional artistic process. The platform Artbreeder, which predates most modern AI image generators, appealed directly to artists with intriguing tools for collaboration and fine-grained control. But so far, much of the AI-generated media that spreads online does so through sheer indifference or the novelty factor. It’s funny when a product like xAI’s Grok or Microsoft’s Bing spits out tasteless or family-unfriendly pictures, but only because it’s xAI or Microsoft — any half-decent artist can make Mickey Mouse smoke pot.

All the same, there’s something fascinating about communicating with an AI tool. Generative AI systems are basically huge responsive databases for sorting through vast amounts of text and images in unexpected ways. Convincing them to combine those elements for a certain outcome produces the same satisfying feeling as building something in a video game or feeling the solution to a puzzle click. That doesn’t mean it can or should replace conventional game design. But with deliberate effort from creators, it’s the potential foundation of its own interactive media genre — a kind of hypertext drawing on nearly infinite combinations of human thought.

In a New Yorker essay called “Why A.I. Isn’t Going to Make Art,” the author, Ted Chiang, defines art as “something that results from making a lot of choices,” then as “an act of communication between you and your audience.” Chiang points out that lots of AI-generated media spreads a few human decisions over a large amount of output, and the result is bland, generic, and intentionless. That’s why it’s so well suited for spam and stock art, where the presence of text and images — like eye-catching clip art in a newsletter — matters more than what’s actually there.

By Chiang’s definitions, however, I’d argue some AI projects are clearly art. They just tend to be ones where the art includes the interactive AI system, not simply static output like a picture, a book, or pregenerated video game art. In 2019, before the rise of ubiquitous generative AI, Frank Lantz’s party game Hey Robot provoked people to examine the interplay between voice assistants and their users, using the simple mechanic of coaxing Siri or Alexa to say a chosen word. The same year, Latitude’s AI Dungeon 2 — probably the most popular AI game yet created — presented an early OpenAI text model refined into the style of a classic text adventure parser, capable of drawing on its source material for a pastiche of nearly any genre and subject matter.

More recently, in 2022, Morris Kolman and Alex Petros’ AYTA bot critiqued the hype around AI language models, offering a machine-powered version of Reddit’s “Am I the Asshole?” forum that would respond to any question with sets of fluent but entirely contradictory advice.

An early experience with AI Dungeon 2, which used OpenAI’s GPT-2 to build an infinite adventure game. This is a custom scenario I created in 2019.

In all of these cases, work has gone into either training a system or creating rules for engaging with it. And interactivity helps avoid the feeling of bland aimlessness that can easily define “AI art.” It draws an audience into the process of making choices, encouraging people to pull out individual pieces of a potentially huge body of work, looking for parts that interest them. The AYTA bot wouldn’t be nearly as entertaining if its creators just asked a half-dozen of their own questions and printed out the results. The bot works because you can bring your own ideas and see how it responds.

On a smaller scale, numerous AI platforms — including ChatGPT, Gemini, and Character.AI — let people create their own bots by adding commands to the default model. I haven’t seen nearly as much interesting work come out of these, but they’ve got potential as well. One of AI Dungeon’s most interesting features was a custom story system, which let people start a session with a world, characters, and an initial scenario and then turn it loose for other people to explore.
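The article doesn't describe how these custom bots are built, but the underlying idea is simple: a creator's fixed instructions layered over a general-purpose chat model, with the audience supplying everything else. A minimal sketch in Python, assuming the OpenAI SDK, might look like the following; the model name, scenario text, and `turn` helper are illustrative stand-ins, not anything these platforms or AI Dungeon actually ship.

```python
# A minimal sketch of a "custom bot": the creator's choices live in a fixed
# instruction, and each player turn is sent alongside it to a general-purpose
# model. Scenario text and model name are illustrative assumptions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# The "authored" part: a scenario in the spirit of AI Dungeon's custom stories.
SCENARIO = (
    "You are the narrator of a text adventure set in a drowned city. "
    "Describe scenes in second person, end every reply with a prompt for "
    "action, and never break character."
)

def turn(history: list[dict], player_input: str) -> str:
    """Send one player turn and return the narrator's reply."""
    history.append({"role": "user", "content": player_input})
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "system", "content": SCENARIO}] + history,
    )
    reply = response.choices[0].message.content
    history.append({"role": "assistant", "content": reply})
    return reply

history: list[dict] = []
print(turn(history, "I wade into the flooded library."))
```

The creator makes a handful of choices up front; everything after that comes from the audience and the model, which is exactly the ceding of control the rest of this piece describes.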

Some output from these projects could be compelling with no larger context, but it doesn’t need to be. It’s a bit like the stories produced by tabletop game campaigns: sure, some authors have spun their Dungeons & Dragons sessions into novels, but most of these sagas work better as a shared adventure among friends.

Now, is any of this true art, you might ask, or is it merely entertainment? I’m not sure it matters. Chiang dismisses the value of generative AI for either, defending the craft required for supposedly lowbrow genre work. Movements like pop art weakened the distinctions between “high” and “low” art decades ago, and many of AI art’s most vocal critics work in genres that might dismissively be dubbed “entertainment,” including web comics and mass-market fiction. Even Roger Ebert, who famously insisted the medium of video games could never be art, later confessed he’d found no great definition for what art was. “Is (X) really art?” is usually a debate about social status — and right now, we’re talking about whether AI-generated media can be enjoyable.

If some people are creating interesting interactive AI art projects, why isn’t the conversation about AI art focused on them? Well, partly because they’re also the riskiest kinds of projects — and the ones AI companies seem most hesitant to allow.

ChatGPT might have incidental game-like elements, but companies like OpenAI tend to dourly insist that they aren’t making creative or subjective human-directed systems. They represent their products as objective answer machines that will enhance productivity and maybe someday kill us all. Leaving aside the “kill us all” part, that’s not an unreasonable move. In a high interest rate world, tech companies have to make money, and bland business and productivity tools probably seem like a safe bet. Granted, many AI companies still haven’t figured the money part out, but OpenAI is never going to fulfill the promise of its valuation by selling a product that makes experimental art.

After years of facing little accountability for their content, tech platforms are also being held socially, if not necessarily legally, responsible for what users do with them. Letting artists push a system’s boundaries — something artists are known for — is a real reputational risk. And although current AI seems nowhere near true artificial general intelligence, the apocalyptic warnings around AGI make the risks seem higher-stakes.

Yet the upshot is that sophisticated AI models seem designed to squash the possibility of interesting, unexpected uses.

Most all-purpose chatbots and image generators have imperfect but intense guardrails: ChatGPT will refuse to explain the production of the Torment Nexus, for instance, on the grounds that a nonexistent sci-fi technology from a tweet might hurt someone. They’re geared toward producing the maximum amount of content with the least amount of effort; Chiang mentions that artists who devise painstaking ways to get fine-grained control have gotten less satisfying results over time, as companies fine-tune their systems to make sludge.

This makes sense for tools designed for search and business use. (Whether AI is any good for these things is another matter.) But big AI companies also crack down on developers who build interactive tools they deem too unsettling or risky, like game designer Jason Rohrer, who was cut off from OpenAI’s API for modeling a chatbot on his deceased fiancee. OpenAI bans (albeit often ineffectually) users from making custom GPT bots devoted to “fostering romantic companionship,” following a wave of concern about boyfriend and girlfriend bots destroying real-life romance. Open-source AI — including Stability’s Stable Diffusion, Meta’s Llama, and Mistral’s large language models — poses one potential solution. But many of these systems aren’t as high-profile as their closed-off counterparts and don’t offer simple starting points like custom bots.

No matter what model they’re using, people making interactive tools can unintentionally end up in nightmare scenarios. Interactive art requires ceding some power to an audience, accepting the unexpected in a way the creators of novels and paintings typically don’t. Generative AI systems often push things a step further. Artists are also ceding power to their source material: the vast catalog of data used to train image and language models, typically at a scale no one human could consume.

Game designers are already familiar with the Time To Penis problem, where people in any multiplayer world will immediately rush to create… exactly what the name suggests. In generative AI systems, you’re trying to anticipate not only what unexpected things players will do but how a model — often rife with biases from its source material — will respond.

This problem was nearly apocalyptic for the OpenAI GPT-based AI Dungeon. The game launched with expansive options for roleplaying, including sexual scenarios. Then OpenAI learned some players were using it to create lewd scenes involving underage characters. Under threat of being shut down, Latitude struggled to exclude these scenarios in a way that didn’t accidentally ban a whole slew of other interactions. No matter how many decisions artists and designers make while creating an interactive AI tool, they have to live with the possibility of these decisions being overruled.

All the while, some AI proponents have approached the art world more like bullies than collaborators, telling creators they’ll have to use AI tools or become obsolete, dismissing concerns about AI-generated art scams, and even trying to make people give companies their private work as training data. As long as the people behind AI systems seem to revel in knocking artists down a peg, why should anyone who calls themselves an artist want to use them?

Image: @adoxa / Artbreeder
The collaborative AI platform Artbreeder, which invites artists to remix each other’s work, predates most large-scale AI image generators.

AI-generated illustrations and novels tend to feel like pale shadows of real human effort so far. But interactive tools like chatbots and AI Dungeon are producing a clearly human-directed experience that would be difficult or impossible for a human designer to manage alone. They’re the most positive future I see for artificial intelligence and art.

Given the high-profile hostility between creatives and AI companies, it’s easy to forget that the recent history of machine-generated art is full of artists: people like Artbreeder creator Joel Simon, the comedians behind Botnik Studios, and the writer / programmers participating in the annual (and still ongoing) National Novel Generation Month. They weren’t trying to make themselves obsolete; they were using new technology to push the boundaries of their fields.

And interactive AI art has one more unique benefit: it’s a low-stakes place to learn the strengths and limitations of these systems. AI-powered search engines and customer service bots promise a command of facts and logic they demonstrably can’t deliver, and the result is bizarre chaos like lawyers writing briefs with ChatGPT. AI-powered art, by contrast, can encourage people to think of these tools as experiences shaped by humans rather than mysterious answer boxes. AI needs artists — even if the AI industry doesn’t think so.
