How AI images are ‘flattening’ Indigenous cultures – creating a new form of tech colonialism

This article examines the potential harm that AI image generation poses to Australia's Indigenous cultures. AI-generated images hosted by companies such as Adobe have been criticised for failing to accurately depict Indigenous people, and even for including culturally inappropriate elements. The article explains how AI image generation works, analyses the resulting risk of "cultural flattening", and calls on AI companies, researchers, governments and Indigenous communities to work together on ethical guidelines that protect and respect Indigenous cultures.

🤔 AI image generators learn from vast collections of images and their text descriptions in order to produce original images matching a user's prompt. When depicting specific cultural groups, however, insufficient or biased training data can result in distorted imagery or cultural misappropriation.

⚠️ AI-generated "Indigenous" images are often culturally inaccurate, for example carrying meaningless body markings or drawing on artworks used without permission. This reflects a disregard for Indigenous traditions and can undermine the economic interests of Indigenous artists.

🌍 AI image generation can lead to "cultural flattening", in which cultural diversity is simplified and homogenised. Without a nuanced understanding of Indigenous languages, traditions and cultural contexts, AI may produce misleading images that deepen cultural misunderstanding.

💡 Addressing the problem requires collaboration among AI companies, researchers, governments and Indigenous communities. Jointly developing ethical guidelines, improving AI training data and raising public awareness of cultural sensitivity are key steps towards protecting Indigenous cultures.

By John McMullan, Murdoch University and Glen Stasiuk, Murdoch University

It feels like everything is slowly but surely being affected by the rise of artificial intelligence (AI). And like every other disruptive technology before it, AI is having both positive and negative outcomes for society.

One of these negative outcomes is the very specific, yet very real cultural harm posed to Australia’s Indigenous populations.

The National Indigenous Times reports Adobe has come under fire for hosting AI-generated stock images that claim to depict “Indigenous Australians”, but don’t resemble Aboriginal and Torres Strait Islander peoples.

Some of the figures in these generated images also have random body markings that are culturally meaningless. Critics who spoke to the outlet, including Indigenous artists and human rights advocates, point out these inaccuracies disregard the significance of traditional body markings to various First Nations cultures.

Adobe’s stock platform was also found to host AI-generated “Aboriginal artwork”, raising concerns over whether genuine Indigenous artworks were used to train the software without artists’ consent.

The findings paint an alarming picture of how representations of Indigenous cultures can suffer as a result of AI.

How AI image generators work

While training AI image generators is a complex affair, in a nutshell it involves feeding a neural network millions of images with associated text descriptions.

This is much like how you would have been taught to recognise various objects as a small child: you see a car and you’re told it’s a “car”. Then you see a different car, and are told it is also a “car”. Over time you begin to discern patterns that help you differentiate between cars and other objects.

You gain an idea of what a car “is”. Then, when asked to draw a picture of a car, you can synthesise all your knowledge to do so.
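To make that analogy concrete, here is a minimal toy sketch (in PyTorch, with random tensors standing in for real photos and labels) of how a network is nudged toward telling "car" from "not car" by seeing labelled examples. It illustrates the training idea only; it is not how commercial image generators are actually built.

```python
# Toy sketch of "learning from labelled examples": a tiny classifier is shown
# batches of images with labels and gradually adjusts its weights. Random
# tensors stand in for real photos and captions here.
import torch
import torch.nn as nn

model = nn.Sequential(                 # a deliberately small network
    nn.Flatten(),
    nn.Linear(3 * 32 * 32, 128),
    nn.ReLU(),
    nn.Linear(128, 2),                 # two classes: "car" / "not car"
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for step in range(100):
    images = torch.randn(16, 3, 32, 32)      # stand-in for a batch of photos
    labels = torch.randint(0, 2, (16,))      # stand-in for "car"/"not car" tags
    loss = loss_fn(model(images), labels)    # how wrong were the guesses?
    optimizer.zero_grad()
    loss.backward()                          # nudge the weights toward the labels
    optimizer.step()
```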

Many AI image generators produce images through what is called “reverse diffusion”. In essence, they take the images they’ve been trained on and add “noise” to them until they are just a mix of pixels of random colour and brightness. They then continually decrease the amount of noise, until the correct image is displayed.

Image: Adrien Limousin, licensed under CC BY 4.0
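The loop below is a schematic sketch of that denoising idea, assuming a hypothetical `noise_predictor` function in place of a trained model. Production systems use large networks conditioned on the text prompt, but generation follows the same shape: start from noise and remove a little of it at each step.

```python
# Schematic sketch of the denoising ("reverse diffusion") loop described above.
# `noise_predictor` is a hypothetical stand-in for the trained network.
import torch

def noise_predictor(noisy_image: torch.Tensor, step: int) -> torch.Tensor:
    # Placeholder: a real model would estimate the noise present at this step,
    # guided by the text prompt.
    return 0.02 * noisy_image

image = torch.randn(3, 64, 64)          # start from pure random noise
for step in reversed(range(50)):        # walk back through the noise levels
    predicted_noise = noise_predictor(image, step)
    image = image - predicted_noise     # remove a little noise each step
# after enough steps, `image` resembles something drawn from the training data
```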

The process of creating an AI image begins with a text prompt by the user. The image generator then compares how the words in the prompt associate with its learning, and produces an image that satisfies the prompt. This image will be original, in that it won’t exist anywhere else.

If you’ve gone through this process, you’ll appreciate how difficult it can be to control the image that is produced.

Say you want your subject to be wearing a very specific style of jacket; you can prompt it as precisely as you like – but you may never get it perfect. The result will come down to how the model was trained and the dataset it was trained on.
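Adobe's and Midjourney's generators are closed systems, but the same prompt-to-image workflow can be seen in open-source tools. The snippet below uses the Hugging Face diffusers library with an illustrative model name and prompt; however precisely the jacket is described, the output still depends on what the model saw during training.

```python
# Example of prompt-driven generation with an open-source model via Hugging Face
# `diffusers`. The model name and prompt are illustrative; commercial systems
# such as Adobe Firefly or Midjourney are not driven through this library.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

prompt = "portrait of a person wearing a double-breasted navy wool peacoat"
image = pipe(prompt).images[0]          # the prompt guides, but does not dictate, the result
image.save("generated_portrait.png")
```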

We’ve seen early versions of the AI image generator Midjourney respond to prompts for “Indigenous Australians” with what appeared to be images of African tribespeople: essentially an amalgam of the “noble savage”.

Cultural flattening through AI

Now, consider that in the future, millions of people will be generating AI images from various generators. These may be used for teaching, promotional materials, advertisements, travel brochures, news articles and so on. Often, there will be little consequence if the images generated are “generic” in appearance.

But what if it was important for the image to accurately reflect what the creator was trying to represent?

In Australia, there are more than 250 Indigenous languages, each one specific to a particular place and people. For each of these groups, language is central to their identity, sense of belonging and empowerment.

It is a core element of their culture – just as much as their connection to a specific area of land, their kinship systems, spiritual beliefs, traditional stories, art, music, dance, laws, food practices and more.

But when an AI model is trained on images of Australian Indigenous peoples’ art, clothing, or artefacts, it isn’t also necessarily fed detailed information of which language group each image is associated with.

The result is “cultural flattening” through technology, wherein culture is made to appear more uniform and less diverse. In one example, we observed an AI image generator produce an image of what was meant to be an elderly First Nations man in a traditional Papuan headdress.

This is an example of technological colonialism, wherein tech corporations contribute to the homogenisation and/or misrepresentation of diverse Indigenous cultures.

We’ve also seen pictures of “Indigenous art” on stock footage websites that are clearly labelled as being produced by AI. How can these be sold as images of First Nations art if no First Nations person was involved in making them? Any connection to deep cultural knowledge and lived experience is completely absent.

Besides the obvious economic consequences for artists, long-term technological misrepresentation could also have adverse impacts on the self-perception of Indigenous individuals.

What can be done?

While there’s currently no simple solution, progress begins with discussion and engagement between AI companies, researchers, governments and Indigenous communities.

These collaborations should result in strategies for reclaiming visual narrative sovereignty. They may, for instance, implement ethical guidelines for AI image generation, or reconfigure AI training datasets to add nuance and specificity to Indigenous imagery.
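As one hypothetical illustration of what “adding nuance and specificity” to training data might involve, each image record could carry provenance, language-group and consent fields rather than a single generic caption. The field names below are assumptions made for the sketch, not an existing dataset schema.

```python
# Hypothetical sketch of a richer training-data record for Indigenous imagery.
# Field names are illustrative only; they do not describe any real dataset.
from dataclasses import dataclass

@dataclass
class IndigenousImageRecord:
    image_path: str
    caption: str             # written or approved by the community concerned
    language_group: str      # a specific nation, not just "Indigenous Australian"
    country_or_region: str
    artist: str
    consent_obtained: bool   # explicit permission to use the work for training

record = IndigenousImageRecord(
    image_path="example.jpg",
    caption="Contemporary acrylic painting (details supplied by the artist)",
    language_group="<specific language group, as identified by the community>",
    country_or_region="<specific Country>",
    artist="<named artist>",
    consent_obtained=True,
)
```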

At the same time, we’ll need to educate AI users about the risk of cultural flattening, and how to avoid it when representing Indigenous people, places, or art. This would require a coordinated approach involving educational institutions from kindergarten upwards, as well as the platforms that support AI image creation.

The future goal is, of course, the respectful representation of Indigenous cultures that are already fighting for survival in many other ways.

John McMullan, Screen Production Lecturer, Murdoch University and Glen Stasiuk, Academic Program Chair of Screen Production, Lecturer & Senior Indigenous Researcher, Murdoch University

This article is republished from The Conversation under a Creative Commons license. Read the original article.
