Communications of the ACM - Artificial Intelligence, July 17, 23:23
What Do You Think About AI?

This article examines the significant perception gap between the public and artificial intelligence (AI) experts over AI's prospects and impact. Research shows that the public is far more concerned about AI than experts are, and holds more reserved views of its positive impact. These differences are closely tied to the public's AI literacy, media coverage, entertainment content, and personal traits such as age and personality. Despite the divide, the public and experts share common ground on AI regulation, and gender differences also shape attitudes toward AI. Understanding these public perceptions is crucial to AI's legitimacy and adoption, but how to translate them into real-world action is still being worked out.

📊 **A significant gap between public and expert perceptions of AI**: Research shows the public takes a more cautious, even anxious, stance toward AI: 51% of the public say they are "more concerned than excited" about AI's growing use in daily life, versus just 15% of experts. Only 17% of the public expect AI to have a positive impact on the U.S. over the next 20 years, far below the 56% of experts. This points to a large gap between the public's expectations and understanding and those of cutting-edge developers.

🧠 **Multiple factors shape public perceptions of AI**: Public attitudes toward AI are influenced by personality traits (agreeableness and younger age are associated with more positive views), susceptibility to conspiracy beliefs (linked to more negative attitudes), media narratives (especially misinformation and conspiracy theories on social media), and portrayals of AI in entertainment. For example, people who believe entertainment media depict AI realistically are more likely to see AI as a potential emotional partner or apocalyptic robot than as a job-taker or surveillance tool.

⚖️ **Consensus on regulation, and a shared gender gap**: Despite broad disagreement about AI overall, the public and experts align closely on regulation, with both groups worrying that the U.S. may not go far enough in regulating AI. The research also uncovered a pervasive gender gap: men generally hold more positive attitudes toward AI than women, a pattern that holds among experts as well, showing that experts are not of one mind.

💡 **Uneven awareness of specific AI applications**: Public awareness of AI technologies varies markedly. For example, while 61% of respondents had heard of large language models such as ChatGPT, only 18% were aware of AI's potential use in determining eligibility for welfare benefits. This highlights how low public awareness remains for some high-impact AI applications, especially among vulnerable groups, who may be disproportionately affected by technologies such as facial recognition.

Artificial intelligence (AI) is everywhere: in our devices, workplaces, schools, hospitals. Governments and organizations are incorporating AI systems into their workflows at pace, and tech companies are embedding generative AI (GenAI) in our everyday tools. Understanding what the public thinks about this juggernaut AI rollout is important for legitimacy and buy-in, but can be challenging due to its scale and complexity. Researchers are using empirical methods to understand public perceptions of AI. They are uncovering variety and trends alike, and a cavernous gap between how AI experts and the public envisage AI’s future.

Capturing the Big Picture

The Pew Research Center, a nonpartisan think-tank based in Washington, D.C., has been conducting research on the U.S. public’s view of AI since 2021, investigating both sector-specific contexts and general attitudes. For example, in a report examining attitudes to AI in the workplace, Pew found that 62% of 11,004 surveyed adults believed AI will have a “major impact” on workers in the next 20 years, but only 28% thought it would have a major impact on them personally. Pew teams also have analyzed topics such as Americans’ views of ChatGPT and driverless cars, and teachers’ perceptions of AI in K-12 education.

In a report released in April, Pew researchers revealed stark contrasts between how the public and experts view AI. One finding shows that only 15% of AI experts are “more concerned than excited” about the technology’s increasing use in daily life; that value shoots up to 51% for the public. When it comes to AI having a “very or somewhat positive impact on the U.S. over the next 20 years,” 56% of experts agreed with the statement, but just 17% of the public shared that view.

Giancarlo Pasquini, a research associate at the Pew Research Center who worked on the report, explained that experts are “consistently more positive” about AI than the public. “The experts who are at the cutting edge of this technology—developing new models, pushing things forward—they don’t fully see the technology in the same way that the general public does,” he said.

The two-part survey of public and AI expert views required distinct approaches. Public attitudes were garnered from 5,410 adults via the American Trends Panel, a nationally representative, probability-based survey panel. Identifying experts was more complicated, as “there’s no definitive source of who is an AI expert,” said Pasquini. The researchers defined experts as “individuals whose work or research relates to AI,” and included AI-related business, societal, and ethical—as well as technical—perspectives. The final sample of 1,013 experts was drawn from authors or presenters at AI-related conferences held in 2023 or 2024.

While public and expert views of AI frequently diverge, some alignments do emerge, notably in attitudes towards AI regulations. Said Pasquini, “Both sides said they largely worry that the U.S. wouldn’t go far enough in regulating AI: 58% of the public and 56% of experts said that.” A common gender gap was also uncovered, with men tending to be more positive towards AI than women across both groups. It was a “striking” finding for Pasquini, revealing that experts “didn’t agree on every single thing,” he said.

How Views Are Formed

Some researchers aim to capture attitudes to AI; others are interested in where those attitudes come from in the first place. A Germany-based team from Chemnitz University of Technology, the University of Würzburg, and the Leibniz Institute for Educational Trajectories investigated how views of AI may be impacted by personality traits, finding that “agreeableness and younger age predict a more positive view towards artificially intelligent technology, whereas the susceptibility to conspiracy beliefs connects to a more negative attitude.”

Researchers from the U.K.’s Nottingham Trent University, Leicester University, and the Defence Science and Technology Laboratory worked with focus groups to understand public perceptions of AI in defense. They found that assumptions about AI’s use in the sector “were generally driven by inaccurate narratives and conspiracy theories and (mis)information presented on social media.”

A University of Texas at Austin team studied how entertainment media may influence perceptions of AI and identified a “significant relationship” between the two. “Those who believe AI is realistically depicted in entertainment media were more likely to see AIs as potential emotional partners or apocalyptic robots than to imagine AIs taking over jobs or operating as surveillance tools,” the researchers concluded.

Reflecting Nuance

Capturing a multitude of views shaped by an abundance of narratives informs how surveys are designed. The London-based Ada Lovelace Institute and The Alan Turing Institute, the U.K.’s national institute for data science and AI, have been collaborating to understand how people in the U.K. feel about AI. The results of their first survey were published in 2023, and a second wave was released in March 2025.

Roshni Modhvadia, a researcher at the Ada Lovelace Institute, co-authored the report with Tvesha Sippy, an online safety researcher at The Alan Turing Institute. For Modhvadia, recognizing that “AI isn’t one thing” and that people often hold “nuanced views” impacted how the report was devised. “We wanted to look at very specific applications of AI,” she explained.

The results uncovered how variable awareness of AI technologies can be. For example, 61% of the sample had heard of large language models (LLMs) such as ChatGPT; however, only 18% were aware of AI’s potential use to determine eligibility for welfare benefits. The latter was an interesting finding, said Modhvadia. “This is a really impactful AI, and public awareness seems to be quite low.”

The sample of 3,513 adults was drawn from the U.K.’s National Centre for Social Research’s random probability NatCen Opinion Panel. Sample weighting was used to ensure respondents were representative of the population and to understand the differential impacts of AI on societal groups. “Black and Asian communities were a lot more concerned about specific tools, for instance, facial recognition and policing; we know minority ethnic communities are disproportionately affected by these tools,” said Modhvadia.
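The sample weighting Modhvadia describes can be illustrated with a minimal post-stratification sketch. This is not the NatCen panel's actual methodology, and the group labels, population shares, and response counts below are entirely hypothetical; the sketch only shows the core idea that under-represented groups receive proportionally larger weights so that weighted estimates reflect the population rather than the raw sample.

```python
# Hypothetical census shares and survey respondent counts (illustrative only).
population_share = {"group_a": 0.60, "group_b": 0.40}
sample_counts = {"group_a": 300, "group_b": 100}

n = sum(sample_counts.values())

# Post-stratification weight: population share divided by sample share.
# group_b makes up 40% of the population but only 25% of the sample,
# so its respondents are weighted up (1.6); group_a is weighted down (0.8).
weights = {g: population_share[g] / (sample_counts[g] / n) for g in sample_counts}

# A weighted proportion then estimates a population-level attitude,
# e.g. the share of respondents who say they are "concerned" (hypothetical counts).
concerned = {"group_a": 120, "group_b": 60}
weighted_yes = sum(weights[g] * concerned[g] for g in concerned)
weighted_total = sum(weights[g] * sample_counts[g] for g in sample_counts)
print(round(weighted_yes / weighted_total, 3))  # → 0.48
```

Note that the unweighted proportion here would be 180/400 = 0.45; weighting shifts the estimate because the more-concerned group was under-sampled, which is exactly the kind of differential impact across societal groups the researchers wanted to surface.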

Echoing the findings of the Pew survey, Modhvadia is mindful of a mismatch between public and expert perceptions. A survey of AI researchers by University College London found that “researchers have a ‘deficit model’ of the public; they assume that a lack of public trust in AI is due to low AI literacy,” she explained. The Ada Lovelace/Alan Turing Institutes’ research suggests that the public does understand what AI might mean for society; however, “they have nuanced views that do not rely on having knowledge of the technical detail of these tools,” she said.

Ultimately, having a clear idea of what people think supports legitimacy around the development of AI, according to Modhvadia. “Understanding people’s hopes, expectations, and experiences is really important in that cycle, all the way from decisions on what data is used to train AI to how companies are developing these tools and where they’re implemented,” she said.

With an increasing body of research providing insights into public attitudes to AI, the question is: what to do with it? The survey Modhvadia worked on was part of the U.K.’s Public Voices in AI project and funded by the Economic and Social Research Council—a public sector investment that demonstrates “energy around public participation in the data and AI space,” she said. Yet, finding out what people think and ensuring that has real-life impact are two different things. For governments and academia, what form that takes is still unfolding. For companies, it may simply boil down to whether people like and use their AI products—or not.

Karen Emslie is a location-independent freelance journalist and essayist.
