Two hours of AI conversation can create a near-perfect digital twin of anyone

Researchers at Stanford and Google DeepMind have developed an AI that can replicate a human personality with striking accuracy after just two hours of conversation. By interviewing 1,052 people from diverse backgrounds, they created “simulation agents” – digital copies that predict their human counterparts’ beliefs, attitudes, and behaviors with high consistency. The AI gathers detailed personal information through natural conversation and analyzes each interview from several professional perspectives, including psychology, behavioral economics, political science, and demographics. Testing shows these AI copies closely match their human counterparts on social-attitude and personality tests but have limitations in economic games. The technology could support scientific research, yet it also carries a risk of misuse, and human-machine interaction will only grow more complex.

🗣️ Through two-hour conversations with 1,052 people, researchers built “simulation agents” that predict each participant’s beliefs, attitudes, and behaviors; the digital copies draw on detailed personal information collected via AI-led interviews.

🧠 The system analyzes each conversation through the lenses of psychology, behavioral economics, political science, and demographics – for example, a psychologist’s view covers personality traits and emotional patterns, while an economist’s view covers financial decisions and risk tolerance.

📊 In testing, the AI copies matched their human counterparts closely on social-attitude and personality tests, but were limited in predicting human generosity and cooperation in economic games.

🔬 AI cloning could support scientific research, for instance predicting public reactions to public health messaging or studying how communities adapt to major social change.

⚠️ The technology also carries a risk of misuse: distinguishing genuine human interaction from AI-generated content will become harder, raising privacy and ethical concerns.

Stanford and Google DeepMind researchers have created AI that can replicate human personalities with uncanny accuracy after just a two-hour conversation. 

By interviewing 1,052 people from diverse backgrounds, they built what they call “simulation agents” – digital copies that could predict their human counterparts’ beliefs, attitudes, and behaviors with remarkable consistency.

To create the digital copies, the team uses data from an “AI interviewer” designed to engage participants in natural conversation. 

The AI interviewer poses questions and generates personalized follow-ups – an average of 82 per session – exploring everything from childhood memories to political views.

Each two-hour discussion produced a detailed transcript averaging 6,500 words.

The above shows the study platform, which includes participant sign-up, avatar creation, and a main interface with modules for consent, avatar creation, interview, surveys/experiments, and a self-consistency retake of surveys/experiments. Modules become available sequentially as previous ones are completed. Source: ArXiv.

For example, when a participant mentions their childhood hometown, the AI might probe deeper, asking about specific memories or experiences. By simulating a natural flow of conversation, the system captures nuanced personal information that standard surveys typically overlook.
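
To make this concrete, here is a minimal sketch of how such an interviewer loop might be structured. The `generate_reply` helper, the seed topics, the prompt wording, and the per-topic turn budget are illustrative assumptions, not the study's actual implementation.

```python
# A minimal, hypothetical sketch of an LLM-driven interviewer loop.
# `generate_reply` stands in for any chat-completion API call; the
# seed topics and prompt wording are assumptions for illustration.

def generate_reply(system_prompt: str, transcript: list[str]) -> str:
    """Placeholder for a call to a chat-completion LLM API."""
    raise NotImplementedError

SEED_TOPICS = [
    "childhood and hometown",
    "education and work",
    "political and religious views",
    "values and relationships",
]

def run_interview(ask_participant, max_turns: int = 82) -> list[str]:
    """Alternate generated questions with participant answers."""
    transcript: list[str] = []
    for turn in range(max_turns):
        # Rotate through seed topics, dwelling ~20 turns on each.
        topic = SEED_TOPICS[min(turn // 20, len(SEED_TOPICS) - 1)]
        question = generate_reply(
            "You are a friendly interviewer. Current topic: "
            f"{topic}. Ask one open-ended follow-up that digs into "
            "specifics the participant just mentioned.",
            transcript,
        )
        answer = ask_participant(question)
        transcript += [f"Q: {question}", f"A: {answer}"]
    return transcript
```

Feeding the running transcript back into each prompt is what lets the system probe the specifics a participant just mentioned, rather than marching through a fixed questionnaire.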

Behind the scenes, the study documents what the researchers call “expert reflection” – prompting large language models to analyze each conversation from four distinct professional viewpoints:

- Psychology – personality traits and emotional patterns
- Behavioral economics – financial decisions and risk tolerance
- Political science – political views and attitudes
- Demographics – demographic and socioeconomic background
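
As a rough illustration, this “expert reflection” step could be prompted along the following lines. Only the four viewpoints come from the article; the persona wording is an assumption, and `generate_reply` is the placeholder LLM helper from the previous sketch.

```python
# Hypothetical sketch of the "expert reflection" step: the same
# interview transcript is profiled from the four viewpoints named
# in the article. Persona wording is an assumption.

EXPERT_PERSONAS = {
    "psychologist": "Describe personality traits and emotional patterns.",
    "behavioral economist": "Describe financial decisions and risk tolerance.",
    "political scientist": "Describe political views and attitudes.",
    "demographer": "Describe demographic and socioeconomic background.",
}

def expert_reflections(transcript: list[str]) -> dict[str, str]:
    """Return one concise profile of the interviewee per viewpoint."""
    return {
        expert: generate_reply(
            f"You are an expert {expert}. {focus} "
            "Write a concise profile of the interviewee.",
            transcript,
        )
        for expert, focus in EXPERT_PERSONAS.items()
    }
```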

The researchers concluded that this interview-based technique outperformed comparable methods – such as mining social media data – by a substantial margin.

The above shows the interview interface, which features an AI interviewer represented by a 2-D sprite in a pulsating white circle that matches the audio level. The sprite changes to a microphone when it’s the participant’s turn. A progress bar shows a sprite traveling along a line, and options are available for subtitles and pausing.

Testing the digital copies

The researchers put their AI replicas through a battery of tests. 

First, they used the General Social Survey – a measure of social attitudes that asks questions about everything from political views to religious beliefs. Here, the AI copies matched their human counterparts’ responses 85% of the time.

On the Big Five personality test, which measures traits like openness and conscientiousness through 44 different questions, the AI predictions aligned with human responses about 80% of the time. The system was particularly good at capturing traits like extraversion and neuroticism.
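
For readers curious how such match rates might be scored, here is a minimal sketch. The normalization by a retake is an assumption, motivated by the self-consistency retake mentioned in the platform description above; the function names and toy data are illustrative.

```python
# Minimal sketch of scoring a digital copy against its human
# counterpart on categorical survey answers. Normalizing by a
# retake is an assumption based on the self-consistency retake
# the study platform includes; names and data are illustrative.

def agreement(a: list[str], b: list[str]) -> float:
    """Fraction of questions answered identically."""
    assert len(a) == len(b) and a, "answer lists must match and be non-empty"
    return sum(x == y for x, y in zip(a, b)) / len(a)

def normalized_accuracy(replica: list[str], human: list[str],
                        human_retake: list[str]) -> float:
    """Replica-human agreement scaled by how consistently the
    human reproduces their own answers on a retake."""
    return agreement(replica, human) / agreement(human, human_retake)

# Toy example with three categorical survey answers:
print(agreement(["agree", "no", "often"],
                ["agree", "yes", "often"]))          # ≈ 0.67
print(normalized_accuracy(["agree", "no", "often"],
                          ["agree", "yes", "often"],
                          ["agree", "yes", "rarely"]))  # 1.0
```

The normalized score captures the idea that a replica should not be penalized for disagreements on questions the human answers inconsistently themselves.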

Economic game testing revealed fascinating limitations, however. In the “Dictator Game,” where participants decide how to split money with others, the AI copies struggled to predict how generous people would actually be.

In the “Trust Game,” which tests willingness to cooperate with others for mutual benefit, the digital copies only matched human choices about two-thirds of the time. 

This suggests that while AI can grasp our stated values, it still can’t fully capture the complexity of human social decision-making.

Real-world experiments

The researchers also ran five classic social psychology experiments using their AI copies. 

In one experiment testing how perceived intent affects blame, both humans and their AI copies showed similar patterns of assigning more blame when harmful actions seemed intentional. 

Another experiment examined how fairness influences emotional responses, with AI copies accurately predicting human reactions to fair versus unfair treatment.

The AI replicas successfully reproduced human behavior in four out of five experiments, suggesting they can model not just individual topical responses but broad, complex behavioral patterns.

Easy AI clones: What are the implications?

AI clones are big business, with Meta recently announcing plans to fill Facebook and Instagram with AI profiles that can create content and engage with users.

TikTok has also jumped into the fray with its new “Symphony” suite of AI-powered creative tools, which includes digital avatars that can be used by brands and creators to produce localized content at scale.

With Symphony Digital Avatars, TikTok is enabling new ways for creators and brands to captivate global audiences using generative AI. The avatars can represent real people with a wide range of gestures, expressions, ages, nationalities and languages.

Stanford and DeepMind’s research suggests such digital replicas will become far more sophisticated – and easier to build and deploy at scale. 

“If you can have a bunch of small ‘yous’ running around and actually making the decisions that you would have made — that, I think, is ultimately the future,” lead researcher Joon Sung Park, a Stanford PhD student in computer science, told MIT Technology Review.

Park also sees upsides to the technology: building accurate clones could support scientific research.

Instead of running expensive or ethically questionable experiments on real people, researchers could test how populations might respond to certain inputs. For example, it could help predict reactions to public health messages or study how communities adapt to major societal shifts.

Ultimately, though, the same features that make these AI replicas valuable for research also make them powerful tools for deception. 

As digital copies become more convincing, distinguishing authentic human interaction from AI-generated content will grow far more complex. 

The research team acknowledges these risks. Their framework requires clear consent from participants and allows them to withdraw their data, treating personality replication with the same privacy concerns as sensitive medical information. 

In any case, we’re entering uncharted territory in human-machine interaction, and the long-term implications remain largely unknown.
