Communications of the ACM - Artificial Intelligence, November 26, 2024
Personalizing Interactions

Professor Maja Matarić works on socially assistive robots designed to help people meet challenges in health, education, and therapy through personalized interaction. She has found that building artificial intelligence into robots, so that they can adapt to each user's individual situation, significantly improves both engagement and outcomes. The article traces how social robots have developed, the challenges they face, and where the field is heading, and it highlights the central role of personalized interaction in improving user experience and outcomes. It also reflects on the ethics of social robots, arguing that they should act as bridges between people rather than replacements for human interaction.

🤔 **The goal of socially assistive robots:** Matarić works on robots that provide social, emotional, and psychological support, rather than physical manipulation alone, to help people meet challenges in health, education, and therapy.

🤖 **Enabled by AI:** AI lets robots interact with users in a more natural, more human way and adapt to each user's specific circumstances, raising both engagement and outcomes.

💡 **Personalization is key:** Interactions should be tailored to a user's needs and current state; a robot should understand what the user needs and offer the right support rather than simply hand out instructions.

🤝 **A human-robot future:** Matarić argues that the role of social robots is to connect people to one another, not to replace human interaction. The way forward is a human-human-robot model of collaboration that builds healthier networks of social interaction.

💲 **The funding challenge:** Most robotics funding currently flows to areas such as automated manufacturing, and socially assistive robots lag behind. Matarić calls for greater investment to bring them into wider use.

2024 ACM Athena Lecturer Award recipient and University of Southern California professor Maja Matarić is not afraid to get personal. In her quest to design socially assistive robots—robots that provide social, not physical, support in realms like rehabilitation, education, and therapy—she realized that personalizing interactions would boost both engagement and outcomes. Artificial Intelligence (AI) has made that easier, though as always, surprises are never far when human beings are involved. Here, Matarić shares what she’s learned about meeting people where they are.

Let’s talk about your work on socially assistive robots. You’ve said that having kids inspired you to build robots that help people. How did that interest develop into the mission of supporting specific behavioral interventions in health, wellness, and education?

It was a confluence of events in my life. I had two small kids, and I really wanted my work to have impact beyond academia in ways that even children could understand.

I did a lot of reading, and I immersed myself in a bunch of communities, because I was trying to understand how to develop agents that could help people in ways in which they needed help. Identifying that niche—that place in the user journey where something is difficult, and where behavioral interventions could support people—is not at all obvious. It remains not obvious, because we engineers tend to think, “Here’s a problem. And this is how it should be solved.” And often, we don’t even recognize the right problem, much less the right solution. The hard part is not having to remember to take your medicine or figuring out how to do your stroke rehabilitation exercises; the hard part is that doing those things reminds people that they’re not well, and exercises are often stigmatizing, boring, and repetitive, or there are more nuanced motivations we need to uncover before we can find solutions.

It’s not hard to imagine the possibilities for human-machine interactions now, in the post-ChatGPT era, but you saw the potential far earlier. I’m curious to hear your perspective on what’s changed—and what has not changed—in the twenty-odd years you’ve been working in the field.

One thing that’s changed is that machines now talk to us like humans talk, and we perceive machines as if they were human—we agentify, or ascribe agency, to machines. It’s marvelous to see that the technology is at a level where we don’t have to write dialogue trees because the agents are actually smart. Of course, there’s still plenty of work left to do. There are also really hard questions to answer about how agents should interact with users, how they should adapt and personalize their behavior, and how we can ensure that they are ethical, safe, and privacy-protective. But the underlying technological substrate has accelerated tremendously.
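To make the shift she describes concrete, here is a minimal illustrative sketch (mine, not from the interview): a hand-authored dialogue tree must anticipate every branch, while a generative agent composes replies from the conversation itself. The `llm_generate` function is a hypothetical stand-in for a real language-model API.

```python
# Illustrative sketch only: a hand-authored dialogue tree versus a
# generative agent. `llm_generate` is a hypothetical stub, not a real API.

# Old approach: every conversational branch is written by hand.
DIALOGUE_TREE = {
    "start":     {"prompt": "Did you do your exercises today?",
                  "responses": {"yes": "praise", "no": "encourage"}},
    "praise":    {"prompt": "Wonderful! Same time tomorrow?", "responses": {}},
    "encourage": {"prompt": "That's okay. Want to try one now?", "responses": {}},
}

def tree_reply(node: str, user_input: str) -> str:
    """Follow a fixed branch; anything unanticipated falls back to the start."""
    next_node = DIALOGUE_TREE[node]["responses"].get(user_input.lower(), "start")
    return DIALOGUE_TREE[next_node]["prompt"]

def llm_generate(prompt: str) -> str:
    """Hypothetical stand-in for a language-model call; canned reply here."""
    return "(model-generated reply to: " + prompt.splitlines()[-1] + ")"

# Newer approach: the agent composes a reply from the whole conversation
# rather than walking pre-written branches.
def agent_reply(history: list[str], user_input: str) -> str:
    transcript = "\n".join(history + [f"User: {user_input}"])
    return llm_generate(transcript)
```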

What has not changed?

There are fundamental issues in robotics that remain unsolved, like how robots can effectively manipulate the world physically. From my perspective, though, that’s not the biggest challenge. I don’t need my robots to physically manipulate people. I need them to provide social, emotional, and psychological support, which means that accessibility is the far larger unsolved problem. Our goal is to put physically embodied agents into people’s lives—and they need to be in their lives, which means they need to be affordable, safe, and accessible. None of that exists on the consumer market. There are no such platforms.

I’m a little surprised by that, given how well you’ve been able to demonstrate the efficacy of your robotic interventions, and that the alternative is often engaging a trained human being. Why isn’t there more funding?

There has been a surge in funding in robotics, at least in startups and industry. Most of the money has gone to robots that manipulate things in the world, because ultimately, people are interested in automating manufacturing, and they’re not seeing the opportunity for socially assistive systems. The National Science Foundation tries, but they have a tiny budget. It’s not the mission of the Department of Defense. I recently received a grant from the National Institutes of Health, which is an honor, but NIH very rarely funds technologies for health interventions.

Still, I want to be optimistic, both in the sense that people are starting to understand the societal implications of talking machines, and because, fortunately and finally, the diversity of innovators who are contributing is expanding.

In the meantime, you created an open-source kit to help college and high school students build their own “robot friend.”

My lab started with Blossom, a platform developed in Professor Guy Hoffman’s lab at Cornell, and then we redesigned the structure to make it 3D-printable and much cheaper. Finally, we designed exterior patterns that one can sew or crochet to customize the robot’s appearance.

Now we have a robot platform that costs maybe $230 to build, and then you make a customized skin for it, and it’s really inexpensive and completely open source, so hopefully anybody can do it.

These robots are very cute. I imagine that’s part of the point.

We and many others have done studies on this issue of embodiment. What happens when you interact with a screen versus when you interact with a physically embodied agent? There’s very clear evidence that physical embodiment is fundamental to improving both engagement and outcomes. That’s not to say that screen agents can’t do useful things. But the question is, how do they compare? It turns out, largely unfavorably.

We’re also working in contexts where things are really hard. This isn’t about video game engagement. It’s about helping children with autism learn new skills or supporting people with anxiety and depression in learning emotion regulation. We did a study in college dorms in which we compared a chatbot that provided LLM-based therapy versus the same LLM-based therapy from a physically embodied robot. Students engaged with and used both of them, but only the students who used the robot measurably reduced their psychiatric distress.

What are some of the things that surprise you about the way people interact with robots?

We’re always surprised by people. Early on, we were surprised when people tried to cheat or trick the robot. Now, we’re surprised by how people react to the idea of interacting with a robot. About seven years ago, we were doing a study with elderly people, and one of the participants said, “It’s cute, but why can’t it do as many things as my iPad can?” Some people absolutely love the robot and others are very grumpy, and the question is, what can we learn from that about our own stereotypes and cognitive biases, and about personalizing the interaction?

Personalization is why these interventions work. We need to be able to find out what someone needs right now, as opposed to simply telling them, “Here are your steps, and you need to go do them.” Even with physical health, it turns out that a lot depends on the state you’re in on a given day, on your metabolism, and so on. Why wouldn’t that be the case with your behavior, which relates to your mental and physical health, and also your social context?
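As a rough illustration of what state-conditioned personalization could look like (a sketch under my own assumptions, not a description of her lab's system), a policy can choose the next prompt from the user's observed state rather than from a fixed script. The state variables and scales here are hypothetical.

```python
# Illustrative sketch, not an actual intervention system: pick the next
# prompt from the user's current state instead of a fixed directive.
from dataclasses import dataclass

@dataclass
class UserState:
    energy: float       # 0.0 (exhausted) to 1.0 (energetic); hypothetical scale
    mood: float         # 0.0 (low) to 1.0 (high); hypothetical scale
    sessions_done: int  # exercise sessions completed today

def next_prompt(state: UserState) -> str:
    """Match the intervention to the moment rather than issuing fixed steps."""
    if state.mood < 0.3:
        return "Rough day? Let's just talk for a minute first."
    if state.energy < 0.4:
        return "How about one short, easy exercise?"
    if state.sessions_done == 0:
        return "You seem ready. Want to start your first session?"
    return "Nice work so far. One more round?"

# Example: a tired but upbeat user who hasn't started yet gets an easy ask.
print(next_prompt(UserState(energy=0.2, mood=0.6, sessions_done=0)))
```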

It’s very multi-layered, isn’t it? It also alleviates this burden people often feel in therapeutic settings, where their health is tied to individual choices, and the broader social context figures, if at all, in a very indirect, amorphous way.

Exactly. However, I do worry that creating intelligent agents risks making vulnerable people even more isolated, because they’ll be told to just rely on their agent. What these agents should be doing is connecting people socially and serving as this interstitial network. It can’t be a binary choice between human-agent and human-human interaction. It has to be human-human-agent.
