AIhub, 10 June, 16:04
Interview with Amar Halilovic: Explainable AI for robotics

 

This article introduces the research of Amar Halilovic, a PhD student at Ulm University, focusing on explainable AI for robot navigation. His work investigates how robots can generate explanations of their actions that align with human preferences and expectations, particularly in navigation tasks. Halilovic's research covers a framework for environmental explanations, black-box and generative approaches, planning of explanation attributes, and dynamically selecting the best explanation strategy. He considers it essential to understand how people interpret robot behaviour differently across contexts. Looking ahead, Halilovic plans to extend his framework for real-time adaptation and to validate the effectiveness of explanations through user studies.

🤔 Amar Halilovic's research centres on building explainable AI systems for robot navigation, focusing on how robots can generate explanations that align with human preferences and expectations, particularly in navigation tasks.

💡 Halilovic has developed a framework for explaining robot actions and decisions, especially when things go wrong. He has explored black-box and generative approaches for producing textual and visual explanations, and has studied the planning of explanation attributes such as timing, representation, and duration.

🧐 His research also covers dynamically selecting the best explanation strategy depending on context and user preferences. He has found that people interpret robot behaviour differently depending on urgency or failure context, so tailoring the timing and content of explanations is essential.

🚀 Looking ahead, Halilovic plans to extend the framework for real-time adaptation, enabling robots to learn from user feedback and adjust their explanations on the fly. He also plans further user studies to validate the effectiveness of these explanations in real-world human-robot interaction settings.

In this interview series, we’re meeting some of the AAAI/SIGAI Doctoral Consortium participants to find out more about their research. The Doctoral Consortium provides an opportunity for a group of PhD students to discuss and explore their research interests and career objectives in an interdisciplinary workshop together with a panel of established researchers. In this latest interview, we hear from Amar Halilovic, a PhD student at Ulm University.

Tell us a bit about your PhD – where are you studying, and what is the topic of your research?

I’m currently a PhD student at Ulm University in Germany, where I focus on explainable AI for robotics. My research investigates how robots can generate explanations of their actions in a way that aligns with human preferences and expectations, particularly in navigation tasks.

Could you give us an overview of the research you’ve carried out so far during your PhD?

So far, I’ve developed a framework for environmental explanations of robot actions and decisions, especially when things go wrong. I have explored black-box and generative approaches for generating textual and visual explanations. Furthermore, I have been working on planning different explanation attributes, such as timing, representation, and duration. Lately, I’ve been working on methods for dynamically selecting the best explanation strategy depending on the context and user preferences.

Is there an aspect of your research that has been particularly interesting?

Yes, I find it fascinating how people interpret robot behavior differently depending on the urgency or failure context. It’s been especially rewarding to study how explanation expectations shift in different situations and how we can tailor explanation timing and content accordingly.

What are your plans for building on your research so far during the PhD – what aspects will you be investigating next?

Next, I’ll be extending the framework to incorporate real-time adaptation, enabling robots to learn from user feedback and adjust their explanations on the fly. I’m also planning more user studies to validate the effectiveness of these explanations in real-world human-robot interaction settings.

Amar with his poster at the AAAI/SIGAI Doctoral Consortium at AAAI 2025.

What made you want to study AI, and, in particular, explainable robot navigation?

I’ve always been interested in the intersection of humans and machines. During my studies, I realized that making AI systems understandable isn’t just a technical challenge—it’s key to trust and usability. Robot navigation struck me as a particularly compelling area because decisions are spatial and visual, making explanations both challenging and impactful.

What advice would you give to someone thinking of doing a PhD in the field?

Pick a topic that genuinely excites you—you’ll be living with it for several years! Also, build a support network of mentors and peers. It’s easy to get lost in the technical work, but collaboration and feedback are vital.

Could you tell us an interesting (non-AI related) fact about you?

I have lived and studied in four different countries.

About Amar

Amar is a PhD student at the Institute of Artificial Intelligence of Ulm University in Germany. His research focuses on Explainable Artificial Intelligence (XAI) in Human-Robot Interaction (HRI), particularly how robots can generate context-sensitive explanations for navigation decisions. He combines symbolic planning and machine learning to build explainable robot systems that adapt to human preferences and different contexts. Before starting his PhD, he studied Electrical Engineering at the University of Sarajevo in Sarajevo, Bosnia and Herzegovina, and Computer Science at Mälardalen University in Västerås, Sweden. Outside academia, Amar enjoys travelling, photography, and exploring connections between technology and society.

