AIhub, 14 February
Interview with Kayla Boggess: Explainable AI for more accessible and understandable technologies

This article profiles the research of Kayla Boggess, a PhD student at the University of Virginia who works on explainable AI in a multidisciplinary lab. Her research spans explanations for multi-agent reinforcement learning (MARL) and human-centred explanations, among other topics. She also discusses her future plans, why she chose the field, her advice for new PhD students, and her interests outside research.

🎓 Kayla is a PhD student at the University of Virginia, working on explainable AI in a multidisciplinary lab.

💡 Her research includes generating comprehensive policy summaries for MARL and providing natural language explanations.

🚀 In the future, she wants to improve explainability methods for complex systems and apply them to more real-world systems.

💬 She stresses that new PhD students shouldn't tie themselves to a single problem too early, and should explore the areas that interest them.


In this interview series, we’re meeting some of the AAAI/SIGAI Doctoral Consortium participants to find out more about their research. The Doctoral Consortium provides an opportunity for a group of PhD students to discuss and explore their research interests and career objectives in an interdisciplinary workshop together with a panel of established researchers. In the second of our interviews with the 2025 cohort, we hear from Kayla Boggess, a PhD student at the University of Virginia, and find out more about her research on explainable AI.

Tell us a bit about your PhD – where are you studying, and what is the topic of your research?

I’m currently working on my PhD at the University of Virginia. I’m a member of the University of Virginia Link Lab, which is a multi-disciplinary lab focused on cyber-physical systems. There are individuals from departments across the University of Virginia who all work in the lab, so I’ve had the opportunity to work with other researchers in computer science, systems engineering, psychology, and even law during my time there. We work on real-world problems in robotics, autonomous vehicles, health care, the internet of things, and smart cities.

Specifically, I work in explainable AI. My goal is to make advanced technologies more accessible and understandable for users by increasing system transparency, building user trust, and enhancing collaboration with autonomous systems. I have worked to create concept-based natural language explanations for multi-agent reinforcement learning policies and applied these methods to domains like autonomous vehicles and robotic search and rescue.

Could you give us an overview of the research you’ve carried out so far during your PhD?

My research in explainable AI focuses on two key areas: explanations for multi-agent reinforcement learning (MARL) and human-centric explanations. First, I developed methods to generate comprehensive policy summaries that clarify agent behaviors within a MARL policy and provide concept-based natural language explanations to address user queries about agent decisions such as when, what, and why not. Second, I leveraged user preferences, social concepts, subjective evaluations, and information presentation methods to create more effective explanations tailored to human needs.

Is there an aspect of your research that has been particularly interesting?

I particularly enjoy the evaluation aspect of research: the computational experiments and user studies that are run once the algorithms are developed. Since I work on the generation of explanations, there is a lot of evaluation with real-world users that needs to happen to show that the explanations we produce are actually usable and helpful. I always find it interesting how people react to the systems, what they wish the system did and didn’t do, how they would change how the information is laid out, and what they take away from the explanation. People are much more difficult to deal with than algorithms, so I like to piece together the best way to help them and get them to collaborate with AI systems. Sometimes it’s significantly more difficult than building the system itself.

What are your plans for building on your research so far during the PhD – what aspects will you be investigating next?

In the future, I would like to improve explainable methods for complex systems by focusing further on the development of robust algorithms and the integration of human factors. I would like to apply my methods to more complex, real-world systems like autonomous vehicles and large language models. My goal is to help ensure that understandable and satisfactory decisions can be made by AI systems for a broad audience.

What made you want to study AI, and in particular the area of explainable AI?

I actually have two undergraduate degrees, one in computer science and the other in English Literature. I originally wanted to get my PhD in English, but after applying to both English and computer science programs, I found that I had better opportunities on the computer science side. I was offered a position in the first cohort of the University of Virginia Link Lab’s NRT program, and I took the offer because it was going to allow me to do interdisciplinary work. I wasn’t sure yet exactly what I was going to study, but I knew I wanted it to be somewhere between computer science and English. We were able to rotate through multiple advisors in my first year, so I didn’t have to commit to anything directly to begin with. My advisor approached me with an explainable AI project that she wanted to get off the ground and felt that I was a good fit given my background. I enjoyed the project so much that I decided to continue working on it.

What advice would you give to someone thinking of doing a PhD in the field?

I would say that a new PhD student shouldn’t tie themselves down to one problem too quickly. Take time to explore different fields and find something you are interested in. Just because you come into your PhD thinking you are going to do one thing doesn’t mean you’ll end your PhD working on that same problem. Play to your strengths as a researcher and don’t just pick a field because you think it’s trendy. Be ready to walk away from a problem if you have to, but don’t be afraid to take on new projects and try things out. You never know who you will meet and how things will work out.

Could you tell us an interesting (non-AI related) fact about you?

I’m self-taught in sewing. In my down time, I like to make all sorts of things like jackets, dresses, and pants. It’s something that keeps my mind engaged in problem-solving while letting me do something creative. I’ve actually won several awards for my pieces in the past.

About Kayla

Kayla Boggess is a PhD student in the Department of Computer Science at the University of Virginia, advised by Dr Lu Feng. Kayla’s research is at the intersection of explainable artificial intelligence, human-in-the-loop cyber-physical systems, and formal methods. She is interested in developing human-focused cyber-physical systems using natural language explanations for various single and multi-agent domains such as search and rescue and autonomous vehicles. Her research has led to multiple publications at top-tier computer science conferences such as IJCAI, ICCPS, and IROS. Kayla is a recipient of the prestigious NSF NRT Graduate Research Fellowship and a CPS Rising Star Award.
