cs.AI updates on arXiv.org · Jul 30, 12:12
XAI for Point Cloud Data using Perturbations based on Meaningful Segmentation

This work proposes a novel segmentation-based explainable artificial intelligence (XAI) method for neural networks that classify point cloud data. As a core building block, the authors introduce a novel point-shifting mechanism for perturbing point clouds. The method aims to make the decision-making of AI algorithms in critical domains more transparent by producing explanations that humans can readily understand, supporting analysis and downstream decisions. The focus is on explaining point cloud classification algorithms: a point cloud segmentation model generates the segments used for explanation, and the point-shifting mechanism perturbs those segments to produce saliency maps. Unlike existing approaches, the segments used here carry clear semantic meaning, yielding more interpretable saliency maps; the method's effectiveness is validated through comparison with classical clustering algorithms and through analysis of example inputs.

💡 Proposes a novel segmentation-based XAI method for point cloud classification neural networks, aiming to make AI decision-making more interpretable. The method uses a point cloud segmentation model to generate explanations and a point-shifting mechanism to introduce perturbations.

🚀 Introduces a novel point-shifting mechanism for perturbing point cloud data, ensuring that shifted points no longer influence the classifier's output. This mechanism is the key to generating meaningful saliency maps.

🌟 In contrast to existing methods, the segments produced carry semantic meaning that humans can easily grasp, enabling more interpretable and informative saliency maps that help users understand the model's decisions.

📊 The method's effectiveness and practicality in generating meaningful explanations are validated by comparison with classical clustering algorithms and by analyzing the saliency maps produced for example inputs.
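To make the point-shifting perturbation concrete, here is a minimal NumPy sketch. The shift rule used below (pushing a segment's points radially far away from the object centroid so the classifier effectively ignores them) and all names are illustrative assumptions; the paper's exact mechanism is not reproduced here.

```python
import numpy as np

def shift_segment(points: np.ndarray, seg_mask: np.ndarray,
                  distance: float = 100.0) -> np.ndarray:
    """Hypothetical point-shifting perturbation.

    Moves the points selected by `seg_mask` far outside the object's
    extent (radially away from the centroid), so that a classifier
    focused on the object's shape is no longer influenced by them.
    """
    perturbed = points.copy()
    centroid = points.mean(axis=0)
    # Direction from the centroid to each masked point.
    direction = perturbed[seg_mask] - centroid
    norm = np.linalg.norm(direction, axis=1, keepdims=True)
    norm[norm == 0] = 1.0  # avoid division by zero for points at the centroid
    # Place each masked point at a fixed large radius along its direction.
    perturbed[seg_mask] = centroid + direction / norm * distance
    return perturbed
```

Unmasked points are untouched, so only the segment under investigation is perturbed at each step.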

arXiv:2507.22020v1 Announce Type: cross Abstract: We propose a novel segmentation-based explainable artificial intelligence (XAI) method for neural networks working on point cloud classification. As one building block of this method, we propose a novel point-shifting mechanism to introduce perturbations in point cloud data. AI has recently seen exponential growth, so it is important to understand the decision-making process of AI algorithms when they are applied in critical areas. Our work focuses on explaining AI algorithms that classify point cloud data. An important aspect of the methods used for explaining AI algorithms is their ability to produce explanations that are easy for humans to understand. This allows them to analyze the AI algorithms better and make appropriate decisions based on that analysis. Therefore, in this work, we intend to generate meaningful explanations that can be easily interpreted by humans. The point cloud data we consider represents 3D objects such as cars, guitars, and laptops. We make use of point cloud segmentation models to generate explanations for the working of classification models. The segments are used to introduce perturbations into the input point cloud data and generate saliency maps. The perturbations are introduced using the novel point-shifting mechanism proposed in this work, which ensures that the shifted points no longer influence the output of the classification algorithm. In contrast to previous methods, the segments used by our method are meaningful, i.e., humans can easily interpret the meaning of the segments. Thus, the benefit of our method over other methods is its ability to produce more meaningful saliency maps. We compare our method with the use of classical clustering algorithms to generate explanations. We also analyze the saliency maps generated for example inputs using our method to demonstrate the usefulness of the method in generating meaningful explanations.
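The perturbation-based saliency pipeline described in the abstract — perturb one segment at a time, measure how much the classifier's confidence drops — can be sketched as follows. The classifier interface, the drop-in-confidence scoring, and all function names are hypothetical assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def segment_saliency(points, segment_ids, classify, target_class, perturb):
    """Per-segment saliency via perturbation.

    For each segment, perturb its points and record the drop in the
    classifier's confidence for `target_class`. A larger drop means
    the segment matters more for the prediction.

    classify(points) -> dict mapping class id to confidence (assumed API)
    perturb(points, mask) -> perturbed copy of the point cloud (assumed API)
    """
    base = classify(points)[target_class]
    saliency = {}
    for seg in np.unique(segment_ids):
        mask = segment_ids == seg
        score = classify(perturb(points, mask))[target_class]
        saliency[seg] = base - score
    return saliency
```

The saliency values can then be mapped back onto the points of each segment to render a saliency map over the input cloud; because each segment is a semantically meaningful part (e.g., a guitar's neck), the resulting map is directly human-interpretable.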


Tags

Point cloud classification · Explainable AI (XAI) · Point-shifting mechanism · Saliency maps