AIhub, 2 March
Congratulations to the #AAAI2025 outstanding paper award winners

The AAAI 2025 conference has announced its outstanding paper awards, honouring papers that meet the highest standards of technical contribution and exposition. This year's winners include work on achieving the optimal distortion in multi-agent systems with only a few queries, improving the efficiency of matching under incomplete information. Another paper proposes an efficient method for rectifying inconsistencies in neuro-symbolic reasoning via abductive reflection, drawing on human cognitive processes to improve the accuracy and efficiency of neuro-symbolic AI systems. A third contribution studies a decidable class of POMDPs with omega-regular objectives, providing theoretical guarantees for sequential decision making under uncertainty. A special-track award went to an exploration of domain-specific distribution shifts in large-scale, volunteer-collected biodiversity datasets, revealing how data biases affect deep learning models used for biodiversity monitoring.

🥇 Every Bit Helps: proposes a method for achieving the optimal distortion in multi-agent systems with only a few queries, addressing how to match agents to alternatives effectively under incomplete information. By querying each agent's cardinal utility for a small number of alternatives, the distortion is improved significantly, and the result extends to the general social choice problem.

🧠 Efficient Rectification of Neuro-Symbolic Reasoning Inconsistencies by Abductive Reflection: inspired by human cognitive reflection, proposes Abductive Reflection (ABL-Refl), a method for improving neuro-symbolic AI systems that uses domain knowledge to abduce a reflection vector, which flags potential errors in neural network outputs and invokes abduction to rectify them, improving the system's accuracy and efficiency.

🧩 Revelations: introduces a revelation mechanism that restricts information loss by ensuring the agent eventually gains full knowledge of the current state, constructs exact algorithms for two classes of POMDPs (weakly and strongly revealing), and reduces the decidable cases to the analysis of a finite belief-support Markov decision process.

🌍 DivShift: explores the impact of domain-specific distribution shifts in large-scale, volunteer-collected biodiversity datasets, introduces the Diversity Shift (DivShift) framework for quantifying their effect on deep learning model performance, and offers recommendations for training computer vision models on natural world imagery biodiversity collections.

The AAAI 2025 outstanding paper awards were announced during the opening ceremony of the 39th Annual AAAI Conference on Artificial Intelligence on Thursday 27 February. These awards honour papers that “exemplify the highest standards in technical contribution and exposition”. Papers are recommended for consideration during the review process by members of the Program Committee. This year, three papers have been selected as outstanding papers, with a further paper being recognised in the special track on AI for social impact.

AAAI-25 outstanding papers

Every Bit Helps: Achieving the Optimal Distortion with a Few Queries
Soroush Ebadian and Nisarg Shah

Abstract: A fundamental task in multi-agent systems is to match agents to alternatives (e.g., resources or tasks). Often, this is accomplished by eliciting agents’ ordinal rankings over the alternatives instead of their exact numerical utilities. While this simplifies elicitation, the incomplete information leads to inefficiency, captured by a worst-case measure called distortion. A recent line of work shows that making just a few queries to each agent regarding their cardinal utility for an alternative can significantly improve the distortion, with [1] achieving distortion with two queries per agent. We generalize their result by achieving distortion with queries per agent, for any constant , which is optimal given a previous lower bound by [2]. We also extend our finding to the general social choice problem, where one of alternatives must be chosen based on the preferences of agents, and show that distortion can be achieved with queries per agent, for any constant , which is also optimal given prior results. Thus, for both problems, our work settles open questions regarding the optimal distortion achievable using a fixed number of cardinal value queries.
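To make the notion of "distortion" concrete, here is a minimal toy illustration (not the paper's algorithm, and with invented utility numbers): a rule that sees only ordinal rankings can pick a matching whose utilitarian welfare falls well short of the optimum, and the ratio between the two is the distortion on that instance.

```python
# Toy illustration of distortion in one-to-one matching.
from itertools import permutations

# Hypothetical cardinal utilities: utilities[i][j] = agent i's value for
# alternative j. An ordinal rule only sees each agent's ranking, and here
# all three agents rank the alternatives identically: 0 > 1 > 2.
utilities = [
    [0.51, 0.49, 0.0],
    [0.51, 0.49, 0.0],
    [1.00, 0.00, 0.0],
]

def welfare(matching, utils):
    """Utilitarian welfare: sum of each agent's utility for its match."""
    return sum(utils[i][j] for i, j in enumerate(matching))

def best_welfare(utils):
    """Optimal welfare over all one-to-one matchings (brute force)."""
    return max(welfare(m, utils) for m in permutations(range(len(utils))))

def distortion(matching, utils):
    """Inefficiency of the chosen matching on this instance."""
    return best_welfare(utils) / welfare(matching, utils)

# Suppose the ordinal rule matched agent i to alternative i. A cardinal
# query would reveal that agent 2 values alternative 0 far more than the
# others do, so giving alternative 0 to agent 2 raises welfare.
chosen = (0, 1, 2)
print(distortion(chosen, utilities))
```

A cardinal-query mechanism in the spirit of the paper would spend its few queries learning exactly such imbalances, which is why a constant number of queries per agent can shrink the worst-case distortion so dramatically.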

Read the paper in full here.


Efficient Rectification of Neuro-Symbolic Reasoning Inconsistencies by Abductive Reflection
Wen-Chao Hu, Yuan Jiang, Zhi-Hua Zhou, Wang-Zhou Dai

Abstract: Neuro-Symbolic (NeSy) AI could be regarded as an analogy to human dual-process cognition, modeling the intuitive System 1 with neural networks and the algorithmic System 2 with symbolic reasoning. However, for complex learning targets, NeSy systems often generate outputs inconsistent with domain knowledge and it is challenging to rectify them. Inspired by the human Cognitive Reflection, which promptly detects errors in our intuitive response and revises them by invoking the System 2 reasoning, we propose to improve NeSy systems by introducing Abductive Reflection (ABL-Refl) based on the Abductive Learning (ABL) framework. ABL-Refl leverages domain knowledge to abduce a reflection vector during training, which can then flag potential errors in the neural network outputs and invoke abduction to rectify them and generate consistent outputs during inference. ABL-Refl is highly efficient in contrast to previous ABL implementations. Experiments show that ABL-Refl outperforms state-of-the-art NeSy methods, achieving excellent accuracy with fewer training resources and enhanced efficiency.
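The reflection-then-abduction loop can be sketched in a few lines. This is an assumed toy setup, not the authors' implementation: the knowledge base is a single arithmetic constraint, and the reflection vector is hand-set rather than learned, but the mechanism (revise only the flagged positions until the output is consistent) is the one the abstract describes.

```python
# Toy sketch of abductive reflection: a reflection vector flags suspect
# positions in a neural output, and abduction revises only those
# positions so the result satisfies the domain knowledge.

def consistent(symbols):
    """Toy knowledge base: the three digits must satisfy a + b = c."""
    a, b, c = symbols
    return a + b == c

def abduce(symbols, reflection):
    """Search revisions of the flagged positions only, keeping the
    trusted (unflagged) positions fixed, until the KB is satisfied."""
    flagged = [i for i, f in enumerate(reflection) if f]

    def search(assignment, idx):
        if idx == len(flagged):
            return assignment if consistent(assignment) else None
        for digit in range(10):  # candidate symbols for a flagged slot
            trial = list(assignment)
            trial[flagged[idx]] = digit
            result = search(trial, idx + 1)
            if result is not None:
                return result
        return None

    return search(list(symbols), 0)

# The network's intuitive output, inconsistent with a + b = c:
prediction = [3, 4, 9]
# Reflection vector (hand-set here; in ABL-Refl it is abduced/learned):
reflection = [False, False, True]   # only the last symbol is suspect
print(abduce(prediction, reflection))   # revises 9 -> [3, 4, 7]
```

Restricting the abductive search to the flagged positions is what makes this efficient relative to abducing over the full output, which is the efficiency gain the abstract attributes to ABL-Refl.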

Read the paper in full here.


Revelations: A Decidable Class of POMDPs with Omega-Regular Objectives
Marius Belly, Nathanaël Fijalkow, Hugo Gimbert, Florian Horn, Guillermo Perez, Pierre Vandenhove

Abstract: Partially observable Markov decision processes (POMDPs) form a prominent model for uncertainty in sequential decision making. We are interested in constructing algorithms with theoretical guarantees to determine whether the agent has a strategy ensuring a given specification with probability 1. This well-studied problem is known to be undecidable already for very simple omega-regular objectives, because of the difficulty of reasoning on uncertain events. We introduce a revelation mechanism which restricts information loss by requiring that almost surely the agent has eventually full information of the current state. Our main technical results are to construct exact algorithms for two classes of POMDPs called weakly and strongly revealing. Importantly, the decidable cases reduce to the analysis of a finite belief-support Markov decision process. This yields a conceptually simple and exact algorithm for a large class of POMDPs.
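The belief-support reduction can be illustrated on a toy POMDP (an assumed encoding, not the paper's construction): instead of tracking a probability distribution over states, the agent tracks only the *support* of its belief, the set of states it might be in, which is a finite object. In this toy example every observation pins down the state, so the POMDP is "revealing" in the paper's sense.

```python
# Toy belief-support construction for a POMDP.
# trans[(state, action)] = set of possible (next_state, observation) pairs.
trans = {
    ("s0", "a"): {("s1", "left"), ("s2", "right")},
    ("s1", "a"): {("s1", "left")},
    ("s2", "a"): {("s2", "right")},
}
actions = ["a"]

def step(support, action, obs):
    """Belief-support update: successors of the support under `action`
    that are compatible with observation `obs`."""
    return frozenset(
        ns for s in support
        for (ns, o) in trans.get((s, action), ())
        if o == obs
    )

def reachable_supports(initial):
    """Explore the finite belief-support MDP from an initial support."""
    start = frozenset(initial)
    frontier, seen = [start], {start}
    while frontier:
        sup = frontier.pop()
        for act in actions:
            observations = {o for s in sup for (_, o) in trans.get((s, act), ())}
            for obs in observations:
                nxt = step(sup, act, obs)
                if nxt and nxt not in seen:
                    seen.add(nxt)
                    frontier.append(nxt)
    return seen

supports = reachable_supports({"s0"})
print(sorted(sorted(s) for s in supports))  # -> [['s0'], ['s1'], ['s2']]
```

Because the set of supports is finite (at most exponential in the number of states), qualitative questions about the POMDP reduce to analysing this finite MDP, which is the decidability handle the abstract describes.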

Read the paper in full here.


AAAI-25 outstanding paper – special track on AI for social impact (AISI)

DivShift: Exploring Domain-Specific Distribution Shifts in Large-Scale, Volunteer-Collected Biodiversity Datasets
Elena Sierra, Teja Katterborn, Salim Soltani, Lauren Gillespie, Moisés Expósito-Alonso

Abstract: Large-scale, volunteer-collected datasets of community-identified natural world imagery like iNaturalist have enabled marked performance gains for fine-grained visual classification of plant species using deep learning models. However, such datasets are opportunistic and lack a structured sampling strategy. Resulting geographic, temporal, observation quality, and socioeconomic biases inherent to this volunteer-based participatory data collection process are stymieing the wide uptake of these models for downstream biodiversity monitoring tasks, especially in the Global South. While widely documented in biodiversity modeling literature, the impact of these biases’ downstream distribution shift on deep learning models has not been rigorously quantified. Here we introduce Diversity Shift (DivShift), a framework for quantifying the effects of biodiversity domain-specific distribution shifts on deep learning model performance. We also introduce DivShift – West Coast Plant (DivShift-WCP), a new curated dataset of almost 8 million iNaturalist plant observations across the western coast of North America, for diagnosing the effects of these biases in a controlled case study. Using this new dataset, we contrast computer vision model performance across a variety of these shifts and observe that these biases indeed confound model performance across observation quality, spatial location, and political boundaries. Interestingly, we find for all partitions that accuracy is lower than expected by chance from estimates of dataset shift from the data themselves, implying the structure within natural world images provides significant generalization improvements. From these observations, we suggest recommendations for training computer vision models on natural world imagery biodiversity collections.
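The core measurement is simple to sketch. Here is a minimal illustration with invented toy data (not the DivShift benchmark or its metrics): partition predictions along a bias axis, such as observation quality, and compare per-partition accuracy to expose the shift.

```python
# Toy sketch of quantifying a domain-specific distribution shift:
# compare a model's accuracy across partitions of the test data.

def accuracy(preds, labels):
    """Fraction of predictions that match the labels."""
    assert len(preds) == len(labels) and labels
    return sum(p == y for p, y in zip(preds, labels)) / len(labels)

# Hypothetical (prediction, label) pairs, partitioned along one bias
# axis, e.g. "research-grade" vs "casual" observation quality:
partitions = {
    "research_grade": ([1, 0, 1, 1, 0, 1], [1, 0, 1, 1, 0, 1]),
    "casual":         ([1, 1, 0, 0, 1, 0], [1, 0, 1, 0, 0, 0]),
}

scores = {name: accuracy(p, y) for name, (p, y) in partitions.items()}
shift_gap = scores["research_grade"] - scores["casual"]
print(scores, round(shift_gap, 2))
```

A large gap between partitions signals that the bias axis matters for deployment; the paper performs this kind of contrast across quality, spatial, and political partitions of DivShift-WCP.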

Read the paper in full here.


