cs.AI updates on arXiv.org, Jul 30, 12:11
A finite time analysis of distributed Q-learning

This paper presents a finite-time analysis of a distributed Q-learning algorithm and establishes a new sample complexity result, providing theoretical support for multi-agent reinforcement learning (MARL).

arXiv:2405.14078v2 Announce Type: replace Abstract: Multi-agent reinforcement learning (MARL) has witnessed a remarkable surge in interest, fueled by the empirical success achieved in applications of single-agent reinforcement learning (RL). In this study, we consider a distributed Q-learning scenario, wherein a number of agents cooperatively solve a sequential decision-making problem without access to the central reward function, which is the average of the local rewards. In particular, we carry out a finite-time analysis of a distributed Q-learning algorithm, and provide a new sample complexity result of $\tilde{\mathcal{O}}\left( \min\left\{ \frac{1}{\epsilon^2}\frac{t_{\text{mix}}}{(1-\gamma)^6 d_{\min}^4}, \frac{1}{\epsilon}\frac{\sqrt{|\mathcal{S}||\mathcal{A}|}}{(1-\sigma_2(\boldsymbol{W}))(1-\gamma)^4 d_{\min}^3} \right\} \right)$ under tabular lookup
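
To make the setting concrete, below is a minimal, illustrative sketch of distributed Q-learning with consensus averaging over a mixing matrix W: each agent performs a local Q-learning update using only its own reward, then averages its Q-table with its neighbors'. The environment, the ring-shaped mixing matrix, the uniform behavior policy, and all constants are assumptions chosen for illustration; this is not the paper's exact algorithm or step-size schedule.

# Sketch (not the paper's exact algorithm) of distributed Q-learning with
# consensus averaging: each agent updates a local Q-table from its own reward,
# then mixes its table with neighbors via a doubly stochastic matrix W.
# All environment details (n_states, n_actions, rewards, P, W) are illustrative.
import numpy as np

n_agents, n_states, n_actions = 4, 10, 3
gamma, alpha = 0.9, 0.1                      # discount factor, step size
rng = np.random.default_rng(0)

# Doubly stochastic mixing matrix W (here: a simple ring with self-loops).
W = np.zeros((n_agents, n_agents))
for i in range(n_agents):
    W[i, i] = 0.5
    W[i, (i + 1) % n_agents] = 0.25
    W[i, (i - 1) % n_agents] = 0.25

# Local reward tables r_i(s, a); the "central" reward is their average.
rewards = rng.uniform(0, 1, size=(n_agents, n_states, n_actions))
P = rng.dirichlet(np.ones(n_states), size=(n_states, n_actions))  # shared P(s'|s,a)

Q = np.zeros((n_agents, n_states, n_actions))
s = rng.integers(n_states)
for t in range(5000):
    a = rng.integers(n_actions)                      # behavior policy: uniform exploration
    s_next = rng.choice(n_states, p=P[s, a])
    for i in range(n_agents):                        # local TD update with agent i's own reward
        td_target = rewards[i, s, a] + gamma * Q[i, s_next].max()
        Q[i, s, a] += alpha * (td_target - Q[i, s, a])
    Q = np.einsum("ij,jsa->isa", W, Q)               # consensus step: Q_i <- sum_j W_ij Q_j
    s = s_next

print("max disagreement across agents:", np.abs(Q - Q.mean(axis=0)).max())

The quantities in the sample complexity bound map onto this sketch: $\sigma_2(\boldsymbol{W})$ is the second-largest singular value of the mixing matrix (how fast the consensus step contracts disagreement), while $t_{\text{mix}}$ and $d_{\min}$ concern the mixing time and minimum state-action visitation of the behavior policy.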


Related tags

Multi-agent reinforcement learning, Distributed Q-learning, Sample complexity, Finite-time analysis, Theoretical support