MIT News - Artificial intelligence, January 29
New training approach could help AI agents perform better in uncertain conditions

Researchers at MIT have found that, contrary to conventional wisdom, training an AI agent in a noise-free environment can sometimes make it perform better. They call this phenomenon the "indoor training effect." The team studied it by training AI agents to play Atari games and then adding uncertainty to those games. Agents trained in noise-free environments outperformed agents trained in the same noisy environments used for testing. This suggests that, in some cases, training AI in a less uncertain environment helps it adapt better to the complexity of the real world, and it offers a new way of thinking about AI training methods.

💡 The study found that AI agents trained in a noise-free environment can outperform agents trained directly in a noisy environment when both are tested under noise, a phenomenon the researchers call the "indoor training effect."

🕹️ The researchers simulated noisy environments by modifying Atari games and injecting randomness. The indoor training effect appeared consistently across games and game variations, indicating it is not a fluke.

🎯 The results suggest that training AI does not always require reproducing the noise of the real environment. Training in a cleaner, more controllable environment can sometimes let an agent master the basic rules first and then adapt better to the real world.

🤔 The study also found that when two agents explore the training space in similar ways, the agent trained in the noise-free environment performs better; when their exploration patterns differ substantially, the agent trained in the noisy environment may do better. Exploration patterns therefore have a significant effect on training outcomes.

A home robot trained to perform household tasks in a factory may fail to effectively scrub the sink or take out the trash when deployed in a user’s kitchen, since this new environment differs from its training space.

To avoid this, engineers often try to match the simulated training environment as closely as possible with the real world where the agent will be deployed.

However, researchers from MIT and elsewhere have now found that, despite this conventional wisdom, sometimes training in a completely different environment yields a better-performing artificial intelligence agent.

Their results indicate that, in some situations, training a simulated AI agent in a world with less uncertainty, or “noise,” enabled it to perform better than a competing AI agent trained in the same, noisy world they used to test both agents.

The researchers call this unexpected phenomenon the indoor training effect.

“If we learn to play tennis in an indoor environment where there is no noise, we might be able to more easily master different shots. Then, if we move to a noisier environment, like a windy tennis court, we could have a higher probability of playing tennis well than if we started learning in the windy environment,” explains Serena Bono, a research assistant in the MIT Media Lab and lead author of a paper on the indoor training effect.

The researchers studied this phenomenon by training AI agents to play Atari games, which they modified by adding some unpredictability. They were surprised to find that the indoor training effect consistently occurred across Atari games and game variations.

They hope these results fuel additional research toward developing better training methods for AI agents.

“This is an entirely new axis to think about. Rather than trying to match the training and testing environments, we may be able to construct simulated environments where an AI agent learns even better,” adds co-author Spandan Madan, a graduate student at Harvard University.

Bono and Madan are joined on the paper by Ishaan Grover, an MIT graduate student; Mao Yasueda, a graduate student at Yale University; Cynthia Breazeal, professor of media arts and sciences and leader of the Personal Robotics Group in the MIT Media Lab; Hanspeter Pfister, the An Wang Professor of Computer Science at Harvard; and Gabriel Kreiman, a professor at Harvard Medical School. The research will be presented at the Association for the Advancement of Artificial Intelligence Conference.

Training troubles

The researchers set out to explore why reinforcement learning agents tend to have such dismal performance when tested on environments that differ from their training space.

Reinforcement learning is a trial-and-error method in which the agent explores a training space and learns to take actions that maximize its reward.
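
As a rough illustration of that trial-and-error loop (a minimal sketch, not the paper's setup), the tabular Q-learning routine below explores with occasional random actions and nudges its value estimates toward the rewards it observes. The `env` object, its `reset`/`step` methods, and its `actions` list are hypothetical stand-ins for a game environment.

```python
import random
from collections import defaultdict

def q_learning(env, episodes=500, alpha=0.1, gamma=0.99, epsilon=0.1):
    """Tabular Q-learning: explore, observe rewards, update value estimates."""
    q = defaultdict(float)  # maps (state, action) pairs to estimated return
    for _ in range(episodes):
        state, done = env.reset(), False
        while not done:
            # Trial: usually exploit the best-known action, sometimes explore.
            if random.random() < epsilon:
                action = random.choice(env.actions)
            else:
                action = max(env.actions, key=lambda a: q[(state, a)])
            next_state, reward, done = env.step(action)  # hypothetical interface
            # Error: move the estimate toward reward plus discounted future value.
            best_next = max(q[(next_state, a)] for a in env.actions)
            q[(state, action)] += alpha * (reward + gamma * best_next - q[(state, action)])
            state = next_state
    return q
```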

The team developed a technique to explicitly add a certain amount of noise to one element of the reinforcement learning problem called the transition function. The transition function defines the probability an agent will move from one state to another, based on the action it chooses.

If the agent is playing Pac-Man, a transition function might define the probability that ghosts on the game board will move up, down, left, or right. In standard reinforcement learning, the AI would be trained and tested using the same transition function.
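
As a hedged sketch of what injecting noise into such a transition function can look like (the ghost behavior and the probabilities below are illustrative, not taken from the paper): with probability `noise`, the ghost's intended move is replaced by a uniformly random one.

```python
import random

MOVES = ["up", "down", "left", "right"]

def ghost_transition(move_probs, noise=0.0):
    """Sample a ghost's next move. With probability `noise`, the intended
    distribution is ignored and a move is drawn uniformly at random."""
    if random.random() < noise:
        return random.choice(MOVES)  # injected unpredictability
    return random.choices(MOVES, weights=[move_probs[m] for m in MOVES])[0]

# A hypothetical ghost that mostly chases upward toward Pac-Man.
chase = {"up": 0.7, "down": 0.1, "left": 0.1, "right": 0.1}
clean_move = ghost_transition(chase, noise=0.0)  # noise-free training transitions
noisy_move = ghost_transition(chase, noise=0.3)  # test world: 30% of moves are random
```

Training with `noise=0.0` and testing with noise injected is the mismatched setup in which, counterintuitively, the agent came out ahead.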

The researchers added noise to the transition function with this conventional approach and, as expected, it hurt the agent’s Pac-Man performance.

But when the researchers trained the agent with a noise-free Pac-Man game, then tested it in an environment where they injected noise into the transition function, it performed better than an agent trained on the noisy game.

“The rule of thumb is that you should try to capture the deployment condition’s transition function as well as you can during training to get the most bang for your buck. We really tested this insight to death because we couldn’t believe it ourselves,” Madan says.

Injecting varying amounts of noise into the transition function let the researchers test many environments, but it didn’t create realistic games. The more noise they injected into Pac-Man, the more likely ghosts would randomly teleport to different squares.

To see if the indoor training effect occurred in normal Pac-Man games, they adjusted underlying probabilities so ghosts moved normally but were more likely to move up and down, rather than left and right. AI agents trained in noise-free environments still performed better in these realistic games.
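
That skewed variant might look something like the following, with purely illustrative weights: ghosts still make ordinary legal moves, but vertical moves are sampled more often than horizontal ones, rather than ghosts teleporting at random.

```python
import random

# Illustrative weights only: ghosts keep moving normally, but up/down moves
# are made more likely than left/right moves.
standard_weights = {"up": 0.25, "down": 0.25, "left": 0.25, "right": 0.25}
vertical_bias    = {"up": 0.40, "down": 0.40, "left": 0.10, "right": 0.10}

def sample_ghost_move(weights):
    moves, probs = zip(*weights.items())
    return random.choices(moves, weights=probs)[0]

print(sample_ghost_move(vertical_bias))  # "up" or "down" most of the time
```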

“It was not only due to the way we added noise to create ad hoc environments. This seems to be a property of the reinforcement learning problem. And that was even more surprising to see,” Bono says.

Exploration explanations

When the researchers dug deeper in search of an explanation, they saw some correlations in how the AI agents explore the training space.

When both AI agents explore mostly the same areas, the agent trained in the non-noisy environment performs better, perhaps because it is easier for the agent to learn the rules of the game without the interference of noise.

If their exploration patterns are different, then the agent trained in the noisy environment tends to perform better. This might occur because the agent needs to understand patterns it can’t learn in the noise-free environment.

“If I only learn to play tennis with my forehand in the non-noisy environment, but then in the noisy one I have to also play with my backhand, I won’t play as well in the non-noisy environment,” Bono explains.

In the future, the researchers hope to explore how the indoor training effect might occur in more complex reinforcement learning environments, or with other techniques like computer vision and natural language processing. They also want to build training environments designed to leverage the indoor training effect, which could help AI agents perform better in uncertain environments.

