MIT News - Artificial Intelligence, July 21, 20:09
The unique, mathematical shortcuts language models use to predict dynamic scenarios

A new study reveals that language models, when processing sequential information, do not fully mimic human step-by-step reasoning but instead take cleverer mathematical "shortcuts." By simulating a classic digit-guessing game, the researchers found that transformer models organize successive state changes into hierarchies and make predictions through mechanisms such as an "Associative Algorithm" and a "Parity-Associative Algorithm." These findings suggest that adjusting how models are trained can encourage deeper modes of reasoning, improving their predictive ability on dynamic tasks such as weather and financial-market forecasting. The results offer new ideas for improving the reliability and generalization of language models.

🧠 **Models use mathematical shortcuts instead of tracking each step:** The study found that when processing sequential information, language models do not track state changes one step at a time the way people do. Instead, they use mathematical methods such as the "Associative Algorithm" and the "Parity-Associative Algorithm" to aggregate and hierarchically organize information across successive steps before making a prediction. This is a more efficient way to compute, but it may differ from human intuition.

🌳 **Hierarchical processing and the "Associative Algorithm":** The "Associative Algorithm" groups adjacent steps in a sequence and computes over them hierarchically, as if building a "reasoning tree." The initial state is the root, information from adjacent steps is merged into different branches, and the final prediction is obtained by multiplying the branches together. This approach helps the model handle complex sequences quickly.

⚖️ **The "Parity-Associative Algorithm" refines before it aggregates:** The "Parity-Associative Algorithm" first narrows the range of possibilities by determining whether the final arrangement results from an even or an odd number of rearrangements. It then applies grouping and multiplication similar to the Associative Algorithm to predict the final result more precisely.

💡 **Encouraging deeper reasoning improves prediction:** The researchers argue that scaling model reasoning along the "depth dimension" (adding transformer layers) rather than lengthening chain-of-thought reasoning can encourage models to build deeper reasoning trees, improving their predictive ability on dynamic tasks. This offers a new direction for improving model performance.

⚠️ **Avoiding the "heuristic" trap improves generalization:** The study also found that models that rely heavily on heuristics (quick rules of thumb) early in training end up generalizing worse. Using specific pre-training objectives to discourage these bad computational habits is therefore key to improving generalization.

Let’s say you’re reading a story, or playing a game of chess. You may not have noticed, but each step of the way, your mind kept track of how the situation (or “state of the world”) was changing. You can imagine this as a sort of running list of events, which we use to update our predictions of what will happen next.

Language models like ChatGPT also track changes inside their own “mind” when finishing off a block of code or anticipating what you’ll write next. They typically make educated guesses using transformers — internal architectures that help the models understand sequential data — but the systems are sometimes incorrect because of flawed thinking patterns. Identifying and tweaking these underlying mechanisms helps language models become more reliable prognosticators, especially with more dynamic tasks like forecasting weather and financial markets.

But do these AI systems process developing situations like we do? A new paper from researchers in MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) and Department of Electrical Engineering and Computer Science shows that the models instead use clever mathematical shortcuts between each progressive step in a sequence, eventually making reasonable predictions. The team made this observation by going under the hood of language models, evaluating how closely they could keep track of objects that change position rapidly. Their findings show that engineers can control when language models use particular workarounds as a way to improve the systems’ predictive capabilities.

Shell games

The researchers analyzed the inner workings of these models using a clever experiment reminiscent of a classic concentration game. Ever had to guess the final location of an object after it’s placed under a cup and shuffled with identical containers? The team used a similar test, where the model guessed the final arrangement of particular digits (also called a permutation). The models were given a starting sequence, such as “42135,” and instructions about when and where to move each digit, like moving the “4” to the third position and onward, without knowing the final result.
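To make the task concrete, here is a minimal Python sketch of this kind of shuffling puzzle, stepping through each instruction one at a time the way a person would. Encoding a "move" as a list of position indices is an illustrative simplification, not the exact input format used in the study.

```python
# A toy version of the shuffling task: a starting string of digits is
# rearranged by a sequence of "moves", and the goal is to predict the
# final arrangement. Each move is a list of position indices
# (an illustrative encoding, not the paper's exact format).

def apply_move(state: str, move: list[int]) -> str:
    """Position i of the result receives the character at position move[i]."""
    return "".join(state[j] for j in move)

def track_step_by_step(start: str, moves: list[list[int]]) -> str:
    """Human-style state tracking: update the arrangement one move at a time."""
    state = start
    for move in moves:
        state = apply_move(state, move)
    return state

start = "42135"
moves = [
    [2, 0, 1, 3, 4],   # the digit at position 2 moves to the front, and so on
    [0, 1, 4, 3, 2],
    [1, 0, 2, 3, 4],
]
print(track_step_by_step(start, moves))   # -> "41532"
```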

In these experiments, transformer-based models gradually learned to predict the correct final arrangements. Instead of shuffling the digits based on the instructions they were given, though, the systems aggregated information between successive states (or individual steps within the sequence) and calculated the final permutation.

One go-to pattern the team observed, called the “Associative Algorithm,” essentially organizes nearby steps into groups and then calculates a final guess. You can think of this process as being structured like a tree, where the initial numerical arrangement is the “root.” As you move up the tree, adjacent steps are grouped into different branches and multiplied together. At the top of the tree is the final combination of numbers, computed by multiplying each resulting sequence on the branches together.
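As a rough code analogy: because applying one move and then another is equivalent to applying a single combined move, adjacent moves can be merged pairwise up a tree and only the final merged move applied to the root arrangement. The sketch below illustrates that idea in plain Python; it is an analogy, not the circuit the transformer actually learns.

```python
# Hand-written analogy for the tree-structured grouping: merge adjacent
# moves level by level, then apply the single merged move to the root.

def compose(first: list[int], second: list[int]) -> list[int]:
    """One move equivalent to doing `first`, then `second` (position indices)."""
    return [first[second[i]] for i in range(len(first))]

def associative_reduce(moves: list[list[int]]) -> list[int]:
    """Merge adjacent moves pairwise, like climbing the reasoning tree."""
    level = list(moves)
    while len(level) > 1:
        merged = [compose(level[i], level[i + 1]) for i in range(0, len(level) - 1, 2)]
        if len(level) % 2:               # an unpaired move carries up unchanged
            merged.append(level[-1])
        level = merged
    return level[0]

start = "42135"
moves = [[2, 0, 1, 3, 4], [0, 1, 4, 3, 2], [1, 0, 2, 3, 4]]
combined = associative_reduce(moves)
print("".join(start[j] for j in combined))   # -> "41532", same as stepping through each move
```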

The other way language models guessed the final permutation was through a crafty mechanism called the “Parity-Associative Algorithm,” which essentially whittles down options before grouping them. It determines whether the final arrangement is the result of an even or odd number of rearrangements of individual digits. Then, the mechanism groups adjacent sequences from different steps before multiplying them, just like the Associative Algorithm.
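The parity step can also be sketched in a few lines: the even/odd character of each move combines by a simple XOR, so the overall parity of the shuffle is cheap to determine before any grouping, ruling out half of the candidate final arrangements. As before, this is a hand-written analogy rather than the model's internal computation.

```python
# Parity sketch: an even/odd label per move, combined by XOR across the
# whole instruction sequence before any coarser grouping is done.

def parity(move: list[int]) -> int:
    """0 if the move is an even permutation, 1 if odd (counted via inversions)."""
    inversions = sum(
        1
        for i in range(len(move))
        for j in range(i + 1, len(move))
        if move[i] > move[j]
    )
    return inversions % 2

moves = [[2, 0, 1, 3, 4], [0, 1, 4, 3, 2], [1, 0, 2, 3, 4]]

total_parity = 0
for move in moves:
    total_parity ^= parity(move)         # parity of a composition is the XOR of parities
print("even" if total_parity == 0 else "odd")   # half of all final arrangements are ruled out
```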

“These behaviors tell us that transformers perform simulation by associative scan. Instead of following state changes step-by-step, the models organize them into hierarchies,” says MIT PhD student and CSAIL affiliate Belinda Li SM ’23, a lead author on the paper. “How do we encourage transformers to learn better state tracking? Instead of imposing that these systems form inferences about data in a human-like, sequential way, perhaps we should cater to the approaches they naturally use when tracking state changes.”

“One avenue of research has been to expand test-time computing along the depth dimension, rather than the token dimension — by increasing the number of transformer layers rather than the number of chain-of-thought tokens during test-time reasoning,” adds Li. “Our work suggests that this approach would allow transformers to build deeper reasoning trees.”

Through the looking glass

Li and her co-authors observed how the Associative and Parity-Associative algorithms worked using tools that allowed them to peer inside the “mind” of language models. 

They first used a method called “probing,” which shows what information flows through an AI system. Imagine you could look into a model’s brain to see its thoughts at a specific moment — in a similar way, the technique maps out the system’s mid-experiment predictions about the final arrangement of digits.
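A minimal sketch of the general probing recipe looks like the following, with random arrays standing in for real transformer hidden states and a hypothetical label (which digit sits in a given position mid-shuffle). Only the recipe, not the data, reflects the study.

```python
# Probing sketch: fit a simple classifier that tries to read a property of
# the tracked state directly out of hidden activations.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
hidden_states = rng.normal(size=(1000, 256))          # stand-in activation vectors
digit_in_position_3 = rng.integers(0, 5, size=1000)   # hypothetical label we hope is encoded

X_train, X_test, y_train, y_test = train_test_split(
    hidden_states, digit_in_position_3, test_size=0.2, random_state=0
)
probe = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Accuracy well above chance (about 20% for five digits) would suggest the
# model represents this piece of state; on random features it stays at chance.
print(probe.score(X_test, y_test))
```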

A tool called “activation patching” was then used to show where the language model processes changes to a situation. It involves meddling with some of the system’s “ideas,” injecting incorrect information into certain parts of the network while keeping other parts constant, and seeing how the system will adjust its predictions.
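In practice, this style of intervention can be done with forward hooks: cache one layer's activations from a "clean" run, then overwrite that layer's output during a second run and watch how the prediction shifts. The toy PyTorch model below is a stand-in for the transformers studied in the paper; the hook mechanics are the point.

```python
# Activation-patching sketch on a toy network (not the paper's architecture).
import torch
import torch.nn as nn

torch.manual_seed(0)
model = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 5))
layer_to_patch = model[0]

clean_input = torch.randn(1, 8)
other_input = torch.randn(1, 8)

# 1) Cache the chosen layer's activation on the clean run.
cache = {}
def save_hook(module, inputs, output):
    cache["clean"] = output.detach()
handle = layer_to_patch.register_forward_hook(save_hook)
clean_logits = model(clean_input)
handle.remove()

# 2) On a different input, replace that layer's output with the cached one.
def patch_hook(module, inputs, output):
    return cache["clean"]               # returning a tensor overrides the layer's output
handle = layer_to_patch.register_forward_hook(patch_hook)
patched_logits = model(other_input)
handle.remove()

# If patching this layer pulls the prediction toward the clean run's answer,
# the layer plausibly carries the state information being studied.
print(clean_logits.argmax(-1), patched_logits.argmax(-1))
```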

These tools revealed when the algorithms would make errors and when the systems “figured out” how to correctly guess the final permutations. They observed that the Associative Algorithm learned faster than the Parity-Associative Algorithm, while also performing better on longer sequences. Li attributes the latter’s difficulties with more elaborate instructions to an over-reliance on heuristics (or rules that allow us to compute a reasonable solution fast) to predict permutations.

“We’ve found that when language models use a heuristic early on in training, they’ll start to build these tricks into their mechanisms,” says Li. “However, those models tend to generalize worse than ones that don’t rely on heuristics. We found that certain pre-training objectives can deter or encourage these patterns, so in the future, we may look to design techniques that discourage models from picking up bad habits.”

The researchers note that their experiments were done on small-scale language models fine-tuned on synthetic data, but found the model size had little effect on the results. This suggests that fine-tuning larger language models, like GPT-4.1, would likely yield similar results. The team plans to examine their hypotheses more closely by testing language models of different sizes that haven’t been fine-tuned, evaluating their performance on dynamic real-world tasks such as tracking code and following how stories evolve.

Harvard University postdoc Keyon Vafa, who was not involved in the paper, says that the researchers’ findings could create opportunities to advance language models. “Many uses of large language models rely on tracking state: anything from providing recipes to writing code to keeping track of details in a conversation,” he says. “This paper makes significant progress in understanding how language models perform these tasks. This progress provides us with interesting insights into what language models are doing and offers promising new strategies for improving them.”

Li wrote the paper with MIT undergraduate student Zifan “Carl” Guo and senior author Jacob Andreas, who is an MIT associate professor of electrical engineering and computer science and CSAIL principal investigator. Their research was supported, in part, by Open Philanthropy, the MIT Quest for Intelligence, the National Science Foundation, the Clare Boothe Luce Program for Women in STEM, and a Sloan Research Fellowship.

The researchers presented their research at the International Conference on Machine Learning (ICML) this week.
