Content feed of the TransferLab — appliedAI Institute, November 27, 2024
Position: Leverage Foundational Models for Black-Box Optimization

 

This article discusses using large language models (LLMs) to address the challenges of multi-modality and task generalization in black-box optimization (BBO). The authors propose building BBO on top of sequence-based foundation models, leveraging LLMs' ability to retrieve information from various modalities in order to obtain better optimization strategies. Traditional BBO methods struggle with multi-modality and task generalization, whereas LLMs can process multi-modal data and be fine-tuned for different tasks, which makes them a powerful tool for tackling these challenges. The article also surveys common BBO techniques and summarizes the evolution from hand-crafted genetic algorithms to LLM-based algorithms. Finally, the authors raise a series of challenges and open questions for LLMs in BBO, such as data representation, multi-modal datasets, encoding formats for BBO metadata, and large-scale open-source evaluation datasets, pointing the way for future research.

🤔 **Limitations of traditional black-box optimization:** Traditional BBO methods struggle with multi-modal data and task generalization, making it hard to construct reliable priors that work across different tasks and modalities.

💡 **Strengths of large language models:** LLMs are good at processing multi-modal data, can acquire extensive knowledge through pre-training, and can be fine-tuned for specific tasks, meeting several of the requirements BBO places on a foundation model.

🔄 **LLM-based black-box optimization workflow:** The article frames BBO as a sequence-learning problem in which an LLM predicts the next optimization candidate and iterates using the search history and evaluation feedback.

📊 **Evolution of BBO algorithms:** The article outlines how BBO algorithms have evolved from hand-crafted genetic algorithms to model-based BBO, feature-based meta-learning, and finally sequence-, attention-, token-, and LLM-based algorithms.

❓ **Challenges and open questions for LLMs in BBO:** The article lists challenges LLMs face in BBO, such as data representation, multi-modal datasets, metadata encoding formats, evaluation datasets, and generalization, pointing the way for future research.

This paper explores the use of Large Language Models (LLMs) to address challenges in Black-Box Optimization (BBO), particularly multi-modality and task generalization. The authors propose framing BBO around sequence-based foundation models, leveraging LLMs' ability to retrieve information from various modalities, which yields superior optimization strategies.

Motivation: Traditional BBO techniques struggle with multi-modality and task generalization

The position paper by [Son24P] advocates using LLM-based foundation models for Black-Box Optimization (BBO). The goal of BBO is to optimize an objective function given only evaluations of that function, i.e., without gradients or other higher-order information. A common example of a BBO task is neural architecture search, where the objective is to maximize classification accuracy over different architectures. Classical BBO approaches include grid search, random search, and Bayesian optimization.

Figure 1. [Son24P], Figure 1. Foundation models can learn priors from a wide variety of sources, such as world knowledge, domain-specific documents, and actual experimental evaluations. Such models can then perform black-box optimization over various search spaces (e.g. hyperparameters, code, natural language) and feedback types (numeric values, categorical ratings, and subjective sentiment).

More recent BBO algorithms typically try to incorporate inductive biases or priors into the search problem, e.g., domain knowledge, parameter constraints, the search history, etc. One particular goal of these approaches is meta-learning, i.e., developing algorithms that can automatically provide priors for various tasks from different domains without additional task-specific training. However, constructing reliable priors that work across multiple tasks and can take in data from multiple modalities (values, text, images) is challenging.

Position: LLMs can process multi-modal data and be fine-tuned to different tasks

Figure 2. [Son24P], Figure 2. Black-box optimization loop with sequential foundation models. Using metadata $m$ and history $h$, the model proposes candidates $x$ which are checked for feasibility, evaluated, and then appended to the history.

The central point made in this paper is that LLMs are a promising candidate for tackling this challenge (Figure 1). The key idea is to interpret BBO as a sequence-learning problem: given a search space $\mathcal{X}$ containing hyperparameter settings $x \in \mathcal{X}$ and a history $h_{1:t-1}$ of previous settings $x_{1:t-1}$ with their corresponding objective function values $y_{1:t-1}$, the goal is to predict the next element of the sequence, i.e., a new hyperparameter setting $x_t$.

Transformer-based LLMs excel at sequence learning and meet several critical requirements for a BBO foundation model:

- Multi-modality: they can process large amounts of data from various modalities.
- Pre-training: they can be pre-trained to acquire extensive world knowledge.
- Fine-tuning: they can be fine-tuned with task-specific information.

The workflow for using LLM-based foundation models for BBO is visualized in Figure 2; a minimal code sketch of this loop is shown below.
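The loop in Figure 2 can be made concrete with a short sketch. The following Python snippet is purely illustrative: the functions `propose_with_llm`, `is_feasible`, and `evaluate` are hypothetical placeholders supplied by the user, not an interface defined in the paper.

```python
# Minimal sketch of the sequential-foundation-model BBO loop from Figure 2.
# propose_with_llm, is_feasible, and evaluate are hypothetical callables
# provided by the user; they are not an API from the paper.

def llm_bbo_loop(metadata, propose_with_llm, is_feasible, evaluate, budget=50):
    """Iteratively query a foundation model for candidates and collect feedback."""
    history = []  # list of (candidate x_t, objective value y_t) pairs
    for _ in range(budget):
        # Condition on task metadata m and history h_{1:t-1} to propose x_t,
        # e.g., a hyperparameter setting serialized as text.
        candidate = propose_with_llm(metadata, history)

        # Discard proposals outside the search space or violating constraints.
        if not is_feasible(candidate):
            continue

        # Black-box evaluation: only the objective value y_t is observed,
        # no gradients or other structural information.
        value = evaluate(candidate)

        # Append the observation so it informs the next proposal.
        history.append((candidate, value))

    # Return the best candidate found so far (assuming maximization).
    return max(history, key=lambda pair: pair[1]) if history else None
```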
The authors also give an overview of common techniques for BBO, summarizing the increasing capabilities as one moves from hand-crafted genetic algorithms, model-based BBO, and feature-based meta-learning to sequence-based, attention-based, token-based, and finally LLM-based algorithms (Table 1, paper Section 3.2).

Table 1. [Son24P], Table 2. Classes of methods organized by their capabilities. Note: method classes are named in order of increasing development; e.g., "attention-based" methods may incorporate techniques developed up to that point, such as meta-learning, but not LLMs.

Finally, the authors collect a set of challenges and open questions for BBO with LLMs (paper Section 4). They argue that there is a need for:

- better data representation and multi-modal datasets for training models on multi-modal tasks,
- a common guideline or format for encoding BBO (meta-)data to be processed by LLMs (a toy illustration is sketched at the end of this post),
- large open-source evaluation datasets,
- better generalization and customization of LLMs for different tasks,
- new benchmarks for metadata-rich BBO to better test the capabilities of LLMs.

This paper is an interesting read and provides a comprehensive overview of the limitations of classical BBO methods and the possibilities of Large Language Models for Black-Box Optimization.
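To make the encoding question above concrete, here is one way BBO metadata and history could be serialized into a prompt. The schema and field names are purely illustrative assumptions, not a format proposed in the paper.

```python
import json

def serialize_bbo_prompt(metadata, history):
    """Hypothetical text encoding of metadata m and history h_{1:t-1};
    the field names and layout are illustrative, not from the paper."""
    lines = [
        f"task: {metadata['task']}",
        f"search_space: {json.dumps(metadata['search_space'])}",
    ]
    for t, (candidate, value) in enumerate(history, start=1):
        lines.append(f"trial {t}: x={json.dumps(candidate)} y={value}")
    lines.append("Propose the next candidate x as JSON:")
    return "\n".join(lines)

# Toy usage with a small hyperparameter search space.
metadata = {
    "task": "maximize validation accuracy of an image classifier",
    "search_space": {"learning_rate": [1e-4, 1e-1], "batch_size": [16, 256]},
}
history = [
    ({"learning_rate": 0.01, "batch_size": 64}, 0.81),
    ({"learning_rate": 0.001, "batch_size": 128}, 0.84),
]
print(serialize_bbo_prompt(metadata, history))
```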


Related tags

Black-box optimization · Large language models · Multi-modality · Task generalization · Sequence learning