The 20 papers covered in this episode:
[00:24] Qwen2.5-VL Technical Report
[01:10] RAD: Training an End-to-End Driving Policy via Large-Scale 3DGS-based Reinforcement Learning
[01:50] SongGen: A Single Stage Auto-regressive Transformer for Text-to-Song Generation
[02:38] MoM: Linear Sequence Modeling with Mixture-of-Memories
[03:15] Craw4LLM: Efficient Web Crawling for LLM Pretraining
[04:05] LongPO: Long Context Self-Evolution of Large Language Models through Short-to-Long Preference Optimization
[04:45] Small Models Struggle to Learn from Strong Reasoners
[05:27] Autellix: An Efficient Serving Engine for LLM Agents as General Programs
[06:08] Presumed Cultural Identity: How Names Shape LLM Responses
[06:53] Why Safeguarded Ships Run Aground? Aligned Large Language Models' Safety Mechanisms Tend to Be Anchored in The Template Region
[07:38] SearchRAG: Can Search Engines Be Helpful for LLM-based Medical Question Answering?
[08:21] Thinking Preference Optimization
[08:59] Is That Your Final Answer? Test-Time Scaling Improves Selective Question Answering
[09:40] AdaptiveStep: Automatically Dividing Reasoning Step through Model Confidence
[10:21] NExT-Mol: 3D Diffusion Meets 1D Language Modeling for 3D Molecule Generation
[11:02] ActionPiece: Contextually Tokenizing Action Sequences for Generative Recommendation
[11:44] Train Small, Infer Large: Memory-Efficient LoRA Training for Large Language Models
[12:33] GIMMICK -- Globally Inclusive Multimodal Multitask Cultural Knowledge Benchmarking
[13:19] InfiR: Crafting Effective Small Language Models and Multimodal Small Language Models in Reasoning
[14:06] Noise May Contain Transferable Knowledge: Understanding Semi-supervised Heterogeneous Domain Adaptation from an Empirical Perspective

【Follow Us】
You can also find us on the following platforms for more information beyond the podcast episodes:
Xiaohongshu: AI速递