Hot Topics
Articles related to "Data Efficiency"
Libra: Large Chinese-based Safeguard for AI Content
cs.AI updates on arXiv.org 2025-07-30T04:12:04.000000Z
EVEv2: Improved Baselines for Encoder-Free Vision-Language Models
cs.AI updates on arXiv.org 2025-07-25T04:28:49.000000Z
Data-Efficient Safe Policy Improvement Using Parametric Structure
cs.AI updates on arXiv.org 2025-07-22T04:34:20.000000Z
Physical AI
韭研公社 2025-07-17T03:32:31.000000Z
SECURE: Semantics-aware Embodied Conversation under Unawareness for Lifelong Robot Learning
cs.AI updates on arXiv.org 2025-07-16T04:28:57.000000Z
AI Should Sense Better, Not Just Scale Bigger: Adaptive Sensing as a Paradigm Shift
cs.AI updates on arXiv.org 2025-07-11T04:03:59.000000Z
EXAONE Path 2.0: Pathology Foundation Model with End-to-End Supervision
cs.AI updates on arXiv.org 2025-07-10T04:05:49.000000Z
Video-RTS: Rethinking Reinforcement Learning and Test-Time Scaling for Efficient and Enhanced Video Reasoning
cs.AI updates on arXiv.org 2025-07-10T04:05:44.000000Z
BlueLM-2.5-3B Technical Report
cs.AI updates on arXiv.org 2025-07-09T04:01:30.000000Z
Text Detoxification: Data Efficiency, Semantic Preservation and Model Generalization
cs.AI updates on arXiv.org 2025-07-03T04:07:20.000000Z
SKIL: Semantic Keypoint Imitation Learning for Generalizable Data-efficient Manipulation
cs.AI updates on arXiv.org 2025-07-03T04:07:15.000000Z
LLMs Can Learn Complex Math from Just One Example: Researchers from University of Washington, Microsoft, and USC Unlock the Power of 1-Shot Reinforcement Learning with Verifiable Reward
MarkTechPost@AI 2025-05-03T05:30:41.000000Z
OpenAI Reveals the Inside Story of GPT-4.5 Training: Data Efficiency Is Key, and Pre-training Is Still Useful
Founder Park 2025-04-19T06:21:12.000000Z
OpenAI Reveals the Inside Story of GPT-4.5 Training: Data Efficiency Is Key, and Pre-training Is Still Useful
智源社区 2025-04-15T13:27:51.000000Z
OpenAI Reveals GPT-4.5 Training: 100,000 GPUs, Nearly All Hands on Deck, and "Catastrophic Problems"
Cnbeta 2025-04-13T09:07:28.000000Z
OpenAI Reveals GPT-4.5 Training: 100,000 GPUs, Nearly All Hands on Deck, and "Catastrophic Problems"
IT之家 2025-04-13T07:28:36.000000Z
OpenAI: Rebuilding GPT-4 from Scratch Would Now Take Only 5-10 People; the Bottleneck Has Shifted from "Compute" to "Data Efficiency"
IT之家 2025-04-11T13:19:04.000000Z
Major Gains in Robot Generalization: HAMSTER's Hierarchical Method and VLA-Scale Trajectory Prediction Significantly Raise Open-World Task Success Rates
机器之心 2025-03-11T09:01:55.000000Z
This AI Paper from UC Berkeley Introduces a Data-Efficient Approach to Long Chain-of-Thought Reasoning for Large Language Models
MarkTechPost@AI 2025-02-15T04:20:10.000000Z
33% Less Data, Same Model Performance: Danqi Chen's Team Uses Metadata to Cut Costs and Improve Efficiency
36氪 - 科技频道 2025-01-08T11:07:05.000000Z