Trending
Articles related to "Parameter-Efficient Fine-Tuning"
How much parameter redundancy is there in LoRA? New study: prune 95% and still keep high performance
机器之心 2025-05-02
CVPR 2025 | CV fine-tuning gets ever more competitive. Mona: small, strong, and resource-efficient
机器之心 2025-05-01
CVPR 2025 | CV fine-tuning gets ever more competitive. Mona: small, strong, and resource-efficient
我爱计算机视觉 2025-04-27
PiSSA: Principal Singular Values and Singular Vectors Adaptation of Large Language Models
cs.AI updates on arXiv.org 2025-04-10
Peking University team proposes LIFT: injecting long-context knowledge into model parameters to strengthen LLMs' long-text ability
机器之心 2025-03-20
This AI Paper Introduces a Parameter-Efficient Fine-Tuning Framework: LoRA, QLoRA, and Test-Time Scaling for Optimized LLM Performance
MarkTechPost@AI 2025-03-08
PEFT fine tuning of Llama 3 on SageMaker HyperPod with AWS Trainium
AWS Machine Learning Blog 2024-12-24
An outstanding study note from the first AI Winter Camp!
智源社区 2024-12-19
An outstanding study note from the first AI Winter Camp!
Datawhale 2024-12-19
[NLP] Kaggle topics: text classification and LoRA
机器学习初学者 2024-12-04
PointGST: point cloud analysis accuracy pushed to 99%, using only 2M trainable parameters
我爱计算机视觉 2024-11-06
NeurIPS 2024 Oral | Small parameters, big impact! Inside the efficiency of the asymmetric LoRA architecture
机器之心 2024-10-20
How to fine-tune large language models?
阿里云开发者 2024-10-01
CRISPR-Cas9 guide RNA efficiency prediction with efficiently tuned models in Amazon SageMaker
AWS Machine Learning Blog 2024-09-16
LoRA-Pro: A Groundbreaking Machine Learning Approach to Bridging the Performance Gap Between Low-Rank Adaptation and Full Fine-Tuning
MarkTechPost@AI 2024-07-28
Fine-tuning just 0.02% of parameters approaches full fine-tuning performance! Shanghai Jiao Tong University introduces a unified new paradigm for efficient fine-tuning
智源社区 2024-07-22
A Paradigm Shift: MoRA’s Role in Advancing Parameter-Efficient Fine-Tuning Techniques
MarkTechPost@AI 2024-05-26