Articles related to "Parameter-Efficient Fine-Tuning" (参数高效微调)
ACL 2025 | Reasoning without piling on parameters: CRFT breaks the CoT bottleneck, with 0.016% of the parameters driving an 18.2% performance gain
PaperWeekly
2025-07-30
ICML 2025 | Still running vanilla LoRA? CoTo carves a new path with progressive activation, lifting both merging and pruning
PaperWeekly
2025-07-30
The Impact of Fine-tuning Large Language Models on Automated Program Repair
cs.AI updates on arXiv.org
2025-07-29
Datawhale AI Summer Camp: Baselines and Tuning
掘金 人工智能
2025-07-28
ICML 2025 | CoTo: letting LoRA training "improve step by step," mastering both model merging and pruning
机器之心
2025-07-26
Parameter-Efficient Fine-Tuning of 3D DDPM for MRI Image Generation Using Tensor Networks
cs.AI updates on arXiv.org
2025-07-25
Swin-TUNA: A Novel PEFT Approach for Accurate Food Image Segmentation
cs.AI updates on arXiv.org
2025-07-24
EXPOTION: Facial Expression and Motion Control for Multimodal Music Generation
cs.AI updates on arXiv.org
2025-07-08
Refining Salience-Aware Sparse Fine-Tuning Strategies for Language Models
cs.AI updates on arXiv.org
2025-06-30
How much parameter redundancy is there in LoRA? New study: pruning 95% still preserves high performance
机器之心
2025-05-02
CVPR 2025 | CV fine-tuning competition heats up; Mona: small, strong, and resource-efficient
机器之心
2025-05-01
CVPR 2025 | CV fine-tuning competition heats up; Mona: small, strong, and resource-efficient
我爱计算机视觉
2025-04-27
PiSSA: Principal Singular Values and Singular Vectors Adaptation of Large Language Models
cs.AI updates on arXiv.org
2025-04-10
Peking University team proposes LIFT: injecting long-context knowledge into model parameters to improve LLMs' long-text capability
机器之心
2025-03-20
This AI Paper Introduces a Parameter-Efficient Fine-Tuning Framework: LoRA, QLoRA, and Test-Time Scaling for Optimized LLM Performance
MarkTechPost@AI
2025-03-08
PEFT fine-tuning of Llama 3 on SageMaker HyperPod with AWS Trainium
AWS Machine Learning Blog
2024-12-24
An outstanding set of study notes from the first AI Winter Camp!
智源社区
2024-12-19
An outstanding set of study notes from the first AI Winter Camp!
Datawhale
2024-12-19
[NLP] Kaggle essentials: Text Classification and LoRA
机器学习初学者
2024-12-04
PointGST: point cloud analysis accuracy pushed to 99% with only 2M trainable parameters
我爱计算机视觉
2024-11-06