"
PEFT
" 相关文章
From LLMs to Edge: Parameter-Efficient Fine-Tuning on Edge Devices
cs.AI updates on arXiv.org
2025-08-01T04:08:25.000000Z
LLM-based Content Classification Approach for GitHub Repositories by the README Files
cs.AI updates on arXiv.org
2025-07-30T04:12:04.000000Z
Solo Connection: A Parameter Efficient Fine-Tuning Technique for Transformers
cs.AI updates on arXiv.org
2025-07-22T04:44:29.000000Z
Efficient Adaptation of Pre-trained Vision Transformer underpinned by Approximately Orthogonal Fine-Tuning Strategy
cs.AI updates on arXiv.org
2025-07-18T04:13:46.000000Z
Advanced fine-tuning methods on Amazon SageMaker AI
AWS Machine Learning Blog
2025-07-11T17:29:46.000000Z
Breaking PEFT Limitations: Leveraging Weak-to-Strong Knowledge Transfer for Backdoor Attacks in LLMs
cs.AI updates on arXiv.org
2025-07-10T04:06:07.000000Z
Fine-Tuning Series: Customizing LoRA Model Configuration with PEFT
掘金 人工智能
2025-06-25T02:08:13.000000Z
Fine-Tuning Series: Training an Image Classification Model with LoRA
掘金 人工智能
2025-06-24T02:38:09.000000Z
Fine-Tuning Series: Fine-Tuning the Stable Diffusion Model
掘金 人工智能
2025-06-22T02:45:38.000000Z
The Big LLM Cost-Saving Benchmark: 48 GH200s and the First Empirical Study at the Ten-Billion-Parameter Scale
智源社区
2025-05-30T07:48:50.000000Z
Beyond Hugging Face: 24 Key Techniques for Building an Enterprise-Grade LLM Fine-Tuning System
掘金 人工智能
2025-05-29T06:58:04.000000Z
Non-Negligible Precision Loss When Merging LoRA into bf16 Weights
掘金 人工智能
2025-05-18T10:08:03.000000Z
Fine-Tuning DeepSeek-R1 Distilled Models with Unsloth: Efficient Low-VRAM Training in Practice
掘金 人工智能
2025-05-07T02:03:39.000000Z
How Much Parameter Redundancy Is There in LoRA? New Research: Pruning 95% Still Preserves High Performance
掘金 人工智能
2025-05-03T05:48:06.000000Z
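The entry above asks how redundant LoRA's trainable parameters are. As background, here is a minimal NumPy sketch of the LoRA update W + (α/r)·B·A; the dimensions and hyperparameters are illustrative assumptions, not taken from the article, but they show why even an unpruned adapter is a tiny fraction of the full weight matrix:

```python
import numpy as np

# Illustrative (assumed) dimensions: a 768x768 attention projection
# adapted with a rank-8 LoRA adapter.
d, k, r = 768, 768, 8
alpha = 16
rng = np.random.default_rng(0)

W = rng.standard_normal((d, k))          # frozen pretrained weight
A = rng.standard_normal((r, k)) * 0.01   # trainable down-projection
B = np.zeros((d, r))                     # trainable up-projection, zero-init

# LoRA trains only A and B; the effective weight is W + (alpha/r) * B @ A.
# With B zero-initialized, the adapted weight starts identical to W.
W_adapted = W + (alpha / r) * B @ A

full_params = W.size                     # 768 * 768 = 589_824
lora_params = A.size + B.size            # 2 * 8 * 768 = 12_288
print(lora_params / full_params)         # ≈ 0.021, about 2% of full fine-tuning
```

Even before any pruning, the rank-8 adapter trains roughly 2% of the matrix's parameters, which is why studies of redundancy *within* LoRA (as in the entry above) can cut most of that remainder again.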
Uploading Datasets to Hugging Face: A Step-by-Step Guide
MarkTechPost@AI
2025-04-17T21:15:33.000000Z
Fine-Tuning NVIDIA NV-Embed-v1 on Amazon Polarity Dataset Using LoRA and PEFT: A Memory-Efficient Approach with Transformers and Hugging Face
MarkTechPost@AI
2025-02-23T02:44:45.000000Z
GRPO (as Used by DeepSeek) Consuming Too Much Memory? Some Proposed Workarounds
机器之心
2025-02-07T07:55:27.000000Z
Hugging Face Releases Sentence Transformers v3.3.0: A Major Leap for NLP Efficiency
MarkTechPost@AI
2024-11-11T18:05:02.000000Z
Fine-Tuning Just 0.02% of Parameters Approaches Full Fine-Tuning Performance: SJTU Proposes a Unified New Paradigm for Efficient Fine-Tuning
智源社区
2024-07-22T02:51:31.000000Z