China's BenZhi Activation Leads Charge in Full-Stack On-Device AI Innovation

BenZhi Activation is an AI startup spun out of Shanghai Jiao Tong University that focuses on full-stack native on-device AI solutions aimed at reshaping personal computing. Led by Associate Professor Ze-Yu Mi, the company builds on IPADS's leading research in operating systems and distributed systems, together with world-class expertise in large-scale on-device AI models and infrastructure. By bringing advanced AI capabilities onto local devices, BenZhi Activation addresses the privacy, latency, and personalization challenges of today's cloud AI. Its open-source projects such as PowerInfer and SmallThinker have already demonstrated strong R&D and engineering capability, paving the way for broad adoption of on-device AI and pointing toward a future in which AI is more secure, more responsive, and more personalized.

💡 **Full-stack native on-device AI solution:** BenZhi Activation is building a complete on-device AI stack, from low-level algorithms to hardware optimization, so that sophisticated AI models can run efficiently on personal devices such as PCs and smartphones without depending on the cloud, resolving the privacy, latency, and cost problems of cloud AI.

🚀 **Breakthrough progress with PowerInfer and SmallThinker:** The company's open-source projects PowerInfer and SmallThinker have delivered major gains in on-device model efficiency. PowerInfer lets large models reach near data-center performance on consumer GPUs, while PowerInfer-2 brings such models to smartphones at speeds far beyond existing solutions, underscoring the team's lead in model optimization and AI infrastructure.

🔒 **Privacy and personalization for personal computing:** By keeping AI processing on the device, BenZhi Activation greatly strengthens data privacy and removes users' concerns about uploading personal information to the cloud. On-device AI can also learn a user's habits more deeply, offering a level of personalization that cloud AI struggles to match.

🌐 **Bringing AI to low-cost hardware:** BenZhi Activation is working with Shanghai Jiao Tong University to develop native large AI models designed specifically for edge devices. These models fit the compute, memory, and storage limits of low-cost hardware, allowing devices costing just a few hundred yuan to run multi-billion-parameter AI models smoothly, greatly broadening access to AI.

📈 **Strong recognition from investors and industry:** Leading venture capital firms praise BenZhi Activation's approach for effectively bridging the gap between powerful AI models and the computing capability of mainstream devices, accelerating low-cost, efficient AI deployment. Its coordinated optimization across models, systems, and hardware is viewed as key to lowering AI costs and enabling real-time, private AI experiences.

AsianFin -- BenZhi Activation, an AI startup incubated by the Institute of Parallel and Distributed Systems (IPADS) at Shanghai Jiao Tong University, is pioneering a full-stack native on-device AI solution designed to transform personal computing.

Founded and led by Associate Professor Ze-Yu Mi, the team draws on IPADS’s global leadership in research on operating systems and distributed systems, backed by world-class expertise in large-scale on-device AI models and infrastructure.

Their portfolio includes internationally recognized open-source projects like PowerInfer and SmallThinker, demonstrating strength in both cutting-edge AI development and engineering implementation.

As AI technology reshapes industries, BenZhi Activation aims to revolutionize how billions of devices—including PCs, smartphones, and smart terminals—process data by embedding advanced AI capabilities locally, rather than relying on cloud-based models. Cloud AI currently faces significant challenges around privacy, latency, and personalization.

Uploading sensitive personal data to cloud servers raises security concerns and discomfort among users reluctant to entrust their “digital lives” to third parties. Additionally, cloud interactions incur high costs and latency that limit frequent, in-depth AI use. Finally, generalized cloud models struggle to deeply personalize experiences, lacking the ability to learn continuously from individual user data while maintaining privacy.

In response, BenZhi Activation introduces a disruptive, native on-device approach that rebuilds the AI software and hardware stack from the ground up. This strategy bypasses traditional model compression methods, enabling powerful AI models to run efficiently and securely on end-user hardware without sacrificing performance.

The company has achieved collaborative breakthroughs across on-device large model algorithms, infrastructure systems, and hardware optimization, delivering AI that is both highly capable and fully private.

BenZhi Activation has already delivered world-leading innovations. In December 2023, the team released PowerInfer, an on-device inference framework capable of running tens-of-billions-parameter models on a consumer NVIDIA RTX 4090 GPU at performance approaching a data-center A100 GPU, with inference speeds more than 11 times faster than prior methods. The open-source project quickly rose to the top of GitHub’s global trending list. By June 2024, their PowerInfer-2 system, incorporating a proprietary TurboSparse sparsification method, enabled a 47-billion-parameter model to run smoothly on a smartphone, outpacing the widely used llama.cpp framework by up to 29 times and marking the leap from desktop to mobile deployment at scale.
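
The headline speedups build on activation sparsity, the idea described in the PowerInfer papers: in ReLU-style feed-forward layers, only a small fraction of neurons fire for any given token, so a lightweight predictor can select the likely-active neurons and the runtime computes just those, keeping frequently active weights on the fast processor and rarely active ones in slower memory. The snippet below is a minimal, hypothetical NumPy sketch of that principle only; it is not PowerInfer's actual code, and the function names, toy dimensions, and the shortcut predictor are illustrative assumptions.

```python
# Illustrative sketch (NOT PowerInfer's implementation) of activation-sparsity
# inference: compute only the feed-forward neurons predicted to be active.
import numpy as np

rng = np.random.default_rng(0)
d_model, d_ff = 64, 256                      # toy sizes; real LLMs are far larger

W_up = rng.normal(size=(d_ff, d_model))      # up-projection weights
W_down = rng.normal(size=(d_model, d_ff))    # down-projection weights

def ffn_dense(x):
    """Ordinary dense FFN: compute every neuron, then project back down."""
    h = np.maximum(W_up @ x, 0.0)            # ReLU activations, shape (d_ff,)
    return W_down @ h

def predict_active(x, top_k=32):
    """Stand-in for a learned activation predictor: here we simply use the
    true pre-activations to pick the top_k neurons most likely to fire."""
    scores = W_up @ x
    return np.argsort(scores)[-top_k:]

def ffn_sparse(x, top_k=32):
    """Sparsity-aware FFN: only the selected rows/columns are ever touched,
    which is what lets large models fit consumer GPUs or phone memory."""
    idx = predict_active(x, top_k)
    h = np.maximum(W_up[idx] @ x, 0.0)       # compute only selected neurons
    return W_down[:, idx] @ h

x = rng.normal(size=d_model)
print("dense :", ffn_dense(x)[:4])
print("sparse:", ffn_sparse(x)[:4])          # close to dense if the predictor is good
```

With a good predictor most of the dense output is preserved while the per-token compute and memory traffic shrink roughly in proportion to `top_k / d_ff`, which is the effect the RTX 4090 and smartphone results exploit.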

Looking ahead, BenZhi Activation will collaborate with Shanghai Jiao Tong University to release the world’s first batch of native large AI models pre-trained and architected explicitly for edge deployment. These models address the tight computational, memory, and storage constraints of low-cost hardware, allowing seamless operation of tens-of-billions-parameter models on devices costing only a few hundred yuan. This milestone exemplifies the team’s full-stack capabilities, from foundational algorithm design to on-device infrastructure. Their earlier release of SmallThinker, a 3-billion-parameter reasoning model optimized for on-device use, achieved over 100,000 downloads on HuggingFace within a week and ranked second globally among trending AI models.
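
For readers who want to experiment with an on-device-oriented model such as SmallThinker, the sketch below shows one plausible way to load it through the standard Hugging Face transformers API. The repository id used here (`PowerInfer/SmallThinker-3B-Preview`) and the prompt are assumptions for illustration; substitute whatever id the team actually publishes.

```python
# Hypothetical usage sketch: loading a SmallThinker-style model with the
# standard transformers causal-LM interface. The model id below is an
# assumption, not confirmed by this article.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "PowerInfer/SmallThinker-3B-Preview"   # assumed repository id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto")

prompt = "Explain why on-device inference improves privacy."
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```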

Leading venture capitalists praise BenZhi Activation’s innovative approach. Chen Yu, Partner at Yunqi Partners, noted that the company effectively bridges the gap between powerful AI models and mainstream device computing capabilities, accelerating low-cost and efficient AI deployment. Liu Shui, Managing Director at Baidu Ventures, emphasized the significance of coordinated optimization across models, systems, and hardware to reduce AI costs and deliver real-time, private AI experiences on smart devices. Huang Xinxin from Lighthouse Capital highlighted BenZhi Activation’s rare combination of top-tier R&D and production expertise, positioning the startup as a global leader in edge AI innovation.

BenZhi Activation’s pioneering native on-device AI technology signals a fundamental shift in the industry, empowering billions of users with secure, responsive, and deeply personalized AI experiences directly on their devices. This approach is set to redefine the future of personal intelligence, putting advanced AI capabilities squarely in users’ hands.
