AI News · April 9, 21:58
Web3 tech helps instil confidence and trust in AI

The article explores how Web3 technology can strengthen trust in artificial intelligence. As AI spreads across industries, questions of trust have become increasingly prominent. The article argues that transparency, verifiability, and compliance are key to building trust, and that Web3 technologies such as blockchain and decentralised computing offer new ways to address these concerns. Open-source models, on-chain verification, and user education can improve the reliability and transparency of AI systems. The article also stresses the importance of compliance and accountability, and proposes decentralisation as key to scaling AI while keeping it trustworthy.

💡 **Background to the AI trust crisis:** As AI is applied in finance, manufacturing, healthcare, and other sectors, concerns about its transparency and reliability are growing. A lack of transparency can leave users distrustful of AI decisions and can even create compliance problems.

🔑 **Why transparency matters:** The “black box” nature of AI algorithms makes it hard for users to understand how decisions are reached. Web3 technologies such as blockchain can strengthen transparency by making algorithms verifiable and auditable. For example, Space and Time (SxT) offers tamper-proof data feeds that ensure the information AI relies on is genuine and reliable.

✅ **Paths to building trust:** Trust in AI requires continual assessment and verification, especially in high-stakes domains. Open-source models and on-chain verification, such as immutable ledgers using Zero-Knowledge Proofs (ZKPs), can strengthen trust. User education also matters: users should understand both what AI can do and what it cannot.

⚖️ **Compliance and accountability:** AI must comply with laws and regulations, but holding a “faceless” algorithm accountable is a challenge. Modular blockchain protocols such as Cartesi offer one solution: running AI inference on-chain enables transparency and traceability.

🌐 **The role of decentralisation:** Decentralised technology can help AI scale and build trust in the systems underneath. A UN report notes that AI development risks widening global divides, and decentralisation may be one way to address this.

The promise of AI is that it’ll make all of our lives easier. And with great convenience comes the potential for serious profit. The United Nations thinks AI could be a $4.8 trillion global market by 2033 – about as big as the German economy.

But forget about 2033: in the here and now, AI is already fueling transformation in industries as diverse as financial services, manufacturing, healthcare, marketing, agriculture, and e-commerce. Whether it’s autonomous algorithmic ‘agents’ managing your investment portfolio or AI diagnostic systems detecting diseases early, AI is fundamentally changing how we live and work.

But cynicism is snowballing around AI – we’ve seen Terminator 2 enough times to be extremely wary. The question worth asking, then, is how do we ensure trust as AI integrates deeper into our everyday lives?

The stakes are high. A recent report by Camunda highlights an inconvenient truth: most organisations (84%) attribute regulatory compliance issues to a lack of transparency in AI applications. If companies can’t inspect the algorithms they rely on – or worse, if those algorithms are hiding something – users are left completely in the dark. Add systemic bias, untested systems, and a patchwork of regulations, and you have a recipe for mistrust on a large scale.

Transparency: Opening the AI black box

For all their impressive capabilities, AI algorithms are often opaque, leaving users ignorant of how decisions are reached. Is that AI-powered loan request being denied because of your credit score – or because of an undisclosed company bias? Without transparency, AI can pursue its owner’s goals rather than the user’s, while the user remains unaware, still believing it’s doing their bidding.

One promising solution is to put these processes on the blockchain, making algorithms verifiable and auditable by anyone. This is where Web3 tech comes in, and startups are already exploring the possibilities. Space and Time (SxT), an outfit backed by Microsoft, offers tamper-proof data feeds built on a verifiable compute layer, so it can ensure that the information AI relies on is real, accurate, and untainted by any single entity.

Space and Time’s novel Proof of SQL prover guarantees that queries are computed accurately against untampered data, proving computations over blockchain histories far faster than state-of-the-art zkVMs and coprocessors. In essence, SxT helps establish trust in AI’s inputs without depending on a centralised power.
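
Proof of SQL itself is a specialised zero-knowledge system, but the underlying verifiable-data pattern can be sketched with a much simpler primitive. The Python below is a minimal, hypothetical illustration – not SxT’s API – using a Merkle inclusion proof: the AI consumer keeps only a short commitment to the dataset, yet can still check any row a feed serves against it.

```python
import hashlib

# Minimal sketch of the verifiable-data pattern behind tamper-proof feeds.
# This is NOT Proof of SQL; a simple Merkle inclusion proof stands in for
# the general idea: hold a small commitment, verify every row you are served.

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves):
    level = [h(leaf) for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2:                 # duplicate last node on odd levels
            level.append(level[-1])
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

def inclusion_proof(leaves, index):
    level = [h(leaf) for leaf in leaves]
    path = []
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        path.append((level[index ^ 1], index % 2))   # sibling, am-I-right-child
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        index //= 2
    return path

def verify(root, leaf, path):
    node = h(leaf)
    for sibling, is_right in path:
        node = h(sibling + node) if is_right else h(node + sibling)
    return node == root

rows = [b"AAPL,187.4", b"BTC,83250", b"ETH,1580", b"GOLD,3020"]  # toy feed data
root = merkle_root(rows)                  # published commitment, e.g. on-chain
proof = inclusion_proof(rows, 1)          # feed serves row 1 plus its proof
assert verify(root, b"BTC,83250", proof)  # client checks before trusting it
print("row verified against the published commitment")
```

In SxT’s real system the proof covers the full SQL computation, not just row inclusion, but the client-side trust model is the same: verify the data against a commitment before feeding it to a model.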

Proving AI can be trusted

Trust isn’t a one-and-done deal; it’s earned over time, analogous to a restaurant maintaining standards to retain its Michelin star. AI systems must be assessed continually for performance and safety, especially in high-stakes domains like healthcare or autonomous driving. A second-rate AI prescribing the wrong medicines or hitting a pedestrian is more than a glitch; it’s a catastrophe.

This is the beauty of open-source models and on-chain verification using immutable ledgers, with built-in privacy protections assured by cryptography like Zero-Knowledge Proofs (ZKPs). Trust isn’t the only consideration, however: users must know what AI can and can’t do, to set their expectations realistically. If a user believes AI is infallible, they’re more likely to trust flawed output.
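
To make the ZKP idea concrete, here is a toy Schnorr-style proof of knowledge in Python – a minimal sketch, not production cryptography. The prover convinces a verifier it knows the secret x behind a public value y = g^x mod p without ever revealing x; the tiny parameters are illustrative assumptions only.

```python
import secrets

# Toy Schnorr protocol: prove knowledge of x where y = g^x (mod p), revealing
# nothing about x itself. Demo-sized parameters -- NOT secure; real systems
# use audited libraries and standardised groups.
p = 2039     # safe prime: p = 2q + 1
q = 1019     # prime order of the subgroup we work in
g = 4        # generator of the order-q subgroup of quadratic residues

x = secrets.randbelow(q)        # prover's secret
y = pow(g, x, p)                # public value everyone can see

r = secrets.randbelow(q)        # 1. prover commits to a random nonce
t = pow(g, r, p)

c = secrets.randbelow(q)        # 2. verifier issues a random challenge

s = (r + c * x) % q             # 3. prover responds; s reveals nothing useful

# 4. verifier checks g^s == t * y^c (mod p), which holds iff the prover knew x
assert pow(g, s, p) == (t * pow(y, c, p)) % p
print("accepted: knowledge of x proven without revealing it")
```

The check works because g^s = g^(r + c·x) = g^r · (g^x)^c = t · y^c mod p, while the random nonce r masks the secret in s. The same principle, industrialised, is what lets on-chain verification preserve privacy.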

To date, the AI education narrative has centred on its dangers. From now on, we should try to improve users’ knowledge of AI’s capabilities and limitations, the better to ensure users are empowered rather than exploited.

Compliance and accountability

As with cryptocurrency, the word ‘compliance’ comes up often when discussing AI. AI doesn’t get a pass under the law or the various regulations that govern it. So how should a faceless algorithm be held accountable? The answer may lie in Cartesi, a modular blockchain protocol that ensures AI inference happens on-chain.

Cartesi’s virtual machine lets developers run standard AI libraries – like TensorFlow, PyTorch, and Llama.cpp – in a decentralised execution environment, making it suitable for on-chain AI development. In other words, it combines blockchain’s transparency with AI’s computational power.
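
Cartesi’s actual stack (a RISC-V Linux runtime plus rollups) is considerably more involved, but the accountability pattern it enables can be sketched in a few lines. The names and toy ‘model’ below are assumptions for illustration, not Cartesi’s API: if inference is deterministic, publishing a hash that binds model, input, and output lets any auditor re-run the computation and settle disputes.

```python
import hashlib

# Conceptual sketch of auditable inference (not the Cartesi API): when the
# computation is deterministic, a hash binding (model, input, output) is
# enough for anyone to re-execute the model and check the published claim.

def toy_model(weights: bytes, prompt: bytes) -> bytes:
    # Stand-in for a deterministic forward pass (e.g. TensorFlow, PyTorch,
    # or Llama.cpp run with fixed seeds inside a reproducible VM).
    score = sum(weights) + sum(prompt)
    return b"approve" if score % 2 == 0 else b"deny"

def commit(weights: bytes, prompt: bytes, output: bytes) -> str:
    # The record a dApp would post on-chain for later auditing.
    return hashlib.sha256(weights + b"|" + prompt + b"|" + output).hexdigest()

weights = b"model-v1.3"                  # hypothetical model identifier
prompt = b"loan application #1042"       # hypothetical input
output = toy_model(weights, prompt)
record = commit(weights, prompt, output)

# An auditor who disputes the decision re-executes and compares commitments.
assert commit(weights, prompt, toy_model(weights, prompt)) == record
print(output.decode(), "->", record[:16], "...")
```

The design choice that matters is determinism: once execution is reproducible, accountability reduces to comparing hashes, which a blockchain does natively.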

Trust through decentralisation

The UN’s recent Technology and Innovation Report shows that while AI promises prosperity and innovation, its development risks “deepening global divides.” Decentralisation could be the answer, one that helps AI scale and instils trust in what’s under the hood.

