Unite.AI · June 12, 00:47
Prioritizing Trust in AI

As society grows ever more reliant on artificial intelligence (AI) and machine learning (ML), ensuring the reliability of AI outputs becomes critical. This article explores how uncertainty quantification can strengthen trust in AI applications, especially in high-stakes domains such as medical diagnosis and autonomous driving. It notes that many organizations skip uncertainty quantification because of implementation difficulty and limited computing resources, but that this obstacle is being overcome as new computing platforms emerge. Research shows that next-generation computing platforms can dramatically accelerate uncertainty-quantification analyses, making them far easier to adopt. The article argues that uncertainty quantification should become the norm in AI deployments in order to earn public trust.

🤔 Widespread AI adoption raises trust concerns: society's reliance on AI and ML keeps growing, yet trust in their outputs is increasingly questioned. In critical domains such as healthcare and autonomous driving, failing to assess the reliability of AI outputs can have severe consequences.

💡 Uncertainty quantification is key: it helps users gauge how much to trust an AI output by estimating the other plausible outputs the model could have produced. With quantified uncertainty, users can make more informed use of AI predictions.

⏳ Traditional methods face challenges: Monte Carlo methods can perform uncertainty quantification, but they are computationally expensive and slow, and their results vary from run to run, limiting their practicality.

🚀 New computing platforms bring hope: next-generation platforms can process empirical probability distributions directly, accelerating uncertainty-quantification analysis. Research shows these platforms can speed up such analyses more than 100-fold, lowering the difficulty and cost of adoption.

✅ Future outlook: as AI applications proliferate, uncertainty quantification will become a necessary component of AI deployments. Adopting it can raise public trust in AI systems and support the healthy development of AI technology.

Society’s reliance on artificial intelligence (AI) and machine learning (ML) applications continues to grow, redefining how information is consumed. From AI-powered chatbots to syntheses produced by large language models (LLMs), society has access to more information and deeper insights than ever before. Yet as technology companies race to implement AI across their value chains, a critical question looms: can we really trust the outputs of AI solutions?

Can we really trust AI outputs without uncertainty quantification?

For a given input, a model might have generated many other equally plausible outputs, whether because of insufficient training data, variations in the training data, or other causes. Uncertainty quantification is the process of estimating what those other outputs could have been. When deploying models, organizations can use it to give end users a clearer understanding of how much they should trust the output of an AI/ML model.

Imagine a model predicting tomorrow’s high temperature. The model might output 21 °C, but uncertainty quantification applied to that output might indicate that the model could just as well have produced 12 °C, 15 °C, or 16 °C. Knowing this, how much should we now trust the single prediction of 21 °C? Despite its potential to build trust, or to counsel caution, many organizations skip uncertainty quantification because of the extra implementation work it requires, as well as its demands on computing resources and inference speed.
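To make this concrete, here is a minimal sketch, assuming a hypothetical ensemble of models whose members stand in for the other outputs the model could have produced; the function name and all numbers are illustrative, not from the article or any particular library:

```python
import statistics

# Hypothetical sketch: estimate the uncertainty of a temperature forecast
# from an ensemble of equally plausible models (e.g. trained on resampled
# or perturbed data), by summarizing the spread of their outputs.

def summarize_ensemble(predictions: list[float]) -> dict:
    """Turn a set of equally plausible model outputs into a point
    estimate plus a measure of how much those outputs disagree."""
    mean = statistics.mean(predictions)
    spread = statistics.stdev(predictions)
    return {
        "point_estimate": round(mean, 1),
        "std_dev": round(spread, 1),
        # Rough 95% interval under a normality assumption.
        "interval_95": (round(mean - 2 * spread, 1),
                        round(mean + 2 * spread, 1)),
    }

# The four outputs from the temperature example above.
print(summarize_ensemble([21.0, 12.0, 15.0, 16.0]))
# {'point_estimate': 16.0, 'std_dev': 3.7, 'interval_95': (8.5, 23.5)}
```

A wide interval like this one is precisely the signal that the single number 21 °C deserves caution.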

Human-in-the-loop systems, such as medical diagnosis and prognosis systems, involve humans in the decision-making process. By blindly trusting the outputs of healthcare AI/ML solutions, healthcare professionals risk misdiagnosing a patient, potentially leading to sub-par health outcomes, or worse. Uncertainty quantification lets healthcare professionals see, quantitatively, when they can place more trust in an AI output and when they should treat a specific prediction with caution. Similarly, in a fully automated system such as a self-driving car, an uncertain estimate of the distance to an obstacle could cause a crash that uncertainty quantification on that estimate might have helped avoid.
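One way to act on that quantitative signal, sketched below with a made-up threshold and a hypothetical triage policy (nothing here comes from the article), is to gate automated decisions on the quantified uncertainty rather than on the point estimate alone:

```python
# Hypothetical sketch: route a model output based on its quantified
# uncertainty. The threshold and the triage policy are illustrative only.

def triage(point_estimate: float, std_dev: float, max_std: float = 2.0) -> str:
    """Accept a model output automatically only when its uncertainty is
    low; otherwise flag it for human review."""
    if std_dev <= max_std:
        return f"auto-accept: {point_estimate:.1f} (std {std_dev:.1f})"
    return f"escalate to human review: {point_estimate:.1f} (std {std_dev:.1f})"

print(triage(21.0, 0.8))  # low spread  -> trust the output
print(triage(21.0, 3.7))  # high spread -> treat with caution
```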

The challenge of leveraging Monte Carlo methods to build trust in AI/ML models

Monte Carlo methods, developed during the Manhattan Project, are a robust way to perform uncertainty quantification. They involve re-running an algorithm repeatedly with slightly different inputs until further iterations add little new information to the outputs; at that point, the process is said to have converged. The disadvantage is that Monte Carlo methods are typically slow and compute-intensive: they require many repetitions of their constituent computations to reach convergence, and their outputs retain an inherent variability. Because Monte Carlo methods use random number generators as a key building block, even a run with many internal repetitions will produce different results when repeated with identical parameters.
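Both properties can be seen in a few lines. Below is a minimal Monte Carlo sketch (the model function, input distribution, and sample counts are invented for illustration) showing the repeated perturbed runs and the run-to-run variability:

```python
import random
import statistics

# Minimal Monte Carlo sketch: propagate input uncertainty through a
# computation by re-running it many times with perturbed inputs.

def model(x: float) -> float:
    return 0.5 * x ** 2 + 3.0  # stand-in for an arbitrary computation

def monte_carlo_mean(n_samples: int, seed: int) -> float:
    rng = random.Random(seed)
    # Inputs drawn around a nominal value of 4.0 with Gaussian noise.
    outputs = [model(rng.gauss(4.0, 0.5)) for _ in range(n_samples)]
    return statistics.mean(outputs)

for seed in (1, 2, 3):
    print(f"seed {seed}: estimate = {monte_carlo_mean(100_000, seed):.4f}")
# Even with 100,000 repetitions per run, the three estimates agree only
# to a few decimal places: the result retains sampling variability.
```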

The path forward to trustworthiness in AI/ML models

Unlike traditional servers and AI-specific accelerators, a new breed of computing platform is being developed to process empirical probability distributions directly, in the same way that traditional computing platforms process integers and floating-point values. By deploying their AI models on these platforms, organizations can automate the implementation of uncertainty quantification on their pre-trained models and can also speed up other kinds of computing tasks that have traditionally used Monte Carlo methods, such as value-at-risk (VaR) calculations in finance. For the VaR scenario in particular, these platforms let organizations work with empirical distributions built directly from real market data, rather than approximating those distributions with samples from random number generators, yielding more accurate analyses and faster results.
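To illustrate the VaR contrast, here is a hedged sketch (the return series is invented and both functions are hypothetical): historical simulation reads a percentile directly from the empirical distribution of observed returns, whereas the conventional Monte Carlo route first fits a distribution and then samples from it with a random number generator:

```python
import random
import statistics

# Illustrative daily returns; real analyses would use actual market data.
daily_returns = [-0.031, -0.012, 0.004, 0.009, -0.007, 0.015, -0.022,
                 0.006, -0.001, 0.011, -0.018, 0.003, 0.008, -0.009]

def empirical_var(returns: list[float], level: float = 0.05) -> float:
    """Historical-simulation VaR: a low percentile of observed returns."""
    ordered = sorted(returns)
    index = max(0, int(level * len(ordered)) - 1)
    return -ordered[index]

def monte_carlo_var(returns: list[float], level: float = 0.05,
                    n: int = 100_000, seed: int = 0) -> float:
    """Parametric Monte Carlo VaR: fit a normal, then sample from it."""
    rng = random.Random(seed)
    mu, sigma = statistics.mean(returns), statistics.stdev(returns)
    simulated = sorted(rng.gauss(mu, sigma) for _ in range(n))
    return -simulated[int(level * n)]

print(f"empirical 95% VaR:   {empirical_var(daily_returns):.4f}")
print(f"Monte Carlo 95% VaR: {monte_carlo_var(daily_returns):.4f}")
```

The empirical route involves no random number generator at all, which is the property these new platforms exploit.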

Recent breakthroughs in computing have significantly lowered the barriers to uncertainty quantification. A research article that my colleagues and I published in the Machine Learning With New Compute Paradigms workshop at NeurIPS 2024 shows how a next-generation computation platform we developed ran uncertainty-quantification analyses more than 100-fold faster than traditional Monte-Carlo-based analyses on a high-end Intel-Xeon-based server. Advances such as these let organizations deploying AI solutions implement uncertainty quantification with ease and run it with low overhead.

The future of AI/ML trustworthiness depends on advanced next-generation computation

As organizations integrate more AI solutions into society, trustworthiness in AI/ML will become a top priority. Enterprises can no longer afford to deploy AI models without facilities that let consumers know when to treat specific model outputs with skepticism. The demand for such explainability and uncertainty quantification is clear: approximately three in four people indicate they would be more willing to trust an AI system if appropriate assurance mechanisms were in place.

New computing technologies are making it ever easier to implement and deploy uncertainty quantification. While industry and regulatory bodies grapple with other challenges associated with deploying AI in society, there is at least an opportunity to engender the trust humans require, by making uncertainty quantification the norm in AI deployments.

