Why GenAI Stalls Without Strong Governance

 

The article examines the challenges Generative AI (GenAI) projects face in moving from experimentation to production, particularly around data governance. Research shows that many organisations hit a bottleneck with their GenAI pilots, largely because the underlying data has not been prepared. The article stresses that GenAI's success depends on data that is trusted, complete, compliant and explainable. To scale GenAI, organisations need to treat data governance as a strategic imperative, establish data quality, transparency and traceability, and build AI literacy across the workforce. It closes with three pillars for building trustworthy AI and argues that data governance is what unlocks GenAI's value.

💡 **The GenAI adoption gap**: Despite the enormous potential of generative AI, many enterprises stall when moving GenAI projects from experimentation into production, primarily because of data issues.

🤔 **Data quality is critical**: GenAI is only as effective as the data behind it. Without trusted, complete, compliant and explainable data, models produce results that are inaccurate, biased or unfit for purpose.

✅ **Data governance is key**: To take GenAI beyond the pilot stage, companies must treat data governance as a strategic imperative and ensure their data can support AI models. This includes verifying where data comes from, complying with regulations, keeping data transparent, and removing data bias.

🌍 **Building AI literacy**: Organisations need to build AI literacy across the whole workforce so that people understand how AI systems work and use them responsibly. Data literacy and AI literacy are closely intertwined, and both deserve attention.

🚀 **Steps towards trustworthy AI**: Organisations should organise data around business objectives, establish trust in AI, and build AI-ready data pipelines. With these measures in place, data governance accelerates the realisation of AI value.

As companies grapple with moving Generative AI projects from experimentation into production, many remain stuck in pilot mode. As our recent research highlights, 92% of organisations are concerned that GenAI pilots are accelerating without first tackling fundamental data issues. Even more telling: 67% have been unable to scale even half of their pilots to production. This production gap is less about technological maturity and more about the readiness of the underlying data. The potential of GenAI depends upon the strength of the ground it stands on. And today, for most organisations, that ground is shaky at best.

Why GenAI gets stuck in pilot

Although GenAI solutions are certainly mighty, they’re only as effective as the data that feeds them. The old adage of “garbage in, garbage out” is truer today than ever. Without trusted, complete, entitled and explainable data, GenAI models often produce results that are inaccurate, biased, or unfit for purpose.
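
To make "trusted, complete, entitled and explainable" a little more concrete, here is a minimal, hypothetical sketch of a governance gate that screens documents before they reach a GenAI pipeline. The record fields, the entitlement labels and the intended-use value are illustrative assumptions, not a prescribed schema.

```python
from dataclasses import dataclass

@dataclass
class DocumentRecord:
    """One candidate document for a GenAI pipeline (illustrative fields only)."""
    doc_id: str
    source_system: str             # where the data came from (provenance)
    owner: str                     # accountable data owner
    allowed_use: set               # entitlements, e.g. {"internal", "ai_training"}
    required_fields_present: bool  # completeness check computed upstream

def passes_governance_gate(doc: DocumentRecord, intended_use: str = "ai_training") -> bool:
    """Return True only if the document is traceable, entitled and complete."""
    has_provenance = bool(doc.source_system and doc.owner)
    is_entitled = intended_use in doc.allowed_use
    return has_provenance and is_entitled and doc.required_fields_present

# Example: only documents that clear the gate are passed on to the model.
docs = [
    DocumentRecord("d1", "crm", "sales-ops", {"internal", "ai_training"}, True),
    DocumentRecord("d2", "", "", {"internal"}, False),  # no provenance, not entitled
]
usable = [d for d in docs if passes_governance_gate(d)]
print([d.doc_id for d in usable])  # ['d1']
```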

Unfortunately, organisations have rushed to deploy low-effort use cases, like AI-powered chatbots offering tailored answers from different internal documents. And while these do improve customer experiences to an extent, they don’t demand deep changes to a company’s data infrastructure. But scaling GenAI strategically, whether in healthcare, financial services, or supply chain automation, requires a different level of data maturity.

In fact, 56% of Chief Data Officers cite data reliability as a key barrier to the deployment of AI. Other issues are incomplete data (53%), privacy issues (50%), and larger AI governance gaps (36%).

No governance, no GenAI

To take GenAI beyond the pilot stage, companies must treat data governance as a strategic imperative for their business. They need to ensure data is up to the job of powering AI models, and to do so they must address some fundamental questions: where does the data come from, does its use comply with regulation, are the processing steps transparent and traceable, and has bias been identified and removed?
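
One lightweight way to operationalise these questions is to record the answers alongside each dataset and surface whatever is still open. The sketch below is purely illustrative; the metadata keys and the questions checked are assumptions rather than a formal framework.

```python
def governance_review(dataset_metadata: dict) -> list:
    """Return the governance questions that remain unanswered for a dataset.

    The keys checked here mirror the questions above; they are illustrative
    assumptions, not a formal standard.
    """
    open_questions = []
    if not dataset_metadata.get("source"):
        open_questions.append("Where does the data come from?")
    if not dataset_metadata.get("lawful_basis"):
        open_questions.append("Does its use comply with regulation?")
    if not dataset_metadata.get("transformations_logged"):
        open_questions.append("Are the processing steps transparent and traceable?")
    if not dataset_metadata.get("bias_review_done"):
        open_questions.append("Has bias been identified and removed?")
    return open_questions

# Example: a dataset with a documented source but no completed bias review.
metadata = {"source": "crm_export_2024_06", "lawful_basis": "contract",
            "transformations_logged": True, "bias_review_done": False}
print(governance_review(metadata))  # ['Has bias been identified and removed?']
```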

Data governance also needs to be embedded within an organisation’s culture. Doing this requires building AI literacy across all teams. The EU AI Act formalises this responsibility, requiring both providers and users of AI systems to make best efforts to ensure employees are sufficiently AI-literate, making sure they understand how these systems work and how to use them responsibly. However, effective AI adoption goes beyond technical know-how. It also demands a strong foundation in data skills, from understanding data governance to framing analytical questions. Treating AI literacy in isolation from data literacy would be short-sighted, given how closely they're intertwined.

In terms of data governance, there’s still work to be done. Among businesses that want to increase their data management investments, 47% agree that a lack of data literacy is a top barrier. This highlights that building top-level support and developing the right skills across the organisation is crucial. Without these foundations, even the most powerful LLMs will struggle to deliver.

Developing AI that can be held accountable

In the current regulatory environment, it's no longer enough for AI to “just work”; it also needs to be accountable and explainable. The EU AI Act and the UK’s proposed AI Action Plan require transparency in high-risk AI use cases. Other jurisdictions are following suit, with more than 1,000 related policy bills on the agenda across 69 countries.

This global movement towards accountability is a direct result of increasing consumer and stakeholder demands for fairness in algorithms. For example, organisations must be able to explain why a customer was turned down for a loan or charged a premium insurance rate. To do that, they need to know how the model made the decision, which in turn hinges on having a clear, auditable trail of the data that was used to train it.

Without explainability, businesses risk losing customer trust as well as facing financial and legal repercussions. As a result, traceability of data lineage and justification of results is not a “nice to have,” but a compliance requirement.
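
As a rough illustration of what such an auditable trail might look like in practice, the hypothetical snippet below logs each decision together with the model version and training-data snapshot that produced it. The identifiers, file format and reason codes are assumptions made for the sake of the example, not a prescribed audit standard.

```python
import json
import time
import uuid

def log_decision(model_version: str, training_data_hash: str,
                 input_features: dict, decision: str, reason_codes: list,
                 audit_path: str = "decision_audit.jsonl") -> str:
    """Append an auditable record linking a decision to its data lineage."""
    record = {
        "decision_id": str(uuid.uuid4()),
        "timestamp": time.time(),
        "model_version": model_version,            # which model was used
        "training_data_hash": training_data_hash,  # which data snapshot trained it
        "input_features": input_features,          # what the model saw
        "decision": decision,                      # e.g. "loan_declined"
        "reason_codes": reason_codes,              # human-readable explanation
    }
    with open(audit_path, "a") as f:
        f.write(json.dumps(record) + "\n")
    return record["decision_id"]

# Example: a declined loan can later be traced back to the model and data versions.
log_decision(
    model_version="credit-scorer-2024-06",
    training_data_hash="sha256:placeholder",
    input_features={"income_band": "B", "debt_to_income": 0.52},
    decision="loan_declined",
    reason_codes=["debt_to_income_above_threshold"],
)
```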

And as GenAI expands beyond simple tools into fully fledged agents that can make decisions and act upon them, the stakes for strong data governance rise even higher.

Steps for building trustworthy AI

So, what does good look like? To scale GenAI responsibly, organisations should look to adopt a single data strategy across three pillars: organising data around business objectives, establishing trust in AI through transparency and traceability, and building AI-ready data pipelines.

When organisations get this right, governance accelerates AI value. In financial services, for example, hedge funds are using GenAI to outperform human analysts in stock price prediction while significantly reducing costs. In manufacturing, AI-driven supply chain optimisation enables organisations to react in real time to geopolitical changes and environmental pressures.

And these aren’t just futuristic ideas, they’re happening now, driven by trusted data.

With strong data foundations, companies reduce model drift, limit retraining cycles, and increase speed to value. That’s why governance isn’t a roadblock; it’s an enabler of innovation.
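
The link between governance and reduced model drift can be monitored concretely. The sketch below compares a live feature distribution against its training-time baseline using the population stability index; the data, the bin count and the 0.2 threshold are a common rule of thumb used here purely for illustration.

```python
import numpy as np

def population_stability_index(expected: np.ndarray, actual: np.ndarray, bins: int = 10) -> float:
    """PSI between a training-time (expected) and live (actual) feature distribution."""
    edges = np.histogram_bin_edges(expected, bins=bins)
    exp_pct = np.histogram(expected, bins=edges)[0] / len(expected)
    act_pct = np.histogram(actual, bins=edges)[0] / len(actual)
    # Floor the proportions to avoid division by zero and log of zero.
    exp_pct = np.clip(exp_pct, 1e-6, None)
    act_pct = np.clip(act_pct, 1e-6, None)
    return float(np.sum((act_pct - exp_pct) * np.log(act_pct / exp_pct)))

rng = np.random.default_rng(0)
baseline = rng.normal(0.0, 1.0, 5_000)   # feature distribution at training time
live = rng.normal(0.3, 1.1, 5_000)       # shifted distribution seen in production

psi = population_stability_index(baseline, live)
# Illustrative rule of thumb: PSI above ~0.2 suggests drift worth investigating.
print(f"PSI = {psi:.3f} -> retraining review needed: {psi > 0.2}")
```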

What’s next?

After experimentation, organisations are moving beyond chatbots and investing in transformational capabilities. From personalising customer interactions to accelerating medical research, improving mental health and simplifying regulatory processes, GenAI is beginning to demonstrate its potential across industries.

Yet these gains depend entirely on the data underpinning them. Success with GenAI starts with building a strong data foundation through strong data governance. And while GenAI and agentic AI will continue to evolve, they won’t replace human oversight anytime soon. Instead, we’re entering a phase of structured value creation, where AI becomes a reliable co-pilot. With the right investments in data quality, governance, and culture, businesses can finally turn GenAI from a promising pilot into something that fully gets off the ground.

The post Why GenAI Stalls Without Strong Governance appeared first on Unite.AI.
