Unite.AI, February 5
3 Considerations for Safe and Reliable AI Agents for Enterprises

 

This article examines the challenges enterprises face when implementing generative AI agents and how to address them, including data preparation, stronger governance, and thinking beyond prompt engineering, emphasizing the importance of building a foundation for safe and effective AI agents.

🎯 The importance of enterprise data readiness, including semantic labeling and resolving data fragmentation and inconsistency

🛡️ Strengthening data governance to cover the user–AI interaction experience, ensuring responses are transparent and explainable

💡 Moving beyond prompt engineering so that systems understand business language, enabling data democratization

According to Gartner, 30% of GenAI projects will likely be abandoned after proof-of-concept by the end of 2025. Early adoption of GenAI revealed that most enterprises’ data infrastructure and governance practices weren’t ready for effective AI deployment. The first wave of GenAI productization faced considerable hurdles, with many organizations struggling to move beyond proof-of-concept stages to achieve meaningful business value.

As we enter the second wave of generative AI productization, companies are realizing that successfully implementing these technologies requires more than simply connecting an LLM to their data. The key to unlocking AI’s potential rests on three core pillars: getting data in order and ensuring it’s ready for integration with AI; overhauling data governance practices to address the unique challenges GenAI introduces; and deploying AI agents in ways that make safe and reliable usage natural and intuitive, so users aren’t forced to learn specialized skills or precise usage patterns. Together, these pillars create a strong foundation for safe, effective AI agents in enterprise environments.

Properly Preparing Your Data for AI

While structured data might appear organized to the naked eye, being neatly arranged in tables and columns, LLMs often struggle to understand and work with this structured data effectively. This happens because, in most enterprises, data isn’t labeled in a semantically meaningful way. Data often has cryptic labels, for example, “ID” with no clear indication of whether it’s an identifier for a customer, a product, or a transaction. With structured data, it’s also difficult to capture the proper context and relationships between different interconnected data points, like how steps in a customer journey are related to each other. Just as we needed to label every image in computer vision applications to enable meaningful interaction, organizations must now undertake the complex task of semantically labeling their data and documenting relationships across all systems to enable meaningful AI interactions.
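The semantic labeling described above can be sketched as a lightweight semantic model that an LLM consumes alongside the raw schema. This is a minimal illustration, not a production metadata system; all table, column, and relationship names are hypothetical examples.

```python
# A minimal sketch of semantic annotation for structured data.
# Table, column, and relationship names are hypothetical.

SEMANTIC_MODEL = {
    "tables": {
        "orders": {
            "description": "One row per completed customer purchase.",
            "columns": {
                "id": "Unique transaction identifier (not a customer or product ID).",
                "cust_id": "Foreign key to customers.id; the purchasing customer.",
                "amt": "Order total in USD, including tax.",
            },
        },
        "customers": {
            "description": "One row per registered customer account.",
            "columns": {
                "id": "Unique customer identifier.",
                "seg": "Marketing segment label assigned at signup.",
            },
        },
    },
    # Explicit relationships supply the context that raw schemas lack,
    # e.g. how interconnected data points relate to each other.
    "relationships": [
        {"from": "orders.cust_id", "to": "customers.id", "type": "many-to-one"},
    ],
}

def describe_column(table: str, column: str) -> str:
    """Return the business meaning of a column, e.g. for inclusion in an LLM prompt."""
    return SEMANTIC_MODEL["tables"][table]["columns"][column]
```

With annotations like these, a cryptic label such as `orders.id` resolves to an unambiguous business meaning instead of leaving the model to guess.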

Additionally, data is scattered across many different places – from traditional servers to various cloud services and different software applications. This patchwork of systems leads to critical interoperability and integration issues that become even more problematic when implementing AI solutions.

Another fundamental challenge lies in the inconsistency of business definitions across different systems and departments. For example, customer success teams might define “upsell” one way, while the sales team defines it another way. When you connect an AI agent or chatbot to these systems and begin asking questions, you'll get different answers because the data definitions aren't aligned. This lack of alignment isn't a minor inconvenience—it's a critical barrier to implementing reliable AI solutions.
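One common way to close this definitional gap is a single governed registry of metric definitions that every AI agent resolves terms against, rather than letting each system keep its own. The sketch below assumes a hypothetical "upsell" definition and `plan_changes` table; it illustrates the pattern, not any particular vendor's semantic layer.

```python
# Sketch: one canonical, governed definition per business metric, so every
# AI agent answering "what were upsells last quarter?" uses the same logic.
# The definition, owner, and SQL below are illustrative assumptions.

METRIC_DEFINITIONS = {
    "upsell": {
        "definition": "Revenue increase from an existing customer moving to a higher-priced plan.",
        "owner": "revenue-ops",
        "sql": "SELECT SUM(new_mrr - old_mrr) FROM plan_changes WHERE new_mrr > old_mrr",
    },
}

def resolve_metric(term: str) -> dict:
    """Look up the governed definition of a metric; refuse to guess if none exists."""
    key = term.lower().strip()
    if key not in METRIC_DEFINITIONS:
        raise KeyError(f"No governed definition for '{term}'; refusing to guess.")
    return METRIC_DEFINITIONS[key]
```

The deliberate `KeyError` on unknown terms is the point: an agent that refuses to answer without an agreed definition is safer than one that silently picks whichever department's definition it happens to retrieve.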

Poor data quality creates a classic “garbage in, garbage out” scenario that becomes exponentially more serious when AI tools are deployed across an enterprise. Incorrect or messy data affects far more than one analysis—it spreads incorrect information to everyone using the system through their questions and interactions. To build trust in AI systems for real business decisions, enterprises must ensure their AI applications have data that’s clean, accurate, and understood in a proper business context. This represents a fundamental shift in how organizations must think about their data assets in the age of AI – where quality, consistency, and semantic clarity become as crucial as the data itself.

Strengthening Approaches to Governance

Data governance has been a major focus for organizations in recent years, mainly centered on managing and protecting data used in analytics. Companies have been making efforts to map sensitive information, adhere to access standards, comply with laws like GDPR and CCPA, and detect personal data. These initiatives are vital for creating AI-ready data. However, as organizations introduce generative AI agents into their workflows, the governance challenge extends beyond just the data itself to encompass the entire user interaction experience with AI.

We now face the imperative to govern not only the underlying data but also the process by which users interact with that data through AI agents. Existing legislation, such as the European Union's AI Act, and more regulations on the horizon underscore the necessity of governing the question-answering process itself. This means ensuring that AI agents provide transparent, explainable, and traceable responses. When users receive black-box answers—such as asking, “How many flu patients were admitted yesterday?” and getting only “50” without context—it’s hard to trust that information for critical decisions. Without knowing where the data came from, how it was calculated, or definitions of terms like “admitted” and “yesterday,” the AI's output loses reliability.
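The contrast between a black-box "50" and a governed answer can be made concrete by attaching provenance to every response. The structure below is a sketch, with illustrative field names and example values based on the flu-patients question; it is not a prescribed response format.

```python
from dataclasses import dataclass, field

# Sketch of a traceable agent response: instead of a bare number, the
# answer carries its source system, the computation, and term definitions.
# All field names and example values are illustrative assumptions.

@dataclass
class TracedAnswer:
    value: str
    source: str                            # which system the data came from
    query: str                             # how the value was computed
    definitions: dict = field(default_factory=dict)  # governed meanings of key terms

    def render(self) -> str:
        defs = "; ".join(f"{k} = {v}" for k, v in self.definitions.items())
        return f"{self.value} (source: {self.source}; query: {self.query}; definitions: {defs})"

answer = TracedAnswer(
    value="50",
    source="hospital_admissions warehouse table",
    query="COUNT(*) WHERE diagnosis = 'influenza' AND admitted_at = CURRENT_DATE - 1",
    definitions={
        "admitted": "inpatient admission recorded in the EHR",
        "yesterday": "the previous calendar day in hospital local time",
    },
)
```

A user reading `answer.render()` can verify where the number came from and what "admitted" and "yesterday" meant, which is exactly the traceability regulations like the EU AI Act push toward.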

Unlike interactions with documents, where users can trace answers back to specific PDFs or policies to verify accuracy, interactions with structured data via AI agents often lack this level of traceability and explainability. To address these issues, organizations must implement governance measures that not only protect sensitive data but also make the AI interaction experience governed and reliable. This includes establishing robust access controls to ensure that only authorized personnel can access specific information, defining clear data ownership and stewardship responsibilities, and ensuring that AI agents provide explanations and references for their outputs. By overhauling data governance practices to include these considerations, enterprises can safely harness the power of AI agents while complying with evolving regulations and maintaining user trust.

Thinking Beyond Prompt Engineering

As organizations introduce generative AI agents in an effort to improve data accessibility, prompt engineering has emerged as a new technical barrier for business users. While touted as a promising career path, prompt engineering is essentially recreating the same barriers we've struggled with in data analytics. Creating perfect prompts is no different from writing specialized SQL queries or building dashboard filters – it's shifting technical expertise from one format to another, still requiring specialized skills that most business users don't have and shouldn't need.

Enterprises have long tried to solve data accessibility by training users to better understand data systems, creating documentation, and developing specialized roles. But this approach is backward – we ask users to adapt to data rather than making data adapt to users. Prompt engineering threatens to continue this pattern by creating yet another layer of technical intermediaries.

True data democratization requires systems that understand business language, not users who understand data language. When executives ask about customer retention, they shouldn't need perfect terminology or prompts. Systems should understand intent, recognize relevant data across different labels (whether it's “churn,” “retention,” or “customer lifecycle”), and provide contextual answers. This lets business users focus on decisions rather than learning to ask technically perfect questions.
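Recognizing intent across varied labels can start with something as simple as mapping business vocabulary to one canonical concept, so the system adapts to the user's wording rather than the reverse. The terms and concept name below are illustrative; a real system would use richer matching than substring lookup.

```python
# Sketch: map varied business vocabulary to one governed concept, so an
# executive's phrasing need not match the warehouse's labels.
# The synonym list and concept name are illustrative assumptions.

SYNONYMS = {
    "churn": "customer_retention",
    "retention": "customer_retention",
    "customer lifecycle": "customer_retention",
    "attrition": "customer_retention",
}

def canonical_concept(user_phrase: str):
    """Return the governed concept behind a user's wording, or None if unknown."""
    phrase = user_phrase.lower().strip()
    for term, concept in SYNONYMS.items():
        if term in phrase:
            return concept
    return None  # unknown phrasing: ask the user to clarify rather than guess
```

So `canonical_concept("What is our churn this quarter?")` and a question about "retention" both resolve to the same governed concept, and unrecognized phrasing returns `None` instead of a guessed answer.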

Conclusion

AI agents will bring important changes to how enterprises operate and make decisions, but they come with their own unique set of challenges that must be addressed before deployment. With AI, every error is amplified when non-technical users have self-service access, making it crucial to get the foundations right.

Organizations that successfully address the fundamental challenges of data quality, semantic alignment, and governance while moving beyond the limitations of prompt engineering will be positioned to safely democratize data access and decision-making. The best approach involves creating a collaborative environment that facilitates teamwork and aligns human-to-machine as well as machine-to-machine interactions. This helps ensure that AI-driven insights are accurate, secure, and reliable, encouraging an organization-wide culture that manages, protects, and maximizes the value of its data.

The post 3 Considerations for Safe and Reliable AI Agents for Enterprises appeared first on Unite.AI.
