AI News · 19 hours ago
Suvianna Grecu, AI for Change: Without rules, AI risks ‘trust crisis’


📚 The importance of AI governance: Suvianna Grecu warns that deploying AI rapidly while ignoring safety risks could trigger a "trust crisis," and calls for immediate, strong governance to avoid "automating harm at scale," stressing that AI's most pressing ethical danger lies not in the technology itself but in the lack of structure around its rollout.

🛡️ Concrete action over abstraction: Grecu advocates turning ethical principles into concrete practice by embedding ethics directly into AI development workflows through practical tools such as design checklists, mandatory pre-deployment risk assessments, and cross-functional review boards, ensuring every stage has a clearly accountable owner.

🤝 Government-industry collaboration: Grecu argues that responsibility for AI governance cannot rest with government or industry alone; governments should set legal boundaries and minimum standards while industry innovates beyond mere compliance, together building trust in AI and mitigating its risks.

💡 Value-driven AI: Grecu notes that technology is not neutral; AI reflects the data it is fed, the objectives it is assigned, and the outcomes it rewards, so values must be built in intentionally to ensure AI optimises not only for efficiency, scale, and profit but also for social ideals such as justice, dignity, and democracy.

🌍 Embedding European values: For Europe specifically, Grecu recommends embedding values such as human rights, transparency, sustainability, inclusion, and fairness at every layer of policy, design, and deployment, so that AI serves humans rather than just markets, proactively shaping the AI narrative before it shapes us.

The world is in a race to deploy AI, but a leading voice in technology ethics warns that prioritising speed over safety risks a "trust crisis."

Suvianna Grecu, Founder of the AI for Change Foundation, argues that without immediate and strong governance, we are on a path to “automating harm at scale.”

Speaking on the integration of AI into critical sectors, Grecu believes that the most pressing ethical danger isn’t the technology itself, but the lack of structure surrounding its rollout.

Powerful systems are increasingly making life-altering decisions about everything from job applications and credit scores to healthcare and criminal justice, often without sufficient testing for bias or consideration of their long-term societal impact.

For many organisations, AI ethics remains a document of lofty principles rather than a daily operational reality. Grecu insists that genuine accountability only begins when someone is made truly responsible for the outcomes. The gap between intention and implementation is where the real risk lies.

Grecu’s foundation champions a shift from abstract ideas to concrete action. This involves embedding ethical considerations directly into development workflows through practical tools like design checklists, mandatory pre-deployment risk assessments, and cross-functional review boards that bring legal, technical, and policy teams together.

According to Grecu, the key is establishing clear ownership at every stage, building transparent and repeatable processes just as you would for any other core business function. This practical approach seeks to advance ethical AI, transforming it from a philosophical debate into a set of manageable, everyday tasks.

Partnering to build AI trust and mitigate risks

When it comes to enforcement, Grecu is clear that the responsibility can’t fall solely on government or industry. “It’s not either-or, it has to be both,” she states, advocating for a collaborative model.

In this partnership, governments must set the legal boundaries and minimum standards, particularly where fundamental human rights are at stake. Regulation provides the essential floor. However, industry possesses the agility and technical talent to innovate beyond mere compliance.

Companies are best positioned to create advanced auditing tools, pioneer new safeguards, and push the boundaries of what responsible technology can achieve.

Leaving governance entirely to regulators risks stifling the very innovation we need, while leaving it to corporations alone invites abuse. “Collaboration is the only sustainable route forward,” Grecu asserts.

Promoting a value-driven future

Looking beyond the immediate challenges, Grecu is concerned about more subtle, long-term risks that are receiving insufficient attention, namely emotional manipulation and the urgent need for value-driven technology.

As AI systems become more adept at persuading and influencing human emotion, she cautions that we are unprepared for the implications this has for personal autonomy.

A core tenet of her work is the idea that technology is not neutral. “AI won’t be driven by values, unless we intentionally build them in,” she warns. It’s a common misconception that AI simply reflects the world as it is. In reality, it reflects the data we feed it, the objectives we assign it, and the outcomes we reward. 

Without deliberate intervention, AI will invariably optimise for metrics like efficiency, scale, and profit, not for abstract ideals like justice, dignity, or democracy, and that will naturally impact societal trust. This is why a conscious and proactive effort is needed to decide what values we want our technology to promote.

For Europe, this presents a critical opportunity. “If we want AI to serve humans (not just markets) we need to protect and embed European values like human rights, transparency, sustainability, inclusion and fairness at every layer: policy, design, and deployment,” Grecu explains.

This isn’t about halting progress. As she concludes, it’s about taking control of the narrative and actively “shaping it before it shapes us.”

Through her foundation's work – including public workshops and her role as a chairperson on day two of the upcoming AI & Big Data Expo Europe – she is building a coalition to guide the evolution of AI and boost trust by keeping humanity at its very centre.

(Photo by Cash Macanaya)

See also: AI obsession is costing us our human skills

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

The post Suvianna Grecu, AI for Change: Without rules, AI risks ‘trust crisis’ appeared first on AI News.
