London Working Group for Short/Medium Term AI Risks

Published on April 8, 2025 5:32 PM GMT

Background & Motivation

I am a Responsible AI consultant working for a Big 4 consultancy in London. Over the past two years I have become increasingly concerned with organisations' approaches to AI governance, and with the lack of country-wide AI regulation in the UK.

As many of you are likely aware, Keir Starmer's approach to AI regulation is to leave it to existing regulators. For a number of reasons, I do not believe this approach is sufficient to mitigate the biggest AI risks our society faces.

I also believe that few 'thought leaders' are providing governments with precise, actionable risks and mitigations. Many advisors like to talk in vague terms about AI risks, but will not pinpoint specifics.

Approach

My plan is to create a London AI Working Group for Short/Medium Term Risks. Over the next few months this team will work together to create a list of AI risks that are likely to manifest within the next three years. This timeline is important: predictions over longer horizons are less likely to be accurate, and a three-year window creates urgency. For each of these risks we will then collaborate to design mitigations.

One rough example from my list is as follows:

Risk: Photorealistic AI-generated images pose a threat to the UK's legal and democratic institutions, as false evidence may be produced. This may result in false prosecutions, democratic injustice, and so on.

Mitigation(s):

Possibly: Regulate AI providers to offer a reverse-image-search database of all AI-generated images created on their platform.

Alternatively: Enforce steganographic watermarks on AI-generated images.

Problem: What to do about open-source models?
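To make the watermarking mitigation concrete, here is a minimal sketch of how a steganographic watermark can be embedded in and recovered from an image. This is purely illustrative, not a scheme any regulation would mandate: it assumes the Pillow library, uses a hypothetical provenance tag, and relies on fragile least-significant-bit encoding that any re-compression would destroy. Production systems would need robust, compression-resistant watermarks.

```python
# Illustrative sketch: embed a short provenance tag in the least-significant
# bits of an image's red channel. Fragile (any re-encode destroys it), but it
# shows the basic principle of a steganographic watermark.
from PIL import Image

TAG = "AIGEN:v1:provider=example"  # hypothetical provenance payload


def embed(img: Image.Image, tag: str) -> Image.Image:
    # Encode the tag as a bitstring, followed by a NUL byte as a terminator.
    bits = "".join(f"{b:08b}" for b in tag.encode()) + "0" * 8
    out = img.convert("RGB")
    px = out.load()
    w, h = out.size
    assert len(bits) <= w * h, "image too small for payload"
    for i, bit in enumerate(bits):
        x, y = i % w, i // w
        r, g, b = px[x, y]
        px[x, y] = ((r & ~1) | int(bit), g, b)  # overwrite LSB of red channel
    return out


def extract(img: Image.Image) -> str:
    rgb = img.convert("RGB")
    px = rgb.load()
    w, h = rgb.size
    data = bytearray()
    # Read 8 bits at a time until the NUL terminator is found.
    for i in range(0, w * h - 7, 8):
        byte = 0
        for j in range(8):
            x, y = (i + j) % w, (i + j) // w
            byte = (byte << 1) | (px[x, y][0] & 1)
        if byte == 0:
            break
        data.append(byte)
    return data.decode(errors="replace")


if __name__ == "__main__":
    blank = Image.new("RGB", (64, 64), "white")
    marked = embed(blank, TAG)
    print(extract(marked))  # -> AIGEN:v1:provider=example
```

The companion reverse-image-search idea would work differently: providers would index a perceptual hash of each generated image, so a suspect image could be matched against the database even after minor edits. As noted above, neither approach binds open-source models, since nothing compels a local deployment to run the watermarking step.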

We will then collate these into an open letter, which we will address to Keir Starmer, his Government, and anybody else who is open to our ideas. We will each sign and distribute this letter on any public platforms that we have (e.g. LinkedIn, Twitter, EA-related forums). We may also create a government petition in parallel. 

Suitability & Application

I want to keep this as open as possible, as I believe the more diverse the viewpoints we hold, the better. However, I understand this may become oversubscribed. Any candidates who meet the following criteria will definitely be invited to be involved:

1. Is London-based. Although this is not a requirement, ideally members of this group will be London-based, as I believe meeting in person is beneficial to collaboration.

2. Has an AI/AI-safety-related job. This is very flexible; anybody who even vaguely meets this criterion will be considered.

3. Is available to contribute fairly consistently over the next few months. I would like this project to have a fairly quick turnaround, so ideally candidates will have free time to commit to it.

If you believe you meet these criteria, please comment expressing your interest, and message me on this platform with some basic information about yourself (e.g. location, job title/company, availability). I will try my best to include everyone that I can in this project.

