Call for Papers | Security and Ethics of Generative Artificial Intelligence

The journal Machine Intelligence Research has announced a call for papers for its special issue "Security and Ethics of Generative Artificial Intelligence", focusing on the security and ethical issues of generative AI driven by large language models and multimodal large language models. The call covers the safety, privacy, and ethical implications of models, data, content, learning, and evaluation, as well as directions including jailbreak attacks and defenses, adversarial robustness, backdoor learning, machine unlearning, hallucination correction, detection of AI-generated content, misinformation detection, watermarking of AI-generated content, federated learning and privacy, bias and fairness, and explainability and transparency. The submission deadline is June 30, 2025.

🛡️ The rapid development of generative AI has exposed security vulnerabilities and ethical problems, including risks of misuse, deepfakes, phishing scams, and data privacy breaches, posing potential threats to individuals, organizations, and even nations.

🔑 This call for papers focuses on security and ethical considerations in generative AI systems, with particular emphasis on large language models and multimodal large language models, and encourages submissions on the safety, privacy, and ethical implications of models, data, content, learning, and evaluation.

🔬 The scope is broad, covering frontier directions such as reliability, trustworthiness, security, adversarial robustness, backdoor learning, machine unlearning, hallucination correction, detection of AI-generated content, misinformation detection, watermarking, federated learning and privacy protection, bias and fairness, and explainability and transparency.

📅 The submission deadline is June 30, 2025. The submission site is open; when submitting, select the "Special Issue on Security and Ethics of Generative Artificial Intelligence" in the system.

Machine Intelligence Research

The MIR special issue "Special Issue on Security and Ethics of Generative Artificial Intelligence" is now soliciting original manuscripts; the submission deadline is June 30, 2025. Contributions are welcome!

Call for Papers

Special Issue on

Security and Ethics of Generative Artificial Intelligence

About the Special Issue

Generative Artificial Intelligence (Generative AI), powered by Large Language Models (LLMs) and Multimodal Large Language Models (MLLMs), is rapidly reshaping the landscape of artificial intelligence. State-of-the-art models such as GPT-4, DeepSeek, Claude, and DALL-E 3 demonstrate significant progress in generative capabilities, enabling breakthroughs in creative content synthesis, logical inference, automated decision-making, and domain-specific applications. However, the accelerated deployment of these systems has also exposed critical security vulnerabilities and ethical concerns, including the risks of misuse, deepfakes, phishing scams, data privacy breaches, and threats to model security, raising serious concerns among individuals, organizations, communities, and even nations. As Generative AI continues to evolve and integrate into various applications and sectors, the need for robust mechanisms to ensure the safety, trustworthiness, and ethical use of generative models has become increasingly urgent. This Special Issue is dedicated to exploring the latest technical advancements that enhance the security, reliability, and ethical deployment of Generative AI technologies.

Scope

MIR is pleased to announce this Special Issue, which focuses on all aspects of security and ethical considerations within generative artificial intelligence systems, with special emphasis on LLMs and MLLMs. We invite original research exploring the challenges related to the safety, privacy, and ethical implications of models, data, content, learning, and evaluation in generative models. We also welcome survey, dataset, and benchmark papers in these areas. Specifically, we encourage scholars from all disciplines to submit contributions on security and ethics topics including, but not limited to:

1) Reliability, Trustworthiness, and Security of Generative AI;

2) Jailbreak Attacks and Defenses;

3) Adversarial Robustness;

4) Backdoor Learning;

5) Machine Unlearning;

6) Hallucination Correction;

7) Detection of AI-Generated Content;

8) Detection of Misinformation and Deepfakes;

9) Watermarking AI-Generated Content and Fingerprinting Models (a toy illustration follows this list);

10) Federated Learning and Privacy;

11) Bias and Fairness;

12) Explainability and Transparency;

13) New evaluation datasets and performance benchmarks.
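
To give topic 9 a concrete flavor, the sketch below illustrates the statistical test behind one well-known text-watermarking idea: the generator biases sampling toward a pseudo-random "green list" of tokens, and the detector checks whether green tokens are over-represented via a z-score. This is a minimal, hypothetical Python sketch; the hash-based green-list rule, the GAMMA value, and all function names are illustrative assumptions, not a method prescribed by MIR or this Special Issue.

```python
# Toy sketch of detecting a "green-list" text watermark, in the spirit of
# Kirchenbauer et al. (2023). Everything here -- the hash-based green-list
# rule, GAMMA, and the function names -- is an illustrative assumption.
import hashlib
import math

GAMMA = 0.5  # assumed fraction of the vocabulary marked "green" at each step


def is_green(prev_token: str, token: str) -> bool:
    """Pseudo-randomly assign `token` to the green list, seeded by `prev_token`."""
    digest = hashlib.sha256(f"{prev_token}|{token}".encode()).digest()
    return digest[0] < 256 * GAMMA


def watermark_z_score(tokens: list[str]) -> float:
    """Z-score of the observed green-token count against the human-text null."""
    n = len(tokens) - 1  # number of (previous token, token) transitions
    hits = sum(is_green(prev, tok) for prev, tok in zip(tokens, tokens[1:]))
    return (hits - GAMMA * n) / math.sqrt(n * GAMMA * (1 - GAMMA))


if __name__ == "__main__":
    sample = "the quick brown fox jumps over the lazy dog".split()
    # A watermarking sampler would have biased generation toward green tokens,
    # so a large positive z-score suggests AI-generated (watermarked) text.
    print(f"z = {watermark_z_score(sample):.2f}")
```

Under the null hypothesis of unwatermarked text, each transition lands on the green list with probability GAMMA, so the z-score is approximately standard normal; a conservative threshold (e.g., z > 4) keeps the false-positive rate on human text very low.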

Submission Guidelines

1) Submission deadline: June 30, 2025

2) Submission site (now open):

https://mc03.manuscriptcentral.com/mir

When submitting, please select in the system:

Step 6 "Details & Comments": Special Issue and Special Section --- "Special Issue on Security and Ethics of Generative Artificial Intelligence".

3) Submission and peer-review guidelines:

Full-length manuscripts and peer review will follow the MIR guidelines. For details: https://www.springer.com/journal/11633

Please address inquiries to mir@ia.ac.cn.

Guest Editors

Prof. Jing Dong, Institute of Automation, Chinese Academy of Sciences, China.

E-mail: jdong@nlpr.ia.ac.cn

Assoc. Prof. Matteo Ferrara, University of Bologna, Italy.

E-mail: matteo.ferrara@unibo.it

Prof. Ran He, University of Chinese Academy of Sciences, China.

E-mail: rhe@nlpr.ia.ac.cn

Prof. Dacheng Tao, Nanyang Technological University, Singapore.

E-mail: dacheng.tao@gmail.com

Prof. Philip H. S. Torr, University of Oxford, UK.

E-mail: philip.torr@eng.ox.ac.uk

Prof. Rama Chellappa, Johns Hopkins University, USA.

E-mail: rchella4@jhu.edu
