AIhub, 25 March, 15:59
Interview with Lea Demelius: Researching differential privacy

This article introduces the research of Lea Demelius, a PhD student at the University of Technology Graz in Austria, which focuses on differential privacy with the aim of protecting privacy in data analysis and machine learning. Demelius's work explores the trade-offs and synergies between the requirements of trustworthy AI, in particular privacy and fairness. She is committed to advancing responsible machine learning models and studies the implications of applying differential privacy in real-world systems. The article also covers Demelius's research progress, her future plans, her views on the AI field, and her advice for anyone considering a PhD in AI.

💡 Lea Demelius is a PhD student at the University of Technology Graz; her research focuses on differential privacy, widely regarded as the state of the art for protecting privacy in data analysis and machine learning.

📚 Demelius's research examines the trade-offs and synergies between privacy and fairness in trustworthy AI, with the aim of promoting the adoption of responsible machine learning models.

🔍 Her work includes a literature analysis of recent advances in differential privacy for deep learning, as well as research on protecting data during computation, covering differential privacy alongside technologies such as homomorphic encryption, multi-party computation, and federated learning.

⚖️ Demelius also studies the impact of differential privacy on model accuracy and the trade-off between privacy and fairness, in particular how the choice of metrics and hyperparameters affects that trade-off.

🎤 Demelius plans to continue exploring the trade-offs between (differential) privacy, utility, and fairness, and to study practical attacks on sensitive data and machine learning models in order to reveal new vulnerabilities and find solutions.

In this interview series, we’re meeting some of the AAAI/SIGAI Doctoral Consortium participants to find out more about their research. The Doctoral Consortium provides an opportunity for a group of PhD students to discuss and explore their research interests and career objectives in an interdisciplinary workshop together with a panel of established researchers. In the latest interview, we hear from Lea Demelius, who is researching differential privacy.

Tell us a bit about your PhD – where are you studying, and what is the topic of your research?

I am studying at the University of Technology Graz in Austria. My research focuses on differential privacy, which is widely regarded as the state-of-the-art for protecting privacy in data analysis and machine learning. I investigate the trade-offs and synergies that arise between various requirements for trustworthy AI – in particular privacy and fairness, with the goal of advancing the adoption of responsible machine learning models and shedding light on the practical and societal implications of integrating differential privacy into real-world systems.
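The interview does not define differential privacy itself, so as a brief, editorial illustration (not part of the interview): the classic Laplace mechanism releases a query answer with noise calibrated to the query's sensitivity and a privacy budget ε, where smaller ε means stronger privacy but a noisier, less useful answer. A minimal sketch, assuming a simple counting query with sensitivity 1:

```python
import numpy as np

def laplace_mechanism(true_value, sensitivity, epsilon, rng):
    """Release true_value with epsilon-differential privacy by adding
    Laplace noise with scale sensitivity / epsilon."""
    scale = sensitivity / epsilon
    return true_value + rng.laplace(loc=0.0, scale=scale)

rng = np.random.default_rng(0)
true_count = 1000  # e.g. number of records matching some query
# Smaller epsilon -> stronger privacy -> more noise (lower utility).
for eps in (0.1, 1.0, 10.0):
    noisy = laplace_mechanism(true_count, sensitivity=1, epsilon=eps, rng=rng)
    print(f"epsilon={eps}: noisy count = {noisy:.1f}")
```

The loop makes the privacy-utility trade-off Demelius mentions directly visible: at ε = 0.1 the released count can be off by tens, while at ε = 10 it is close to the true value.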

Could you give us an overview of the research you’ve carried out so far during your PhD?

At the beginning of my PhD, I conducted an extensive literature analysis on recent advances in differential privacy in deep learning. After a long and thorough review process, the results are now published in ACM Computing Surveys. I also worked on privacy and security in AI in general, focusing on data protection during computations, which includes not only differential privacy but also other privacy-enhancing technologies such as homomorphic encryption, multi-party computation, and federated learning.

Over the course of my literature analysis, I came across an intriguing line of research showing that differential privacy has a disparate impact on model accuracy. While the trade-off between privacy and utility – such as overall accuracy – is a well-recognized challenge, these studies show that boosting privacy protections in machine learning models with differential privacy impacts certain sub-groups disproportionately, raising a significant fairness concern. This trade-off between privacy and fairness is far less understood, so I decided to address some open questions I identified. In particular, I examined how the choice of a) metrics and b) hyperparameters affects the trade-off.
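The hyperparameters in question can be made concrete with DP-SGD, the standard algorithm for training deep models with differential privacy: each example's gradient is clipped to a maximum norm, and Gaussian noise scaled by a noise multiplier is added to the averaged gradient. Both knobs trade privacy against utility, and, per the disparate-impact studies mentioned above, can affect sub-groups unevenly. A minimal numpy sketch of the gradient-privatization step (the function name and exact noise scaling here are illustrative, not taken from the interview):

```python
import numpy as np

def privatize_gradients(per_example_grads, clip_norm, noise_multiplier, rng):
    """Core of one DP-SGD step: clip each example's gradient to at most
    clip_norm, average the clipped gradients, then add Gaussian noise with
    standard deviation noise_multiplier * clip_norm / batch_size."""
    clipped = [g * min(1.0, clip_norm / max(np.linalg.norm(g), 1e-12))
               for g in per_example_grads]
    mean_grad = np.mean(clipped, axis=0)
    noise = rng.normal(0.0,
                       noise_multiplier * clip_norm / len(per_example_grads),
                       size=mean_grad.shape)
    return mean_grad + noise
```

Clipping bounds any single example's influence on the update (the "sensitivity"), while the noise multiplier determines how much of the privacy budget each step spends; examples with atypically large gradients, often those from under-represented groups, lose the most signal to clipping, which is one intuition for the disparate impact.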

Is there an aspect of your research that has been particularly interesting?

I love that my research challenges me to think in a highly technical and logical way – differential privacy, after all, is a rigorous mathematical framework – while also keeping the broader socio-technological context in mind, considering both the societal implications of technology and society's expectations around privacy and fairness.

What are your plans for building on your research so far during the PhD – what aspects will you be investigating next?

I plan to continue exploring the trade-offs between (differential) privacy, utility and fairness. Additionally, I am interested in practical attacks on sensitive data and machine learning models, as they can both inform our decisions on how to balance these trade-offs and reveal new vulnerabilities that require novel solutions. In the long term, I might broaden my research to other aspects of trustworthy AI, such as explainability or robustness, where there are also interesting trade-offs and potential synergies to investigate.

What made you want to study AI?

At first, my motivation was primarily based on fascination with the novel technologies and opportunities AI brought along. I was eager to understand how machine learning models work and how they can be improved. But I quickly realized that for my PhD, that would not be enough for me. I want to work on something that not only fascinates me, but also benefits society. Given the widespread use of AI models today, I believe it is crucial to develop technical solutions that enhance the trustworthiness of AI, such as improving privacy and fairness, but also robustness and explainability. With my research, I aim to contribute to the adoption of machine learning systems that are more aligned with ethical principles and societal needs.

What advice would you give to someone thinking of doing a PhD in the field?

Your advisor and team are key. Make sure that you can work well together: keeping up in a field as fast-paced as AI is so much easier as a team. And keep in mind that every PhD journey is different. There are so many influencing factors, both in and out of your control, so avoid comparing your journey too much with those of others.

Could you tell us an interesting (non-AI related) fact about you?

I love singing, especially with others. I founded an a cappella ensemble when I moved to Graz and I am part of the Styrian youth choir. My family also loves to sing whenever we all get together.

About Lea

Lea Demelius is a PhD student at the University of Technology, Graz (Austria) and the Know Center Research GmbH. Her research revolves around responsible AI, with a focus on differential privacy and fairness. She is particularly interested in the trade-offs and synergies that arise between various requirements for trustworthy AI, as well as the practical and societal implications of integrating differential privacy into real-world systems.
