Research Without Permission

Published on June 10, 2025 7:33 AM GMT

Epistemic status: Personal account. A reflection on navigating entry into the AI safety space without formal credentials or institutional affiliation. Also, a log of how ideas can evolve through rejection, redirection, and informal collaboration.

---

A few weeks ago, I wrote a long, messy reflection on my Substack on how I landed in the world of AI. It was not through a PhD or a research lab, but through a silent breakdown. Postpartum depression, career implosion, identity confusion and the whole mid-30s existential unravelling shebang. In that void, I stumbled onto LLMs. And something clicked. Not because I suddenly wanted to build models. But because for the first time in a long time, I felt curious again. Alive. Like my brain was waking up.

This is part two of that story. The part where I start applying to roles and fellowships that resonate with the ideas I’ve been thinking and writing about, such as relational alignment, coaching architectures, and distributed cognition.

To be clear, I didn’t get any of them.

The list is mildly ridiculous in retrospect. It includes roles at FAR.AI, Anthropic (model behaviour architect), and OpenAI (human-AI collaboration lead), and fellowships at MILA (to build a value-alignment protocol) and COSMOS (to build truth-seeking AI). I also pitched a few engineering schools on running a 4-6 week studio-style course about trust, intimacy, and AI alignment, rooted in real human dilemmas.

Most replied with a version of “no.” Some just didn't reply at all.

Still, something useful happened.

Each application forced me to think a bit more clearly. What do I really mean by relational alignment? What might a coaching layer look like in a live system? Why does my brain keep returning to architectural metaphors like layers, feedback loops, and distributed modules when I’m talking about human trust?

None of these ideas were “ready,” but each one got sharper in the process. I now have fragments. Patterns. A slow-building confidence that maybe this nonlinear perspective (part behaviour design, part systems thinking, part human mess) actually has a place in the conversation.

That’s not to say the doubts have gone away. They’re always my best friends. I worry I’m not doing anything useful. I worry I should be learning to code. I worry that I’m performing insight, not producing it. I worry that I’m building a theoretical framework for a problem no one’s hiring me to solve. And, quietly, I’ve been carrying an existential question ...

Am I making a complete fool of myself while everyone stands by and politely watches?

Because every time I applied for one of those roles, I’d post the resulting essay or idea on Substack, hoping for feedback. My long-standing community was always encouraging, but rarely additive. No one would push back, challenge a premise, or help me think through it. I couldn’t tell if I was making sense or just sounding like a total wacko.

It’s easy to look at a resume like mine (products at Big Tech, a matchmaking venture, relationship coaching, leading ops for a logistics giant, a skilling platform for the government, teaching future engineers about trust and happiness) and think, “Oh here she goes again, chasing yet another crazy muse” (like my parents think).

Honestly, without critique, it’s hard to know. So then I started posting these ideas in AI communities like this one, and on various Discord and Slack channels. Suddenly, people started to challenge me. Thoughtfully. Specifically. Sometimes critically. And it lit something up in me. I loved it.

It made me want to learn more. To write better. To build stronger. These weren’t friends trying to be nice. These were strangers who didn’t know me and had no reason to protect me; they cared only about my ideas (or the lack of them), and that makes their feedback golden.

I have no reputation here. Nothing to lose. Which means everything is additive. And I’m grateful for it. I just hope I can learn enough to become worthy of some builder’s attention, so I can collaborate, test these frameworks, and co-create something meaningful. Even if that doesn’t happen soon, I’m still enjoying the hell out of this process.

Could I prototype all this solo? Maybe. But I know I’d rather be in dialogue with people who see something in these ideas that they don’t yet know how to build, and are curious enough to try. So for now, I’m treating this as self-directed research, publishing in public, testing ideas through writing and building relationships one thoughtful challenge at a time. 

If any of this sounds familiar, if you’re also stumbling sideways into a field, learning out loud, unsure whether your ideas “count”, you’re not alone. And if you're working on adjacent questions around corrigibility, trust design, or behaviour-shaping architectures, I'd love to talk. Not to pitch. Just to think together.

When I was 16, I made a wildly optimistic life plan that involved running my own engineering firm (inspired by my dad) by the time I turned 39. Today, I’m 38, and instead of overseeing blueprints and buildings, I’m sitting here trying to build a new career from scratch, writing about alignment and system behaviour from my desk at home.

It’s not what I planned. But it feels honest. And today, on my 38th birthday, it feels strangely right to be sharing this here.

