AI Safety as a YC Startup

Published on January 8, 2025 10:46 AM GMT

A while back I gave a talk about doing AI safety as a YC startup. I wrote a blog post about it and thought it would be interesting to share it with both the YC and AI safety communities. Please share any feedback or thoughts. I would love to hear them!

AI Safety is a problem and people pay to solve problems

Intelligence is dangerous, and I think there's a significant chance that the default scenario of AI progress poses an existential risk to humanity. While it's far from certain, even small probabilities are significant when the stakes are this high. This is an enormous abstract problem, but there are thousands of sub-problems waiting to be solved.

Some of these sub-problems already exist today, but most are in the future (GPT is not capable of killing us yet). When people start feeling these pains, they will pay to have them solved (similar to how anti-virus software became a huge business opportunity once personal computers became widespread in the 1980s). I don't think the solution is to slow down (although I'm not certain), because this also comes with a cost. Therefore, we have to solve these problems. I think it's one of the most interesting challenges of our time, because otherwise we won't get to reap the rewards of AI utopia.

More startups should solve these problems

From my experience, builders (entrepreneurial engineers) are underrepresented in the AI safety community. There are far more researchers and philosophers. They are also crucial, but the mix is not currently balanced (source: personal experience from trying to hire such people). I don't think this should be the case. The AI safety market is currently very small, but according to those who have attempted to forecast its trajectory, it may grow substantially. Since most people remain skeptical of near-term AGI timelines, betting on this growth early could provide a competitive advantage. VCs exist to enable startups to make such long-term bets.

If you're a technical person with a passion for AI safety, it's very tempting to join a big AI lab. However, I think you should start a startup instead. Startups are more fun, and you will have much more counterfactual impact. A friend once told me: “The most impactful people will build something great that wouldn't have happened without them”. I think it's generally harder to do this in the hierarchical structure of AI labs (but not impossible). More on this here.

Y Combinator

I'm a bit biased in this matter. I've been fascinated by startups for many years, and getting into YC straight from college was a dream come true. Like many other YC companies, our first idea didn't work out and we had to pivot. Our pivot was somewhat successful; three weeks later, we worked on something new with revenue that ultimately made fundraising easy (AI safety evals). We didn't find a concrete idea though, instead, we found a really cool customer we could build stuff for. This was largely thanks to intros from the AI safety community. AI safety has its roots in an altruistic movement (Effective Altruism), and you can see that from how helpful people are. This is a real advantage for AI safety startups. Whenever I speak to what would have been called a “competitor” in other industries, we share stuff much more freely because we want the same thing for the world.

Communities are such an incredible thing. I have been lucky to also be part of the YC community, which brought us our second big customer. YC is great for all of the obvious reasons, but in my experience, the community is its strongest asset. The advice is also great, but most of it is publicly available. This advice has become famous over the years; phrases like "Make something people want", "Love your customer", and "Do things that don't scale" are echoed everywhere you go in San Francisco. They are not, however, common in AI safety circles. These phrases arise naturally for startups under market pressure, but they might not be obvious to builders coming from the AI safety community.

YC advice in the context of AI safety

Not all AI safety startup ideas are the same, but there are some characteristics that apply to many of them. Here are some thoughts on how YC advice applies to these characteristics.

The problems they are solving are in the future

As discussed above, current AI systems do not pose an existential threat to humanity. It's therefore very hard to know whether you have "made something people want" while trying to solve this problem, so you have to be creative when following this advice. It can be hard to launch early and iterate. Additionally, the market is very uncertain, and you have to be flexible enough to change your ideas and processes. "Do things that don't scale" is therefore extra important in this setting.

Customers are often researchers from AI labs or from the government

This is obviously not true for all AI safety startups; it is of course possible to contribute to AI safety while serving another customer group. But it is the case for many, and it is for me. We primarily sell to researchers, and this makes my day-to-day very enjoyable. Every customer meeting I have is with someone I would probably enjoy going out for a beer with. "Love your customer" is easy! This makes it much easier for me to put myself in their shoes and understand what they want. Regardless of who your customer is, ask yourself if you like them.

The pool of potential customers is often small

As a result of customers being researchers and government employees, the pool of potential customers is not huge (yes, there are a lot of academic researchers, but they don't have a lot of money). YC's advice is often that having 100 passionate users is better than 1,000,000 average users. It's tempting to conclude that the small customer pool is therefore not a problem, but this advice assumes that you can scale up from those 100 users after you have learned from their feedback. However, another piece of YC advice comes in handy here: "charge more". Most early-stage startups are scared of scaring away customers, but if you've made something people want, customers won't walk away without first attempting to push the price down. They know that you don't know what you're doing when you set the price, and they therefore expect it to be flexible. This is especially true for AI safety ideas where there isn't much competition; if they walk away, they have no alternative.

Doing good

If you have a passion for AI safety, I think ideas in this space could lead to great startup success. But if you don't, there are probably better ideas for maximizing your probability of success. Founders with this passion often also want their startup to have a positive impact on the world. You do this by building something that is net-positive and making it influential. Basically, Impact = Magnitude * Direction. I think most people in the world have a bias toward maximizing Magnitude. This is not to say that people are immoral; I think most people just don't recognize (or heavily underestimate) the potential one's career has to make the world a better place. They recycle and donate to charity, but their career is their biggest opportunity to make a difference in the world.

However, I think there is a group of people who over-optimize for Direction and neglect Magnitude. Increasing Magnitude often comes with the risk of corrupting the Direction. For example, scaling fast often makes it difficult to hire only mission-aligned people, and it requires you to give voting power to investors who prioritize profit. Increasing Magnitude can therefore feel risky: what if I end up working on something that is net-negative for the world? It might therefore be easier for one's personal sanity to optimize for Direction, to do something that is unquestionably net-positive. But this is the easy way out, and if you want the highest expected value of your Impact, you cannot disregard Magnitude. I am not an expert, but my uninformed intuition is that the people with the biggest positive impact on the world have prioritized Magnitude (I would love to hear other opinions on this; what are some examples of people from either side?). Don't forget that you can always use "earn to give" as a fallback.
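The Magnitude vs. Direction trade-off can be sketched as a toy expected-value calculation. This is a minimal illustration with made-up numbers (the function and all parameters are my own hypothetical framing, not anything from the post); the point is only that a much larger Magnitude can dominate even when the Direction carries real risk of going wrong.

```python
def expected_impact(magnitude, p_positive,
                    direction_if_good=1.0, direction_if_bad=-1.0):
    """Expected Impact = Magnitude * E[Direction], where Direction is
    uncertain: the venture turns out net-positive with probability
    p_positive and net-negative otherwise. Toy model, made-up units."""
    expected_direction = (p_positive * direction_if_good
                          + (1 - p_positive) * direction_if_bad)
    return magnitude * expected_direction

# A safe, unquestionably net-positive project with modest reach...
safe = expected_impact(magnitude=10, p_positive=0.99)
# ...versus an ambitious one with a real chance of corrupted Direction.
ambitious = expected_impact(magnitude=1000, p_positive=0.7)

print(safe)       # 10 * (0.99 - 0.01) = 9.8
print(ambitious)  # 1000 * (0.7 - 0.3) = 400.0
```

Under these (entirely invented) numbers, the riskier high-Magnitude bet has far higher expected Impact, which is the argument for not disregarding Magnitude; of course, real careers don't come with known probabilities.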

Thanks to Rudolf Laine and Ollie Jaffe for your feedback!


