Interconnects
What I've been reading (#1)

 

The author shares recent in-depth reading on artificial intelligence, covering frontier topics including RL, human-AI relationships, language models, and AI criticism. The post highlights standout writing on Substack, including reflections on the path to AGI, the relationship between humans and AI, and the role of language models in AI. It also recommends two new books about OpenAI and shares some quick reads from other areas. Together, these pieces offer a deeper understanding of where AI technology is heading and prompt reflection on its future direction.

🚀 **A standalone links series**: The author is splitting the links posts out of the previous series into a standalone series, to give the content more focus, to share the reasons for reading each piece, and to make the links easier for readers to find and discuss.

💡 **A summary of AI industry trends**: The author recommends Mary Meeker's slide deck, which summarizes industry trends, shows AI's strong performance in usage and revenue growth, and notes that AI is used more widely in practice than people assume.

🧠 **Highlights from AI writing on Substack**: The author spotlights standout Substack posts covering RL, human-AI relationships, language models, and more. These pieces dig into where AI technology is going, rather than focusing only on product launches or industry gossip.

📚 **Book recommendations on OpenAI**: The author recommends two new books about OpenAI and shares impressions of reading them, helping readers better understand how OpenAI operates and Sam Altman's early career.

🔗 **Quick reads from other areas**: The author also shares quick reads from other areas, including a reverse engineering of Claude Code and an analysis of Q-learning, which broaden one's understanding of AI technology and offer a wider perspective.

The placement of links within Interconnects has gone through a few evolutions over the years, and hopefully this next one is a more stable home. I'm moving the links post out of the Artifacts Log series into a standalone, slightly-less-than-monthly series. This'll give the links space to breathe. They're a popular subset of the Artifacts Log posts, based on the click data I get, but in this era of so much content it's best to give you one obvious piece of content per email.

A separate post will also give me a bit more space to tell you why I’m reading them. Comments for these posts will be open, so folks can chime in on what they’re reading too.



This Mary Meeker slide deck did the rounds weeks ago, but it's still the best summary of the industry we've seen in a long time. It shows countless trends that are up and to the right in usage and revenue growth. There is so much evidence that people use AI more than you think, and that the revenue forecasts for AI companies, at least for existing products and not far-out agents, are realistic because these companies are supply-limited on compute (unlike previous software companies).

The core of what I’ve been reading is within the Substack AI content ecosystem. There’s so much great stuff getting published right now telling you where the technology is going — i.e. stuff not focused on noisy product releases or drama. Ones I liked include:

My new colleague's take on the path to AGI with RL. This complements my recent pieces on next-generation reasoning models, the philosophy of reasoning machines, and RL generally. Here's an excerpt:

The development of AlphaGo illustrates this paradox perfectly:

- RL was essentially the only viable approach for achieving superhuman performance in Go.
- The project succeeded but required enormous resources and effort.
- The solution existed in a "narrow passageway": there were likely very few variations of the AlphaGo approach that would have worked, as can be seen from the struggle others have had replicating AlphaGo's success in other domains.

Artificial Fintelligence
Reinforcement learning and general intelligence
A disclaimer: nothing that I say here is representing any organization other than Artificial Fintelligence. These are my views, and mine alone, although I hope that you share them after reading…
Read more

Another one is from an author I consistently recommend, who leads model behavior at OpenAI. This post is on human-AI relationships, which is somewhat far out, but it reinforces how model behavior is going to get easier to steer as models get stronger, which opens up very different problems than those we face today, when LMArena-maxing models is our biggest problem.

Reservoir Samples
Some thoughts on human-AI relationships
// I lead model behavior & policy at OpenAI…
Read more

In some ways this feels like rounding up who among the top tiers of AI circles wrote in public, but in reality plenty of the pieces from these authors wouldn’t make the cut.

The next one, from an author of UC Berkeley and Physical Intelligence fame, provides an interesting provocation on why language as the foundation of AI may have been special: essentially, it lets models operate in a token space that supports thinking. The post answers the question: if we have so much more video data than text, why are text-based models better at understanding the world than models that just predict the next pixel?

Here’s an excerpt:

Unfortunately, things didn’t pan out the way that video prediction researchers had expected. Although we now have models that can generate remarkably realistic video based on user requests, if we want models that solve complex problems, perform intricate reasoning, and make subtle inferences, language models are still the main and only option.

Learning and Control
Language Models in Plato's Cave
From its original inception, the study of artificial intelligence has been intertwined with the quest to understand human intelligence. Predicated on the notion that the mind is fundamentally computational – that is, it can be modeled as an algorithmic framework independently of its underlying computational substrate or “hardware,” – AI researchers have…
Read more

And, of course, we have an author helping carry the torch as one of the few remaining people who are very optimistic about the potential of AI as a technology while still calling out the industry's crazy moves. Too many of these critiques come from people who are also critical of the technology itself, so those in power dismiss their voices for never being reasonable.

Rising Tide
Building supercomputers for autocrats probably isn’t good for democracy, actually
In early May, OpenAI announced OpenAI for Countries. Referencing their Stargate effort to build massive AI data centers in Texas and elsewhere in the United States, they wrote…
Read more

Below is a post, from a newer author I found, expanding on the problem of people making nonsensical critiques of AI. I'd describe it as an antidote to the Gary Marcus disease.

Learning From Examples
Academics are kidding themselves about AI
Last week I wrote about reasoning models. I argued that — despite some recent flawed work on the subject — they have some curious limitations, and outlined a rough sense of where I expect developers to go in the future based on those shortcomings…
Read more

The last of the AI posts on Substack you need to read is the Latent Space interview with Noam Brown. Simply a great episode recapping recent topics.

Latent.Space
Scaling Test Time Compute to Multi-Agent Civilizations: Noam Brown
Every breakthrough in AI has had a leading champion who identified and evangelized a core scaling law — Moore’s Law gave way to Huang’s Law (silicon), Kaplan et al gave way to Hoffman et al (data), A…
Read more

On the long-form side, I’ve been reading the two books that came out recently about OpenAI — Empire of AI and The Optimist. They do very different things. Empire of AI is better overall for learning about how OpenAI and the Valley operate, but some sections I skipped over as a bit too polemic for me. The Optimist has some nice history on Sam Altman’s origins and his time at Y Combinator, but the OpenAI stuff was much lighter, so I didn’t finish it and am happily starting Apple in China.


From here, let’s go to some quick hits:

And to end this issue, for the Substack-service-level nerds like me: you can read this post on how so many mainstream voices are now joining Substack. This actually lifts all boats by bringing more readers onto the platform.

The Honest Broker
Substack Has Changed in the Last 30 Days
Everything happens so quickly at Substack. And in just the last few days, something big has changed…
Read more
