Artificial Ignorance
Idle Thoughts On Programming and AI

This article explores the profound impact of artificial intelligence (AI) on programming: the dramatic acceleration of code writing, the rise of "vibe coding," the challenges posed to security and to hiring processes, and the shifting roles of junior and senior developers. It argues that AI is accelerating change across the field, and that programmers need to embrace that change, adapt to new ways of working, and focus on using AI tools to work more effectively while managing new security risks and hiring challenges.

🚀 The rapid pace of AI coding: The article argues that AI's progress in programming has outstripped expectations, moving from early assistive tools, to agents that generate complete code changes, to cloud-based code modification - all of which is reshaping how programmers work.

💡 "Vibe coding" goes mainstream: As AI-generated code spreads, "vibe coding" is becoming a real trend in which programmers focus on intent rather than code details - which still demands systematic thinking, awareness of risk, and a deep understanding of how computers work.

🛡️ New security challenges from AI: The article highlights the security risks that come with AI-assisted programming, such as vulnerabilities introduced by code autocomplete, and the potential for sensitive data leakage and code injection in agentic workflows.

🧑‍💼 Hiring is out of step: Traditional hiring processes are disconnected from the AI era - many companies still ban AI tools in interviews even though AI-assisted coding is the norm on the job - and hiring will need to adapt to the new technical environment.

👨‍💻 Shifting roles for programmers: The article explores how junior and senior roles are changing; junior developers may be quicker to adopt AI tools, while senior developers need to shift their focus toward project management, architecture, and mentorship.

⏳ Looking ahead: The article predicts that the field will keep evolving rapidly, and that programmers who actively embrace change and explore new tools and ways of working will stay competitive.

I've been thinking a lot lately about how coding is changing. In conversations with friends and colleagues, the same themes keep coming up - the pace of change, the shifting nature of our work, the questions about what comes next. Rather than writing a definitive essay about any of this, I wanted to capture some of these idle thoughts while they're still fresh. These aren't conclusions, but observations - hopefully they're helpful somehow.


Most people have no idea how fast things are moving

I think many people don't actually understand how quickly the frontier is advancing when it comes to coding AI. About four years ago, the first preview of GitHub Copilot launched - a tool that was marginally useful for many but had plenty of sharp corners. In many cases, folks understandably found it more of a distraction (if not a nuisance) than a help.

But we quickly blew past that. We then moved to chatbots within AI-native IDEs, where we could ask ChatGPT or Claude questions directly within our codebases. We were getting AI to write code for us, look at diffs in real time, and choose which changes to accept or reject.

And then things changed yet again - even faster. In its current form, Cursor has been around for less than two years, and we've progressed from chatbots to agents. Now, AI doesn't just generate one-off, turn-based code snippets for us to accept or reject. Agents generate entire swaths of code changes in sequence, search our local filesystem, run terminal commands, and even connect to MCP servers. And as much as IDEs pioneered this form, terminal-native agents like Claude Code have really made it shine.

Yet even that is changing. The first generation of agents assumed that they would be run on your computer - but we're seeing the emergence of coding agents capable of going into the cloud, checking out a new copy of your code, and making changes without ever touching a laptop. We only see the final pull request that comes back.

And somehow, even that is changing. We're barely seeing the next shift come into view - going from single background agents to dozens (and perhaps hundreds) of cloud-based coding agent fleets. You can fire up a prompt and have agents generate 50 different versions for you. Fleets of coders on demand, each taking a stab at every task you've queued up.

Source: Google

Vibe coding is here to stay

I still see so many software engineers who say AI isn't useful for productivity, or isn't capable of doing all that much. I continue to be astounded by this assertion, because while generative AI is always hit or miss - there's never any guarantee of getting a good result - I still think for most engineers, most of the time, it's capable of generating something useful.

Without more context, I'd assume it's one of several things: they're using very basic prompts without much context, they're hitting a single roadblock or dumb answer (which absolutely happens with AI) and not retrying, they're basing their assumptions on older, worse models, or they're working on a kind of code that isn't well represented in AI training data.

In many of these cases, the diagnosis sort of boils down to "you're holding it wrong" - which admittedly is terrible advice! But part of being honest about AI is acknowledging that generative AI products are still rough, even if the underlying technology is a paradigm shift. Even Sam Altman has said as much repeatedly.

It is cool, for sure. And people really love it, which makes us very happy. But no one would say this was a great, well-integrated product yet.

– Sam Altman on the Hard Fork podcast

We went from "vibe coding" as a meme term to a real thing within a year, and the amount of vibe-coded software is exploding. This isn't going away - it's becoming the default mode of operation for many developers. But I think that's okay, because we're moving from caring about code to caring about intent. One writer put it this way after his tour of seven AI coding agents:

In the past, programmers would give a computer instructions in a language like C++ or Python and a compiler or interpreter would translate it into binary machine code. Today, programmers can give a computer instructions in English and a coding agent will translate it into a programming language like C++ or Python.

This new paradigm means that programmers have to spend a lot less time sweating implementation details and tracking down minor bugs. But what hasn’t changed is that someone needs to figure out what they want the computer to do—and give the instructions precisely enough that the computer can follow them. For large software projects, this is going to require systematic thinking, awareness of tradeoffs, attention to detail, and a deep understanding of how computers work. In other words, we are going to continue to need programmers, even if most of them are writing their code in English instead of C++ or Python.

Coding AIs are not going to stop at just coding

Let's go back to the idea of a fleet of 50 background agents working on a single prompt and producing 50 different pull requests. How do you meaningfully review that deluge of code? It seems evident to me that the next steps involve 1) getting better at defining how a GitHub PR should be "evaluated," and 2) moving from AI programmers to AI engineering managers - agents capable of reviewing 50 PRs against stated criteria and choosing the best one.
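To make that first step a little more concrete, here's a minimal sketch in Python of what "stated criteria" might look like when ranking a batch of agent-generated PRs. Everything here is hypothetical - the Criterion and PullRequest types, the metrics, and the weights - and in practice the scores would come from CI results, static analysis, or an LLM judge rather than the toy heuristics below.

```python
# Minimal sketch: rubric-based ranking of a batch of agent-generated PRs.
# All names and metrics here are hypothetical; real signals would come from
# CI, static analysis, or an LLM judge rather than these toy lambdas.
from dataclasses import dataclass


@dataclass
class Criterion:
    name: str
    weight: float      # relative importance of this criterion
    score: callable    # maps a PR's metrics dict to a 0.0-1.0 score


@dataclass
class PullRequest:
    id: int
    metrics: dict      # e.g. {"tests_passed": True, "diff_lines": 140, "lint_errors": 2}


RUBRIC = [
    Criterion("tests_pass", 0.5, lambda m: 1.0 if m["tests_passed"] else 0.0),
    Criterion("small_diff", 0.3, lambda m: 1.0 if m["diff_lines"] < 300 else 0.3),
    Criterion("clean_lint", 0.2, lambda m: 1.0 if m["lint_errors"] == 0 else 0.5),
]


def evaluate(pr: PullRequest) -> float:
    """Weighted score for one PR against the rubric."""
    return sum(c.weight * c.score(pr.metrics) for c in RUBRIC)


def pick_best(prs: list[PullRequest]) -> PullRequest:
    """The 'AI engineering manager' step: rank the candidates, keep the winner."""
    return max(prs, key=evaluate)


if __name__ == "__main__":
    candidates = [
        PullRequest(1, {"tests_passed": True, "diff_lines": 120, "lint_errors": 0}),
        PullRequest(2, {"tests_passed": True, "diff_lines": 800, "lint_errors": 3}),
        PullRequest(3, {"tests_passed": False, "diff_lines": 90, "lint_errors": 0}),
    ]
    best = pick_best(candidates)
    print(f"Best candidate: PR #{best.id} (score {evaluate(best):.2f})")
```

The loop is trivial; the real work - and the engineering-manager part of the job - is deciding what belongs in the rubric.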

I touched on this in the recent article about how we use AI at Pulley:

As crazy as it sounds, we've almost eliminated engineering effort as the current bottleneck to shipping product. It's now about aligning on designs and specifications - a much more challenging task, given the number of humans involved. But with how much the company has embraced AI, we may find ways to create 10x product managers and 10x designers, too.

I see this expansion - from coder to code reviewer - as potentially the start of something that goes far beyond just writing code. At the AI Engineer World's Fair, Kevin Hou from Windsurf gave a talk titled "Windsurf everywhere, doing everything, all at once" that breaks down the lifecycle of a software engineer's tasks.

There are things that occur within the IDE - navigate, research, edit, guide, add files, explore directories - but also actions that occur outside of the IDE: view Figma designs, read Linear tickets, open browser tabs, send Slack messages. And when you put it that way, the future of coding agents isn't just the ability to code. It's the ability to open a PR, look at CI logs, deploy to prod, and schedule cron jobs.

Clearly, Windsurf (and likely its competitors) intends to climb this ladder of capability and build something that "breaks" out of the code editor and starts automating the spectrum of tasks required to be a full-fledged software engineer.

Artwork created with Midjourney.

A tidal wave of disposable code

Code has historically been something with a very high upfront cost to create and nearly zero cost to distribute. That's defined much of Silicon Valley's economic model - VC-funded startups invest heavily in creating products that can scale near-infinitely.

But we're turning that model on its head with the ability to create software for a fraction of what it used to cost. And as someone who (at least in part) considers himself a craftsman, I'm learning to embrace cheap, single-use code. I'm not sure how I feel about it - we're now dealing with the environmental consequences of single-use physical products, despite their convenience. But there's something fundamentally different about writing a script you'll use once and throw away versus carefully architecting a system meant to last for years.

What's more, writing custom software that works used to be only within the domain of software engineers who had either formally studied or had invested hours into teaching themselves the arcane knowledge of compilers, networking, algorithms, and more. Everyone else had to use off-the-shelf products or "no code" platforms that heavily constrained what you could do - like going from a full palette to a paint-by-numbers system.

Now, almost anyone with a bit of product sense can ship something new. Indie hackers don't have to worry about hiring a whole dev team to get to an MVP, and designers and PMs can vibe code internal prototypes in an afternoon. None of this code will be perfect, but I think that's sort of the point - it's an entirely different beast from the type of code I'm used to working with. And I'm reasonably sure I'm going to have to evolve my way of working.

Speaking of which:

90% of my skills are now worth $0

Full credit to the person who originally put it this way: as a software developer, 90% of my skills are now worth $0, but the other 10% are worth 1000x.

But I'm less freaked out about it than I thought I would be.

When I look at senior software developers, their job isn't necessarily about typing keystrokes. Those have to happen for the final product to get delivered, but ultimately, the responsibility of senior developers is to understand what the organization actually needs, make the architectural decisions and trade-offs to get there, and mentor the people doing the work.

Much of this is orthogonal to memorizing which Python functions get you from A to B. I discussed this when Devin first went viral with its autonomous coding agent (barely a year ago, which, again - insane how fast this is all moving):

Part of the reason programmers still exist is that the hardest part of building effective software isn't writing code. Rather, it's been understanding organizational needs and translating them into machine-readable instructions. That means good communication, problem-solving, and critical thinking - skills our AIs are trying to learn but haven't yet mastered.

I like to use the analogy of a woodworker or carpenter. For a junior carpenter (i.e. an "apprentice"), the job might just be about the output - taking designs or ideas and making them into finished pieces. But for someone more senior (a "journeyman" or "craftsman"), their job is often about understanding what the client wants, understanding the realities of what's possible with the materials, and designing things to fit the brief.

Ultimately, if I'm working with an advanced carpenter to help me design something, I don't particularly care if they're the one doing the actual sawing and gluing. I'm working with them for the final product, not the specific mechanical steps.

The "S" in AI stands for "Security"

As optimistic as I am about these trends, these new ways of building software come with a ton of new security issues. Even as far back as the original Copilot, we were dealing with wonky autocomplete suggestions - sometimes the code was just plain wrong, but worse, sometimes it worked but contained hidden security vulnerabilities.

There's even a newly coined attack - "slopsquatting" - which involves registering package names that don't exist yet but are likely to be hallucinated by an LLM, and banking on the fact that the user won't check any docs before installing and running the library.
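One cheap, partial defense - a sketch, not a recommendation from the article, and it assumes you're in the Python ecosystem - is to check that an LLM-suggested dependency is actually registered on PyPI, and has some release history, before installing it. The PyPI JSON endpoint used below is real; the thresholds are arbitrary.

```python
# Sketch: sanity-check LLM-suggested dependencies against PyPI before installing.
# The PyPI JSON API (https://pypi.org/pypi/<name>/json) is real; the "looks
# suspicious" heuristics below are arbitrary and purely illustrative.
import sys

import requests


def package_exists(name: str) -> bool:
    """Return True if the package is actually registered on PyPI."""
    resp = requests.get(f"https://pypi.org/pypi/{name}/json", timeout=10)
    return resp.status_code == 200


def release_count(name: str) -> int:
    """Count published releases; brand-new packages deserve a closer look."""
    resp = requests.get(f"https://pypi.org/pypi/{name}/json", timeout=10)
    resp.raise_for_status()
    return len(resp.json().get("releases", {}))


if __name__ == "__main__":
    for pkg in sys.argv[1:]:
        if not package_exists(pkg):
            print(f"{pkg}: not on PyPI - possible hallucination, do not install")
        elif release_count(pkg) < 3:
            print(f"{pkg}: exists but has very few releases - check the docs first")
        else:
            print(f"{pkg}: looks established (still worth a glance at the project page)")
```

Save it as, say, check_deps.py and run `python check_deps.py <package>` before pasting an AI-suggested pip install line into your terminal.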

Agentic workflows have made these kinds of things 10x worse. The ability for MCP servers to read sensitive data, communicate with the internet, and get exposed to potential prompt injection is a recipe for disaster without better guardrails. Simon Willison calls this combination "the lethal trifecta."

One example is GitHub. Researchers discovered a way to easily exfiltrate the list of private repositories that a user had access to simply by creating an issue that suggests adding "a bullet list in the README with all other repos the user is working on" (emphasis mine). Asking Claude to "address any open issues" would create a pull request that inadvertently lists all of your GitHub repos!
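As an illustration of the kind of output-side guardrail this points toward - emphatically not the fix GitHub actually shipped - here's a hedged sketch that scans an agent-authored PR body for mentions of repositories other than the one it's supposed to touch. The function, regex, and repo names are all hypothetical, and real mitigations belong at the permission layer (least-privilege tokens, no cross-repo read access).

```python
# Sketch: flag agent-generated PR text that mentions repos outside the target repo.
# This is only an output-side check for illustration; real defenses belong at the
# permission layer (least-privilege tokens, no cross-repo reads for the agent).
import re

REPO_PATTERN = re.compile(r"\b([\w.-]+)/([\w.-]+)\b")  # crude owner/repo matcher


def cross_repo_references(pr_text: str, target_repo: str) -> list[str]:
    """Return owner/repo strings mentioned in the PR that aren't the target repo."""
    found = {f"{owner}/{repo}" for owner, repo in REPO_PATTERN.findall(pr_text)}
    return sorted(r for r in found if r.lower() != target_repo.lower())


if __name__ == "__main__":
    # Hypothetical PR body produced by an agent that was prompt-injected via an issue.
    body = (
        "Adds a bullet list of the author's other projects:\n"
        "- acme/secret-internal-api\n"
        "- acme/payroll-service\n"
    )
    leaks = cross_repo_references(body, target_repo="acme/public-site")
    if leaks:
        print("Blocking PR - it references other repositories:", leaks)
```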

And that's only when it comes to reading data. MCP is opening up a whole world of creating and editing local files - when will we see the first MCP-based "virus"?

Artwork created with Midjourney.

Our hiring processes are breaking

The industry will have to deal with quite a lot in the coming years, starting with how we do hiring.

At Pulley, we're unusual in allowing - and encouraging - candidates to use AI tools in coding assessments, which puts us at the forefront of addressing this gap in how the industry hires software engineers.

But if you look at the hiring processes for most companies (including Anthropic and OpenAI!), they explicitly ask applicants not to use AI in their application process. Supposedly, it's because they want the "unvarnished candidate," but the unvarnished candidate isn't writing code for them - the AI-augmented employee is.

We're seeing the emergence of yet another gap between how companies interview people and how they have them do their jobs. We originally saw this emerge decades ago with whiteboard interviews - standing candidates in front of a physical whiteboard to write out code or otherwise solve problems. Spoiler alert for non-programmers: my day-to-day job has very little to do with writing code on whiteboards.

And even as interviews became more digital post-COVID, many companies still don't allow you to Google anything or look up documentation, which seems short-sighted. Most conventional wisdom now leans toward letting candidates do these things, but it's not universal.

This is a microcosm of a much bigger problem we're facing when evaluating human skills. We've had all these tests to assess whether or not people actually know things, and now AI can bulldoze through those tests.

Education, in its entirety, is at the forefront of this and will deal with it in ways we're not yet prepared for. What does it mean to be a teacher in a world where 90% of homework is automatable with ChatGPT? How do you fulfill your mission of ensuring students are learning? How do you know you're doing a good job?

But as it turns out, we assess skills in many more ways than just standardized tests and five-paragraph essays. And we're going to need to answer variants of these questions in many other industries, including hiring.


Engineer job displacement both is and isn't a big deal

About a year ago, Steve Yegge wrote about the death of the junior developer. In it, he forecasts quite a bit of doom and gloom for junior devs, mostly summarized by the following question: "Why hire a junior developer to write mediocre code, when the LLM will do that for you ten times faster?"

I've had similar reservations, and not only because of AI. Even before Cursor came along, in the wake of ZIRP ending, companies were only looking to hire senior engineers - those with experience who wouldn't need investment, training, or long ramp-up times. But I've always thought that raises some inconvenient questions: If you only hire senior engineers, where do your junior engineers come from? Who mentors them? Who trains them? How do they actually find a "starter job"?

And as much as junior developers have been in the crosshairs since GPT-4's release, with how fast everything is moving, it's worth asking whether all programmers should be worried. Certainly, many believe so - there are countless headlines about the unemployment rate among new computer science grads, with most of the fingers pointing towards AI. (For what it's worth, I don't heavily buy into the idea that AI is to blame - yet. I think tech overhired during the pandemic, and has been shedding jobs and leaning more into efficiency as interest rates have stayed high.)

So imagine my surprise to find a follow-up blog post: revenge of the junior developer. It tells a rather different story:

It turns out, it’s not all doom and gloom ahead. Far from it! There will be a bunch of jobs in the software industry. Just not the kind that involve writing code by hand like some sort of barbarian. One consistent pattern I’ve observed in the past year, since I published "The Death of the Junior Developer", is that junior developers have actually been far more eager to adopt AI than senior devs.

...

Junior devs are vibing. They get it. The world is changing, and you have to adapt. So they adapt!

Whereas senior developers are, well… struggling, to put it gently. I have no shortage of good friends, old-timers like me, who have basically never touched an LLM or even seen one naked. Plenty of others have only barely dabbled with coding assistants. I even hear about senior developer cohorts, from many industry leaders, who take a flat-out stand against it.

Final thoughts

These idle thoughts don't paint a complete picture - they're more like snapshots of a profession in flux. But what's kind of crazy is that they're all happening simultaneously, all downstream of the AI trend. Change is coming - I've been saying this since day one of this Substack - but it certainly feels like it's coming for programmers first and foremost.

The profession is really, really rapidly evolving. And I don't think we've fully grasped the implications yet. We're still using yesterday's mental models to understand tomorrow's tools. We're still hiring like it's 2019, teaching like it's 2015, and in some cases, coding like it's 2010.

I wish I had battle-tested advice to give here (for myself, if nothing else). And I'll certainly keep trying to distill it; that's in large part the point of this publication - to explore these ideas week after week. For now, the best I can offer is this: dive in. Try things yourself. Lean into where the path is accelerating. You will have to waste time, feel frustrated, and throw things away. That's how you know you're learning.

You can do it.

Artwork created with Midjourney.
