Human opportunities in the age of AI


Published on June 5, 2025 3:49 PM GMT

Much of today's discussion of AI centers on the human labor it will render obsolete. While the near term will assuredly be disruptive, the long-term development of AI will not only require immense human scaffolding that generates new forms of labor, but will create novel opportunities that haven't existed before because of scaling limitations or knowledge gaps. I think it's more likely that technology will become more human, not less.

I've outlined a few of the emerging opportunities below:

1. Input Parsing

One of the most overlooked but critical challenges in any AI system is turning real-world messiness into structured, machine-readable inputs. The success of AI is predicated on data, but raw human experience doesn’t come pre-packaged as clean JSON. Someone still needs to parse it.

Prediction: There will be a surge in demand for people who can take unstructured data and turn it into usable, machine-readable formats. Tacit domain knowledge will become a key differentiator: knowing what a "rebound" is in sports, or how to detect a digitally fabricated event, will matter more than ever.
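To make the parsing point concrete, here is a minimal sketch of what "turning real-world messiness into structured inputs" can look like, using the sports example above. The commentary lines, event names, and field names are hypothetical illustrations, not a real feed format:

```python
import json
import re

# Messy, human-written play-by-play commentary (hypothetical sample data).
RAW_COMMENTARY = [
    "Q2 04:31 - Smith grabs the defensive rebound",
    "Q2 04:12 - Jones hits a 3-pointer!!",
    "q2 3:58 Smith with ANOTHER rebound",
]

# Domain knowledge encoded as patterns: you have to know what a
# "rebound" is before you can label it.
EVENT_PATTERNS = [
    ("rebound", re.compile(r"rebound", re.IGNORECASE)),
    ("three_pointer", re.compile(r"3-pointer", re.IGNORECASE)),
]

def parse_line(line: str) -> dict:
    """Map one messy commentary line to a structured event record."""
    event = "unknown"
    for name, pattern in EVENT_PATTERNS:
        if pattern.search(line):
            event = name
            break
    # Pull out the quarter/clock stamp if present; tolerate sloppy formats.
    stamp = re.search(r"[Qq](\d)\s+(\d{1,2}:\d{2})", line)
    return {
        "event": event,
        "quarter": int(stamp.group(1)) if stamp else None,
        "clock": stamp.group(2) if stamp else None,
        "raw": line,
    }

records = [parse_line(line) for line in RAW_COMMENTARY]
print(json.dumps(records, indent=2))
```

The interesting work is not the regex; it is deciding which events matter and how to tolerate the inconsistent timestamps and capitalization that real human data always contains.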

2. Edge Case Handling

AI systems can be right 90% of the time, but trust erodes when the remaining 10% leads to unpredictable or harmful outcomes. Just because a system nails the base case doesn’t mean it’s ready for adoption, especially in enterprise or high-stakes contexts. Scalability isn’t determined by peak performance; it’s determined by floor performance. Can the system be trusted when things go wrong? Can it fail gracefully?

Enterprises won’t build workflows around tools they can’t rely on. Users don’t stick with products that break in edge cases. The bottleneck to adoption isn’t the median case; it’s whether the system can handle ambiguity, exception, or escalation without introducing chaos.

Consider autonomous vehicles. Despite sophisticated AI and advanced sensor arrays, real-world unpredictability (like sudden weather shifts, construction zones, or erratic human behavior) has made it incredibly hard for these systems to scale. A self-driving car that performs flawlessly in perfect conditions but fails dangerously in edge cases doesn't inspire public trust or regulatory confidence. This is why autonomous vehicle rollouts have been slow, cautious, and often limited to geo-fenced areas.

Other domains face similar dynamics.

Prediction: Many startups and tools will fail to gain adoption, not because the AI doesn't work, but because it creates more complexity in edge cases. Tools that don’t handle edge cases will erode trust. Tools that recover gracefully will win.
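One simple pattern for "recovering gracefully" is to refuse to automate below a confidence floor and hand the case to a human with full context attached. A minimal sketch, where the threshold, the labels, and the refund-request examples are all hypothetical:

```python
from dataclasses import dataclass, field

# Below this stated confidence, escalate rather than act (illustrative value).
CONFIDENCE_FLOOR = 0.85

@dataclass
class Triage:
    auto_handled: list = field(default_factory=list)
    escalated: list = field(default_factory=list)

    def route(self, item: str, label: str, confidence: float) -> str:
        """Act automatically on the base case; fail gracefully on the rest."""
        if confidence >= CONFIDENCE_FLOOR:
            self.auto_handled.append((item, label))
            return "auto"
        # Edge case: don't guess. Hand off with the model's proposed label
        # and its confidence, so the human reviewer starts with context.
        self.escalated.append((item, label, confidence))
        return "human_review"

triage = Triage()
print(triage.route("refund request #1", "approve", 0.97))  # auto
print(triage.route("refund request #2", "approve", 0.55))  # human_review
```

The design choice worth noticing: the escalation path is a first-class output, not an error state. Systems that treat the 10% as an exception to be swallowed are the ones that erode trust.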

3. Coordination and Scale

Even if AI makes individuals 10x more productive, there are still classes of problems that only large, well-coordinated teams can solve. AI doesn't eliminate the need for collaboration; it changes the shape of it.

Scale buys capabilities that are impossible at smaller sizes: global infrastructure, vertically integrated ecosystems, and cross-disciplinary R&D efforts. Think of Apple designing chips and phones while controlling distribution and privacy policy, or Google orchestrating satellite imagery, real-time traffic, and language translation across the globe. These required enormous organizational coordination.

But AI introduces a wrinkle: technology can bend scaling laws. In the past, impact scaled roughly linearly (or sub-linearly) with headcount. Now, a team of 5 with 50 well-orchestrated AI agents might outperform a traditional team of 100.

In the other direction, AI may allow already-massive organizations to scale in non-traditional ways, coordinating across thousands of internal tools and functions that used to break under communication and management load.

Prediction: Just as remote work reshaped team structures, AI will reshape org design.

4. Tacit Genius: Designing Flows That Work

LLMs and agents don’t magically solve complex problems. They’re not plug-and-play intelligence machines; they’re tools that need to be guided. To get meaningful results, you have to scaffold the problem, clarify the goal, break it into substeps, and design thoughtful workflows or prompts.

This is where tacit knowledge (the kind of deep, experience-based understanding that's hard to write down or teach) becomes crucial. It’s the difference between someone who has read a recipe and someone who knows how to improvise a great meal.

The most effective AI systems aren’t just the result of better models; they come from encoding human expertise into the design of the workflow itself.
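One way to encode expertise into a workflow is to pair every step with a validation check, so the flow fails loudly at the step that broke instead of producing a fluent wrong answer. A minimal sketch using the legal-summary example; the step names, transformations, and checks are hypothetical stand-ins for real model calls and review rules:

```python
from typing import Callable

# A step is (name, transformation, validation check).
Step = tuple[str, Callable[[str], str], Callable[[str], bool]]

def run_workflow(draft: str, steps: list[Step]) -> str:
    """Run each step and validate its output before moving on."""
    for name, transform, check in steps:
        draft = transform(draft)
        if not check(draft):
            # Expertise lives in the checks: knowing what "good" looks
            # like at each stage, not just at the end.
            raise ValueError(f"workflow failed at step: {name}")
    return draft

steps: list[Step] = [
    ("summarize", lambda t: t.split(".")[0] + ".", lambda t: len(t) < 80),
    ("add_caveat", lambda t: t + " (Not legal advice.)", lambda t: "advice" in t),
]

result = run_workflow(
    "The contract renews annually. Termination requires notice.", steps
)
print(result)
```

In a real system the transformations would be LLM calls and the checks would be the reviewer's hard-won heuristics; the structure — decompose, transform, verify — is the craft the section describes.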

Prediction: The future of AI won’t be decided by whether you’re using ChatGPT or Claude. It will be decided by who can design the smartest workflows. The real advantage lies with those who can encode deep, hard-won expertise into structured, repeatable task flows.

ChatGPT won’t beat Gordon Ramsay at cooking, not because it can’t follow a recipe, but because Ramsay knows when to break the recipe. That kind of expert intuition — knowing which steps matter, when to adjust, and why — is what separates a good system from an exceptional one.

Workflow design will become the new craftsmanship.

5. From Median to Extraordinary

AI is blowing open the gates of creation. Anyone can now write code, generate designs, launch websites, or compose music with just a prompt. But while the tools are powerful, they're also generic. They can produce something competent, but not something extraordinary. Extraordinary comes from human taste, vision, and craft. What’s finally changing is that those things no longer need to be filtered through deep technical knowledge to come alive.

Historically, technical fluency has been a bottleneck. That means the foundational layers of the internet, software, and even AI itself have largely been shaped by a relatively narrow demographic: engineers, often working on problems they understand personally. As a result, entire categories of human experience and creativity have been underserved, underbuilt, or overlooked entirely.

But now, the gates are cracking open. A chef can build an app. A therapist can build a tool. A writer can automate their creative workflow. AI is making technical scaffolding optional. This isn’t the age of machine-built software. This is the beginning of the most human era of software we’ve ever seen.

Prediction: We’re entering an era where creative fluency will matter more than code fluency. The builders of the next wave won’t just be engineers; they’ll be artists, therapists, teachers, small business owners, and visionaries who understand human needs deeply and can shape AI tools around them.

They won’t be creating software that looks like everything else. They’ll be creating tools that feel like them.

Real Intelligence Is Calibration

One of the most profound forms of intelligence isn’t how much you know. It’s how accurately you understand what you know.

This is the insight behind the Dunning-Kruger effect: people with limited expertise often overestimate their abilities because they don’t yet know what they don’t know. In contrast, true experts tend to be more cautious. Not because they know less, but because they have a calibrated understanding of their own limits.

True mastery is the intelligence of boundaries. And this is exactly where today’s AI systems fall short. They generate fluent, confident responses, regardless of whether they’re right or wrong. LLMs don’t know when they’re bluffing. 
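Calibration is measurable: compare stated confidence against observed accuracy. A perfectly calibrated predictor that claims 50% is right about half the time; an overconfident one claims 95% and is right half the time. A minimal sketch (a simplified gap measure, not a full expected-calibration-error computation, and the sample predictions are invented):

```python
def calibration_gap(predictions: list[tuple[float, bool]]) -> float:
    """Gap between average stated confidence and observed accuracy.

    Each prediction is (stated confidence, was it correct?).
    Zero means well calibrated; large values mean over- or under-confidence.
    """
    if not predictions:
        return 0.0
    accuracy = sum(correct for _, correct in predictions) / len(predictions)
    avg_confidence = sum(conf for conf, _ in predictions) / len(predictions)
    return abs(avg_confidence - accuracy)

# An overconfident predictor: claims ~95% but is right half the time.
overconfident = [(0.95, True), (0.95, False), (0.96, True), (0.94, False)]
# A calibrated predictor: claims 50% and is right half the time.
calibrated = [(0.5, True), (0.5, False), (0.5, True), (0.5, False)]

print(round(calibration_gap(overconfident), 2))  # large gap
print(round(calibration_gap(calibrated), 2))     # near zero
```

The Dunning-Kruger pattern, and the LLM bluffing problem, are both large-gap cases: fluent confidence with no corresponding accuracy.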

But this isn’t just an AI problem. It’s a systems problem.

Consider the story of Sahil Lavingia, a tech founder who joined a short-lived U.S. government initiative called the Department of Government Efficiency (DOGE). Like many technologists, he entered government expecting bloated bureaucracy and quick wins for automation. Instead, he found something different:

“There was much less low-hanging fruit than I expected… These were passionate, competent people who loved their jobs.”

From the outside, it looked inefficient. But inside, it was full of highly evolved processes, built not out of laziness, but out of the need to handle complexity, edge cases, and tradeoffs that outsiders didn’t understand.

In both public systems and AI systems, the greatest danger isn’t ignorance; it’s uncalibrated confidence. That’s why in a world increasingly filled with intelligent tools, the most valuable human trait is judgment.

As AI tools become more powerful and more accessible, it’s easy to assume that leverage comes from picking the right plugin, framework, or foundation model. But that’s not where the real differentiation lies. The edge isn’t in having the right tools; it’s in knowing why they work, where they break, and how to build thoughtful systems around them.

This idea echoes a point made by Venkatesh Rao: having the right system is less important than having mindfulness and attention to how the system is performing.

A flawed system, guided by a reflective operator, will outperform a perfect one that’s blindly trusted. And that’s the real risk with AI right now: people stack tools — agents, APIs, wrappers — without understanding how they behave, where they fail, or what unintended consequences they may trigger.

History tells a clear story: new technologies don’t eliminate human value; they shift where it lives.

AI is no different. Yes, it will automate tasks. Yes, it will reshape industries. But it will also unlock entirely new categories of work, from agent coordination to workflow design to AI-native creativity, for those who are willing to learn, adapt, and lead.

The most durable opportunities won’t go to those who simply use the tools. They’ll go to those who understand how the tools work, where they fail, and what uniquely human value they can amplify. Judgment, taste, curiosity, coordination, and emotional intelligence aren’t outdated traits. They’re becoming the core skillset of the modern builder, leader, and creator.

Crossposted from here.


