Published on June 5, 2025 3:49 PM GMT
Much of today's discussion about AI centers on the human labor it will render obsolete. While the near term will assuredly be disruptive, the long-term development of AI will not only require immense human scaffolding that generates new forms of labor, it will also create opportunities that could never have existed before because of scaling limitations or knowledge gaps. I think it's more likely that technology will become more human, not less.
I've outlined a few of the emerging opportunities below:
1. Input Parsing
One of the most overlooked but critical challenges in any AI system is turning real-world messiness into structured, machine-readable inputs. The success of AI is predicated on data, but raw human experience doesn’t come pre-packaged as clean JSON. Someone still needs to parse it.
Examples:
- Live sports stat entry (understanding what constitutes a "rebound" in basketball)
- Field data collection in agriculture or conservation
- Synthesizing insights from customer interviews
- Journalism and real-time event reporting
Prediction: Demand will surge for people who know how to map messy, unstructured human data into usable, machine-readable formats. Tacit domain knowledge will become a key differentiator: knowing what a "rebound" is in sports, or how to detect a digitally fabricated event, will matter more than ever.
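To make this concrete, here is a minimal sketch of what a human-authored parsing layer might look like. The play-by-play feed, field names, and rules are hypothetical; the point is that the domain knowledge (what counts as a rebound, what doesn't) lives in code a person with that knowledge has to write.

```python
# A minimal sketch of an input-parsing layer: turning a messy, human-written
# play-by-play feed into structured records. The feed format and field names
# are hypothetical; the domain rules are the human contribution.
import json
import re
from dataclasses import dataclass, asdict

@dataclass
class StatEvent:
    player: str
    action: str  # e.g. "rebound"
    detail: str  # e.g. "offensive" vs. "defensive"

REBOUND_PATTERN = re.compile(
    r"(?P<player>[A-Z][\w.'-]+(?: [A-Z][\w.'-]+)*) (?:grabs|pulls down) "
    r"the (?P<kind>offensive|defensive) board",
    re.IGNORECASE,
)

def parse_play(text: str) -> StatEvent | None:
    """Map one line of commentary to a structured event, or None if unrecognized."""
    match = REBOUND_PATTERN.search(text)
    if match:
        # Domain knowledge encoded here: a tipped ball that goes out of bounds
        # is not credited as a rebound, so those lines are skipped.
        if "out of bounds" in text.lower():
            return None
        return StatEvent(player=match.group("player"),
                         action="rebound",
                         detail=match.group("kind").lower())
    return None

if __name__ == "__main__":
    feed = [
        "Jokic pulls down the defensive board after the miss",
        "Ball tipped by Gobert, out of bounds",
    ]
    events = [asdict(e) for line in feed if (e := parse_play(line))]
    print(json.dumps(events, indent=2))
```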
Open Questions:
- How do we reliably translate subjective, real-time events into structured formats?
- Can models be trained to parse input without domain-specific nuance?
- How will we detect real vs. AI-generated inputs based on digital signatures?
- Can AI learn contextually from messy information, or does it always need structure?
- Who owns the parsing layer of tomorrow’s workflows?
2. Edge Case Handling
AI systems can be right 90% of the time, but trust erodes when the remaining 10% leads to unpredictable or harmful outcomes. Just because a system nails the base case doesn’t mean it’s ready for adoption, especially in enterprise or high-stakes contexts. Scalability isn’t determined by peak performance; it’s determined by floor performance. Can the system be trusted when things go wrong? Can it fail gracefully?
Enterprises won’t build workflows around tools they can’t rely on. Users don’t stick with products that break in edge cases. The bottleneck to adoption isn’t the median case; it’s whether the system can handle ambiguity, exception, or escalation without introducing chaos.
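As a toy illustration of graceful failure, here is a sketch of one common fallback pattern: act only when confidence clears a floor, and escalate to a human otherwise. The model call and the threshold are placeholders, not a real system.

```python
# A minimal sketch of "failing gracefully": route low-confidence cases to a
# human queue instead of acting on them. classify() stands in for a real model.
from dataclasses import dataclass

@dataclass
class Prediction:
    label: str
    confidence: float  # assumed to be reasonably calibrated, which is its own problem

CONFIDENCE_FLOOR = 0.85  # below this, escalate rather than act

def classify(ticket_text: str) -> Prediction:
    # Placeholder for a real model call.
    return Prediction(label="refund_request", confidence=0.62)

def handle_ticket(ticket_text: str) -> str:
    pred = classify(ticket_text)
    if pred.confidence < CONFIDENCE_FLOOR:
        # Graceful degradation: admit uncertainty and hand off, rather than
        # confidently doing the wrong thing in an edge case.
        return f"ESCALATED to human review (confidence={pred.confidence:.2f})"
    return f"AUTO-HANDLED as {pred.label}"

print(handle_ticket("I was charged twice but also want to change my address??"))
```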
Consider autonomous vehicles. Despite sophisticated AI and advanced sensor arrays, real-world unpredictability (like sudden weather shifts, construction zones, or erratic human behavior) has made it incredibly hard for these systems to scale. A self-driving car that performs flawlessly in perfect conditions but fails dangerously in edge cases doesn't inspire public trust or regulatory confidence. This is why autonomous vehicle rollouts have been slow, cautious, and often limited to geo-fenced areas.
Other domains face similar dynamics:
- Finance: AI risk models that can't explain black swan exceptions won’t gain institutional trust.
- Customer support: Chatbots that fail on outlier queries create more support tickets, not fewer.
- Healthcare: Diagnostic tools that miss rare symptoms erode clinician confidence.
Examples:
- AI misclassifying tone in sensitive customer service situations
- Legal or medical misinterpretation of rare inputs
Prediction: Many startups and tools will fail to gain adoption, not because the AI doesn't work, but because it creates more complexity in edge cases. Tools that don’t handle edge cases will erode trust. Tools that recover gracefully will win.
Open Questions:
- How do you give greater responsibility to black-box systems?
- Can we design AI systems that escalate uncertainty instead of masking it?
- What are the best practices for fallback behavior when predictions fall outside trained expectations?
3. Coordination and Scale
Even if AI makes individuals 10x more productive, there are still classes of problems that only large, well-coordinated teams can solve. AI doesn't eliminate the need for collaboration; it changes the shape of it.
Scale buys capabilities that are impossible at smaller sizes: global infrastructure, vertically integrated ecosystems, and cross-disciplinary R&D efforts. Think of Apple designing chips and phones while controlling distribution and privacy policy, or Google orchestrating satellite imagery, real-time traffic, and language translation across the globe. These required enormous organizational coordination.
But AI introduces a wrinkle: it bends the scaling curve. In the past, impact scaled roughly linearly (or sub-linearly) with headcount. Now, a team of 5 with 50 well-orchestrated AI agents might outperform a traditional team of 100.
In the other direction, AI may allow already-massive organizations to scale in non-traditional ways, coordinating across thousands of internal tools and functions that used to break under communication and management load.
Examples:
- Collaborative research teams using agent swarms to synthesize findings across domains rapidly.
- Product teams orchestrating multiple specialized agents (e.g., summarizer, sentiment analyzer, QA bot) in real-time feature delivery (see the sketch after this list).
- Google-scale companies building and maintaining AI-driven platforms that update in real-time based on user feedback loops and live A/B testing.
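To make the orchestration idea concrete, here is a minimal sketch in which a coordinator fans one piece of user feedback out to a few specialized agents and merges the results. The agents are stub functions standing in for model or service calls; the names and pipeline are illustrative, not a real API.

```python
# A minimal sketch of orchestrating specialized agents. Each "agent" is a stub
# standing in for a model or service call; the coordination logic is the part
# a human designs.
from concurrent.futures import ThreadPoolExecutor

def summarizer(feedback: str) -> str:
    return "summary: " + feedback[:40] + "..."

def sentiment_analyzer(feedback: str) -> str:
    return "sentiment: negative" if "slow" in feedback else "sentiment: positive"

def qa_bot(feedback: str) -> str:
    return "suggested reply: Thanks for the report, we're on it."

AGENTS = {"summary": summarizer, "sentiment": sentiment_analyzer, "reply": qa_bot}

def orchestrate(feedback: str) -> dict[str, str]:
    """Fan the same input out to specialized agents and merge their outputs."""
    with ThreadPoolExecutor() as pool:
        futures = {name: pool.submit(agent, feedback) for name, agent in AGENTS.items()}
        return {name: f.result() for name, f in futures.items()}

print(orchestrate("The new dashboard is slow to load on mobile devices."))
```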
Prediction: Just as remote work reshaped team structures, AI will reshape org design. We’ll see:
- Ultra-scaled orgs (10x Google) that use AI to break traditional scaling limits, enabling coordination at levels current tools can’t support. This could unlock infrastructure megaprojects, world-scale simulations, global societal systems, and more.
- Ultra-lean orgs (3–5 people) that outperform traditional companies by leveraging large networks of agents.
- A new class of “agent managers” who coordinate not just humans, but agents, systems, and infrastructure across large or small teams.
Open Questions:
- What does agent management look like as a discipline?
- What kinds of problems still require large-scale human networks despite agentic leverage?
- How do we audit, debug, and ensure accountability in complex, multi-agent systems?
- What coordination problems will AI solve, and what new ones will it create?
4. Tacit Genius: Designing Flows That Work
LLMs and agents don’t magically solve complex problems. They’re not plug-and-play intelligence machines; they’re tools that need to be guided. To get meaningful results, you have to scaffold the problem, clarify the goal, break it into substeps, and design thoughtful workflows or prompts.
This is where tacit knowledge (the kind of deep, experience-based understanding that's hard to write down or teach) becomes crucial. It’s the difference between someone who has read a recipe and someone who knows how to improvise a great meal.
The most effective AI systems aren’t just the result of better models; they come from encoding human expertise into the design of the workflow itself.
Examples:
- Agentic RAG (Retrieval-Augmented Generation): Instead of just retrieving documents and summarizing them in one step, agentic workflows break the task down: first search, then rank, then refine, then summarize. This multi-step approach consistently outperforms naive retrieval because it mimics how an expert would think through the problem (see the sketch after this list).
- Human-in-the-loop systems: In areas like legal summarization, software development, or customer service, AI can draft or suggest responses, but humans still review, tweak, or validate the output. These workflows are most powerful when domain experts help define what “good” looks like.
- Therapy, coaching, DIY: For example, a performance coach who knows what questions to ask and how to tailor exercises for different types of people could build a better AI agent than someone who just asks ChatGPT to “be a coach.” It’s the sequencing, phrasing, and domain intuition that make the difference, and that comes from real-world experience.
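Here is a rough sketch of that agentic RAG decomposition, with retrieval and the summarization step stubbed out. The structure of the pipeline, not the stub implementations, is the point: each stage is a place where human judgment about the domain gets encoded.

```python
# A minimal sketch of search -> rank -> refine -> summarize as separate steps,
# instead of one retrieve-and-summarize call. All steps are stand-ins.
def search(query: str, corpus: list[str]) -> list[str]:
    return [doc for doc in corpus if any(w in doc.lower() for w in query.lower().split())]

def rank(query: str, docs: list[str]) -> list[str]:
    # Stand-in scoring: query-term overlap. A real system might use a reranker model.
    score = lambda d: sum(w in d.lower() for w in query.lower().split())
    return sorted(docs, key=score, reverse=True)

def refine(query: str, docs: list[str], keep: int = 2) -> list[str]:
    # Keep only the strongest evidence before the expensive summarization step.
    return docs[:keep]

def summarize(query: str, docs: list[str]) -> str:
    # Placeholder for an LLM call that answers the query from the kept documents.
    return f"Answer to {query!r} drawn from {len(docs)} sources: " + " / ".join(docs)

def agentic_rag(query: str, corpus: list[str]) -> str:
    return summarize(query, refine(query, rank(query, search(query, corpus))))

corpus = [
    "Rebounds are credited when a player gains possession after a missed shot.",
    "Team rebounds are awarded when no individual controls the ball.",
    "Assists require a pass that leads directly to a made basket.",
]
print(agentic_rag("what counts as a rebound", corpus))
```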
Prediction: The future of AI won’t be decided by whether you’re using ChatGPT or Claude. It will be decided by who can design the smartest workflows. The real advantage lies with those who can encode deep, hard-won expertise into structured, repeatable task flows.
ChatGPT won’t beat Gordon Ramsay at cooking, not because it can’t follow a recipe, but because Ramsay knows when to break the recipe. That kind of expert intuition — knowing which steps matter, when to adjust, and why — is what separates a good system from an exceptional one.
Workflow design will become the new craftsmanship.
Open Questions:
- In what fields will tacit knowledge remain king?
- Can foundation models ever learn expert workflows without explicit decomposition?
- Where do agentic approaches hit their limits?
- What tools will emerge to help experts build agentic workflows without needing to code?
5. From Median to Extraordinary
AI is blowing open the gates of creation. Anyone can now write code, generate designs, launch websites, or compose music with just a prompt. But while the tools are powerful, they're also generic. They can produce something competent, but not something extraordinary. Extraordinary comes from human taste, vision, and craft. What’s finally changing is that those things no longer need to be filtered through deep technical knowledge to come alive.
Historically, technical fluency has been a bottleneck. That means the foundational layers of the internet, software, and even AI itself have largely been shaped by a relatively narrow demographic: engineers, often working on problems they understand personally. As a result, entire categories of human experience and creativity have been underserved, underbuilt, or overlooked entirely.
But now, the gates are cracking open. A chef can build an app. A therapist can build a tool. A writer can automate their creative workflow. AI is making technical scaffolding optional. This isn’t the age of machine-built software. This is the beginning of the most human era of software we’ve ever seen.
Examples:
- A chef building an AI-powered recipe site that captures the nuance of a regional cuisine they grew up with (something no generic food blog would ever get right).
- A filmmaker storyboarding and scripting experimental films using generative tools, bypassing traditional studio constraints.
- A small business owner automating back-office tasks with AI, then reinvesting that time to build more personalized and human relationships with their customers.
Prediction: We’re entering an era where creative fluency will matter more than code fluency. The builders of the next wave won’t just be engineers; they’ll be artists, therapists, teachers, small business owners, and visionaries who understand human needs deeply and can shape AI tools around them.
They won’t be creating software that looks like everything else. They’ll be creating tools that feel like them.
Open Questions:
- How far can technical abstraction go? Will there always be a layer of tooling that only engineers can build?
- What makes a contribution human in a world where machines can generate endlessly?
- Can intentionality and taste be embedded into workflows, or are they the final frontier of irreducible human input?
Real Intelligence Is Calibration
One of the most profound forms of intelligence isn’t how much you know. It’s how accurately you understand what you know.
This is the insight behind the Dunning-Kruger effect: people with limited expertise often overestimate their abilities because they don’t yet know what they don’t know. In contrast, true experts tend to be more cautious. Not because they know less, but because they have a calibrated understanding of their own limits.
True mastery is the intelligence of boundaries. And this is exactly where today’s AI systems fall short. They generate fluent, confident responses, regardless of whether they’re right or wrong. LLMs don’t know when they’re bluffing.
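A crude way to see this: compare a system's stated confidence against how often it is actually right. The numbers below are made up, but the gap they expose is exactly what "bluffing" looks like in a miscalibrated system.

```python
# A minimal sketch of a calibration check: a well-calibrated system's stated
# confidence tracks its actual accuracy; a bluffing one is far more confident
# than it is correct. The predictions here are made-up examples.
def calibration_report(predictions: list[tuple[float, bool]]) -> None:
    """predictions: (stated confidence, whether the answer was actually correct)."""
    buckets = {"low (<0.7)": [], "high (>=0.7)": []}
    for confidence, correct in predictions:
        key = "high (>=0.7)" if confidence >= 0.7 else "low (<0.7)"
        buckets[key].append((confidence, correct))
    for name, items in buckets.items():
        if not items:
            continue
        avg_conf = sum(c for c, _ in items) / len(items)
        accuracy = sum(ok for _, ok in items) / len(items)
        print(f"{name}: stated confidence {avg_conf:.2f} vs. actual accuracy {accuracy:.2f}")

calibration_report([(0.95, False), (0.90, True), (0.85, False), (0.60, True), (0.55, False)])
```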
But this isn’t just an AI problem. It’s a systems problem.
Consider the story of Sahil Lavingia, a tech founder who joined a short-lived U.S. government initiative called the Department of Government Efficiency (DOGE). Like many technologists, he entered government expecting bloated bureaucracy and quick wins for automation. Instead, he found something different:
“There was much less low-hanging fruit than I expected… These were passionate, competent people who loved their jobs.”
From the outside, it looked inefficient. But inside, it was full of highly evolved processes, built not out of laziness, but out of the need to handle complexity, edge cases, and tradeoffs that outsiders didn’t understand.
In both public systems and AI systems, the greatest danger isn’t ignorance; it’s uncalibrated confidence. That’s why in a world increasingly filled with intelligent tools, the most valuable human trait is judgment.
As AI tools become more powerful and more accessible, it’s easy to assume that leverage comes from picking the right plugin, framework, or foundation model. But that’s not where the real differentiation lies. The edge isn’t in having the right tools; it’s in knowing why they work, where they break, and how to build thoughtful systems around them.
This idea echoes a point made by Venkatesh Rao: having the right system is less important than having mindfulness and attention to how the system is performing.
A flawed system, guided by a reflective operator, will outperform a perfect one that’s blindly trusted. And that’s the real risk with AI right now: people stack tools — agents, APIs, wrappers — without understanding how they behave, where they fail, or what unintended consequences they may trigger.
That means:
- Asking why a system behaves the way it does.
- Watching for failure patterns and emergent behavior.
- Keeping humans in the loop. Not because AI can’t automate tasks, but because feedback and reflection are the real drivers of progress.
History tells a clear story: new technologies don’t eliminate human value; they shift where it lives.
AI is no different. Yes, it will automate tasks. Yes, it will reshape industries. But it will also unlock entirely new categories of work, from agent coordination to workflow design to AI-native creativity, for those who are willing to learn, adapt, and lead.
The most durable opportunities won’t go to those who simply use the tools. They’ll go to those who understand how the tools work, where they fail, and what uniquely human value they can amplify. Judgment, taste, curiosity, coordination, and emotional intelligence aren’t outdated traits. They’re becoming the core skillset of the modern builder, leader, and creator.
Crossposted from here.