
AI & Automation

The Human / AI Handshake

Illustration showing two distinct planes—a human figure navigating ambiguity and discovery on one side, an AI system executing precise defined work on the other—connected by a glowing handshake at the boundary between them

55,000 layoffs blamed on AI in 2025. Yesterday, Block cut 40% of its workforce. And Forrester says most of these companies don't have the AI ready to fill the roles they just eliminated. The problem isn't the technology. It's that nobody designed the boundary between what humans do and what AI does.

By Joe Minock 14 min read

The Axe Swing

Yesterday, Jack Dorsey cut 4,000 people—40% of Block's workforce—and blamed AI.

The stock jumped 24%.

In 2025, companies cited AI as the reason for 55,000 layoffs—twelve times the number from just two years earlier. Amazon. Salesforce. Pinterest. CrowdStrike. Chegg. HP. The list keeps growing. Dorsey told shareholders he expects the majority of companies to make similar structural changes within the next year.

And here's the part nobody wants to talk about.

Forrester found that most of these companies don't have mature AI applications ready to fill the roles they just eliminated. They're cutting people for AI capabilities that don't exist yet. Fifty-five percent of employers already regret it. And Forrester predicts half of these AI-attributed layoffs will be quietly rehired—offshore, at lower salaries.

Meanwhile, Wharton's Ethan Mollick pointed out that given how new effective AI tools are, and how little we understand about organizing work around them, it's hard to imagine a firm-wide 50%+ efficiency gain that justifies mass organizational cuts.

I wrote last week that AI doesn't fix broken processes—it scales them. This is the other side of the same coin. Organizations aren't just automating work they don't understand. They're eliminating the people who do the work—before they've figured out what the work actually requires.

This is what happens when you treat AI as a binary: replace humans, or don't. Both options are wrong. There's a third.

The Lesson from 1998

In 1997, IBM's Deep Blue beat Garry Kasparov at chess. The headlines wrote themselves: machine beats man. The end of human superiority.

Except that's not what happened next.

What happened next was that Kasparov invented a new game. He called it Advanced Chess—humans and AI playing together. And within a few years, something remarkable emerged: teams of average players with average computers, using a well-designed process, were beating both grandmasters and supercomputers.

Kasparov's insight was precise: a weak human plus a machine plus a better process beats a strong computer alone—and, more remarkably, beats a strong human plus a machine plus an inferior process.

The deciding factor wasn't the quality of the human. It wasn't the quality of the machine. It was the quality of the process that connected them.

That was 1998. And somehow, nearly three decades later, most organizations are still having the wrong conversation.

Two Planes, One Handshake

The dominant narrative around AI follows a binary logic: either AI replaces human work, or humans use AI as a tool. Pick a side. Bet on one.

Both framings collapse under real-world conditions.

Replacement fails because AI cannot navigate novelty. It cannot identify a customer profile it's never seen. It cannot judge whether a market signal is meaningful or noise. It cannot sit across from a founder and hear what they're actually saying underneath what they're asking for. AI operates on patterns derived from what has already happened. It has no native capacity for discovery.

Tool-use fails because it preserves the human bottleneck. If AI is "just a tool," then every output still flows through the same constrained human bandwidth that limited the work before AI existed. You get marginal efficiency gains, not transformation. The human is still doing the same work, just slightly faster. This is the spreadsheet-to-calculator upgrade—real, but not the leap everyone's promising.

MIT Sloan's EPOCH research confirms this: the framework identifies five categories of distinctly human capabilities—empathy, persuasion, originality, collaboration, and hope—that machines fundamentally cannot replicate. And the demand for these capabilities in the workforce has been increasing, not decreasing, since 2016.

Both the replacement and tool framings make the same fundamental mistake: they treat the human-AI relationship as a single surface. As if there's one plane of work, and the question is just who sits at the controls.

There isn't. There are two.

The Human Plane

This is the domain of the novel, the ambiguous, and the consequential.

Humans operate here because this work requires faculties AI does not possess: judgment under uncertainty, empathy with real users, pattern recognition across domains that have never been formally connected, and the ability to make high-stakes decisions with incomplete information.

This is where you discover market opportunities. Where you recognize that a customer's real pain point isn't the one they articulated. Where you decide what to build, what to skip, when to pivot, when to hold. Where relationships and trust create value that no algorithm can replicate.

The Human Plane is where the work is undefined by nature. Not undefined because someone was lazy with the spec—undefined because the answers don't exist yet. They have to be discovered, debated, and decided.

MIT Sloan Management Review put it plainly: generative AI can outperform human CEOs in data-driven strategic tasks, but it fails completely when facing unpredictable, first-of-its-kind disruptions. AI is exceptional at pattern recognition and optimization. It is structurally incapable of navigating genuine uncertainty.

The Human Plane isn't where AI "hasn't caught up yet." It's where AI cannot go—because the work requires the one thing machines will never possess: the capacity to confront the genuinely unknown.

The AI Plane

This is the domain of the defined, the precise, and the repeatable.

AI operates here because this work requires faculties humans cannot match: tireless execution of clearly specified tasks, pixel-perfect consistency across thousands of variations, speed that compresses weeks into hours, and the ability to hold an entire codebase in working memory while making changes.

Given a clear architecture and defined requirements, AI produces clean, tested, deployable applications at a speed no human team can match. It generates and iterates on customer-specific messaging with extraordinary precision. It runs feedback cycles that previously took weeks in a matter of hours.

The AI Plane is where the work is defined by design. Requirements are clear. Acceptance criteria are explicit. The creative and strategic decisions have already been made. What remains is execution—and execution at this level of clarity is exactly what AI was built for.

MIT's meta-analysis of 106 experiments on human-AI collaboration found something that sounds counterintuitive but makes perfect sense through this lens: on average, human-AI combinations don't outperform the best of either working alone. The combination only produces superior results when each agent operates in its zone of strength and the process connecting them is well-designed.

This is the centaur lesson, validated at scale by modern research: throwing humans and AI together doesn't automatically produce better results. Designing the right boundary between them does.

The Handshake

This is the critical surface. The leverage point. The thing almost everyone gets wrong.

The Handshake is where human discovery becomes AI-executable requirements. It's the translation layer between "I think our customer cares more about time savings than cost savings" and a set of messaging variants, page layouts, and feature prioritizations that AI can produce with precision.

The Handshake is not a handoff. It's not humans throwing work over a wall to AI. It's an active, iterative boundary where humans interact with AI tools to test hypotheses quickly, where AI output informs human judgment, and where requirements get progressively refined through rapid cycles of define, execute, evaluate, and redefine.

The competitive advantage isn't in having AI—everyone will. It isn't in having smart humans—that's table stakes. The advantage is in the quality of the Handshake: how effectively an organization translates discovery into production.

This is why I wrote last week that organizations can't automate what they can't describe. The Handshake is the discipline of description itself: the craft of making the ambiguous actionable, the novel specific, the discovered defined.

And this craft is itself a skill. One that can be developed, practiced, and improved. One that separates the organizations burning cash on AI experiments from those building operations that compound in value.

The Clarity Bottleneck

Here's the uncomfortable truth that most AI conversations miss entirely.

AI speed is effectively infinite relative to human speed. The bottleneck in every AI-augmented workflow isn't "AI can't produce fast enough." It's "the requirements aren't clear enough for AI to produce correctly."

Software engineers have been learning this lesson the hard way. The ones who struggle with AI tools aren't struggling because the tools are inadequate. They're struggling because they provide vague instructions, receive generic output, and conclude that AI isn't ready for real work. But the problem was never the AI's capability—it was the specification's clarity.

This applies to every domain, not just code. Marketing. Sales process. Client onboarding. Product development. Customer experience design. The pattern is universal: AI doesn't struggle with hard tasks. It struggles with vague ones.

And here's what makes this particularly dangerous: AI systems reward ambiguity with output. They fill gaps with confident-sounding reasoning. They produce polished results from imprecise inputs. And because the output looks good, most organizations never realize they're looking at an eloquent misunderstanding rather than a grounded solution.

Traditional systems punish ambiguity. If requirements are vague, the implementation blocks you until you clarify them. Engineers raise flags. QA escalates. The system forces you to be precise.

AI does the opposite. It takes your vague input and runs with it—fast, confidently, and in exactly the wrong direction. The Handshake is the discipline that prevents this.

So what does that discipline actually look like in practice?

It looks like sitting down with a single step in your workflow and answering four questions: What must be true before this step can start? What actually happens during execution? What must be true after this step is complete? And critically: is this work Manual, AI-Assisted, AI Agent, or Autonomous?

That last question is the Handshake decision. And most organizations have never made it deliberately for a single step in their operation—let alone every step.
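The per-step questions above can be captured in a lightweight structure. This is an illustrative sketch, not a prescribed schema; the names `WorkflowStep`, `ExecutionMode`, and `is_ai_ready` are assumptions of this example, not part of any formal framework:

```python
from dataclasses import dataclass
from enum import Enum

class ExecutionMode(Enum):
    MANUAL = "manual"            # human does the work
    AI_ASSISTED = "ai_assisted"  # human drives, AI accelerates
    AI_AGENT = "ai_agent"        # AI drives, human reviews
    AUTONOMOUS = "autonomous"    # AI runs without review

@dataclass
class WorkflowStep:
    name: str
    preconditions: list[str]   # what must be true before the step starts
    execution: str             # what actually happens during the step
    postconditions: list[str]  # what must be true when the step completes
    mode: ExecutionMode        # the Handshake decision

    def is_ai_ready(self) -> bool:
        # A step is a candidate for the AI Plane only when its boundary
        # conditions are explicit; vague steps stay on the Human Plane.
        return bool(self.preconditions and self.postconditions)

step = WorkflowStep(
    name="Draft customer-specific messaging",
    preconditions=["ICP defined", "value hypothesis written down"],
    execution="Generate five messaging variants per segment",
    postconditions=["each variant maps to one stated pain point"],
    mode=ExecutionMode.AI_AGENT,
)
print(step.is_ai_ready())  # True: both boundaries are specified
```

The point of writing it down, even informally, is that the `mode` field forces the Handshake decision to be made explicitly rather than by default.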

The organizations that will master the Handshake are the ones willing to do this work—step by step, with precision, before they hand anything to a machine.

Why the Binary Keeps Failing

Most AI implementations today are binary. They sit on one plane or the other.

AI-first applications try to eliminate the Human Plane entirely.

They assume that with enough data and a sophisticated enough model, AI can handle discovery, judgment, and execution all at once. These applications produce output that is technically competent and strategically empty. They build features nobody asked for, write copy that sounds good and says nothing, and generate code that works but solves the wrong problem. They fail because they have no mechanism for navigating novelty.

Human-first with AI assist keeps humans in control of everything.

The human still makes every decision, reviews every output, directs every action. These implementations capture maybe 20% of AI's potential because they never let AI operate in its zone of excellence—sustained, precise, high-volume execution of clearly defined work. They fail because they have no mechanism for leveraging AI's actual strengths.

The Human / AI Handshake resolves this by giving each domain its proper scope and making the boundary between them the primary site of organizational capability.

The research backs this up. MIT found that human-AI combinations produce gains specifically in tasks where humans already outperform AI on their own and in creative content tasks. The combination fails in tasks where AI is already superior. The lesson: the value isn't in combining them everywhere—it's in knowing where to draw the line and making that line work.

What This Means for How You Design Work

If the Handshake is the leverage point, then the most important question for any AI-augmented workflow isn't "where do we add AI?" It's "where does the work cross the boundary between novel and defined?"

This has concrete implications.

Every workflow has a discovery phase and an execution phase. The discovery phase belongs on the Human Plane. The execution phase belongs on the AI Plane. The quality of the workflow depends entirely on the quality of the transition between them.

The Handshake is a skill, not a technology. The ability to translate novel insight into clear, executable requirements is a human competency. It can be developed, practiced, and improved. Organizations that invest in this skill will outperform those that invest only in better AI models.

Feedback loops cross the boundary in both directions. AI output informs human judgment. Human judgment refines AI input. The Handshake isn't a one-way gate—it's a bidirectional membrane. The best workflows make these feedback loops tight and frequent.

And critically: the planes are not sequential. Humans don't finish all discovery, then hand off to AI for all execution. In practice, the work oscillates across the boundary constantly. A discovery on the Human Plane immediately generates a new execution task on the AI Plane. AI output reveals a new question that sends the human back into discovery mode. The Handshake is always active.
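That constant oscillation can be sketched as a simple control loop. Everything here is hypothetical scaffolding: `discover`, `define_requirements`, `ai_execute`, and `evaluate` stand in for human and machine activities, and the toy run wires them up with stubs purely to show the shape of the cycle:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Verdict:
    accepted: bool
    new_questions: str = ""

def handshake_loop(problem: str,
                   discover: Callable,            # Human Plane: navigate novelty
                   define_requirements: Callable, # the Handshake: make it executable
                   ai_execute: Callable,          # AI Plane: precise execution
                   evaluate: Callable,            # Human Plane: judge the output
                   max_cycles: int = 5):
    """Define -> execute -> evaluate -> redefine, oscillating across the boundary."""
    spec, output = None, None
    for _ in range(max_cycles):
        insight = discover(problem, spec)
        spec = define_requirements(insight)
        output = ai_execute(spec)
        verdict = evaluate(output, problem)
        if verdict.accepted:
            return output
        problem = verdict.new_questions  # AI output reopens discovery
    return output

# Toy run: the "AI" echoes the spec; the human accepts once discovery
# surfaces the real pain point (time, not cost) on the second pass.
passes = iter(["saves money", "saves time"])
result = handshake_loop(
    "customers churn after onboarding",
    discover=lambda p, s: next(passes),
    define_requirements=lambda i: f"messaging about: {i}",
    ai_execute=lambda spec: spec.upper(),
    evaluate=lambda out, p: Verdict(True) if "TIME" in out
             else Verdict(False, "cost framing missed; probe time-to-value"),
)
print(result)  # MESSAGING ABOUT: SAVES TIME
```

Note that the loop's exit condition lives on the Human Plane: the cycle ends when human judgment accepts the output, not when the machine stops producing.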

This is why "prompt engineering" misses the point. Prompting is a tactic for interacting with a model. The Handshake is a strategic capability for designing how an entire organization translates judgment into execution.

This Is Deliberate Work

This framework is native to the Deliberate Work methodology. The entire premise of Deliberate Work—transforming businesses from heroic effort to systematic excellence—depends on knowing which work requires human judgment and which can be systematized.

The Human / AI Handshake extends this to the AI era: the work that requires human judgment belongs on the Human Plane. The work that can be systematized belongs on the AI Plane. And the Handshake between them is the new site of deliberate practice.

This is the natural next step from understanding that AI doesn't fix broken processes. Once you've done the hard work of mapping your reality and designing your intent—once you actually understand the work—the Handshake tells you where and how AI fits.

Organizations that master the Handshake don't just use AI effectively. They build a durable competitive advantage that compounds over time—because every cycle through the Handshake improves both the humans' ability to discover and define, and the AI's ability to execute precisely.

The ones that stay binary—all AI or all human—will wonder why their tools keep getting better and their outcomes don't.

Key Takeaways

  • 55,000 layoffs blamed on AI in 2025. Most of these companies don't have mature AI ready to fill the roles they cut. 55% already regret it.
  • The "replace vs. tool" framing is wrong. Replacement fails because AI can't navigate novelty. Tool-use fails because it preserves the human bottleneck.
  • Human-AI work operates on two distinct planes. The competitive advantage lives at the boundary between them—the Handshake—where human discovery becomes AI-executable requirements.
  • Requirements clarity is the rate limiter. AI speed is effectively infinite relative to human speed. The bottleneck is never "AI can't produce fast enough"—it's "the requirements aren't clear enough for AI to produce correctly."
  • The Handshake is a skill, not a technology—not a prompt template. Organizations that invest in this skill will outperform those swinging an axe.

Ready to Design the Handshake for Your Organization?

If your AI implementations are producing technically impressive output that misses the point—if your team is stuck in the binary of full automation or AI-as-a-faster-keyboard—the problem isn't the technology. It's the boundary between human judgment and AI execution. That's exactly what we design.

Schedule a Conversation

Sources & Further Reading

  • On centaur chess and process superiority: Nicky Case, How To Become A Centaur. MIT Press Journal of Design and Science. Kasparov's foundational insight: weak human + machine + better process was superior to strong computer alone and to strong human + machine + inferior process.
  • On human-AI complementarity: Rigobon, R., & Loaiza, I. (2025). The EPOCH of AI: Human-Machine Complementarities at Work. MIT Sloan School of Management. Identifies five categories of human capabilities AI cannot replicate and finds demand for these capabilities has been increasing.
  • On when human-AI combinations succeed: Vaccaro, M., Almaatouq, A., & Malone, T. (2024). When combinations of humans and AI are useful. Nature Human Behaviour. Meta-analysis of 106 experiments finding human-AI combinations underperform on average unless process design is deliberate.
  • On the new value of expertise: Kalluri, R. (2025). What's Your Edge? Rethinking Expertise in the Age of AI. MIT Sloan Management Review. AI outperforms in data-driven tasks but fails facing unpredictable disruptions. The value shift is from content to context.
  • On the prerequisite for AI: AI Doesn't Fix Broken. It Scales It. The companion piece to this post—why understanding the work comes before automating it.
  • On building deliberate systems: For the complete framework, Deliberate Work covers the methodology in depth. Get on the early access list.
