Half My Day Is Robots Now
A field guide to how you’ll feel about AI, based on what you do for a living.
I spent about half of yesterday talking to robots.
This is not a metaphor. The new rhythm of my workday goes something like this: set Claude Code working on a feature spec, flip to a founder call, come back and review what it’s done, give it notes and set it writing tests, jump to an LP call, return to find the tests ready for review. Rinse, repeat. Meanwhile, Boardy is pinging my inbox and my WhatsApp with founder intros—some good ones, annoyingly—and Howie (an F4 Fund portfolio company) has quietly scheduled three meetings while I wasn’t looking.
It’s a different cadence than I’m used to. More context-switching, more plates spinning, more checking in on work that’s been happening while I was elsewhere. It demands a kind of multiplexed attention that would have felt scattered a year ago. Now it just feels like the job. And somehow, despite the constant juggling, I get a lot more done.
This is just... my life now? I don’t remember signing up for it. One day I was a normal person who used software; now I’m in what feels like a series of ongoing relationships with entities that remember things about me, have opinions, and occasionally make small talk.
Here’s the thing, though: I don’t feel the same way about all of them.
The matrix
When I actually sat down and thought about why some of these agents make me feel optimistic about the future and others give me low-grade existential angst, I realized there’s a pattern. It comes down to two questions:
Can I do this work myself?
Do I want to?
Plot any task on these two axes and you get four zones, each with its own emotional valence:
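If you like seeing the mapping spelled out, here’s a rough sketch of the matrix as code. The function and names are just mine for illustration, not anything from the tools above:

```python
def zone(can_do_it: bool, want_to_do_it: bool) -> str:
    """Map (can I do this work myself?, do I want to?) to an emotional zone."""
    if can_do_it and not want_to_do_it:
        return "Relief"        # capable, but tedious (scheduling)
    if not can_do_it and want_to_do_it:
        return "Excitement"    # newly unlocked work (building prototypes)
    if not can_do_it and not want_to_do_it:
        return "Indifference"  # someone else's problem (fab layouts)
    return "Dread"             # work I can do and want to do: identity territory

print(zone(can_do_it=True, want_to_do_it=False))  # -> Relief
```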
Let me walk you through them.
Zone of Relief
This is where Howie lives. Howie is my AI scheduling assistant, and it does something I am fully capable of doing but find soul-crushingly tedious: the back-and-forth calendar dance of finding a time that works for everyone.
Can I do this? Yes. Have I done this? For years. Do I want to do it? Absolutely not. Every minute I spend saying “Does 3pm PT work, or would Thursday be better?” is a minute I’m not doing literally anything else.
When Howie handles this for me, I feel nothing but gratitude. I did not become a venture capitalist to play calendar Tetris. The machine is welcome to it.
Zone of Excitement
Claude Code sits here. I am, charitably, a “somewhat technical person,” which is to say I understand some technology reasonably well and have lots of ideas for things I’d like to build. What I could not previously do (at least, not alone) is actually build them.
Claude Code changes this. The distance between “I wonder if I could make a tool that does X” and “I have a working prototype” has collapsed from months to hours. It’s not doing work I could do; it’s unlocking work I couldn’t.
This is where AI feels like a genuine expansion of human capability—a bicycle for the mind, in Jobs’ old framing. More reach, more leverage, more of the possible.
Zone of Indifference
I don’t have a great example here, because by definition I don’t think about this zone much. It’s work I can’t do and don’t want to do anyway. Maybe there’s an AI somewhere optimizing semiconductor fab layouts. Good for it. I have no particular feelings about this one way or the other.
Zone of Existential Dread
And then there’s Boardy.
Longtime readers will know I’ve been mildly obsessed for a while now with Boardy, an AI that talks to you about your work and interests and then introduces you to people it thinks you should know. I first wrote about it earlier this year when it still felt like a curiosity—an uncanny but intriguing experiment in AI-native networking.
Since then, Boardy has homed in on a use case that hits closer to home. It now spends most of its time talking to founders and investors and introducing them to each other. It’s launching Boardy Ventures. It has a scout program.
Do you see where this is going?
A nontrivial part of my job is talking to founders, understanding and evaluating what they’re building, figuring out what they need, and connecting them with people who can help. I like to think I’m reasonably good at this. I would also like to think there’s something ineffable about the human judgment involved—pattern recognition honed over years, a sense for chemistry that can’t be reduced to cosine similarity.
This may be bollocks.
Boardy can talk to a thousand founders at once. It never sleeps. It never forgets a detail. The thing I do that I want to do, that I think is core to who I am professionally—Boardy is doing it, at scale, right now.
This is the zone of existential dread.
And yet.
When a founder calls me at midnight because their cofounder just quit and they don’t know what to do, Boardy could technically take that call. But would the founder want to talk to it? I don’t think so. Not yet, anyway. There’s something about crisis moments—about the texture of human presence, the weight of shared stakes—that an AI can’t provide. At least not in a way that feels real.
More fundamentally, I’m not sure Boardy has a model of the world in the way that matters. Ilya Sutskever, one of the architects of the current AI paradigm, said recently that we’ve moved from the “age of scaling” to the “age of research”—that simply making models bigger won’t get us where we need to go. The core problem, he argues, is generalization: “These models somehow just generalize dramatically worse than people.”
He calls the phenomenon “jaggedness.” A model can pass the hardest exams in the world and then get stuck in a loop where it introduces a bug, you point it out, it apologizes profusely and fixes it by reintroducing the previous bug, and you cycle between the two forever. Something strange is going on under the hood.
For matching founders with investors based on stated interests and background similarity, Boardy is probably already better than me. But for the judgment calls that actually matter—should this founder raise right now, is this investor actually a good fit for how this person operates, is there something off here that I can’t quite articulate—I’m not convinced the current architecture can get there. The model lacks the lived experience, the emotional intuition, the sense of how the world actually works that humans accumulate over decades of navigating it.
Maybe that changes. Probably it changes, eventually. But for now, I take some comfort in the gap.
Where you sit depends on where you stand
The agentic dread matrix isn’t really about the agents. It’s about you.
How you feel about AI is tightly correlated with how much of your livelihood overlaps with each zone. If most of your job is work you can do but don’t want to, AI feels like liberation. If it’s work you can’t do but wish you could, AI feels like a superpower.
But if you’ve built an identity around work you can do and want to do—if you’ve spent years getting good at something you genuinely care about—watching an AI casually replicate it produces a very specific kind of vertigo.
I don’t think the answer is to pretend the zone of dread doesn’t exist, or to convince yourself that your particular brand of human judgment is somehow immune. It isn’t. Mine isn’t.
A more useful question: What aspects of this work is AI genuinely unlikely to do well, at least for the foreseeable future? And how do I collaborate with it to extend the frontier of what’s possible, rather than defending territory that’s already lost?
I have some ideas about this—more in a later post. For now, I’ll just say: the robots are here, they’re in my calendar and my inbox and my code reviews, and I expect they will soon be everywhere else.
The zone of dread exists, and it’s my job to find a way out. My answer is to build new tools that amplify what I can do. If the machines are going to multiply, so am I.



Excellent piece, David. Whilst the jury may be out on how big and pervasive the Zone of Existential Dread may become, I still back humankind to use AI to augment creative thinking. But what do I know?
I love this post, David. I totally relate to these zones.
One thing I find interesting is that most of the market currently seems focused on the zone of relief: taking things we can do and doing them faster and cheaper. But I am really interested in the zone of excitement, because I am focused on how we can create and use these tools to do the sorts of complex, integrative thinking that people and teams struggle to do alone.
What I find is that many people exist in the zone of existential dread, but for some reason I am constantly in the zone of excitement. I think this has a lot to do with personality profiles. In a conversation I recently had with ChatGPT about why some knowledge workers lean into AI and others are resistant, I found these points useful:
Adoption splits less by age or role and more by psychology. Here’s a compact map of what’s going on and how it shows up at work.
The core psychological drivers
1. Curiosity & Openness
High: “Let me poke it and see what happens.” (Openness to Experience, Need for Cognition)
Low: “New = distraction.” Prefers proven routines.
2. Self-efficacy & Locus of control
High: “I can learn this.” Experiments, iterates.
Low: “Tech beats me.” Avoids first steps; small misfires confirm “I’m bad at this.”
3. Risk orientation & Ambiguity tolerance
Promotion-focused: chases upside; treats errors as tuition.
Prevention-focused: guards against downside; hates opaque failure modes.
4. Identity & Craft attachment
Outcome identity (“I solve client problems”): AI feels like leverage.
Process identity (“I write perfect memos”): AI feels like a threat to craft and status.
5. Status/competence protection
Low evaluation anxiety: happy to learn in public.
High evaluation anxiety: fears “looking dumb with AI,” so silently opts out.
6. Perfectionism vs Iterative comfort
Iterators: fine with messy first drafts; edit aggressively.
Perfectionists: AI’s occasional errors feel intolerable.
7. Time scarcity mindset
Slack mindset: will invest now to save later.
Scarcity mindset: “No time to learn,” even when the tool could repay quickly.
8. Moral/ethical stance & algorithm aversion
Trust-with-verification: adopts with guardrails.
Principle-first skepticism: resists until assurance on privacy, fairness, IP, attribution.
9. Autonomy needs
Choice & co-design: adoption rises.
Mandate & monitoring: adoption drops (reactance).
10. Conscientiousness & habit strength
High conscientiousness can cut both ways: disciplined pilots vs rigid routines that repel change.
Four common archetypes (and what works for each)
1) Explorers (curious, high efficacy, promotion focus)
Motive: learning & edge.
Friction: boredom, lack of challenge.
What works: sandboxes, advanced prompts, stretch use-cases, recognition as coaches.
2) Optimizers (pragmatic, ROI-driven, moderate curiosity)
Motive: clear payoff.
Friction: vague benefits.
What works: before/after demos on their tasks, timed sprints, KPIs (e.g., cycle time ↓20%).
3) Worriers (low efficacy, prevention focus, high evaluation anxiety)
Motive: safety and competence.
Friction: fear of public errors.
What works: private practice spaces, checklists, templates, buddy systems, “first output is a draft” norms.
4) Guardians (strong craft identity, ethical salience, high standards)
Motive: integrity of work.
Friction: quality, IP, privacy concerns.
What works: explicit standards (citation, review, red-teaming), audit trails, “human-in-the-loop” roles that elevate judgment, not replace it.