I was catching up with an old friend last week when I shared a hypothesis that’s been on my mind: people who have experience managing, coaching, or directing others might have a surprising advantage in the age of AI. He encouraged me to write about it, and here we are. The core of the idea is this: there’s a subtle art to getting the best out of people, and I’m beginning to believe the same is true for getting the best out of our AI partners.
It’s a strange disconnect I’m seeing everywhere. For all the buzz, a surprising number of companies are struggling to turn their AI experiments into real, lasting value. The latest McKinsey Global Survey on AI found that while about three-quarters of organizations are using generative AI, many pilots are stalling out. And it turns out, the problem usually isn’t the tech. The Boston Consulting Group (BCG) puts it bluntly with their “10-20-70 rule”: AI success is about 10% algorithms, 20% technology, and a massive 70% people and processes. It’s a leadership challenge, not a technical one. A people problem, not a silicon one.
This brings me back to my hypothesis. My gut tells me that those of us with management experience have a head start in this new world because we’ve been trained—formally or through the hard school of experience—to be ruthlessly clear in our communication. We learn to define what success looks like, provide guardrails for the work, and then guide it with a steady hand through gentle correction and continuous feedback.
I’ve seen this play out as talented developers try to adopt these new tools. Many struggle, but I’ve noticed a recurring pattern: they fail to give the model enough context, get a generic or wrong answer, and walk away thinking the AI isn’t smart enough.
When this generation of LLMs first arrived, it was natural to treat them like a search engine—ask a simple question and expect a perfect answer. But that’s like walking up to a new junior engineer on your team, saying, “Build me a login system,” and expecting a production-ready feature a week later. You’d never do that.
You’d give them architectural documents, point them to existing libraries, explain the security requirements, and set up regular check-ins. You’d provide the context, the constraints, and the success criteria.
This is the heart of the matter. The very same principles apply when working with an AI. A vague prompt like, “Write a blog post on prompt engineering,” will produce a generic, soulless article. But a more “managerial” prompt changes the game entirely: “Synthesize a 2,000-word blog post on prompt engineering using the research and references provided in this document. Here is an outline to follow. Ensure the tone and style match these three writing samples.”
Suddenly, you’re not just a user asking a question. You are a manager setting a clear direction. What we call “prompt engineering” is, in many ways, a new form of management. The research community is even starting to use a more fitting term: “context engineering”—the strategic curation of the entire information environment in which the AI operates.
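To make the "managerial" prompt concrete, here is a minimal sketch of what context engineering looks like in code. Everything here is illustrative—the function name, the section labels, and the example inputs are my own, not from any library—but the structure mirrors how a manager briefs a new hire: role, goal, constraints, reference material, success criteria.

```python
def build_managerial_prompt(role, goal, constraints, references, success_criteria):
    """Assemble a context-engineered prompt the way a manager briefs a new hire.

    All names here are illustrative; the point is the structure, not any
    particular API. The result is a single string you can hand to any model.
    """
    sections = [
        f"Role: You are {role}.",
        f"Goal: {goal}",
        "Constraints:\n" + "\n".join(f"- {c}" for c in constraints),
        "Reference material:\n" + "\n".join(f"- {r}" for r in references),
        f"Success criteria: {success_criteria}",
    ]
    return "\n\n".join(sections)


# A vague ask ("Write a blog post on prompt engineering") becomes a brief:
prompt = build_managerial_prompt(
    role="a senior technical writer",
    goal="Synthesize a 2,000-word blog post on prompt engineering.",
    constraints=[
        "Follow the attached outline.",
        "Match the tone of the three attached writing samples.",
    ],
    references=["research notes", "outline", "writing samples"],
    success_criteria="A reader can apply the techniques without further research.",
)
print(prompt)
```

The design choice worth noticing is that none of this is clever: it is the same brief you would write for a person, serialized into text.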
When you see it this way, the parallels between managing people and directing AI are impossible to ignore. A manager’s ability to articulate a clear vision becomes the skill of crafting a precise prompt. The strategic delegation of tasks becomes the art of defining the AI’s role, deciding which work is best for the model and which needs a human’s creative or empathetic judgment. And the rhythm of performance management—monitoring progress and giving feedback—is a perfect mirror of the iterative AI workflow, where we critically evaluate an output and refine our prompts to get closer to the goal.
This isn’t to say that every developer needs to become a people manager. Many ICs are brilliant communicators. But the daily work of management is a constant exercise in clarity and context-setting, and that practice suggests the “soft skills” of management—clarity, context-setting, and iterative feedback—are becoming the essential “hard skills” of the AI-first era. Our job is no longer just to write the code, but to effectively guide the intelligence that will help us write it. We are all becoming managers of a different kind of mind.
This is just the beginning, and it points to a powerful new workflow. As Simon Willison has observed, models are getting remarkably good at writing prompts themselves. This indicates that prompt creation itself is a task perfectly suited for AI. So here is the call to action: before you dive into solving a complex problem, make your first step a collaboration with an AI to build the perfect prompt for the job. This is the next layer of abstraction. We are moving beyond simply giving instructions to strategizing with our AI partners about what the best instructions should be. The core managerial skill of setting a clear, high-level mission remains—only now, we’re applying it to the meta-task of designing the conversation itself.
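One minimal way to act on that call: make the first model call a request for a prompt, not for the work itself. The sketch below uses a hypothetical `ask_model` function (string in, string out) as a stand-in for whatever LLM client you use; the meta-prompt wording is my own, offered as a starting point rather than a recipe.

```python
def draft_prompt(ask_model, task_description):
    """Meta-prompting: ask the model to write the prompt before doing the work.

    `ask_model` is a placeholder for any LLM call that takes a string and
    returns a string; swap in your own client.
    """
    meta_prompt = (
        "You are an expert prompt engineer. Write a detailed prompt that, "
        "given to a capable language model, would produce an excellent result "
        f"for this task:\n\n{task_description}\n\n"
        "Include the role, constraints, and success criteria the model should "
        "follow. Return only the prompt."
    )
    return ask_model(meta_prompt)


# Demonstration with a toy stand-in model that simply echoes its input,
# so you can see exactly what the meta-prompt contains:
echo_model = lambda text: text
drafted = draft_prompt(echo_model, "Summarize our Q3 incident reports for executives.")
print(drafted)
```

In practice you would review and edit the drafted prompt before using it—which is exactly the managerial loop again, applied one level up.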