Because AI Said So
A few months ago I ended a consulting gig on my terms. I had built a brand strategy from scratch. Guidelines. A full business development framework designed to carry them through four quarters. Nearly 20 years of experience poured into work that was meant to last.
A day after I handed it over, they came back with: “AI said to do marketing this way.”
The prompt, from what I understand, was a general revenue question. Nothing about their audience, their market, their positioning. None of the context the strategy was built on. Just a number and a question dropped into ChatGPT.
That output overrode everything. And I have been sitting with that moment ever since, because it pointed at something much bigger than one bad meeting.
The Average Is the Point
A friend who is an AI/ML leader said something recently that’s been stuck in my head: AI gives you the average advice. It synthesizes what’s already been said, already been published, already been validated by volume. Ask it a strategic question and you get a competent, reasonable, and thoroughly unoriginal answer. That’s not a flaw; that’s what AI was built to do. It uses patterns, statistics, and data to simulate understanding without the true awareness that we have, and it falls apart when things get complex. AI lacks human depth.
The work that actually moves organizations forward is rarely the average. It’s the brand position that’s uncomfortable before it’s obvious. It’s the strategy that runs counter to what everyone else is doing. It’s the person in the room who says, “this is the wrong direction, and here’s why.” That pushback. That friction. That willingness to argue for a better answer instead of a faster one.
AI doesn’t have that. Not because the technology isn’t mature enough, but because that kind of judgment isn’t a pattern you can extract from existing data. It comes from learning, growing, questioning, challenging, trial and error. It requires complex, contextual thought, lived and shared experience, consciousness, and emotional intelligence. None of which AI has. And we are systematically treating that as optional.
And I’d push this further. There’s a whole ecosystem of people now who think prompting AI to “be a marketing expert” is the same as having one (I’m using marketing as an example, but really, insert any role). They’re not asking AI to assist their thinking. Some are asking it to replace expertise they never had, or have decided they no longer need to pay for. The output looks authoritative and uses the “right” language. It sounds like someone who knows what they’re talking about. But it isn’t. A prompt describing expertise is not expertise. And an organization making strategic decisions based on AI performing a role is not the same as having someone in that role who has failed, learned, pushed back, and earned the instinct to know when something is wrong.
Nobody Knows What They Didn’t Get
The question isn’t whether to use AI; it’s whether we’re being honest about what we’re trading away when we do. The problem is that so many people now trust it completely, without understanding what they’re actually trusting it to do. When “AI said so” becomes a strategic justification, you’re not just outsourcing execution, you’re outsourcing judgment. And once that happens, the people who would have pushed back get quieter. Their experience starts to read as friction instead of value. Organizations now get the average and then push it out at scale with efficiency and confidence.
And the most dangerous version of this is that so many people never notice what they didn’t get. Not the obviously wrong AI output; that’s easier to catch. The dangerous version is the competent, plausible, professional-looking output that quietly smooths over the thing that needed to be complicated. The insight that was never surfaced, the question that was never asked, the creative idea that never had space to exist.
The Asymmetry
Here’s what made me write this today. Mark Zuckerberg wants to use an AI version of himself to communicate with his employees. His presence, his voice, his relationship with the people who work for him, outsourced to a model. Meanwhile, thousands of workers across the industry are being laid off. The justification? AI can do the job just as well, and faster.
I have one question: if that logic applies to workers, why doesn’t it apply at the top?
If an AI can be you to your own employees, you are not doing the job. You are a brand with a salary. And the same cost-cutting logic being applied to the people being let go should be applied to executive compensation packages too. The choices being made about who gets replaced, and who gets a model built in their likeness instead, tell you exactly what’s being valued and what isn’t.
The Judgment Tax
Every time a complex question gets handed to a model because it’s faster, cheaper, or less politically uncomfortable than asking someone to defend a position, we lose something. The decision gets made, the output looks professional, and nobody in the room knows what they didn’t get. Over time, organizations that operate this way will drift toward each other. We’ll see strategies converge, brand voices blur, and positioning homogenize. Because they all made the same decision, and they all got their answer from the same place.
Who’s Asking the Right Questions
There are people doing this work thoughtfully. Asking harder questions about what we’re trading away. Here are just a few, but there are so many others:
One I keep returning to is Sam Illingworth, a professor in Edinburgh and the creator of the Slow AI Curriculum for Critical AI Literacy. His work is built around one core question: when do you use AI, and when do you leave it alone? His framework puts judgment back at the center.
Luiza Jarovsky writes Luiza’s Newsletter, covering AI governance, policy, and the legal and ethical implications of AI adoption. She writes directly about what AI costs humans, including displacement, dependency, and the illusion of productivity that doesn’t account for human limits. Sharp, rigorous, and pro-human in the clearest sense.
Melanie Mitchell, a cognitive scientist at the Santa Fe Institute, writes AI: A Guide for Thinking Humans. She brings actual AI research to bear on the question of what these systems can and can’t do, without hype in either direction. If you want to understand the real capabilities and limits, start here.
AD writes AI Governance, Ethics and Leadership, tracking AI governance from the ground up with a consistent focus on what it means for workers, communities, and everyday people.
The Question to Carry
Next time someone in your organization defers to “AI said so,” ask the follow-up: “What did AI not know when it said that?”
What context was missing? What relationship history wasn’t in the prompt? What institutional memory, what hard-won judgment, what uncomfortable truth did the query leave out? What would the person with 20 years of experience have said that the model couldn’t?
That gap is where expertise lives. Whether we keep making space for it is a choice we’re making right now. Most of us just aren’t recognizing it as one.
The average advice is available to everyone. The question is what you’re willing to bring that isn’t.
This post was written by me, with editing support from AI tools, because even writers appreciate a sidekick.