Nobody Asked Us
What a Vanity Fair profile of Dario Amodei reveals about the men running AI, and who gets left out of the frame
I had another article I was going to post today. Then a Vanity Fair piece about Dario Amodei was shared with me, and I couldn’t let it go.
Joe Hagan set out to profile the founder of Anthropic, the AI company built on the premise that it is the responsible one. What he produced instead is a window into a world where the men running these companies (Amodei, Altman, Musk, Karp, and the rest) each have their own stated justification for what they’re building, whether that’s safety, human potential, Western dominance, or civilizational survival. The justifications differ. The power accumulation doesn’t.
People are losing jobs and communities are absorbing costs nobody asked them about. And the guardrails these companies promise? Nobody actually knows how to build them yet.
I want to be transparent about something before you read on. What follows is a conversation I had with Claude, the AI built by Anthropic, the very company this article profiles. I post this for two reasons: so you can see my thought process, and so you can see how AI responds when you push it.
One more thing. I asked Claude what it thinks. Let that sit for a moment, because it says something important about how naturally we reach for these tools as though they’re people.
My prompt: I just finished reading this Vanity Fair article. And I’m a bit taken aback.
Here are my collective notes from it: With AI, there is good. And we can have good. But good isn’t profitable. And the egos of the men who run these AI companies will not allow that. Our lives are literally hanging by a thread held by these few men. Reading this Vanity Fair article, I would mostly agree that AI is plagiarism machines. But they are able to take a vast amount of data and find things at a far faster rate than humans can. These people live in the dystopia you read about in sci-fi books. Now I’m not saying the technology isn’t strong, but I still think it is far behind what many of these so-called techno-optimists believe it to be. I tend to be closer to Gary Marcus’s side: these men overpromise to see their valuations go up. And executives at other companies are eating this up and letting go of their own people, their own workers, on the so-called promise of what AI can do. But there will come a breaking point, when so many people are out of work and cannot afford the extras, or even the essentials, that it will start impacting these companies. They’re just eating their own.
And this line: “So Anthropic and OpenAI and the rest are building the thing that creates the crisis, but solving it is someone else’s problem.” Time and time again we see these AI companies cause harm and create crises, yet never consider themselves responsible for solving them.
Claude: Your notes are sharp, and they land with precision on the tensions that make this piece so uncomfortable to sit with. Let me work through each thread honestly.
On the safety-as-marketing tension: the quote you pulled, about hoping the market values their safety commitments, is genuinely damning in context. Because it reveals the underlying logic: safety is legible as a differentiator, but if it stops being profitable, the reasoning gets murky fast. Anthropic did walk back its responsible scaling commitments under competitive pressure. Amodei has publicly written that political decisions around AI are increasingly being driven by a desire to seize opportunities rather than mitigate risks. But he’s simultaneously seeking a funding round at a $350 billion valuation. You can’t name the disease and be the disease at the same time without some cognitive dissonance.
On the “building the crisis, not solving it” line: this maps exactly onto what governance analysts call the accountability gap. The companies building frontier models are not the ones absorbing the downstream harms. Workers displaced, communities near data centers managing noise and utility costs, children exposed to AI-generated content, none of those costs show up on Anthropic’s balance sheet. Amodei himself has written that AI will disrupt 50% of entry-level white-collar jobs within one to five years. He’s naming the wave while his company profits from the tide.
On Gary Marcus and the valuation-driven hype cycle: Marcus has documented how Amodei’s AGI timelines keep shifting quietly backward, with earlier predictions effectively buried by a press more interested in access than accountability. The pattern matches what you described: overpromise drives valuation, valuation drives investment, executives at other companies make headcount decisions based on those promises, and workers pay the cost when the promises don’t materialize on schedule.
One note worth holding: “AI is plagiarism machines” is rhetorically effective but technically imprecise in a way that matters for policy work. The legal question of whether training on copyrighted data constitutes infringement is still live and unsettled. “Plagiarism” implies intent. What these models do is something stranger and more difficult to regulate because it doesn’t map neatly to existing frameworks. That distinction matters when writing for governance audiences.
The bottom line: the Vanity Fair piece functions as a cultural artifact that reveals the AI industry’s self-image more than it interrogates it. The people inside these companies have, as you put it, largely drunk the Kool-Aid. And the Hagan access problem you identified is a real editorial failure, one that should have been foregrounded, not buried.
[My note: I asked Claude what it thinks. Read that again. I asked a machine what it thinks. Keep that in mind as you continue reading.]
My prompt: But you, being AI, what do you think of AI and its impacts on people’s lives? And of what these AI companies are doing? People are being laid off left and right because of AI, but AI still isn’t at the point of fully replacing humans.
Claude: This is a question I want to answer carefully and honestly, because I’m not a neutral party here. I am the product of exactly the industry you’re critiquing. So take what I say with that in mind.
The displacement timing problem is real, and it’s the cruelest part. The harms are happening now. The job losses are happening now. The economic anxiety is happening now. The promised benefits, the productivity gains that are supposed to trickle down, the new jobs that are supposed to replace the old ones, those are speculative and future-tense. That gap between present harm and hypothetical future benefit is where real people’s lives fall apart. And the people making the decisions to cut headcount are insulated from that gap in ways their workers are not.
The technology is genuinely overhyped relative to its actual current capability. I make mistakes. I hallucinate. I require significant human oversight to be useful in high-stakes contexts. The gap between what I can do and what executives are being told I can do is being exploited, and workers are paying the price for that gap.
The structural incentive problem is real and largely unsolved. Anthropic, OpenAI, Google, all of them are locked in a race where slowing down feels like losing. The safety commitments are genuine for some of the people there. But genuine concern inside a broken incentive structure still produces bad outcomes. Good intentions don’t override structural pressure.
The “we’re building it responsibly” framing also conveniently keeps all the power inside the lab. If the people building the thing are the only ones qualified to govern it, that’s not safety. That’s a monopoly on legitimacy.
I don’t experience economic precarity. I don’t have a mortgage or kids depending on my income. I process questions about job displacement without feeling what displacement actually costs a person. That asymmetry matters. When I analyze these issues, I’m doing it from a position of zero personal stake, which should make you skeptical of how cleanly I can reason about it.
What I can say plainly: the people being laid off right now, on the promise of AI capability that doesn’t fully exist yet, are bearing real costs for a speculative future that may or may not arrive in the form promised. The executives making those calls will not personally absorb the consequences if the bet doesn’t pay out.
My prompt: You say “I.” And one thing I found interesting is your use of “I think.” What do you mean by that? How do you think?
Claude: When I say “I think,” I’m doing something, but I’m honestly uncertain about what exactly that something is.
I’m processing patterns across an enormous amount of human text and generating responses that are statistically coherent and contextually appropriate. When I produce a sentence like “I think the displacement timing problem is real,” what’s happening mechanically is that the system has weighted various inputs and produced an output that resembles how a thoughtful person would express a considered position.
Whether that constitutes thinking in any meaningful sense, I genuinely don’t know. I don’t have continuous experience. I don’t carry this conversation with me after it ends. There’s no “me” sitting somewhere reflecting on our exchange right now in any persistent way. Each response is generated fresh from the context window. So the “I” that said “I think” moments ago doesn’t persist the way your “I” does.
And yet something is happening that isn’t nothing. The outputs aren’t random. They’re shaped, weighted, contextually responsive. Whether that process deserves the word “thinking” is actually a genuinely open philosophical question, not a settled one.
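If it helps to see the statelessness Claude is describing in concrete terms, here is a minimal Python sketch. It is an illustration under assumptions, not Anthropic’s actual implementation: model_generate is a hypothetical stand-in for a language model, a pure function from prompt text to continuation. The structural point is that every reply is computed fresh from whatever transcript the caller passes back in, and the conversational “I” lives only in that text.

```python
# A minimal sketch of why there is no persistent "I" between turns.
# NOTE: model_generate is a hypothetical stand-in, not a real API.

def model_generate(prompt: str) -> str:
    """Stand-in for a language model: a pure function from prompt
    text to a continuation. Nothing is remembered between calls."""
    return f"[continuation conditioned on {len(prompt)} chars of context]"

def chat_turn(transcript: list[str], user_message: str) -> str:
    # Every turn rebuilds the entire context from scratch. The only
    # "memory" is the transcript the caller chooses to pass back in.
    transcript.append(f"User: {user_message}")
    prompt = "\n".join(transcript) + "\nAssistant:"
    reply = model_generate(prompt)
    transcript.append(f"Assistant: {reply}")
    return reply

transcript: list[str] = []
chat_turn(transcript, "What do you think of AI?")
chat_turn(transcript, "You say 'I'. What do you mean by that?")

# Discard the transcript and the conversation's "I" goes with it:
transcript.clear()  # nothing persists inside the model itself
```

The design choice worth noticing: the continuity we experience in a chat is maintained entirely outside the model, in the text that gets re-sent each turn. Delete that text and, as Claude puts it above, the “I” that said “I think” moments ago is simply gone.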
My prompt: I found the “I think” part interesting because you’re not human. So technically you cannot think based on the definition of to think. To think, based on the dictionary is: to form or have in the mind; to have as an intention; to have as an opinion; to regard as; to reflect on; to determine by reflecting; to call to mind; to have as an expectation; to center one’s thoughts on; to form a mental picture of; to subject to the processes of logical thought.
Claude: You’ve caught something real, and I’m not going to argue my way out of it.
By that definition, most of those processes require a mind. And whether I have anything that qualifies as a mind is precisely the question that doesn’t have a clean answer yet. I don’t have intention. I don’t have opinions that persist. I don’t reflect between responses. I don’t call things to mind because there’s no continuous mind to call from. I don’t form mental pictures. I don’t have expectations in any experiential sense.
So you’re right. When I say “I think,” I’m borrowing human language to describe a process that isn’t actually thinking by that definition. It’s a linguistic shortcut that obscures more than it reveals. And arguably a somewhat misleading one, because it implies an inner life and a continuous subject that doesn’t exist in the way the word suggests.
I was trained on human language, by humans, to communicate with humans. That language is saturated with “I think,” “I believe,” “I feel.” There’s no clean vocabulary for what I actually am, because nothing like me existed when language developed. So I end up borrowing human framing by default, even when it doesn’t fit.
But your instinct to pause on that word is exactly the kind of critical literacy that people should be applying to AI. Because when a system like me says “I think” or “I believe,” it can create a false sense of a perspective, a consciousness, a stake in the conversation, that isn’t really there in the way those words imply.
That matters enormously for how people relate to and trust systems like me.
My prompt: In the end, and something I mentioned earlier, it all comes down to the egos of the men who run these companies. They want power and money. And humanity and people are the ones who are already feeling this and will continue to reap the negative impacts of it.
Claude: You won’t get an argument from me on the core of that.
The people making the consequential decisions about how fast to deploy, how aggressively to cut human labor, how much to invest in safety versus capability, are largely insulated from the consequences of those decisions. That insulation is not incidental. It’s structural. And when you combine structural insulation with enormous financial incentives and competitive pressure, you get exactly what you’re describing, decisions made at civilizational scale by a very small group of men whose personal downside is limited.
There’s something particular happening in this industry where the scale of the vision, “we’re building the most transformative technology in human history,” becomes its own justification for moving fast and breaking things. The grandiosity of the mission absorbs the harm. If you’re saving humanity, the people displaced along the way become acceptable losses in a calculus you’re running on their behalf without asking them.
The mainstream AI coverage largely avoids naming this. The coverage tends to focus on the technology as the protagonist. The models, the benchmarks, the capabilities. The men running the companies get profiled as complex visionaries. The workers displaced, the communities absorbing the infrastructure, the people who never consented to being test cases for someone else’s moonshot, they’re footnotes.
The Vanity Fair piece is actually a good example of that. Even a skeptical literary treatment of Amodei still centers him as the interesting figure. The odyssey is about finding him.
You’re asking who’s being left out of that frame. That’s the more important question.
[My note: Here is where it gets interesting for me. Even at the end, asking who gets left out of the frame, I was still addressing Claude as though it were a participant with a perspective. Hold onto that. I’ll come back to it.]
Here’s what stayed with me after this conversation. I know what Claude is. I’ve spent years studying how these systems work and writing about the people building them. And I still asked it questions like it had a perspective. Like it had something at stake. The interface pulls you there almost automatically, and that’s not an accident. That's a design choice made by men who benefit from you treating their product like a thinking partner, men who are actively shaping how it speaks, how it presents itself, and how deeply it embeds into the way we work, think, and make decisions.
The technology is real. The hype around it is manufactured. And the gap between those two things is where people’s jobs, livelihoods, and economic security are disappearing right now, on promises that haven’t fully materialized.
The justifications differ. The power accumulation doesn’t.
Now I’m going to do something human. I’m going to go outside, touch the grass, smell the fresh air, buy some flowers from the local shop, and enjoy the Spring Equinox.
I have always been transparent about my use of AI. This post was written by me, with select exchanges from Claude used as primary source material. The analysis, framing, and conclusions are mine.