Why I'm Canceling ChatGPT
Fair warning: this isn’t my usual kind of post. While I enjoy talking about politics and current events, I don’t like writing about them. I typically just ramble about my own life — cleaning out my grandparents’ yard, reflecting on things I’ve learned, occasionally crying about my career. You know, light stuff.
But since I’ve probably talked with most of you about AI before, and some big things have happened in the past 48 hours, I wanted to share some thoughts.
My nickname in some group chats with friends is “ChatGPT Peck.” It’s a joke, mostly, but it’s earned. My answer to everything — restaurant recommendations, debugging code at work, writing emails I don’t want to write, figuring out what’s wrong with my car — has been “hold on, let me ask ChatGPT.” I pay for the thing. I use it probably thirty times a day. It has become, embarrassingly, one of my most-used tools for thinking.
So when I tell you I’m canceling my subscription, I want you to understand that this isn’t performative. This costs me something. I’m not dunking on a company I’ve never used. I’m walking away from a tool I genuinely rely on because of what happened this week — and what it says about where we’re headed.
A few weeks ago, Matt Shumer published a piece called “Something Big Is Happening” that immediately became the thing I forwarded to every person in my life. He captured something I’d been feeling but couldn’t articulate — that AI had crossed some invisible line, quietly, while most people were still making jokes about it writing bad poetry. The tools I use every day at work are no longer assistants. They’re collaborators. Sometimes they’re better than me. And the pace isn’t slowing down. It’s accelerating in a way that makes my stomach do a little flip when I think about it too hard.
I’d been wanting to write that post myself, frankly. He just beat me to it and did it better than I would have.
The reason I bring it up is context. AI is no longer a fun trick. It is the most powerful technology ever built, and it is improving on a curve that the people building it describe as historic. That means the question of who controls it — and what guardrails exist around it — isn’t theoretical anymore. It’s the whole game.
Which brings me to this week.
Anthropic — the company that makes Claude, the AI I’ve increasingly been switching to for work — had been negotiating a contract with the Department of War. They’ve been on classified military networks since 2024. They were the first AI company to get there, actually.
The negotiation hit a wall over two things. Just two. Anthropic asked for written assurances that their models wouldn’t be used for mass domestic surveillance of American citizens, and that humans would remain in the loop on lethal decisions — meaning no fully autonomous weapons. That’s it. They supported every other lawful military use of the technology. They weren’t trying to play politics. They were drawing a line around two specific things that most people, if you sat them down and explained it, would agree are worth drawing a line around.
The Department of War’s response was to label Anthropic a “supply chain risk to national security” — a designation that has historically been reserved for adversaries of the United States, and has never been applied to an American company. Secretary of War Pete Hegseth announced it on X with the kind of language you’d normally see aimed at a foreign threat, not a company in San Francisco that was literally the first to help the military use AI.
And then, within hours, OpenAI swooped in.
Sam Altman (OpenAI’s CEO) posted that OpenAI had reached an agreement with the Department of War to deploy their models on classified networks. He claimed the deal included the same protections Anthropic had been asking for — prohibitions on mass surveillance and autonomous weapons. The Department of War endorsed it. Hegseth reposted it. Everyone shook hands.
If you’re reading that and thinking “wait — so OpenAI got the same safeguards Anthropic wanted, but Anthropic got blacklisted for asking?” — yeah. That doesn’t make any sense.
Here’s where it gets personal. Before I joined Fulfill, I worked at Palantir. Whatever you think of Palantir — and people have lots of opinions — one thing they did well was take the ethics of deploying powerful technology inside the federal government seriously. They talked about it constantly. Engineers working on government systems had access to ethics resources. There were real guardrails on how data was used, who could access it, how it was audited. They turned down projects. The internal culture around responsible deployment wasn’t perfect, but it was real. It was structural. It was baked into the work, not bolted on after the fact.
What Anthropic was doing looked a lot like that to me. They weren’t refusing to work with the military. They were saying: we’ll do this, but here are two lines we won’t cross, and we’d like that in writing. That’s not arrogance. That’s a company doing exactly what we should want companies to do when they build something this powerful.
And they got punished for it.
Meanwhile, the company that slid into the chair while it was still warm has a track record that makes me nervous. I’m not here to write a hit piece on Sam Altman — he’s obviously one of the more consequential entrepreneurs alive, and OpenAI’s role in bringing AI to the mainstream is undeniable. But the nonprofit-to-for-profit conversion, the boardroom drama, the way people in Silicon Valley talk about him when the mics are off — the pattern is of someone who will do whatever it takes to make more money.
OpenAI jumped at a contract within hours of Anthropic getting labeled a national security risk. The timing alone tells you something. Whether the safeguards in their agreement are real and enforceable — and I genuinely don’t know — the optics of that move are concerning.
Let me zoom out for a second, because the reason any of this matters goes beyond corporate drama.
We are finally at the point where AI is powerful enough to do things that used to be impossible. Stitch together massive, disconnected data stores. Build profiles on millions of people using financial records, browser history, location data, purchasing patterns — things that individually seem harmless but together paint a picture of someone’s entire life. Before AI, the data existed, but it was so sprawling and fragmented that assembling it at scale was practically impossible unless you were looking for a specific person.
The Edward Snowden revelations showed us the ambition. AI provides the capability.
That’s what “mass domestic surveillance” actually means. Not a guy in a trench coat following you around. A system that knows where you’ve been, what you’ve read, who you’ve talked to, and what you’re likely to do next — running quietly, at scale, on everyone.
And autonomous weapons — AI systems making lethal decisions without a human in the loop — involve a kind of nuance that we are nowhere close to trusting a machine with. There’s a reason Hollywood keeps making movies about this exact scenario. The margin for error in life-or-death decisions is zero, and the current generation of AI, as good as it is, still hallucinates, still makes confident mistakes, still lacks the kind of moral reasoning that a trained soldier brings to an impossible moment.
I’m not against AI being used in military operations. With the right guardrails, it could make operations more targeted, more precise, and could dramatically reduce civilian casualties. That’s a future worth building toward. But the key phrase is “with the right guardrails.” And what happened this week suggests that the people making these decisions are less interested in guardrails than in compliance.
I’d be leaving something out if I didn’t mention what happened today.
As I’m writing this, the United States and Israel have launched large-scale military strikes against Iran. Explosions across Tehran. Reports of civilian casualties, including children. Iranian retaliation hitting U.S. bases and Gulf states. The supreme leader reportedly killed. A region destabilizing in real time.
This is not a drill. This is not a hypothetical. This is the administration that just strong-armed an AI company into silence, that labeled them a national security threat for asking about guardrails on surveillance and autonomous weapons — now engaged in the kind of military operation where those exact questions matter most. The timing is hard to ignore. The people who wanted unrestricted access to the most powerful AI on the planet are, today, using military force at a scale we haven’t seen in years.
I don’t know how to tie that up neatly. I don’t think it ties up neatly. But it solidifies something for me: it matters who has these tools, and it matters what rules govern their use. The companies building this technology are one of the few pressure points that exist. When one of them holds the line and gets destroyed for it, and another one rushes in to fill the gap — that should bother you, regardless of where you sit politically.
So I’m canceling my ChatGPT subscription. It’s a small thing. I know that. $200/mo isn’t going to change OpenAI’s trajectory. But I’ve spent the last year telling everyone I know to try AI, to lean into it, to not be afraid of it. And if I’m going to keep doing that — and I am — then I want to be sending my money to the company that, when it mattered, said “no” to something it believed was wrong, even when saying “no” meant losing everything.
I’m switching to Claude. Not because it’s perfect, and not because Anthropic is some flawless organization. But because this week, when the pressure was on, they held a line that I think matters. And I’d rather give my money to the company that gets punished for doing the right thing than the one that gets rewarded for doing the convenient thing.
If you use AI — and you should — I’d encourage you to think about where your twenty bucks goes. It might matter more than you think.
Thanks for reading something a little different from me. If you made it this far, I appreciate you letting me think out loud about something I care about. Normal programming (career reflections, life lessons, stories about my grandma) will resume shortly.
And if you haven’t read Matt Shumer’s “Something Big Is Happening”, go read it. It’s the best thing I’ve read this year on what’s actually going on with AI — written for normal people, not tech bros. It’ll take you fifteen minutes and you’ll think about it for weeks.
Edit — 5:00pm EST February 28, 2026
This is exactly why I don’t write about current events. Things move fast. I hit publish and the ground shifted under me within minutes.
Shortly after this post went up, OpenAI published a detailed breakdown of their agreement with the Department of War. And I have to be honest — it complicates the story I just told you.
OpenAI is claiming they negotiated a contract with more protections than Anthropic’s original agreement — cloud-only deployment (which structurally prevents edge use for autonomous weapons), a safety stack they retain full control over, cleared OpenAI engineers embedded in the process, and contract language that locks in current surveillance and weapons laws even if those laws change later. They also drew a third red line Anthropic didn’t have: no high-stakes automated decision-making, like social credit-style systems.
They also explicitly said Anthropic should not be designated a supply chain risk, and that they asked the government to offer the same terms to all AI labs — including resolving things with Anthropic. That’s hard to square with the “OpenAI swooped in like a vulture” version of events I wrote above.
So where does that leave me?
The timing still makes me uncomfortable. You can negotiate the best contract in the world, but announcing it the same night your competitor gets publicly labeled a national security threat — with Hegseth reposting it approvingly — carries a certain energy. Whether it was opportunism dressed in safety language or genuine principled negotiation, I don’t know. I have my suspicions, shaped by everything I know about how Altman operates, but suspicions aren’t facts.
The bigger question is whether these protections are real or performative. “We can terminate the contract if the government violates the terms” sounds strong on paper. But would they actually pull the plug on a classified military contract with the Department of War? That’s a trust question, and I don’t have a clean answer.
What hasn’t changed: an American company got labeled a supply chain risk — a designation normally reserved for adversaries — for asking about guardrails on surveillance and autonomous weapons. That happened. And it should bother you regardless of what OpenAI negotiated afterward. The strongest version of this story was never really about OpenAI being a villain. It’s about a system that punishes one company for negotiating in public while rewarding another for negotiating quietly. Anthropic got destroyed for saying the same things OpenAI says it believes.
I’m still canceling ChatGPT. I’m still switching to Claude. But I owe you the full picture, not just the version that was cleanest when I first sat down to write.