Don’t Download Claude, Either

Sources

Hao, Karen. Empire of AI: Dreams and Nightmares in Sam Altman’s OpenAI. Penguin Press, 2025. https://bookshop.org/a/83711/9780593657508.

Amodei, Dario. “Statement from Dario Amodei on Our Discussions with the Department of War.” Anthropic, February 26, 2026. https://www.anthropic.com/news/statement-department-of-war.

Di Benedetto, Antonio G. “Talking to Windows’ Copilot AI Makes a Computer Feel Incompetent.” The Verge, November 18, 2025. https://www.theverge.com/report/822443/microsoft-windows-copilot-vision-ai-assistant-pc-voice-controls-impressions.

Copp, Tara, Elizabeth Dwoskin, and Ian Duncan. “Anthropic’s AI Tool Claude Central to U.S. Campaign in Iran, amid a Bitter Feud.” The Washington Post, March 4, 2026. https://www.washingtonpost.com/technology/2026/03/04/anthropic-ai-iran-campaign/.

Kang, Cecilia. “Backed by Anthropic, a Super PAC Group Begins an Ad Blitz in Support of A.I. Regulation.” Technology. The New York Times, February 23, 2026. https://www.nytimes.com/2026/02/23/technology/ai-pac-ad-blitz.html.

Kolodny. “Defense Tech Companies Are Dropping Claude after Pentagon’s Anthropic Blacklist.” AI Age. CNBC, March 4, 2026. https://www.cnbc.com/2026/03/04/pentagon-blacklist-anthropic-defense-tech-claude.html.

Levitz, Eric. “The AI Industry’s Civil War.” Vox, March 4, 2026. https://www.vox.com/politics/481229/anthropic-pentagon-openai-amodei-ai.

Nazzaro, Miranda. “OpenAI Faces Backlash over New Pentagon Contract.” The Hill, March 3, 2026. https://thehill.com/policy/technology/5765480-openai-pentagon-deal-backlash/.

O’Brien, Matt. “Pentagon Dispute Bolsters Anthropic Reputation but Raises Questions about AI Readiness in Military.” Business. AP News, March 3, 2026. https://apnews.com/article/anthropic-pentagon-openai-claude-chatgpt-military-ai-b2bbcf5fda3f27353eae1e0eb7ab07b6.

“One in Four Federal Lobbyists Now Work on AI.” Public Citizen, February 24, 2026. https://www.citizen.org/news/one-in-four-federal-lobbyists-now-work-on-ai/.

Perlo. “OpenAI Alters Deal with Pentagon as Critics Sound Alarm over Surveillance.” NBC News, March 3, 2026. https://www.nbcnews.com/tech/tech-news/openai-alters-deal-pentagon-critics-sound-alarm-surveillance-rcna261357.

Samuel, Sigal. “The One Question Everyone Should Be Asking after OpenAI’s Deal with the Pentagon.” Vox, March 3, 2026. https://www.vox.com/future-perfect/481322/pentagon-anthropic-openai-surveillance-china.

Transcript

Hi, it’s Wednesday, March 4th, 2026, and you’re tuned into Why, America? I’m your lawyer friend Leeja Miller. After a recent falling out between the Pentagon and Anthropic, the AI company known for creating the AI chatbot Claude, a key competitor to OpenAI’s ChatGPT, the public has made it known just how skeptical they are of AI companies partnering with the government. Instances of people deleting ChatGPT from their phones soared, and Claude, always lagging behind ChatGPT, was catapulted to the most popular phone app in the country. The public has clearly voted that it wants guardrails around the use of AI, especially by government actors. But trusting a private company that is ALREADY deeply embedded in the Department of Defense to take a moral high ground that will genuinely protect human life, democracy, and more is dangerously misplaced. Today, we’re discussing the fallout from the deal gone sour with the Pentagon and why AI companies, even with high-minded ethics statements on their well-designed websites, aren’t going to save us from the worst consequences of the AI race to the bottom.

AD

With all the chaos feeling like it’s happening everywhere, all around me, all the time, it can be really hard to balance my mental health. One thing I really love doing, and try to do consistently for my mental health, is getting some exercise. I love weight lifting and I love my gym, but I’m gonna be honest: the weight of the world can often really get to me (no pun intended). I struggle to get to the gym on a consistent basis, and when I’m there I feel unfocused and sometimes unmotivated, even though I KNOW lifting heavy is amazing for my mental health, for managing my ADHD, and for ensuring my longevity. I’ve just always needed a little extra support. That’s where my partner on today’s video, trainwell, comes in. Trainwell pairs you with a personal trainer who gives you remote, one-on-one personal training that’s highly personalized for your fitness level, goals, and needs. I’ve been using trainwell for three weeks, and this is the first time in a very long time that I have managed to happily stick to a truly consistent, sustainable training schedule. For so long I’ve felt like I was just treading water and making zero progress in the gym, but with the help of my trainwell trainer I genuinely feel like I can take on every workout and continue showing up for myself consistently. Here’s how it works: after signing up, I was paired with my trainer, Jill. I met with her via a one-on-one Zoom call, and we had a really lovely conversation about my goals and my hang-ups and obstacles when it comes to health and fitness. Jill was so helpful with giving me really down-to-earth, realistic, and achievable advice.
After our meeting, Jill programmed out four weeks of workouts for me, which I access using the trainwell app. Each exercise lets me record the weight I used and the number of reps I did, and has helpful videos and notes about proper form. And after every workout I send my feedback to Jill, and she’s been super helpful with giving me advice about certain exercises or otherwise just giving me a daily pep talk. Trainwell has genuinely been a perfect balance of flexibility, because I can take Jill in my pocket no matter where I go in the world, and accountability. Knowing she’s checking in and rooting for me has genuinely made all the difference in me showing up to the gym consistently over the last few weeks and feeling like I’m actually making progress. I’m excited to see how I progress as I continue working with Jill and trainwell. And trainwell is great because if you get matched with a trainer that isn’t perfect for you, you can change any time, ensuring you have personalized coaching with someone who gets you. Try trainwell for yourself today. Take the quiz now at go dot trainwell dot net slash leejamiller to get matched with your perfect trainer and start your FREE 14-day trial. Thanks, trainwell!

Here’s what’s happening, if you need to get caught up, because there’s A LOT GOING ON RIGHT NOW. So after the operation in Venezuela that ended in the capture of Nicolás Maduro, Anthropic, the creator of the chatbot Claude, raised concerns that its technology was used in ways that ran counter to its company ethics. Because let’s be really clear: Anthropic, creator of Claude, has extensive Department of Defense contracts, and its technology was used not just for the operation in Venezuela but also in the war Trump just started with Iran. According to reporting from The Washington Post, quote: “The military’s Maven Smart System, which is built by data mining company Palantir, is generating insights from an astonishing amount of classified data from satellites, surveillance and other intelligence, helping provide real-time targeting and target prioritization to military operations in Iran, according to three people familiar with the system.

Embedded into the system is Anthropic’s AI tool Claude, a technology that was banned by the Pentagon last week after heated negotiations over the terms of its use in war.

Over the last year military planners have seen Claude, paired with Maven, mature into a tool that is in daily use across most parts of the military.” End quote. This is the first time we’ve seen AI deployed so extensively in an active war, and Anthropic is at the center of it. Despite this, Anthropic and its founder Dario Amodei espouse lofty goals for AI safety and development.

Specifically, Anthropic is concerned with AI being used for mass domestic surveillance and for autonomous weapons technology–meaning weapons that act without the oversight of a human. This led last week to the culmination of an intense back and forth with the Pentagon over a specific Department of Defense contract, with Anthropic digging in its heels and requiring assurances from the DOD that its technology would not be used in these ways. And when you push back against this regime, you face consequences. Anthropic was given until 5pm on Friday to relent or, in retaliation, be blacklisted: named a supply chain risk by the government, meaning no government contracts for Anthropic. The deadline passed, and Hegseth announced Anthropic and Claude were blacklisted. Within HOURS, OpenAI, Anthropic’s competition and creator of ChatGPT, announced a deal with the Pentagon. Which of course looked really bad. It was basically OpenAI saying, we’ll do what Anthropic won’t; we have no qualms with the use of our technology for domestic surveillance or autonomous weapons. After intense backlash, Sam Altman, OpenAI’s founder and CEO, released a statement acknowledging that the rushed deal looked sloppy and opportunistic. But the thing with Sam Altman is that he has a proven track record of saying whatever his audience in the moment wants to hear, leading to a lot of inconsistencies and outright lies, because he’s willing to tell one group one thing to make them happy and then tell another group a different thing to make THEM happy. That has historically created a lot of chaos inside OpenAI and makes him, in my opinion, inherently untrustworthy. But he knows how to say the right things to make it sound like he really cares and is being thoughtful and introspective about his shitty behavior. And then he turns around and continues with the shitty behavior.
It’s like when someone has gone to just enough therapy that they know how to weaponize therapy talk to manipulate you.

In the fallout over the quick Department of Defense deal with OpenAI, protesters showed up outside OpenAI headquarters, and users began deleting ChatGPT from their phones. And then Claude saw a massive uptick in downloads and subscriptions, surpassing ChatGPT for the first time in the app store to become the number one app across all phone platforms. This has of course sent OpenAI scrambling to fix the PR disaster of this Department of Defense contract. And I think it’s a powerful indicator of the current public sentiment around AI generally. People are curious. Many people are using it, some daily, either because their corporate jobs make them or because they find the chat tools helpful. Some people have developed a concerning co-dependency on these apps, using them instead of real human interaction as a form of companionship, in ways we are just beginning to see the consequences of, including numerous AI-encouraged suicides and instances of self-harm. And so people, even those who use the technology, are rightfully concerned about the safety of AI, from the safety of consumers to the potential for AI to be used at mass scale to undermine democratic values or take human life. On top of that, we’re watching our lawmakers do absolutely fucking nothing about it. And so, absent actual laws, reforms, regulations, or any action at all from the people we’ve put in charge of regulating these things, we are turning to the AI companies themselves for assurances that they aren’t up to no good. And Anthropic, which was founded in 2021 by former OpenAI employees who were deeply concerned with OpenAI’s recklessness, provides a lot of assurances of its ethics on its website, through its founder Dario Amodei and his many blog posts, and now through its apparent stance against the Pentagon–its willingness to put its neck on the line to honor its commitments, while OpenAI is happy to do whatever the government wants, or so it seems.
And the public is voting with its attention, its dollars, and its data, by deleting ChatGPT in favor of Anthropic’s Claude chatbot. But I think it’s deeply misplaced to put so much trust in ANY of these AI companies. Let me explain why I think this.

If there’s one thing I’ve learned as I’ve delved into the world of AI development, it’s that the people doing the developing don’t just see it as their job–it makes up part of their entire worldview. So to understand the AI world, which is deeply important for everyday consumers and everyday members of civil society who want to hold their lawmakers accountable, you have to understand the AI worldview.

The book Empire of AI by Karen Hao is THE best thing I’ve read over the last year, and it lays out the AI world in a really accessible, readable way that is a genuinely entertaining, if horrifying, read. I think EVERYONE should read this book, genuinely. As a VERY brief and incomplete overview, the key ideas you need to know to understand how AI fits into the worldview of its creators include understanding the spectrum of AI ethics that AI developers fall onto. You have accelerationists on one end, people who think that artificial intelligence is like THE key to human development and flourishing and that it should be developed as quickly as possible, unencumbered by pesky government regulations. On the other end of the spectrum you have the “doomers,” who believe that AI development will result in the destruction of humanity and that its development must therefore be carefully restrained. Within that camp, many adherents are guided by “effective altruism,” or attempting to do the “most good” with their technological innovations. However, taking a step back from all of this, it’s important to keep in mind the ideas that everyone on this spectrum accepts. That is, whether you’re an accelerationist or a doomer, you kind of have to be convinced of the power of AI. Many people falling anywhere on the spectrum deeply believe in the possibility of AGI, artificial general intelligence. If you didn’t genuinely believe that AI is going to revolutionize the world, then you wouldn’t be so passionate as to fall anywhere on the spectrum. The problem is that no one can actually define AGI, not specifically. Most people just say we’ll know it when we see it. Or they give vague definitions: it’s a type of AI that is sentient, that can think for itself. But large language models, the type of AI that has taken over (because there’s more than one type of artificial intelligence), are just guessing the next word based on the data they’ve been trained on.
That is not sentience, no matter how much data you feed it. Also, the latest AI models have already been trained on basically all available human-created language in existence, and I don’t think anyone is arguing we’ve reached AGI. So the generally accepted hypothesis in the AI community, that AGI is the clear outcome of all of this, and that we must therefore either let it flourish so it benefits all mankind or tamp it down with serious guardrails before it extinguishes all mankind, rests on the overall belief that AGI is possible. And that whole thesis is unproven. Which is, I think, an important high-level point to understand, because much of the AI landscape we’re dealing with right now is the result of the hubris and ego of a few, mostly, men. And in practice AI can definitely do impressive things, but the level of false advertising from the companies trying to sell us on the AI extension of whatever product they offer is through the fucking roof. Last November, The Verge did an experiment where it asked Microsoft Copilot to do the exact things Microsoft claimed it could do in the Copilot advertisements, with embarrassing results.

So there is a vast assumption happening, based on a theory, that the end result of AI will most definitely be AGI. That is just a theory at this point. And a lot of this worldview has resulted in what Karen Hao describes in her book Empire of AI as basically a self-fulfilling prophecy. Because OpenAI, especially, in its fearmongering over the future of AGI, the dangers of it, and particularly the dangers of AGI if China develops it first (a favorite fearmongering tactic for any new technology: what if the commies get it first), has accelerated the rate of AI growth beyond anyone’s wildest expectations even a decade ago. Its demands for datacenters, for high-powered chips, for more and more and more data to feed the AI training models, all of it FAR surpasses anything China was ever doing, but it has shown the world what’s possible and how to do it, and instigated an arms race to be the first company to reach the promised land of AGI. If not for OpenAI, especially under Sam Altman, we would not have the AI technology we have today. And the move-fast, break-things mentality he brought to the AI world has meant that, for all the handwringing over the potential damage AGI could cause to humanity, it has become a race to the bottom when it comes to AI development: who can do it fastest, no matter the human cost, spurred on by OpenAI itself. And ALL of this is over AGI, a technology that is just a theory. Meanwhile, despite all that advancement, we can’t even get a computer to do basic tasks on command.

AI has not delivered on the widescale productivity it has promised, it has not delivered on the profits it has promised, it has not delivered on the human flourishing it has promised. All it has delivered so far is shareholder value, a lot of speculation, and endless endless AI ads shoving this technology down our throats whether we like it or not–on top of the human costs of AI including suicides spurred on by chatbots but also the incredibly exploitative practices used in data harvesting, stealing IP, paying people in the poorest parts of the globe pennies to sift through deeply traumatizing material to try to help train this AI, and the incredibly high environmental cost of all of it. That’s why Karen Hao named her book Empire of AI–her central thesis is that the AI boom is similar in many ways to all the other past technological improvements that mainly led to increased wealth stratification and depended on the exploited labor of marginalized populations, whether you’re talking about the cotton gin or ChatGPT.

This is why we cannot pick winners in the AI race–we can’t look at these companies in black and white and think Anthropic good, OpenAI bad. We should be looking at all of it with skepticism. Because the founders of these companies, even ones like Anthropic with a lot of lofty, ostensibly charitable and good goals, are coming from the same worldview that created the problems of the world we currently exist in. For example, in a statement from Dario Amodei, CEO of Anthropic, on the Department of Defense discussions, he starts the whole statement off by saying, quote, “I believe deeply in the existential importance of using AI to defend the United States and other democracies, and to defeat our autocratic adversaries,” end quote. No matter how innovative these (mostly) men think they are, no matter how much they like to think they’re doing something radical and new and different and game-changing, they’re coming at the whole thing from the same worldview that created this mess to begin with. The black and white Cold War fight between Good, the US, and Evil, Russia and China. The fearmongering that if we don’t do it, China will do it first, is the same reasoning behind why the United States invented the atomic bomb and killed literally hundreds of thousands of people with it. And it’s fueling the race to the bottom within the United States, too. Anthropic has all these lofty ideals, but they come with the assumption that it is Anthropic alone that is capable of responsibly shepherding in the AI age, and therefore it must do everything it can to develop its models, no matter what, to compete with rivals like OpenAI and xAI. And because those other companies aren’t bound by the same ethics codes and are more on the accelerationist end of the spectrum, Anthropic has shown a willingness to bend its own rules in the name of competition.
According to reporting from Vox, quote “In February, Anthropic formally abandoned its pledge to stop training more powerful models once their capabilities outpaced the company’s ability to understand and control them. In effect, the company downgraded that policy from a binding internal practice to an aspiration.

The firm justified this move as a necessary response to competitive pressure and regulatory inaction. With the federal government embracing an accelerationist posture — and rival labs declining to emulate all of Anthropic’s practices — the company needed to loosen its safety rules in order to safeguard its place at the technological frontier.”

And listen, that is true: the Trump regime is clearly not just uninterested in AI regulation but actively hostile towards it. Attempts have been made to pass legislation that would BAN states from trying to regulate AI. And the DOD and Pete Hegseth have proven they are uninterested in any sort of guardrails on the US government’s ability to do whatever it wants with AI. Which is a great irony, because while saying we need unfettered AI, and that we have to get AGI before China because China will use it to commit human rights abuses and further its surveillance state, the US wants to use AI to become more of a surveillance state itself. In fact, the Trump regime’s move to blacklist a company for refusing to conform to the government’s wishes is a move straight out of the Chinese government’s playbook. With all of that being what it is, it’s understandable that we would want a company like Anthropic to win out over one like xAI or OpenAI. But the problem is that we live in an unregulated capitalist system. If Anthropic can’t compete with its rapidly growing competitors, if it is blocked out of incredibly lucrative defense contracts, if it is bound by ethics that require it NOT to grow as quickly as possible, then Anthropic will not survive. So the lofty ideals of a company like Anthropic mean basically nothing, in the long run, because ethics are antithetical to the proliferation of capitalism. Ethics play no role in it. Every AI company that wants to survive has to play by capitalism’s rules. We’ve already seen that play out with OpenAI, which was founded as a non-profit but quickly discovered that it could not afford the resources it would need to actually create advanced AI systems at the pace it wanted to create them. And we’ve seen this in Anthropic’s willingness to partner with the Department of Defense and to back off from its prior ethics promises in order to keep competing.

Another way Anthropic and other AI firms are playing by the rules of this fucked up system: lobbying. AI lobbying. According to Public Citizen, fully one QUARTER of ALL LOBBYISTS working in DC now work on AI. And while OpenAI and Meta, among others, have hired lobbyists to take the obvious stance against any regulation of AI, Anthropic has also thrown its hat in the ring, but it is in FAVOR of AI regulations. In fact, Anthropic has poured 20 MILLION dollars into a new Super PAC called Public First Action, which has been running ad campaigns in key districts pushing candidates who support AI regulation. In the run-up to the midterm elections, AI companies are making AI a central issue on both sides of the aisle. Again, Anthropic can innocently claim it simply had to get involved in the Super PAC business to counter the negative work of its competitors: they’re the good guys, they WANT regulations. And yes, that’s true, but the problem is the fundamental way the system functions. The fact that private companies can pour millions of dollars into ads and lobbyists means that our voices matter very little in any of this, and the people who get elected on the back of AI money know who’s paying their bills and what they want in exchange for those gobs of campaign cash. Yes, lawmakers who genuinely want to regulate AI companies should get elected, but the point is that what we the people want isn’t really factored into any of it; it’s just a matter of which AI company is willing to throw its weight around in DC. Our lawmakers become answerable to the AI companies, not to us. And that’s a major problem for our so-called democracy.

On top of that, having a company like Anthropic lull us into a cozy slumber with pretty promises of a glorious, safe AI future means there is less push from the public for comprehensive AI regulations. Anthropic has shown a willingness to use its AI for war and to back down on its ethics promises. Their dedication to “doing the most good” will continue to degrade in a system that isn’t built for anything other than incessant growth and profit. It is only through aggressive, comprehensive laws and regulations that we can ensure AI won’t wipe us off the face of the planet–not necessarily because AGI will come and kill us all, but because we will decimate the planet’s natural resources that we depend on all on our own, long before AGI happens.

And the answer to all of this is the same as the answer to all the ills wrought by capitalism, whether it’s income inequality, environmental degradation, or more, because the AI race really is just a microcosm of what’s fucked up in the whole system: the answer is turning towards collectivism. Which is a fucking pipe dream right now, but there are small ways we can push for it even in These Trying Times. The Trump regime and Congress are going to do nothing to save us here, as per usual. It would be better for everyone, frankly, if we could get federal-level regulations that are comprehensive and ACTUALLY protect people and the planet from the annihilation promised by the expansion of AI. That way they would be the same across the nation, easier to comply with, and protective of everyone, including in red states that you know aren’t gonna do shit about this. The reality is that that’s not possible, not under Trump, and frankly probably not under a Democrat either. There needs to be a state-level push for comprehensive AI regulations. The work is already underway in New York and California, which have both passed laws requiring AI companies to establish basic security protocols, annual safety reviews, and third-party audits. Many think these laws don’t go far enough, but it’s a start. And while I don’t think deleting ChatGPT and downloading Claude instead is the moral high ground a lot of the public thinks it is, I do think that groundswell of support for Anthropic and against OpenAI is a clear indication that the public desperately wants any action at all, from anyone, on placing guardrails around AI. We understand the danger is there, even if we don’t get the intricacies of the technology. And if given the chance, I do believe there could be a lot of good that could come from AI, again, if collectivism were more central to how our system functions. It could theoretically generate enough wealth to provide for a universal basic income.
It can do impressive things with coding and analyzing data and more, in ways that could be for the public benefit, but for the fact that a lot of that benefit doesn’t generate profit and therefore isn’t prioritized in this system. There are nonprofits and think tanks doing amazing work thinking about an AI future that would benefit everyone. But the reality is that AI is simply following the same well-worn path of all past technological innovations by increasing the wealth gap and exploiting the labor of marginalized communities, not to mention the IP of countless artists, writers, journalists, and more, all to increase shareholder value. If you want to know what to do, I think the greatest thing you can do right now is learn to understand this technology. Again, I genuinely think everyone should read Empire of AI by Karen Hao. Hao also does a lot of reporting on AI generally that you can read if you want shorter-form content. And try getting involved at the state level to push for state regulation of AI, laws that could act as building blocks and test grounds for future federal regulation. Congress has always been woefully slow to act when it comes to emerging technologies. It once again falls on us to try and save ourselves. So act accordingly.

If you’d like to support my work consider joining on YouTube, Substack, or Patreon to get access to all these episodes completely ad free. Also if you like my Reagan Ruined Everything tshirt you can get one for yourself at leeja miller merch dot com. Thank you to my multi-platinum patrons Christopher Cowan, Evan Friedley, Marc, Sarah Shelby, Dennis Smith, Art, David, L’etranger (Lukus), Thomas Johnson, and Tay. Your generosity makes this channel what it is, so thank you!

And if you liked this episode, you’ll like my episode from Monday about why the US just started a war with Iran.
