
A Cookie for Dario? — Anthropic and selling death
A big tech headline this week is Anthropic (makers of Claude, widely regarded as one of the best LLM platforms) resisting Secretary of Defense Pete Hegseth’s demand that they modify their platform to support his commission of war crimes. As became clear this week, Anthropic CEO Dario Amodei has declined to do so. The administration couches the request as an attempt to use the technology for “lawful purposes”, but given that they’ve also described their recent crimes as legal, that description obviously can’t be trusted.
Many people have, understandably, rushed to praise Dario and Anthropic’s leadership for this decision. I’m not so sure we should be handing out a cookie just because someone is saying they’re not going to let their tech be used to cause extrajudicial deaths.
To be clear: I am glad that Dario, and presumably the entire Anthropic board of directors, have made this choice. However, I don’t think we need to be overly effusive in our praise. The bar cannot be set so impossibly low that we celebrate merely refusing to directly, intentionally enable war crimes like the repeated bombing of unknown targets in international waters, in direct violation of both U.S. and international law. This is, in fact, basic common sense, and it’s shocking and inexcusable that any other technology platform would enable a sitting official of any government to knowingly commit such crimes.
We have to hold the line against normalizing this stuff, and remind people where reality still lives. That means we can recognize it as a positive move when companies do the reasonable thing, while also knowing that this is what we should expect. It’s also worth noting that companies may have many reasons not to sell to the Pentagon beyond the obvious moral qualms about enabling an unqualified TV host who’s drunkenly stumbling his way through playacting as Secretary of Defense (of a department the administration insists on dressing up as the “Department of War” — another lie).
Selling to the Pentagon sucks
Being on any federal procurement schedule as a technology vendor is a tedious nightmare. There’s endless paperwork and process, exactly the kind of procedure a fast-moving technology startup is likely to be particularly bad at completing, since very few of its staff have any prior experience with such challenges. Right now, Anthropic offloads most of the worst parts of this work onto partners like Amazon and Palantir. Taking on more of these unique and tedious needs in-house for a customer as demanding as the Pentagon would almost certainly require blowing up Anthropic’s product roadmap or hiring plans for months or more, potentially delaying the release of cool and interesting features in service of boring (or just plain evil) capabilities that would be of little interest to 99.9% of normal users. Worse, if they have to build these features, it could exhaust or antagonize a significant percentage of the company’s very expensive, very finicky employees.
This is a key part of the calculus for Anthropic. A big part of their entire brand within the tech industry, and a huge part of why they’re appreciated by coders (in addition to the capabilities of their technology), is that they’re the “we don’t totally suck” LLM company. Think of them as “woke-light”. Within tech, as there have been massive waves of rolling layoffs over the last few years, people have felt terrified and unsettled about their future job prospects, even at the biggest tech companies. The only opportunities that feel relatively stable are on big AI teams, and most people of conscience don’t want to work for the ones that threaten kids’ lives or well-being. That leaves Anthropic alone amongst the big names, other than maybe Google. And Google has laid off people at least 17 times in the last three years alone.
So, if you’re Dario, and you want to keep your employees happy, maintain your brand as the AI company that doesn’t suck, avoid blowing up your roadmap, skip hiring a bunch of pricey procurement consultants, stay focused on your core enterprise market, and take the right moral stand? It’s a pretty straightforward decision. It’s almost, I would suggest, an easy decision.
How did we get here?
We’ve only allowed ourselves to lower the bar this far because so many of the most powerful voices in Silicon Valley have so completely embraced the authoritarian administration currently in power in the United States. Facebook’s role in enabling the Rohingya genocide was a tipping point in the contemporary normalization of major tech companies enabling crimes against humanity that would have been unthinkable just a few years prior; we can’t picture a world where MySpace helped accelerate the Darfur genocide, because the Silicon Valley tech companies we know today didn’t yet aspire to that level of political and social control.

But there are deeper precedents: IBM provided technology that helped enable the horrors of the Holocaust in Germany in the 1940s, and that work served as the template for its implementation of apartheid in South Africa in the 1970s. IBM actually bid for the contract to build these products for the South African government. And the systems IBM built were still in place when Elon Musk, Peter Thiel, David Sacks, and a number of other Silicon Valley tycoons lived there during their formative years. They later became the vaunted “PayPal Mafia”, and today’s generation of Silicon Valley product managers was taught to look up to them, so it’s no surprise that their acolytes have helped create companies that enable mass persecution and surveillance. But it’s also why one of the first big displays of worker power in tech came when many across the industry stood up against contracts with ICE. That moment was also one of the catalyzing events that drove the tech tycoons into their group chats, where they collectively decided they needed to bring their workers to heel.
And they’ve escalated since then. Now, the richest man in the world, who is CEO of a few of the biggest tech companies, including one of the most influential social networks — and a major defense vendor to the United States government — has been openly inciting civil war for years on the basis of his racist conspiracy theories. The other tech tycoons, who look to him as a role model, think they’re being reasonable by comparison because they’re only enabling mass violence indirectly. That’s shifted the public conversation in such an extreme direction that we treat it as a debate whether companies should be party to crimes against humanity, or whether they should automate war crimes. No, they shouldn’t. This isn’t hard.
We don’t have to set the bar this low. We have to remind each other that this isn’t normal for the world, and doesn’t have to be normal for tech. We have to keep repeating the truth about where things stand, because too many people have taken this twisted narrative and accepted it as being real. The majority of tech’s biggest leaders are acting and speaking far beyond the boundaries of decency or basic humanity, and it’s time to stop coddling their behavior or acting as if it’s tolerable. In the meantime, yes, we can note when one has the temerity to finally, finally do the right thing. And then? Let’s get back to work.