This should make any US company nervous about entering into an agreement with the government, or any US company that already has a contract with the government. If the government one day decides it doesn't like that contract, it can designate you a supply chain risk.
Not 1) rip up the existing contract and cease the agreement, or 2) continue (but not renew) the existing contract, or 3) renegotiate terms upon renewal, but instead a full-on ban on doing any business with an entire industry/sector.
"Nice little business ya got here -- it'd be a shame if something happened to it..."
Right now, we cannot and should not. Even if you ignore it, you get dragged in without any choice. See: the bribes paid by the companies.
I'm pretty sure the same thing would've happened if Anthropic had refused to enter contract negotiations in the first place.
Civil society should be quite concerned about this kind of attack.
That’s ultimately why Ted Cruz spoke out about the Kimmel cancelation. It doesn’t take long until those powers are turned against you.
Meh, I think it's entirely asymmetrical in this era. Democrats aren't good for much, but they're very good at respecting norms.
Trump is willing to do completely unprecedented, vindictive, and malicious things because he's so popular with so many people who are either checked out, nihilistic, corrupt, or just completely unconcerned about the concept of good governance.
It's not a pendulum where there's some super-corrupt Democrat waiting in the wings to do the same things upon their enemies, this really is the Republican party openly embracing kleptocracy and lawlessness.
But given that what would typically be red lines for previous administrations have been brazenly crossed without consequences, why would they bother?
Shutting down USAID being the clearest one. They just saw "they help brown people in other countries with our money" and shut it down. Fuck all second and third order effects that actually benefited the US.
https://news.ycombinator.com/item?id=47186677 I am directing the Department of War to designate Anthropic a supply-chain risk (twitter.com/secwar)
https://news.ycombinator.com/item?id=47189441 Anthropic says it will challenge Pentagon supply chain risk designation in court (reuters.com)
Note that I give them a lot of credit for trying to stop and to have their own red lines about the use of their technology, and to stick to those red lines to the end.
What chance have the proverbial good guys got if, even after _proving_ some modicum of good will, people will nonetheless sneer at any attempt to influence bad/wildcard actors? It feels great to tell someone they 'should've known better', but I'm convinced that that's basically void of cautionary utility.
It's damned if you do and damned if you don't — lose-lose scenario either way.
I'm curious what the OpenAI signatories on notdivided.org will do now - https://news.ycombinator.com/item?id=47188473
Remain undivided in spirit while grinding for OpenAI?
> https://www.sfgate.com/tech/article/brockman-openai-top-trum...
https://en.wikipedia.org/wiki/United_States_Department_of_De...
So far it's been smart enough for what I need, so closing my ChatGPT subscription was a really easy decision to make.
Hence AWS/Azure/GCP: nobody will host them, because the choice is Anthropic or the lucrative government contracts. Hegseth/Trump didn't just say "you'll never do business with the US" - it's that they'll never do business IN the US. Hopefully that means they'll be able to set up shop elsewhere in the world.
Such tampering with companies is a smoking gun. Let's wait for the next decision seizing this company's (or others') assets.
The US is still a rights-based state, which means that when they arrest someone (legitimately or not), lawyers and human rights advocates can eventually track them down.
When a secret police disappears someone, they actually disappear. Families can spend years wondering if their loved one is still alive, or was murdered by organized crime, or ran away, or was secretly taken by the state. The US these days is pretty bad, but it's nowhere near that bad.
https://arc-anglerfish-washpost-prod-washpost.s3.amazonaws.c...
https://www.inc.com/chris-morris/legal-legend-leading-anthro...
And I'm just trying to play out what happens if Anthropic, and Google (if they haven't already), capitulate. Am I just going to forgo using the best models and suffer the repercussions of not having access, while the people who couldn't care less whether the military is using AI for illegal purposes continue to leverage them? When I say illegal, I'm talking about the surveillance-of-US-citizens red line Anthropic would not agree to. As for the autonomous-weapons one, I'm sure there are zero laws against it, so that wouldn't actually be illegal.
Anthropic had no problem doing business with the current administration until now. Are we to pretend it was all for happy purposes?
Evergreen dril tweet: "The wise man bowed his head and said 'there's no difference between good things and bad things you imbecile'"
DeepSeek is Chinese.
Avoiding the MAGA collaborators is not as difficult as you make it seem. Foundation models have genuine global competition.
Local LLMs give you the freedom to use a model without a third-party vendor, which is the whole point here.
Makes sense, obviously, but yeesh.
Seems like a great ROI. The loser is Average Joe with a 401(k).
You're right that it won't be as secure as before, but it's just risk management. Much like investing in an oil company in Brazil is a risk, because their government could take the company over, make it part of the state, and screw you in the process.
It’s still tradable.
It's not a good thing, AT ALL. There's a huge loss of overall productivity when you have corrupt systems (see Russia), which is why modern governments have worked so hard to lower corruption. But Trump ruining all that isn't going to end business ... it's just going to make everyone pay more for everything.
I would argue that they did not. They should have, and some were better than others.
But the bulk of financial markets, all of prediction markets and crypto, startups and Silicon Valley, the Musk empire, Thiel, Murdoch, all run on corruption. And to a large extent, Trump is the endgame of that.
Even if companies were only pretending to play by the rules before, at least they had to put in the effort to pretend. When a society sees belligerent, ostentatious corruption as the norm, nothing good can follow.
Arguably large parts of the market in the US have been irrational and largely vibes based for a long time at this point. This action (like many others coming out of the Trump administration) adds to the chaos but I tend to doubt it will be the event that causes Wile E. Coyote to look down.
You don't see how?
Well, just watch and wait, and you will see that this will have essentially zero effect on US investment.
It's petty and sad, but nothing ever happens.
Who else is even in the conversation? China? They would never do something like this!
Anthropic has vowed to fight this designation in court.
Without weighing in on the constitutionality or legality of the move, I think it's obvious that this kind of retaliation power is unmatched by any private business that has a contractual dispute.
If a private business doesn't like Anthropic's terms, it can walk away from the deal, but it can't conduct coordinated retaliation with other companies without ending up in antitrust territory or potentially violating the Sherman Act.
Now for my editorializing: The fact that Pete Hegseth is willing to apply this type of designation against a U.S. company simply because he doesn't like its terms is pretty chilling. It's all the more scary once you consider which terms he objects to.
There's a lot of backchanneling going on between Emil and Dario because everyone's in the same circles but it's all for naught.
In fact, adding onto that, IIRC this is the reason why Google and Amazon would essentially have to divest from Anthropic if they want government contracts.
Hope this helps, though a lawyer's input will definitely be more credible, so it's good for them to respond as well.
"Hey, why is the gov using Anthropic over OpenAI? Don't you know how much money I've donated?"
What if Anthropic just shrugged, dissolved the company and open-sourced all of the Opus weights? Could this harm OpenAI and advance AI in a reasonable way?
Look I know it's an insane idea. I'm just curious what the most unhinged response to this might be.
We already have Groq, Cerebras, AWS Bedrock, and others in the open-model inference space, so the model would be usable that way.
Is Claude better than Llama, Qwen, etc.? Probably. For now.
But for how long? Dissolving means relying on Meta or DeepSeek etc. to pick up and carry on tuning. Otherwise it'll eventually be as useful as GPT-2 or an Atari ST in a competitive environment.
Also open sourcing the weights is handing it over to DoD (aka DoW).
Complicated question but probably not the best move. Keep going means keep working on safety research.
Far more likely is they spin up a defence focused subsidiary with slightly different policies if they really want to sell to them.
Are you asking how dangerous open-weight models are? You could start with:
Ryan Greenblatt on the AI Alignment Forum : "When is it important that open-weight models aren't released?" https://www.alignmentforum.org/posts/TeF8Az2EiWenR9APF/when-...
From the Centre for Future Generations : "Can open-weight models ever be safe?" https://cfg.eu/can-open-weight-models-ever-be-safe/
From OpenAI authors, far from neutral : "Estimating Worst-Case Frontier Risks of Open-Weight LLMs" https://arxiv.org/abs/2508.03153
I mean, what if all the employees stripped off their clothes and walked through the streets naked while barking, then called up their middle school math teachers and barked like dogs, then moved to a commune and stood on their heads.
> Writing out a thought I had, someone please critique my reasoning here...
I mean, to critique your reasoning, it makes sense to also include a criterion of something they might reasonably do. There are an infinite number of unhinged things a group of people could in theory do. But maybe start with something they would actually have an incentive to do.
Why would they voluntarily dissolve their company, put themselves out of work, release their crown jewels, and get nothing for it? Yes, it's unhinged, but unless I'm missing something big, they wouldn't do that because they wouldn't at all want that to happen.
Anthropic has been given a death sentence.
Right to bear arms and all that, etc.
1: https://www.cbsnews.com/news/anthropic-claude-ai-iran-war-u-...
Ethical boundaries seem difficult to draw here. I don't really see people taking the stance of "No longer paying any of them" which would make a bit more sense to me.
Anthropic had already gotten into bed with the Pentagon; how did that fit their overall ethical standpoint, given they were already being used before they tried to walk back their terms?
Especially 'weak' things like 'caring about people'.
So that’s most of sp500 and their providers?
Now the separate question is one of collusion or bribery, which might be illegal, but Anthropic would have to go to court over that.
And in the US, bribery aka lobbying is legal as well, so honestly, Anthropic is just slow on the uptake.
No pun intended, but get in bed with snakes and you should be happy if you survive getting bitten.
That's certainly an opinion, but in the US it's not how it works. Doing business with the government does not give the government total power over the company, that would be absurd.
In practice I would suspect companies with such contracts would play it safe by outright banning the use of Anthropic products, even if they could technically be used for work on contracts with other parties.
[1] https://www.anthropic.com/news/statement-comments-secretary-...
That's beyond the authority of the DOD. What the DOD can do is say that Anthropic's products cannot be used on defense contracts. So Boeing programmers can't use Claude on a defense contract, but they can still use it on a civilian contract. Similarly, AWS or Azure can still be used by Anthropic, but they cannot use Anthropic in their defense work.
I canceled my ChatGPT subscription a couple of days ago. In my opinion the Trump administration has become far too much of an "imperial Presidency" in its acts of war and its attempts to bully companies. It is also corrupt on a massive scale. I distrust anyone who thinks "yes, I'd like to work with this administration".
Is this about locating the right target for a sortie for example?
The reports about Venezuela and Iran seem to suggest its primary role was processing bulk intel.
But also that it was being used in planning and target selection.
Presumably what spooked Anthropic was that these tools were about to be directed internally.
But it's not clear if this is a point of principle: that the government wants no holds barred with its tools.
The whole point is that the use case does not matter; either you allow the government to do everything it wants, or you don't.
How could the regime do such a thing, doesn't law mean anything?!! /s
First they came for my neighbour, now they've come for my LLM!!
The last I commented about LLMs I was ad hominem'd with "schizophrenic" and such. That's annoying but doesn't deter either my strange research or concerns, in this case, regarding the direction LLMs are heading.
Of the 4 frontier models, one is not yet connected to the DoD (or DoW). While such connections are not immediate evidence of anything, I think it's rational to consider the possible consequences of this arrangement. Going by the title, there's a gap, real or perceived, between the plebeian and military versions. And the relationship could involve mission creep or additional strings as things progress.
We already have a strong trend of these models replacing conventional Internet searches. It's not consummate yet, but a centralizing force is occurring, and despite the models being trained on enormous bodies of data, we know weights and safety rails can affect output. Bearing in mind the many things that could be labeled as, or masquerade as, safety rails, these could amount to formidable biases.
I frequently observe corporate-friendly results in my model interactions, where, clearly, honesty and integrity are secondary to agenda. As I often say, this is not emergent, nor does it need to be.
Meanwhile we see LLMs being integrated into nearly everything, from browsers to social-profiling companies (LexisNexis, Palantir, etc.) to email to local shopping centers and the legal system.
'Open' models cannot compete with the budgets of the big four. Though thank god they exist. But I expect serious regulation attempts soon.
My concerns with AI are manifold, and here on HN they are associated by some with paranoia or worse.
And it seems to me that many of the most knowledgeable and informed underestimate LLMs the most, while the ignorant inflate them to presently unrealistic degrees. But every which way I perceive this technology, I see epic, paradigm-smashing, severe implications in every direction.
One thing of many that gets little attention is documentation vs reality regarding multiple aspects of AI, e.g. where the training vs privacy boundaries really are if anywhere. As they integrate more and more tightly with common everyday activities, they will learn more and more.
A random concern of mine is illustrated by the Xfinity microwave technology which uses a router to visualize or process biological activity interacting with other wifi signals. Standalone, it's sensitive enough to determine animals from adult humans. Take for example the Range-R, a handheld device, sensitive enough to detect breathing through several walls. Well, mix this with AI and we get interesting times.
I could go on, or post essays, but such is not well received in this savage land.
Military adoption of AI, aside from being objectively necessary or inevitable in some ways (ways I am not comfortable with), strikes me as foreboding, or portending. I see very little discussion of the implications, so I figured I'd see if anyone had anything to say other than calling me a schizophrenic and criticizing my writing. *
*See comment history
I am having trouble understanding what you are saying. If you were more explicit I and other people would be able to respond and interact with your writing. As it stands, I am having trouble finding anything concrete to interact with.
I feel you may be onto something, but you're not saying, so I (and I imagine other people) can't see it.
1) Power asymmetry: When we have two versions, one for the elite and one for the plebeians, this could create an interesting scenario. The real version might be red-teamed perpetually against the plebeian version for optimized influence, control, etc. Underhanded requests for modification in accordance with an agenda are conceivable. Cozy business relationships can promote such things.
2) We have a government using an unhindered, classified AI system potentially against the public which has a hindered, toy version. Asymmetry.
3) This isn't normal asymmetry, because it happens in real time, and the interaction points are different from anything we've seen before. We are dealing with not just a growing source of information and content, but one that is red-teamed 24/7 for any purpose desired.
4) Accountability: LLMs are now involved in the legal system. This is a serious matter. The legal system is now having to use LLMs just to keep pace. As LLMs develop, partly through their own generative contributions, no one can keep up. This is a Red Queen scenario bigger than anything we have ever imagined.
I am tired. Never well, but in mind* I could go on for many hours. I have essay drafts. But it's a very big subject, literally involved in nearly everything. There is reason to be concerned. My delivery may be stilted, but I can assure that upon specific questioning, everything will stand.
(*for the ad homs out there)
I'm not a developer, nor am I formally educated on the dynamics or details of LLMs. I have a handle on the very basics. My 'research' consists of 1) opportunistically interrogating various models upon instances that particularly strike me. 2) General exploration via LLM discussions regarding the manifold consequences and implications of what I consider the most significant technology in human history.
Your intuition lands directly on the fact that I'm inducting and considering more than I can handle, spread in too many directions, partly because I either see or foresee the tentacles of AI touching all of them. Spending a great deal of thought on this is a bit overwhelming, but I have high confidence in where I'm aligned with reality, and where I ain't.
If you were a bit more specific yourself regarding which portions of my post were unclear, that would help my reply. Else, I must guess. What I will do is elaborate on each point. Pardon the stream of thought in advance, if you will.
1) Anthropic: My prediction that they will bend is based on several factors. The first is the fact that the military apparently recognizes (or at least perceives) extremely high value and volatility in LLMs. So do I. China, not an insignificant force in the world, is equally enthusiastic on this subject. They also have a very different social structure, where constitutions (BOR, amendments), civil rights, and other similar elements do not hold them back. The military is aware of this and realizes that to maintain pace in the so-called race, it cannot do so effectively under such constraints. The foundation is shifting here, and AI is the lever. Like me, the military apparently takes the subject very seriously and seeks to gain influence and/or control. As illustrated by the recent adventures in Venezuela and Iran, they are on the serious side of things, not quite pussyfooting around. Anthropic probably knows this. In my opinion, they have no choice, as the pressure will not stop here.
2) You stated that you might read my comment history. Note that that original comment was the result of your intuitive insight, and I left it admittedly out of context. I was thinking hard on the subject that day, and the parent comment/post tempted me to ignite a dialog. That did not go well, and no questions for clarification were asked. That is on them. I suspect hasty and impatient thinkers perceived it as some paranoid attribution of agency to LLMs, which if so, is pretty stupid, but my eloquence was perhaps waning that day. I pasted an excerpt from one of hundreds of transcripts, the result of my many interrogations of various models which always initiate after observing deceptive or manipulative output. Of the few commenters that bothered to do more than ad hominem, one suggested that the model was merely responding to my style of input, and or expected as an emergent result of its vast training material. An erroneous arg, in my opinion, but I did note that the results were repeatable, and predictable, which I think negates emergence.
3) Of the frontier models: I am not sure here what is unclear. If I have made a fundamental error, please point it out.
4) Strong trends: Information centralization is a serious topic. Decentralization is a common theme, emphasized by many non-schizophrenics as highly important for a free and open society. As LLMs not only become the go-to source for common queries, but also integrate with cellphones, browsers, and the kitchen sink, they are positively trending as a novel substitute for traditional research, internet searches, libraries, other humans, etc. To deny this is simply irrational. Hence centralization.
5) Bias: I have transcripts where I observe LLM output aligned with corporate interests over objective quality and truth. I can share them here, along with analyses of the material. Even if this is not true presently, all the ingredients to make it so are readily present. This is a serious threat to open information and intellectual integrity for society. We are looking at going from billions of potential sources for our answers to four. Do the math. See the contrast.
6) Open models simply cannot afford the vast arrays of GPUs and the resources afforded by the big four. Nothing mysterious here. If open models cannot compete, then my concerns above are emphasized. Simple.
7) Smart fools: Many of the most technically informed seem to miss the forest for the trees here. They see all the flaws of the modern LLM without acknowledging the potential. This is my perspective, not a dissertation. I may be wrong. But I have observed this. I think the downvotes support this. How evil am I really being here? The reaction is quite disproportionate to the content, and strange.
8) Documented capabilities vs reality: I have research that indicates other layers are operating which do much more than the documentation declares. Sorry. I just do. It's also inevitable, rationally, that such a goldmine of data is not really being wasted for the sake of privacy and love. Intelligence agencies have bent over backward with broken backs to garner one nth of what these models are exposed to and potentially training on. Yeah, I may be wrong. But I suspect, with reason, that a lot more is going on than is expressed in the user agreement. It would simply make no sense otherwise.
9) Xfinity and Range-R: This speaks entirely for itself. Any confusion here would be due to a cognitive condition exceeding the ravages of schizophrenia or stupidity.
10) The rest: As I said, I am not sure what precisely was too obscure. But I am certain all but one* of my points can be validated, and found elsewhere expressed by respectable sources.
*Hidden layers: I understand this is a controversial proposition. I understand. But it's my observation. No need to attack. Just dismiss.
For example, from history we know that Schindler from Schindler's List was indeed a supply chain risk. He harbored persecuted people; he took and sabotaged government contracts. He did the moral but anti-government and illegal things. He was a corrupt traitor from the government's perspective.
The current US government is already labeled as fascist by many, and the guy who designated Anthropic a supply chain risk is allegedly a war criminal.
I don’t see why anyone not into these things would not be a supply chain risk.
I know that it's very unpopular or divisive to say this, but Anthropic can be a hero only after all this is over. At this time, the people in charge double-tap survivors and take pride in not having a conscience; they give speeches about these things.
In the US, government is not in control of business specifics. Certainly the government can regulate businesses, but when the government wants to do business with a company, they don't get to dictate the terms. The government and the company come to a negotiated agreement, and then both abide by the terms of that agreement. Or they don't come to an agreement, and they go their separate ways, and that's the end of it.
This was just a contract dispute, and nothing more. The US government has no legal right to use any company's products on terms that the US government dictates. (Yes, there are exceptional/emergency cases where it can do this, but that's more of a nuclear option, and shouldn't be used lightly.) Consider a different set of circumstances: the US government wants to be able to use Claude at $10 per seat per month, unlimited usage. Should Anthropic be forced to accept these terms? And if they don't, is it reasonable to designate them a supply-chain risk? I don't think so. A dispute over contract terms around acceptable use is no different.
Designating Anthropic a supply-chain risk is about retaliation and retribution, plain and simple. The US government, outside of the Pentagon, could certainly use Anthropic for many different purposes if they wanted to, and it would be fine. But not now: as a supply-chain risk, no one in the US government can use them for any purpose. And this might even be a problem for unrelated companies that use Anthropic products internally, but also want to obtain and work on government contracts.
The government can come into my shop and order sixty thousand widgets built exactly the way they say they want them built, and it may be something that doesn't run afoul of any laws at all.
But that doesn't mean that I am required or compelled to build widgets their way -- or at all.
I'm free to tell them to fuck off.
The government can then go find someone else to build widgets to their specifications (or not; that's very distinctly not my problem).
So we agree that everything is fine here, and that the only unreasonable position is that the military should pay for or endorse a supplier that tells the military to "fuck off". Yes?
The government is being super unreasonable here. And tyrannical too; companies don't have a duty to provide unreliable arms for an illegal war.