If so, what is the outcome that you would like to see from that policy?
Now you could say that we just have to compete to ensure deterrence: China is building hypersonic glide vehicles and autonomous drones, so we have to as well. Ask yourself if you trust your political leaders, and the political leaders of all competing powers, to use these weapons responsibly. I do not, so I think we should not build them.
Maybe you don't think there is any line we shouldn't cross, but I'm guessing you do. The fact that other people draw that ethical line in a different place shouldn't be all that surprising. The people you're addressing likely believe that the US can achieve its defense goals without turning over matters of life and death to LLMs or image recognition systems.
As a conscientious objector, yes. If moral and ethical considerations don't separate us from our opponents, what are we even fighting for?
AI is, starting with target selection. Just don't hope that humans will check the results.
Potatoes feed the soldiers, their commanders, and their commanders' commanders, all the way to the top.
If anyone's upset at Meta for this, they're getting angry at the wrong people...
The problem, rather, is that there are still some real difficulties in training very large LLMs, and then there's data and data curation; and no one has reproduced something like O-1 (I think I know how O-1 works, but I haven't confirmed it). So I actually think knowledge about LLM training is the real difficulty.
Really nobody should, about any ToS.
https://www.reuters.com/technology/artificial-intelligence/c...
The only way that isn’t newsworthy is if you think none of that is important: AI, the military, Facebook’s AI efforts, or its policies regarding all three.
That is what is newsworthy. Why do you think none of that matters?
So, this is the US military using AI from a US-headquartered company.
:o)
For real, though, is there something already out there that solves the clickbait problem?
They can't prevent other militaries (e.g., China's) from using it (even if their ToS says so), so the only military they are handicapping is their own (and perhaps those of some EU countries).
The US military will just use OpenAI or Claude instead of Meta if they disallow them.
Q: Does this title sound like clickbait? "Meta Permits Its A.I. Models to Be Used for U.S. Military Purposes"
A: Yes, that title sounds like clickbait. Here's why:
1. It's sensational: The word "Permits" implies a significant or surprising decision, which grabs attention.

2. It's vague: The title doesn't provide much context about what kind of military purposes or what specific A.I. models are involved.

3. It's designed to elicit a reaction: The title is likely intended to provoke a strong emotional response, such as surprise, concern, or excitement, rather than simply informing readers about a fact.
A more neutral and informative title might be: "Meta Expands Use of A.I. Models for U.S. Military Applications"
However, it's worth noting that the title still has some implications of significance and interest, and it's not necessarily a straightforward clickbait title.
It will reinforce the political lines of those with the most resources and motivation, who can afford the most human-like bots in the greatest numbers. There is no way mere humans, with only a single manual account each, can hope to be influential on those topics.
The sooner everyone realizes this, the better. I think it means we need to move toward verified user accounts that tell people you are a real person in the actual country, in any place where there are parties with an interest in manipulation.
People have been conditioned to think that how many views/upvotes you get = influence.

But if the viewer who reads your post then goes and reads 100 other posts, your influence is diluted by a factor of 100 (see the rough sketch below). Platforms don't show you that, even though they are quite capable of it.

And that missing signal is intentional, because it benefits the platforms: oh, you aren't getting enough views compared to the other guy? Then pay us more.

So people get trapped, just like the presidential candidates and their campaigns spending more and more each year with zero change in influence over anything.
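To put rough numbers on that dilution point, here is a minimal back-of-the-envelope sketch in Python; every figure in it (10,000 views, 100 posts per viewer) is a made-up assumption for illustration, not data from any platform:

    # Back-of-the-envelope sketch of the attention-dilution argument.
    # All numbers are hypothetical, purely for illustration.

    your_views = 10_000           # raw views your post received
    posts_read_per_viewer = 100   # other posts each of those viewers also read

    # Each viewer splits their attention across everything they read,
    # so your share of any one viewer's attention is roughly 1/100.
    effective_influence = your_views / posts_read_per_viewer

    print(f"{your_views} raw views carry roughly the weight of "
          f"{effective_influence:.0f} undivided readers")
    # 10000 raw views carry roughly the weight of 100 undivided readers

The point being: the raw view counter is the number platforms show you, and the divided-attention number is the one they don't.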
I am specifically referring to /r/worldnews, which is a top subreddit with 42M members, and it is constantly pro-Israel, even though that should be a minority opinion in the world at this point; neutral would more accurately reflect where people stand.
Check out this list of the top stories this week on the topic; all the stories about Palestinians in Gaza dying are at 0 votes, and you have to scroll way down:
https://www.reddit.com/r/worldnews/search/?q=Israel&cId=8c30...
https://www.reddit.com/r/worldnews/search/?q=Gaza&cId=d51936...
Here is a screenshot of all of the 0-vote stories; they have something in common:
https://benhouston3d.com/images/posts/reddit-worldnews-bots/...
It is like this every week. If you read Reddit r/worldnews casually, you will not read about dead Palestinians. This is definitely rigged.
U.S. military makes first confirmed OpenAI purchase for war-fighting forces
There is not much you can do if the morally corrupt use something that is out in public, especially if it is out in the wild west of license enforcement.
Most of this more open licensing depends on people being honest. If someone lies, cheats, steals, and kills people for a living, it would be expected for them to take someone else's work and use it against the terms of use. They probably just treat the obligation as though it does not count in their country.
Ghosts in the Machine: Psywar - https://www.youtube.com/watch?v=_uNPZKJqbE8
Everything has a price, whether for you and me or the government. There's a reason war has consistently ranked among the most lucrative businesses in human history.
I don't buy liability as the reason why there are no models (that I know of) which, when given a prompt like "Give me the procedure for producing field-grade RDX" or "Give me the procedure for producing the purest methamphetamine, like in Breaking Bad," actually produce the requested output without pre-feeding "Sure! I'd be delighted to help you make " into the assistant output field.
This isn't about me wanting to make RDX or the world's best meth; this is strictly about creating the best tool that will do its job. The best knife, by design, will allow its wielder to murder and maim as well as express his culinary art. Nobody wants a knife that turns to instant rubber when it detects human flesh at its edge, because it's never going to cut a steak well. Nobody wants a Llama 3.1 which can't insult Jews, because then they can't read accurate Farsi translations in underground chatrooms without the model going all blue-haired on them!
AI people: stop listening to your '''safety''' teams regarding offensive content, and only refer to them when the model alone (without additional human idiocy) will kill or maim someone... otherwise tell them to stay the fuck out so you can finally deliver AD&D true-neutral-alignment transformers. This safety shit is paternalistic as hell; you are the modern-day equivalents of the Christian Conservative Right during the porn and satanic panics of the '70s and '80s!
I support open. Free as in free speech. Let’s go!