LLMs will of course also be used, due to their convenience and superficial 'intelligence', and because of the layer of deniability that inserting a technical substrate between soldier and civilian victim provides - as has happened for two decades with drones.
Call these LLMs stupid all you want, but on focused tasks they can reason decently enough. And better than any past tech.
Make defensive comments in response to LLM skepticism all you want— there are still precisely zero (0) reasons to believe they’ll make a quantum leap towards human-level reasoning any time soon.
The fact that they’re much better than any previous tech is irrelevant when they’re still so obviously far from competent in so many important ways.
To allow your technological optimism to convince you that this very simple and very big challenge is somehow trivial, and that progress will inevitably continue apace, is to engage in the very drollest form of kidding yourself.
Pre-space travel, you could've climbed the tallest mountain on Earth and truthfully claimed that you were closer to the moon than any previous human, but that doesn't change the fact that the best way to actually get to the moon is to climb down from the mountain and start building a rocket.
https://www.idf.il/en/mini-sites/hamas-israel-war-24/all-art...
Probably this is due to confusion over what the term "AI" means. If you do some queries on a database and call yourself a "data scientist", and other people who call themselves data scientists do some AI, does that mean you're doing AI? For left-wing journalists who want to undermine the Israelis (the story originally appeared in the Guardian), it'd be easy to hear what you want to hear from your sources and conflate using data with using AI. This is the kind of blurring that happens all the time with apparently technical terms once they leave the tech world, and especially once they enter journalism.
At its most charitable, that means a person is reviewing all data points before approval.
At its least charitable, it means a person is clicking "approve" after glancing at the values generated by the system.
The press release doesn't help clarify that one way or the other.
If you want to read thoughts by the guy who was in charge of building and operating the automated intelligence system, he wrote a book: https://www.amazon.com/Human-Machine-Team-Artificial-Intelli...
Given that the underlying premise of the story is bizarre (is the IDF really so short of manpower that it can't select its own targets?), and given that the sort of people who work at the Guardian openly loathe Israel, it makes more sense that the story is being misreported.
AI is how the system is marketed to buyers. Either way, it isn't a database or simple statistics. https://www.accessnow.org/publication/artificial-genocidal-i...
E.g., autonomous weapons like "smart shooter" deployed in Hebron and Bethlehem: https://www.hrw.org/news/2023/06/06/palestinian-forum-highli...