> “Everything was statistical, everything was neat — it was very dry,” B. said. He noted that this lack of supervision was permitted despite internal checks showing that Lavender’s calculations were considered accurate only 90 percent of the time; in other words, it was known in advance that 10 percent of the human targets slated for assassination were not members of the Hamas military wing at all.
So, there was no human sign-off. I guess the policy itself was ordered by someone, but all the ongoing targets that were selected for assassination were solely authorized by the AI system's predictions.
This sentence is horrifically dystopian... "in order to save time and enable the mass production of human targets without hindrances"
> One source stated that human personnel often served only as a “rubber stamp” for the machine’s decisions, adding that, normally, they would personally devote only about “20 seconds” to each target before authorizing a bombing — just to make sure the Lavender-marked target is male.
> According to the sources, the army knew that the minimal human supervision in place would not discover these faults.
I took this to mean that a human did press the "approve" button on the computer's recommendation. Though they make clear they were basically "rubber stamping" the machine recommendation.
But to my point:
> “There was no ‘zero-error’ policy. Mistakes were treated statistically,” said a source who used Lavender.
What is the "zero-error" alternative approach for dropping bombs in a war, or firing rockets for that matter? I don't understand the implicit comparison between this approach to targeting and a hypothetical approach that allows war to be waged without any innocents dying or buildings being destroyed. This system should be compared to whatever the real alternative is when it comes to target selection. Again I know nothing about military strategy, I'm hoping someone with more experience will speak up.
To use an analogy: if we are talking about self-driving cars, the rates of collision or death should be compared to the rates of collision or death in cars driven by humans. Comparing against some imaginary scenario where cars have no collisions and cause no deaths doesn't make sense.
Honestly, I'm not sure. Obviously humans make errors of all sorts as well, and even make intentionally unethical decisions.
I think the horror of this situation is that it makes war easier to wage. Accepting that all war has costs measured in blood, we should want less war. However, those in control of military forces always have incentive to wage war, so removing friction from the process is dangerous.
Off-topic of AI, but on-topic of your question:
The actual alternative to unleashing AI assassination is not human-selected targets, but not waging war. It isn't necessary to destroy Hamas with violence, it would have worked better to give Palestinians dignity and self-determination long ago. That can still work, although until it does Hamas will continue to be a problem. But as I said, war is useful for the political leaders of Israel, so they stoked and fed the flames for decades to maintain an excuse for the war machine.
During the Oslo peace process, when Israel was trying to address this in the way you propose, Hamas launched a suicide bombing campaign against Israeli civilians:
https://en.wikipedia.org/wiki/List_of_Palestinian_suicide_at...
https://en.wikipedia.org/wiki/Oslo_Accords
You can be critical of everything Israel does, in this war or ever - fine. But the Palestinians accept no settlement other than shipping ~8 million Jews to Europe or killing them.
The people who suddenly developed this simplistic understanding of occupation/resistance/occupier have no idea what they're talking about. Often quite literally, in the sense that they don't even understand the meaning of what they're saying, not to mention the history of Israel or the Middle East. EDIT: I realize this last statement can feel offensive, but it's still my take based on two decades of interactions with a fairly random sample of people trying to explain what is going on in this tiny piece of the Middle East. The complexity of the situation doesn't lend itself to simplistic narratives from either side; my statement refers to one of those narratives, but the Israeli side's simplistic narrative is also insufficient/inaccurate.
It isn't possible to destroy Hamas with violence, or apartheid for that matter. Israel has created hatred towards themselves that will last for generations; even if they could kill every last Hamas member, they've made damn sure that a subset of Palestinian (if not broader) youth will reorganize a militia and the cycle of violence will go on.
After 10/7 almost every Israeli knows that the Palestinians are not interested in their own state.
Of the 32,000 deaths stated by Hamas, 13,000 are terrorists, resulting in a far lower civilian-to-combatant death ratio than in other urban conflicts such as Mosul.
The lesson learned with Japan and Germany in WWII is that total military defeat is necessary. The AI technology enables the targeting of all terrorists, not only senior-level terrorists as before, resulting in a quicker end to the conflict than otherwise and thus fewer civilian deaths.
As we know, these terrorists hide among civilians, including in and under hospitals, making those legitimate targets. The high number of civilian deaths results from the terrorists hiding among civilians.
That's not the whole story. For example, we ban certain kinds of weapons (cluster munitions, chemical weapons, biological weapons; ideally we'd ban bloody mines too) not because they kill too many people compared to "conventional" weapons (they don't) but because they are considered especially ... well, wrong, in the moral sense.
So maybe we decide that being killed by a machine that decides you're a target and pulls the trigger autonomously is especially morally wrong, and we don't accept it.
Chemical and biological weapons are banned because, like nukes, escalation of their use results in a scorched earth scenario.
Remember the scene in Men In Black where the recruits do target practice? They were all accurate at hitting what they shot at, but only Will Smith's character was accurate at selecting a target. This AI chooses targets; it does not fire weapons.
So the issue isn't that there are errors; it's that the army knows there are errors and expects humans to pick them out in 20 seconds, which they know realistically won't happen. The human only has two realistic choices: approve every target, or disapprove every target (which gets you reassigned to another role).
It's the classic statistics case of two medical diagnostics for an underlying value that isn't directly observable.
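To make the diagnostic analogy concrete, here's a minimal sketch of the base-rate problem it points at. All the numbers below are hypothetical, chosen only to illustrate Bayes' rule; they are not figures reported about the actual system:

```python
# Hedged sketch: Bayes' rule for a test on a condition we can't observe directly.
# prior, sensitivity, and specificity values here are illustrative assumptions.
def posterior(prior: float, sensitivity: float, specificity: float) -> float:
    """P(actually a member | flagged by the test)."""
    tp = prior * sensitivity              # true positive mass
    fp = (1 - prior) * (1 - specificity)  # false positive mass
    return tp / (tp + fp)

# Even a test that is 90% sensitive and 90% specific gives a weak posterior
# when the trait is rare (2%) in the scanned population:
print(round(posterior(prior=0.02, sensitivity=0.90, specificity=0.90), 3))  # → 0.155
```

In other words, a headline accuracy figure says little on its own; what matters is how rare the condition is among everyone the system scans.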
I think you've misunderstood the "zero-error" statement. It's not saying "there must be zero errors", rather that "errors don't exist - only some level of collateral damage". Hence the follow up about things being viewed statistically.
They view it in the same way that you suggest they should: that there will always be deaths, and the question is whether the system leads to more or fewer of them.
Personally I view that as a very utilitarian argument when applied to a machine of war. It embeds the concept that some loss of innocent life is acceptable.
What we may be witnessing is the first information age level genocide, where the killing is done at the behest of a statistical function with near infinite computing power.
I’m disgusted by this. I don’t care anymore what happened in October; this needs to stop. Israel’s government cannot be trusted to run this war. It’s turned into genocide, and we’re all complicit in letting them do it and supporting them. I can’t believe people actually support this; it’s clear they’ve forgotten Palestinians are people.
The most upsetting thing (for me) is the reports of all the kids killed by snipers and just in general; as a father I cannot imagine losing my child to this.
https://www.theguardian.com/world/2024/apr/02/gaza-palestini...
https://www.theguardian.com/commentisfree/2024/mar/23/israel...
In 1999 Yugoslavia killed ~12 thousand Albanians and displaced ~85 thousand more. Bill Clinton's secretary of defense had no problem calling that genocide: "The appalling accounts of mass killing in Kosovo and the pictures of refugees fleeing Serb oppression for their lives makes it clear that this is a fight for justice over genocide." This led NATO to drop bombs on Yugoslavia [0].
In this conflict Israel has killed ~31 thousand Palestinians and displaced ~2.3 million more [1]. And now we sell them jet planes [2].
[0] https://en.wikipedia.org/wiki/NATO_bombing_of_Yugoslavia
[1] https://en.wikipedia.org/wiki/Israeli_invasion_of_the_Gaza_S...
[2] https://www.msn.com/en-us/news/politics/biden-administration...
"Hamas terrorist" criteria: a male of fighting age, give higher weight to those congregating with others of fighting age. Basically take out a generation of Palestinian men and you're all set. Lovely.
>This sentence is horrifically dystopian... "in order to save time and enable the mass production of human targets without hindrances"
Reminds me of similar industrial thinking of a certain previous fascist government.
The attack itself was allowed to kill 15 to 100 civilians, depending on the supposed importance of the target.
Now that we've established that this is horrific, please turn a small portion of your attention to American predictive policing systems (digital and not) and the circumstances that lead to mass incarceration (including the War on Drugs).
Of course it's perfectly ethical, why do you ask?
That said, many organizations might deploy a system that's 90% accurate. Assuming it even is (I doubt it), I think any fair evaluation of such a technology must ask:
What is the accuracy of inexperienced humans in the same position who are rushing through the review during a blitz invasion? If they have battle experience, what about them, too? (I’m assuming most won’t.)
Is the system better than those humans or worse? How often?
Do the strengths and weaknesses of the system allow confidence scores on predictions to know which need more review? Can we also increase reviews when the number of deaths will be high?
That’s how I’d start a review of this tech. If anyone is building military AI, I also ask that you please include methods to highlight likely corner cases or high-stakes situations. Then, someone’s human instincts might kick in where they spot and resolve a problem even in the heat of war.
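One way to operationalize the confidence-score and high-stakes questions above is tiered review. This is purely a hypothetical sketch: the thresholds, field names, and queue names are my own assumptions, not anything reported about the real system:

```python
# Hypothetical triage: route model outputs to review queues by confidence
# and stakes. All thresholds and names here are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Prediction:
    target_id: str
    confidence: float         # model score in [0, 1]
    expected_casualties: int  # estimated collateral deaths

def review_queue(p: Prediction) -> str:
    """Send low-confidence or high-stakes predictions to deeper review."""
    if p.expected_casualties >= 10:
        return "senior_review"    # high stakes: always escalate to a person
    if p.confidence < 0.95:
        return "extended_review"  # uncertain: more than a 20-second glance
    return "standard_review"

print(review_queue(Prediction("t1", 0.99, 0)))   # → standard_review
print(review_queue(Prediction("t2", 0.80, 2)))   # → extended_review
print(review_queue(Prediction("t3", 0.99, 25)))  # → senior_review
```

The point of the design is that review effort scales with uncertainty and with the cost of being wrong, instead of every prediction getting the same rubber-stamp treatment.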
Basically AI is being used as a wheel-of-death and nothing more.
Anyone believing differently in my opinion is both delusional, and complicit
They just spout a high number that is not 100% (since civilians are clearly, publicly, undeniably being killed); claiming 100% would be too obviously ridiculous.
More than half of the 32,000+ killed (more under the rubble) are women and children, Hamas is still quite able to fight, and hardly any hostages have been recovered.
Israel labels any sort of civilian organization as Hamas, including journalists and medical and aid staff. 200 UN staff and 100 journalists are dead so far. Israel's argument is that UNRWA was aiding terrorists and that the journalists were also secretly Hamas, doing non-journalistic things when killed, so it counts them as legitimate targets.
If you consider everyone to be Hamas unless proven otherwise, then 90% is possible.
There is no realistic way an algorithm could have been designed and accurately benchmarked while factoring in a level of infrastructure destruction never seen in any real-world data.