The accuracy point is a provocative and interesting question. I'm used to it in contexts like medical imaging or autonomous vehicles. In the context of picking bomb targets (where even a "positive" classification is somewhat ambiguous [0]) I think it's probably above my pay grade, so I'm going to set it aside.
> whoever uses it knows it's merely a fig leaf for shooting random people
I think this is the problem, but it needs a little more unpacking, because IMO it goes beyond a pure 'fig leaf'. From what I understand, it's not just a way to ID who is a combatant: it actively plans bomb targets. The difference is that a fig leaf provides pure pretense, and as you point out that's nothing new: we've had automated ways of ID'ing someone as a criminal or terrorist forever. But this not only provides the pretense of ID'ing someone as a combatant, it also loads the gun and aims it for you. So to me it's more than just someone saying "oh, these people were all flagged, so let's plan an attack on them"; it's the machine drawing up the full plan and just asking you "I found combatants, should I kill them? [Y]/N". Both are bad (IMO), but the second one seems like a new evolution in the automation of warfare that I find uniquely concerning.
[0] Expanding on this point a little: combatant status seems ambiguous to me because it's not really a physically measurable variable. A car crashing or an image containing a tumor are both things that can be objectively verified, but the legal justification for killing someone for participation in a war is a far more ambiguous concept, I think. Is someone who quarters enemy troops a legitimate combatant? Someone who provides logistical support? I see lots of room for ambiguity that would be ugly to encode in data.