At least AI pretends to look at some data instead of just defaulting to tribal bloodlust... who's to say it can't be more ethical? It doesn't take much to beat our track record.
Reminds me of that story from probably 5-7 years ago. Someone wanted to use AI to classify photos of tanks as Soviet vs US. So he went to a US tank museum and took lots of pictures of the tanks from every angle, then did the same in a Soviet tank museum. The resulting model worked great on that training dataset. Then he tried it on photos outside the training set. It turned out it had been cloudy the day he visited the US museum and sunny at the Soviet museum, and the model was using the color of the sky to classify.
(This kind of hallucination in human mental models is, I think, how and why Genesis got written and taken seriously.)
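That tank story is the classic spurious-correlation failure: the confound (weather) separates the training data perfectly, so a naive learner prefers it over the real but noisier signal. Here's a toy sketch of the effect, with made-up feature names and numbers (`sky_brightness`, `hull_profile`), not anyone's actual dataset:

```python
import random

random.seed(0)

# Each "photo" has two features: sky_brightness is the confound
# (cloudy at the US museum, sunny at the Soviet one), hull_profile
# is the real signal, but noisier. All numbers are invented.
def make_photo(label, sunny):
    return {
        "sky_brightness": (0.8 if sunny else 0.2) + random.uniform(-0.1, 0.1),
        "hull_profile":   (0.7 if label == "soviet" else 0.3) + random.uniform(-0.3, 0.3),
        "label": label,
    }

# Training set: weather correlates perfectly with the label.
train = [make_photo("us", sunny=False) for _ in range(50)] + \
        [make_photo("soviet", sunny=True) for _ in range(50)]

# A one-feature "decision stump": keep whichever feature best
# separates the *training* data.
def fit_stump(photos):
    best = None
    for feat in ("sky_brightness", "hull_profile"):
        correct = sum(1 for p in photos
                      if ("soviet" if p[feat] > 0.5 else "us") == p["label"])
        acc = correct / len(photos)
        if best is None or acc > best[1]:
            best = (feat, acc)
    return best

feat, train_acc = fit_stump(train)

# Test set: the weather no longer matches the label, so a model
# that latched onto the sky gets every single photo wrong.
test = [make_photo("us", sunny=True) for _ in range(50)] + \
       [make_photo("soviet", sunny=False) for _ in range(50)]
test_correct = sum(1 for p in test
                   if ("soviet" if p[feat] > 0.5 else "us") == p["label"])

print(feat, train_acc, test_correct / len(test))
# → sky_brightness 1.0 0.0
```

The stump picks the sky because it gives 100% training accuracy, then scores 0% the moment the weather flips. Real deep models fail the same way, just less legibly.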
Yeah, I mean, black-box murder is never really desirable... but is it fair to assume AI will never be able to elucidate its reasoning? And it seems a bit of a double standard, when so many life-and-death decisions made by humans are also not entirely comprehensible or transparent, either to the general public or sometimes even to those closest to the decision-maker.
Sometimes it's a snap judgment, sometimes it's a gut feeling, sometimes it's bad intel, sometimes it's just plain "because I said so"... not every kill list is the result of a reasoned, transparent, fair and ethical process.
After all, how long have Israel and Hamas (or other groups) been at each other's throats, with accusations of injustice and atrocity against both sides from observers all over the world? And it wasn't so long ago that we destroyed Afghanistan and Iraq, and Russia is still going at it because of the desires of one man. AI doesn't have to be perfect to be better than us.
If there's one thing humans are really, really bad at, it's letting objective data overrule our emotional states. It takes exceptional training and mental fortitude to be able to do that under pressure, especially life-and-death, us-vs-them pressure.
Humans make mistakes, too, and friend-or-foe identification isn't easy for humans either, especially in the heat of battle or in poor visibility. Training for either humans or AI can always be improved, but probably will never reach 100% accuracy.
Maybe we should start putting some hypothetical kill lists in front of both humans and AI, recording their decisions, and comparing them after a few years to see who did "better". I wouldn't necessarily bet on the humans...
Run it through some panel of experts and demand algorithm changes?
Send it to some Judge API and get back some JSON?
I dunno, what?
They're not exactly very good at preventing or punishing human atrocities, either... it's more of a symbolic group, or a tool of the victors, than anything resembling actual justice. I'd argue textbook authors have more of a lasting ethical impact than the ICC.
You should move your questions one step up in the thread, I'm not the one saying it might be better to let computers design or commit atrocities.
Having a human make those decisions is better because that human can be judged if they commit war crimes or genocide or violate the laws of war.
A computer can't be jailed, and that's the real point of designing such a system: to hide the criminals inside a black box so nobody can be held responsible.