No, the point of this program seems to be to find targets for assassination, removing the human bottleneck. I don't think bigger strategic decisions like starving the population of Gaza were bottlenecked in the same way that finding and deciding on bombing targets is.
> is it also behind the decision to kill the aid workers who are trying to feed the starving?
It would seem that this program gives whoever is responsible for the actual bombing a list of targets to choose from, so supposedly a human was behind that decision, aided by a computer. But then it turns out (according to the article, at least) that the responsible parties mostly rubber-stamped those lists without further verification.
> can an AI commit a war crime?
No, war crimes are about holding individuals responsible for their choices, not about holding programs responsible for their output. At least currently.
The users/makers of the AI could surely be held in violation of the laws of war, though, depending on what they did.