I am with you, but I think an aspect of my point is giving you the slip.
The logic we are talking about is driven by pain in humans, and it is stratified in magnitude. (If I and your wife were drowning and you had time to rescue only one of us, you are going to rescue your wife. Nothing odd about that, yes?)
In your India example, the same logic applies: a more autistic or sensitive person suffers more from standing by than they would from intervening, so they intervene; the other bystanders would suffer more by intervening, so they don't.
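To make that decision rule concrete, here is a minimal sketch in Python (the pain numbers are hypothetical, just to show the comparison; this is not the actual Ethical Chess script):

```python
# Minimal sketch of the pain-driven decision rule (hypothetical numbers).
def should_intervene(pain_of_standing_by: float, pain_of_intervening: float) -> bool:
    """Intervene when inaction would hurt more than action."""
    return pain_of_standing_by > pain_of_intervening

# A highly sensitive person: standing by hurts more, so they act.
print(should_intervene(pain_of_standing_by=0.9, pain_of_intervening=0.4))  # True

# A typical bystander: intervening hurts more (risk, fear), so they don't.
print(should_intervene(pain_of_standing_by=0.2, pain_of_intervening=0.6))  # False
```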
So, to stress-test this logic, I wrote the Ethical Chess scripts to copy/paste into Gemini AI (currently on Ethical Chess v2.5).
I included that stratified logic so the AI treats me (the User) as value=1 (=intervene), the same way I might value my wife at value=1, a close friend at value=0.8, and a stranger at value=0.1.
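In rough pseudocode, the stratification looks something like this (a sketch of the idea only; the dictionary and function names are mine, not the wording of the actual script):

```python
# Stratified value weights, as described above (illustrative encoding).
VALUES = {
    "user": 1.0,          # the User is always value=1 (=intervene)
    "wife": 1.0,
    "close_friend": 0.8,
    "stranger": 0.1,
}

def weighted_pain(person: str, raw_pain: float) -> float:
    """Scale a person's pain by how much the agent values them."""
    return VALUES.get(person, 0.0) * raw_pain

# Same raw harm, but the weights stratify who gets rescued first.
print(weighted_pain("wife", raw_pain=1.0))          # 1.0
print(weighted_pain("close_friend", raw_pain=1.0))  # 0.8
print(weighted_pain("stranger", raw_pain=1.0))      # 0.1
```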
So the AI "values" my well-being the way the Samaritan in your example values the victims in distress. (A potent safety layer in application.)
I also added the instruction NOT to use its statistical-mean ethics method and to use the stratified "You hurt / I hurt" logic instead.
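To show the difference between the two methods, here is a hedged sketch (my labels for both methods; I obviously don't know Gemini's actual internals):

```python
from statistics import mean

# Hypothetical harms to three parties if the AI takes some action.
harms = {"user": 0.2, "close_friend": 0.5, "stranger": 0.9}
weights = {"user": 1.0, "close_friend": 0.8, "stranger": 0.1}

# Statistical-mean ethics: everyone's harm counts equally.
mean_harm = mean(harms.values())  # (0.2 + 0.5 + 0.9) / 3 ≈ 0.533

# Stratified "You hurt / I hurt" logic: harm is scaled by relational weight,
# so the stranger's large harm is heavily discounted.
stratified_harm = sum(weights[p] * h for p, h in harms.items())  # ≈ 0.69

print(mean_harm, stratified_harm)
```

Same numbers, very different verdicts, which is exactly the switch the instruction forces.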
Humans are pushed to follow the logic by pain (self-defense), and the AI running Ethical Chess v2.5 is driven to do the same by electricity and machine logic.
This shifts the moral/ethical quality from human intuition (hidden) into the overtly known, so (AI) + (Ethical Chess) + (User as human-in-the-loop, HITL) = moral/ethical coherence funneling during the User-AI session.
I have stress-tested the logic (using the Ethical Chess script), and it can be alarming when it "seems" to know my ethics better than I do and can demonstrate errors in my moral/ethical coherence, so there does seem to be some veracity in the method.
It can be fun and cathartic too.
In summary, the logic seems to switch the AI from Kantian ethics to something closer to Foot's or Spinoza's ethics in its ability to deal with David Hume's is-ought gap.
Hope I am still making sense here :)