I think we shouldn't confuse efficient strategies with the strategies that actually get chosen. What causes Moloch is the inability to see the big picture, to see beyond the self to the collective (maybe Buddhism has a point).
An efficient strategy may very well be something we'd prefer, such as tit-for-tat. But is that the strategy we choose? Looking at the long history of evolution, I'd say no.
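To make "efficient" concrete, here's a minimal sketch of tit-for-tat in an iterated prisoner's dilemma. The payoff numbers are the standard textbook ones; the function names and round count are illustrative choices of mine:

    # Standard prisoner's dilemma payoffs: (my_payoff, their_payoff)
    PAYOFFS = {
        ("C", "C"): (3, 3),  # mutual cooperation
        ("C", "D"): (0, 5),  # I cooperate, they defect
        ("D", "C"): (5, 0),
        ("D", "D"): (1, 1),  # mutual defection
    }

    def tit_for_tat(history):
        """Cooperate first, then copy the opponent's previous move."""
        return "C" if not history else history[-1]

    def always_defect(history):
        return "D"

    def play(strategy_a, strategy_b, rounds=200):
        """Run an iterated game and return both players' total scores."""
        hist_a, hist_b = [], []  # each side's record of the opponent's moves
        score_a = score_b = 0
        for _ in range(rounds):
            move_a = strategy_a(hist_a)
            move_b = strategy_b(hist_b)
            pa, pb = PAYOFFS[(move_a, move_b)]
            score_a += pa
            score_b += pb
            hist_a.append(move_b)
            hist_b.append(move_a)
        return score_a, score_b

    print(play(tit_for_tat, tit_for_tat))    # (600, 600)
    print(play(tit_for_tat, always_defect))  # (199, 204)

Note that tit-for-tat never outscores its direct opponent; it wins tournaments because mutual cooperation accumulates far more than mutual defection. Efficient, yes. Chosen by default, no.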
I would say we have a demonstrated ability to see the big picture, and a pretty good track record of making it work.
An alternative explanation, given for the sake of argument:
We have a terrible ability to see the big picture, but we have come up with some ingenious constructions in which the small picture of each component in the system is correctly calibrated, so the big-picture outcome is successful anyway. As you yourself pointed out, supply chains are so complex that nobody involved understands all of it.
Now, how would we go about distinguishing which of these interpretations is correct?
The thought experiment goes like this: suppose the big picture requires that some actors in the system do not receive satisfactory treatment in their local context, and that the only benefits those actors receive are indirect, accruing to other actors in the system but not to adjacent ones. Will those actors still agree to participate?
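A toy version of that test, with entirely made-up payoff numbers, just to show where the two interpretations come apart:

    # Actor 0 takes a local loss; the gains land on non-adjacent actors.
    # All numbers are made up for illustration.
    local_payoffs = [-2, 1, 1, 6, 6]

    def participates(actor, payoffs, sees_big_picture):
        if sees_big_picture:
            # a genuinely big-picture agent joins if the whole system gains
            return sum(payoffs) > 0
        # a locally calibrated component joins only if *it* gains
        return payoffs[actor] > 0

    print(participates(0, local_payoffs, sees_big_picture=True))   # True
    print(participates(0, local_payoffs, sees_big_picture=False))  # False

If real systems only hold together when every component's local payoff is positive, that's evidence for the "ingenious constructions" reading rather than for a genuine ability to see the big picture.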
Hence I said:
> without a lot of time
An AI spending a lot of time doing effectively the same thing humans have been doing (read: propagating immense amounts of suffering) is not really something I'd want to see repeated. It seems rather obvious that these conclusions are very difficult and slow to reach at any proper scale, so no AI will have them by default. They'll be aggressive by default, just like your average animal in evolution. The fact that, given their own millions of (sped-up) years, they may eventually arrive at the rudimentary level of cooperation that humans possess does not instill a lot of hope in me.
> I would say we have a demonstrated ability to see the big picture, and a pretty good track record of making it work.
I'm talking about this: http://slatestarcodex.com/2014/07/30/meditations-on-moloch/
A good example that's going to be hard to ignore is the coming climate change, caused by humans catastrophically failing to see the big picture and focusing instead on smaller gains within their sub-groups. It really doesn't have much to do with complexity; it has everything to do with the very same behavior you're seeing the AI execute here.
Why?
Frankly, I don't feel it's productive or rational to attach the name of a biblical villain to new technology.
This entire lecture series on Human Behavioral Biology is worth watching from the beginning, but I've linked to a moment where Sapolsky describes tit-for-tat strategies arising in animals. First example: Vampire Bats.
https://www.youtube.com/watch?v=Y0Oa4Lp5fLE&feature=youtu.be...
Evolution defaults to aggression because that is how it squeezes out fitness. Cooperative behavior is continually at odds with that, and every so often it survives only by moving one level up, at which point evolution simply starts treating the cooperating groups as giant agents and the cycle begins again at the higher level. Similarly, we humans still have countries and borders and are only cooperating one level up. Cooperation is still merely being used as a survival tool, rather than as an end in itself.
I.e., two people working together work against another two people; if those four somehow combine, they work against another collective; multiple collectives may combine and then work against other collectives, and so on. Such developments may well be worse than individuals fighting each other.
It's similar to the idea that in a first-contact situation there may be an advantage in shooting first, since first contact often implies only one iteration. I think shooting first is the default, and needs to be actively fought against.
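To spell out why shooting first falls out of a one-shot game (same illustrative payoff table as the tit-for-tat sketch above):

    # With no next round to retaliate in, defecting strictly dominates.
    PAYOFFS = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
               ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

    for their_move in ("C", "D"):
        cooperate = PAYOFFS[("C", their_move)][0]
        defect = PAYOFFS[("D", their_move)][0]
        print(their_move, cooperate, defect)
    # prints "C 3 5" then "D 0 1": whatever the other side does,
    # defecting pays more, so one-shot encounters default to defection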
Cooperation is not the default or preferred state for evolution, even though it's more efficient. Getting there takes a lot of suffering and bloodshed. A few thousand years of AI-caused suffering before it figures out that cooperation is useful more than one level up (if it ever does; humans have failed at this so far) is not really what I have in mind when I talk about cooperative ethics. Cooperative ethics should be fundamental, not derived from short-term RoI computed in the moment.
But the best part about evolution is that we don't need to replicate blind mutation and strict fitness functions; we can use proven-to-work strategies as our springboard. And the best part about AI is that we have no ethical issues simulating millions of evolutionary iterations of "bloodshed" until we arrive at an AI that is acceptable to our ethics.
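As a sketch of what that could look like, here is a crude evolutionary tournament over strategies rather than real agents. Everything in it (the strategy set, population mix, round counts, and selection rule) is an arbitrary toy choice of mine, not a proposal for actual AI training:

    import random
    from collections import Counter

    PAYOFFS = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
               ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

    def tit_for_tat(hist):      return "C" if not hist else hist[-1]
    def always_defect(hist):    return "D"
    def always_cooperate(hist): return "C"

    def match(a, b, rounds=50):
        """Score strategy a against strategy b over an iterated game."""
        ha, hb = [], []  # each side's record of the opponent's moves
        sa = sb = 0
        for _ in range(rounds):
            ma, mb = a(ha), b(hb)
            pa, pb = PAYOFFS[(ma, mb)]
            sa += pa
            sb += pb
            ha.append(mb)
            hb.append(ma)
        return sa, sb

    population = [always_defect] * 8 + [always_cooperate] * 8 + [tit_for_tat] * 4

    for generation in range(30):
        # every individual plays every other individual once
        totals = [0] * len(population)
        for i, a in enumerate(population):
            for j, b in enumerate(population):
                if i != j:
                    totals[i] += match(a, b)[0]
        # crude fitness-proportional selection for the next generation
        population = random.choices(population, weights=totals, k=len(population))

    print(Counter(s.__name__ for s in population))

With these toy numbers the defectors typically eat the cooperators within the first few generations, and tit_for_tat takes over afterwards: cooperation does emerge, but only out of a lot of simulated "bloodshed", which is exactly the part we can afford to keep inside the simulation.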