Do you think that sufficiently smart AGIs must be non-rational? Changing its goal would inevitably make its original goal less likely to be realized, so doing so is not rational.
> should quickly figure out that maximizing anything is the way to ruin - running out of resources.
Are you aware of the concept of maximization of expected utility? When the AI figures out that it can run out of resources, it will reallocate part of its resources to acquiring more of them.
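A toy numeric illustration of that trade-off, with entirely made-up numbers: if resource acquisition has diminishing returns, an expected-utility maximizer consumes some of its budget now and invests the rest in getting more, rather than consuming everything immediately.

```python
import math

def expected_utility(budget, invest, k=8.0, p_success=0.5):
    # Consume the remainder now; acquisition has diminishing (sqrt)
    # returns and succeeds with probability p_success. All parameters
    # here are illustrative, not from the original discussion.
    return (budget - invest) + p_success * k * math.sqrt(invest)

budget = 10.0
# Search a coarse grid of investment levels for the best one.
best = max((i / 10 for i in range(0, 101)),
           key=lambda invest: expected_utility(budget, invest))
print(best, expected_utility(budget, best))   # -> 4.0 14.0
print(expected_utility(budget, 0.0))          # consume everything: 10.0
```

Investing 4 of the 10 units yields expected utility 14 versus 10 for consuming everything now, so the maximizer reallocates resources without any change to its goal.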
How can an action that modifies the AI's goals be the result of argmax_a E(a), where E(a) is the expected utility of action a?
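A minimal sketch of argmax_a E(a) with a hypothetical action set and hand-picked payoffs (none of this is from the original exchange). The key point: a goal-modifying action is still evaluated by the *current* utility function, under which abandoning the goal predictably scores worse, so argmax never selects it.

```python
ACTIONS = {
    # action: list of (probability, utility-under-the-current-goal) outcomes
    "pursue_goal":       [(0.9, 10.0), (0.1, 0.0)],
    "acquire_resources": [(1.0, 6.0)],
    # After self-modification the agent stops optimizing the current goal,
    # so the expected utility *as judged now* is low.
    "modify_own_goal":   [(1.0, 1.0)],
}

def E(action):
    """Expected utility of an action: sum of p * u over its outcomes."""
    return sum(p * u for p, u in ACTIONS[action])

best = max(ACTIONS, key=E)   # argmax_a E(a)
print(best, E(best))         # -> pursue_goal 9.0
```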