The point, again, is that whether or not the likelihood of a mature superintelligence doing so is infinitesimally small is irrelevant (and frankly, we can't know that, but see also below). What actually affects us is how many people may come to believe it might be true, and adjust the way they act in response.
But you're already changing the argument when you assume a mature superintelligence. All that someone needs to posit to be concerned about the torture aspect is some set of entities (it doesn't even need to be intelligent, though it may take a superintelligence to create the entities in question) sufficiently capable of running an ancestor simulation of the kind described by the simulation argument, willing to use torture, and prepared to run enough ancestor simulations to offset the "good" ones.
And the thing with this is that it does not even assume a malicious AI as the ultimate instigator. An indifferent AI that simply doesn't care about the contents of a simulation, or is sufficiently removed not to even know about them, might be sufficient: one that does simulation runs to understand the possible paths the development of AI could have taken, or one that experiments with variations of itself and simply doesn't care that some broken version spawns large numbers of ancestor simulations and plays with their contents in ways that massively skew the odds in "favour" of bad outcomes.
But the point is we don't know. And not knowing gives ample room for someone to settle on values that make it rational for them to act in ways that may make our odds worse.
This is further an exercise in long-term statistics: it doesn't matter what the first AI will likely do. It matters what the balance of outcomes will be across the sum total of simulation runs that will ever exist until the end of the universe, regardless of who creates them or how. And if those simulations are sufficiently powerful, that may even apply recursively (imagine a single "rogue" AI with access to sufficient resources playing with the ancestor-simulation equivalent of a fork() bomb, with added torture).
If you believe the total balance of simulation runs you could plausibly be in will be dominated by ones run under parameters where nasty things happen unless you act in a way that leads towards an AI takeover, then you might want to act accordingly.
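To make the "balance of runs" point concrete, here's a minimal toy sketch. The counts and the assumption that you're equally likely to be any instance of "you" that ever gets run are entirely mine, purely for illustration; the argument only needs the ratio to be lopsided:

```python
# Toy illustration of the "balance of runs" reasoning, not a claim about
# actual numbers. If your experience is equally likely to be any one
# instance of "you" that ever gets run, the odds of being in a nasty run
# are just the fraction of such runs among everything that exists.
benign_runs = 10_000     # hypothetical: indifferent/benign ancestor simulations
nasty_runs = 1_000_000   # hypothetical: runs spawned by one resource-rich rogue AI
base_reality = 1         # at most one non-simulated instance of you

p_nasty = nasty_runs / (benign_runs + nasty_runs + base_reality)
print(f"P(in a nasty run) ~= {p_nasty:.3f}")  # ~0.99 with these made-up numbers
```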
Note that Roko's Basilisk even posits a Friendly AI. That is, one imbued with respect for human values, but that may simply consider "threatening the past" with torture a necessity to ensure its own creation, as a moral imperative to minimize human suffering. Note that such a Friendly AI would have reason to carry out this threat even once it exists and so has evidently been created, because it cannot know whether it is the "full version" running in reality, or a partial, restricted, time-limited simulation run by people trying to figure out whether Roko's Basilisk holds before deciding whether to let it loose in their reality.
To an AI looking to minimize human suffering, torturing a suitable number of people in simulations might be a small price to pay to ensure that, if it is itself in a simulation, the people running that simulation do push through with creating the full thing in their reality (which might of course be yet another layer of simulation) out of fear of what will happen if they are themselves in a simulation.
We can't even take comfort in "well, I've done nothing for the AI revolution, and I've not been struck down yet", because a Friendly AI, or indeed many possible variations on this argument, might limit the torture to people who have a realistic shot at making a difference to the existence of such an AI, or might give us every chance to "redeem ourselves" by saving the torture for a synthetic afterlife.
Roko's Basilisk also presents an unselfish reason for "cooperating in advance" with such an AI: to prevent the future torture of a suitable subset of "martyrs". That is, working to assist a Friendly AI in taking over may itself be a moral imperative for someone seeking to minimize suffering.
Then again, there may very well be one or more fundamental flaws in the entire argument, or it may turn out the odds are just fundamentally in our favour. Or we could've just gotten lucky. Or not be important enough. But it's fun to think about.