Hmm, at least in Haskell terms that would not be sufficient, if we define "IO" by the IO type, because some kinds of output, such as setting the value of an IORef, are definitely something that could be witnessed elsewhere depending on the flow of the IORef in question.
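To make that concrete, here's a small Haskell sketch of why an IORef write can't be waved away as "mere output": any later read of the same ref, wherever it flows, witnesses the write.

```haskell
import Data.IORef

-- Writing to an IORef is "output", but it is observable output:
-- a later read anywhere the ref has flowed will see the new value,
-- so the write cannot be treated as pure.
main :: IO ()
main = do
  ref <- newIORef (0 :: Int)
  writeIORef ref 42   -- the "output" ...
  v <- readIORef ref  -- ... is witnessed here
  print v             -- prints 42
```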
But I think in concept that's pretty close. Input from IO is arguably the definition of impure that we care about. But not all forms of output are necessarily radioactive waste for purity. We might in practice be able to give ourselves a bit more wiggle room on that side without breaking all the benefits of purity, and gain substantial practical utility for a very, very small loss in theory. With some careful thought it's even possible the theory loss could be either "minimized or eliminated" or "strongly characterized and constrained".

I don't know enough about the language in question to know what is in it, but a language that had first-class messaging of some sort ought to be able to define a form of output that is "you can send this message to this target any time you want without breaking purity", and the code for the thing receiving the message could still itself be constrained by the effect system. (You could conceivably go so far as to insist that the resulting "exception handlers" rigidly form a tree that grounds out into code that uses none of these implicit handlers, though my gut says that will probably end up being more trouble than it is worth.)
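A minimal Haskell sketch of that kind of "send-only" output, modeled here as what is essentially the Writer monad: the computation produces its value alongside a log of outgoing messages, and nothing a receiver does can flow back into the value, so the result stays pure whether or not anyone is listening. The `Send`, `send`, and `compute` names are made up for illustration, not taken from any real library.

```haskell
-- One-directional message output: a value paired with the messages
-- sent while computing it. Receivers cannot influence the value.
newtype Send msg a = Send { runSend :: (a, [msg]) }

instance Functor (Send msg) where
  fmap f (Send (a, ms)) = Send (f a, ms)

instance Applicative (Send msg) where
  pure a = Send (a, [])
  Send (f, ms) <*> Send (a, ns) = Send (f a, ms ++ ns)

instance Monad (Send msg) where
  Send (a, ms) >>= k = let Send (b, ns) = k a in Send (b, ms ++ ns)

-- Fire a message at some target; no reply is possible by construction.
send :: msg -> Send msg ()
send m = Send ((), [m])

-- A pure computation that also emits messages along the way.
compute :: Int -> Send String Int
compute x = do
  send ("doubling " ++ show x)
  pure (x * 2)

main :: IO ()
main = do
  let (result, msgs) = runSend (compute 21)
  print result        -- prints 42, regardless of what happens to msgs
  mapM_ putStrLn msgs -- the receiving side handles these separately
```

The point of the sketch is the direction of the data flow: `send` can only append to the log, so equational reasoning about `result` is unaffected, while the code that eventually consumes `msgs` can still be governed by the effect system as the comment suggests.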