No. To make that leap, I must assume the conclusion. I cannot know that I'll be able to observe what it does, because I don't know what it will do (or whether I can even perceive the dimension in which it does it for that matter).
Secondly, the word "understand" is subjective. I assume Musk meant it in the sense of a depth of understanding sufficient to actually be meaningful; otherwise his statement itself would be meaningless (i.e. so obvious as not to be worth saying).
That said, ignoring the Elon Musk bit: even if a superintelligent A.I. figures out how to communicate in some other dimension, surely there's plenty of work it will do that fits within the bounds of the physics we know. We can observe the parts of its work that we do understand, and we can observe the parts that we don't, and given time we can figure out how to understand whatever we can observe. More generally, even if it manages to jump straight to communicating in some other dimension without letting us see its intermediate work, it seems very likely that whatever it is doing will have observable effects within the physics we understand and can observe (even if it's just EM radiation). At the very least, we can tell that it's there and that it's doing something, and start figuring out what.
This is in contrast to the ants of Michio Kaku, who don't even know anything is going on. The point isn't that the ants don't understand how to build their own 10-lane superhighway, it's that they don't even know that this is a thing built by another species or for what purpose it was built. sixQuarks's point (regardless of whether what he said about Elon Musk is true), summed up in his analogy about complex mathematics, is that we should still be able to observe aliens and comprehend that they're doing something, even if we don't necessarily understand how to do that thing ourselves.
And here is the actual paragraph:
"One topic I disagreed with him on is the nature of consciousness. I think of consciousness as a smooth spectrum. To me, what we experience as consciousness is just what it feels like to be human-level intelligent. We’re smarter, and “more conscious” than an ape, who is more conscious than a chicken, etc. And an alien much smarter than us would be to us as we are to an ape (or an ant) in every way. We talked about this, and Musk seemed convinced that human-level consciousness is a black-and-white thing—that it’s like a switch that flips on at some point in the evolutionary process and that no other animals share. He doesn’t buy the “ants : humans :: humans : [a much smarter extra-terrestrial]” thing, believing that humans are weak computers and that something smarter than humans would just be a stronger computer, not something so beyond us we couldn’t even fathom its existence."
>surely there's plenty of work it will do that fits within the bounds of the physics we know
Simply to say it would do some such work would itself be a huge assumption, but plenty? That assumes far too much about our current state of knowledge, and it places artificial limits on the capacity of a super-intelligence.
The flaw in your thinking is your very reliance on our own knowledge as your frame of reference. You're extrapolating from that because you can't imagine anything else. But, that's the point: we don't know what we don't know (or what a super-intelligence might know).
EDIT: I could attempt to use hypothetical scenarios here to convey how your assumptions might be wrong (e.g. what if this intelligence could jump dimensions such that our laws of physics no longer hold?). However, to do that would be to commit the same error that you're committing: it extrapolates back from our own limited language and understanding. By definition, if we can explain it, then it doesn't meet the test of the type of intelligence that I'm referencing./EDIT
Your statement is also a pretty big tell that you're working from an assumption of only marginally increased intelligence, which is where I think our disconnect really enters. To appreciate the degree of intelligence I'm referencing, see [1].
It's long, but a good read in its own right. Specifically regarding this discussion, towards the middle of the page there are two graphs, labeled "Our Distorted View of Intelligence" and "Reality" which might help illustrate the scale of what I'm trying to communicate. Here also is an attendant quote:
>In our world, smart means a 130 IQ and stupid means an 85 IQ—we don’t have a word for an IQ of 12,952
See what I mean? When intelligence reaches such a scale beyond our own, it destroys key assumptions. It is not guaranteed that we could even comprehend that the intelligence is doing something. Further, even if we did perceive it, that perception might bring us only incrementally closer to true understanding than not perceiving it at all. So, for any meaningful purpose, our understanding would be much more closely aligned to that of the ants WRT the 10-lane superhighway.
Another way to look at this is to imagine an infinitely expanding intelligence scale. At some zoom-level, we are pushed so much closer to the ants that the difference is virtually indiscernible.
[1] http://waitbutwhy.com/2015/01/artificial-intelligence-revolu...