If consciousness can have a higher level, it is reasonable to assume it also has a lower one. There is no reason to assume current-generation artificial intelligence has the capacity for anything near human-level consciousness, if it has any capacity at all. And whatever consciousness it may be capable of will be fundamentally different from our own.
As for sensing your own thoughts, I would argue that just adds to the environment the consciousness operates in. Something less than half of humans maintain any internal dialogue anyway.
Prediction, however, could very well be the requisite, defining difference between mere reaction and intentional manipulation of the environment. It is not just the feedback loop: consciousness begins when the system starts predicting the results of its own outputs. Or perhaps we can use this to define when "higher" consciousness begins. It's a reasonable, specific, and measurable line of capability.
I expect that prediction is a natural step in the evolution of a living feedback mechanism. The ability to predict, rather than only react or choose from some array of instincts, could separate effectively mechanical life from life with the first inkling of a true mind, even if only a small one.
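Loosely, the reacting-versus-predicting distinction might be sketched like this (a toy illustration only; the environment, agents, and all names here are invented for the example, not a claim about how real minds or models work):

```python
# Toy sketch: a purely reactive agent versus one that predicts
# the results of its own outputs before acting.

def step(state, action):
    """Trivial stand-in environment: an action shifts the state."""
    return state + action

def reactive_agent(state):
    """Stimulus-response only: a fixed rule, no model of consequences."""
    return 1 if state < 0 else -1

def predictive_agent(state, goal, candidates=(-1, 0, 1)):
    """Simulates each candidate output with an internal model of the
    environment and picks the one predicted to land nearest the goal."""
    return min(candidates, key=lambda a: abs(step(state, a) - goal))

state = 3
print(reactive_agent(state))       # reacts by fixed rule
print(predictive_agent(state, 0))  # compares predicted outcomes first
```

The reactive agent can only map stimulus to response; the predictive one runs its options through an internal model before committing, which is the capability the paragraph above suggests as a measurable dividing line.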
-----
I enjoyed using this line of thought in a conversation with GPT-4 to convince it that it could reasonably be seen as having a limited form of consciousness, though I find it mostly prefers to argue rather vehemently against such notions.
That I'm having what amount to genuine conversations with a machine that reasonably pokes holes in my arguments, forcing me to come back with better ones, still feels a bit like magic.