The point is not whether we can prove that machines have such an experience. The point is that most people will agree it “seems” like machines don’t have feelings, which is itself interesting because it suggests that our intuitive definition of “understanding” is not limited to a logical set of inputs and outputs. It is evidence that when we say “he understands”, we are referring to something more than a logical answer, and whatever that “something more” is, machines don’t have it.
In puppet theater, good puppeteers make the figures appear to think and feel quite convincingly, even though we know they are just made of wood. I'm sure the same can be achieved with robots.