But IMO it’s a point worth bringing up: most people have no idea how digital photography works, or how difficult it is to measure, quantify, and interpret the analog signal coming off a camera sensor into something that even resembles an image.
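For a sense of what "interpreting the analog signal" involves, here's a toy sketch of the bare-minimum steps a raw converter performs. Every number in it (black level, white balance gains, gamma) is a made-up placeholder, the demosaic is the crudest possible, and a real ISP does far more (lens corrections, noise reduction, color matrices, local tone mapping):

    import numpy as np

    def develop_raw(raw, black_level=64, white_level=1023,
                    wb_gains=(2.0, 1.0, 1.6), gamma=2.2):
        """Turn a Bayer-mosaic sensor readout (RGGB) into viewable RGB."""
        # 1. Normalize: subtract the sensor's black level (readout offset,
        #    dark current) and scale so saturation maps to 1.0.
        x = (raw.astype(np.float64) - black_level) / (white_level - black_level)
        x = np.clip(x, 0.0, 1.0)

        h, w = x.shape
        rgb = np.zeros((h, w, 3))

        # 2. Demosaic: each photosite measured only one color, so the two
        #    missing channels at every pixel must be interpolated
        #    (nearest-neighbor here; real pipelines are much smarter).
        r  = x[0::2, 0::2]   # red sites
        g1 = x[0::2, 1::2]   # green sites on red rows
        g2 = x[1::2, 0::2]   # green sites on blue rows
        b  = x[1::2, 1::2]   # blue sites
        rgb[:, :, 0] = np.kron(r, np.ones((2, 2)))[:h, :w]
        rgb[:, :, 1] = np.kron((g1 + g2) / 2, np.ones((2, 2)))[:h, :w]
        rgb[:, :, 2] = np.kron(b, np.ones((2, 2)))[:h, :w]

        # 3. White balance: channels have different sensitivities and the
        #    scene illuminant is rarely neutral, so rescale per channel.
        rgb = np.clip(rgb * np.array(wb_gains), 0.0, 1.0)

        # 4. Gamma: sensor counts are linear in photons, but displays and
        #    human vision are not, so apply a tone curve.
        return rgb ** (1.0 / gamma)

    # Fake a 10-bit RGGB readout just to exercise the pipeline.
    rng = np.random.default_rng(0)
    fake_raw = rng.integers(64, 1024, size=(8, 8))
    print(develop_raw(fake_raw).shape)  # (8, 8, 3)

Even this toy version makes the point: the "image" is already several layers of interpretation away from what the sensor measured.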
Probably not exactly the same side and orientation. https://en.wikipedia.org/wiki/Libration#Lunar_libration: “over time, slightly more than half (about 59% in total) of the Moon's surface is seen from Earth due to libration”
I would object slightly less if they made a model (3D or AI) that captures the whole visible side of the Moon in high detail, and used it, combined with precise location and date/time, to guide resolving the blob in the camera input into a high-resolution rendering *that matches, with high accuracy and precision, what the camera would actually see if it had better optics and a better sensor*. It would still feel like faking things, but at least the goal would be to match reality as closely as possible.
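In optimization terms, that approach would treat the model render as a prior and only keep the details the sensor data actually supports. A toy numpy/scipy sketch of the idea; everything here (the Gaussian-blur camera model, the stride-4 undersampling, the quadratic prior, all parameters) is an illustrative assumption, not what any phone actually ships:

    import numpy as np
    from scipy.ndimage import gaussian_filter

    SIGMA, STRIDE = 2.0, 4  # assumed optical blur and sensor undersampling

    def forward(x):
        """Model of the bad camera: optical blur, then coarse sampling."""
        return gaussian_filter(x, SIGMA)[::STRIDE, ::STRIDE]

    def adjoint(y, shape):
        """Transpose of `forward`: zero-fill upsample, then the same blur."""
        up = np.zeros(shape)
        up[::STRIDE, ::STRIDE] = y
        return gaussian_filter(up, SIGMA)

    def reconstruct(observed, reference, lam=0.05, lr=0.5, steps=200):
        """Minimize ||forward(x) - observed||^2 + lam * ||x - reference||^2.

        The data term keeps the answer consistent with what the sensor
        actually measured; the prior term pulls it toward the model render.
        """
        x = reference.copy()  # start from the render
        for _ in range(steps):
            grad = 2 * adjoint(forward(x) - observed, x.shape)
            grad += 2 * lam * (x - reference)
            x -= lr * grad
        return np.clip(x, 0.0, 1.0)

    # Toy run: a square "Moon", a slightly misaligned render as the prior.
    truth = np.zeros((128, 128)); truth[40:90, 40:90] = 1.0
    observed = forward(truth)                # what the sensor saw
    reference = np.roll(truth, 3, axis=1)    # slightly-wrong model render
    recon = reconstruct(observed, reference)
    # The data term should drag the misaligned prior back toward the evidence:
    print("prior error:", np.abs(reference - truth).mean())
    print("recon error:", np.abs(recon - truth).mean())

The key property is that the prior only fills in what the measurement can't distinguish; where the render disagrees with the sensor data, the data wins. That's the difference between "enhancing what the camera saw" and pasting in a stock Moon.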