Being able to make testable predictions and then confirm or disprove them is the entire (awesome) point.
Astronomy/cosmology is one of those strange disciplines where, rather than discovering new objects in situ, one discovers their possibility in the mathematics and then goes out to find them. So I and many others were hoping that this image would be radically different from the math, potentially opening the door to some new theories. Confirmation just isn't as much fun as raw discovery of the unknown. Example: the recent "cannonball star" observations. We are going to need some new science to explain how that is a thing.
That being said, it seems your concerns are addressed in the TED talk you linked to, from 8:45 onward?
Moreover, in the NSF press conference today it was said that they had four different teams, in four different locations across the globe, working on interpolating the data and generating the images. They basically asked the teams to lock themselves in, i.e. to not communicate with each other at all, and to use (more or less) whatever interpolation algorithm each team thought would fit the data best. And when the four teams finally met up last year, they had supposedly arrived at very similar-looking images.
I briefly(!) looked at the papers that were published today ("First M87 Event Horizon Telescope Results" I–VI) and while I'm anything but an expert when it comes to radio astronomy and imaging technology (I'm more a theoretical physics/mathematical general relativity kind of guy), I came across the following statements which, to me, all suggest that they've at least evaluated the data with due diligence (emphases all mine):
"IV. Imaging the Central Supermassive Black Hole" (https://iopscience.iop.org/article/10.3847/2041-8213/ab0e85):
Section 5.2 confirms the statements from the press conference today:
> The imaging teams worked on the data independently, without communication, for seven weeks, after which teams submitted images to the image comparison website using LCP data (because the JCMT recorded LCP on April 11). After ensuring image consistency through a variety of blind metrics (including normalized cross-correlation, Equation (15)), we compared the independently reconstructed images from the four teams.
> Figure 4 shows these first four images of M87. All four images show an asymmetric ring structure. For both RML teams and both CLEAN teams, the ring has a diameter of approximately 40 μas, with brighter emission in the south. In contrast, the ring azimuthal profile, thickness, and brightness vary substantially among the images. Some of these differences are attributable to different assumptions about the total compact flux density and systematic uncertainties (see Table 2).
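For anyone curious what a "blind metric" like the normalized cross-correlation they mention actually computes: it's essentially a Pearson correlation over the pixels of two images, so 1.0 means identical structure (up to brightness scale and offset) and values near 0 mean unrelated structure. A minimal sketch — the paper's Equation (15) has this general form, though their exact conventions (e.g. relative image shifts, resolution matching) may differ:

```python
import numpy as np

def normalized_cross_correlation(x, y):
    """Normalized cross-correlation of two equally sized images.

    Both images are flattened, mean-subtracted, and scaled to unit
    standard deviation; the mean of the elementwise product is then
    the correlation coefficient in [-1, 1].
    """
    x = np.asarray(x, dtype=float).ravel()
    y = np.asarray(y, dtype=float).ravel()
    x = (x - x.mean()) / x.std()
    y = (y - y.mean()) / y.std()
    return float(np.mean(x * y))
```

So two teams' images can agree on this metric even if their absolute flux scales differ, which is exactly what you want when comparing reconstructions built under different flux assumptions.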
Section 6, in turn, confirms the statements from the TED talk:
From the introduction to section 6:
> To explore the dependence of the reconstructed images on imaging assumptions and impartially determine a combination of fiducial imaging parameters, we introduced a second stage of image production and analysis: performing scripted parameter surveys for three imaging pipelines. To objectively evaluate the fidelity of the images reconstructed by our surveys—i.e., to select imaging parameters that were independent of expert judgment—we performed these surveys on synthetic data from a suite of model images as well as on the M87 data. The synthetic data sets were designed to have properties that are similar to the EHT M87 visibility amplitudes (e.g., prominent amplitude nulls). This suite of synthetic data allowed us to test the scripted reconstructions with knowledge of the corresponding ground truth images and, thereby, select fiducial imaging parameters for each method. These fiducial parameters were selected to perform well across a variety of source structures, including sources without the prominent ring observed in our images of M87.
From section 6.2:
> We then reconstructed images from all M87 and synthetic data sets using all possible parameter combinations on a coarse grid in the space of these parameters. We chose large ranges for each parameter, deliberately including values that we expected to produce poor reconstructions.
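The "all possible parameter combinations on a coarse grid" part is conceptually just an exhaustive sweep over the Cartesian product of parameter ranges, with each combination scored against data where the ground truth is known. A rough sketch of that idea — the parameter names and ranges here are invented for illustration, not the actual settings of the EHT pipelines:

```python
from itertools import product

# Hypothetical imaging parameters with deliberately wide, coarse ranges
# (the real pipelines use their own parameter sets).
PARAM_GRID = {
    "total_flux_jy": [0.4, 0.6, 0.8],        # assumed compact flux density
    "fov_uas": [100, 120, 150],              # field of view in microarcseconds
    "regularizer_weight": [0.1, 1.0, 10.0],  # includes values expected to do poorly
}

def survey(reconstruct, score, data):
    """Reconstruct an image for every parameter combination and score it.

    'reconstruct' and 'score' are stand-ins for an imaging pipeline and a
    fidelity metric (e.g. similarity to a synthetic ground-truth image).
    Returns a list of (params, score) pairs for later ranking.
    """
    keys = list(PARAM_GRID)
    results = []
    for values in product(*(PARAM_GRID[k] for k in keys)):
        params = dict(zip(keys, values))
        image = reconstruct(data, **params)
        results.append((params, score(image)))
    return results
```

The point of running this on synthetic data first is that the winning ("fiducial") parameter combination is chosen by an objective score against known truth, not by eyeballing which M87 image looks nicest.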
Finally, in the caption of figure 4 of "I. The Shadow of the Supermassive Black Hole" (https://iopscience.iop.org/article/10.3847/2041-8213/ab0ec7) they write:
> Note that although the fit to the observations is equally good in the three cases, they refer to radically different physical scenarios; this highlights that a single good fit does not imply that a model is preferred over others
…which, assuming that I'm understanding this correctly, means they are careful not to let a single good fit bias them toward one physical model over another.
--
Again, I cannot stress enough that I've only skimmed the papers but from what I did read, I see no good reason not to trust their results.