I have no great insights to offer, but there are some pretty interesting details in the original lecture. I've thrown in some commentary here and there, without taking any real care to distinguish it from the source material. The full lecture is on Caltech's YouTube channel. As always, any errors or distortions are mine.
Technically, what the telescopes gather in combination isn't an image at all but spatial frequency information, basically how quickly bright changes to dark as you scan across the actual image. For example, a square divided into a white rectangle and a black rectangle has a lower spatial frequency than one divided into thin black and white stripes. It's possible to reconstruct a visual image from information about the spatial frequencies, and the more information you have, the better the reconstruction. This sort of reconstruction is widely used in many fields, including astronomy, medical imaging and the image compression that lets us stream video over the internet.
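If you want to see this concretely, here's a toy demo using NumPy's FFT. The two test images are the ones described above (a half-and-half square and a striped one); everything else is made-up illustration, not anything from the EHT pipeline:

```python
import numpy as np

# Two 64x64 test images: one split into a black half and a white half,
# one with thin vertical stripes.
halves = np.zeros((64, 64))
halves[:, 32:] = 1.0                      # low spatial frequency
stripes = np.zeros((64, 64))
stripes[:, ::2] = 1.0                     # high spatial frequency

# The 2D FFT expresses each image as a sum of spatial frequencies.
for name, img in [("halves", halves), ("stripes", stripes)]:
    spectrum = np.fft.fftshift(np.fft.fft2(img))
    mag = np.abs(spectrum)
    mag[32, 32] = 0                       # ignore frequency zero (overall brightness)
    ky, kx = np.unravel_index(np.argmax(mag), mag.shape)
    print(name, "dominant frequency offset:", abs(kx - 32), abs(ky - 32))
# halves peaks right next to the center (low frequency);
# stripes peaks at the edge of the spectrum (high frequency).
```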
The exact spatial frequencies being measured depend on the distance (baseline) between the telescopes. When the telescopes are spread across distant locations on the Earth, the imaging technique is called Very Long Baseline Interferometry (VLBI). Interferometry refers to measuring how the signals from different telescopes interfere with each other, that is, whether they reinforce each other or cancel each other out when combined. Very Long Baseline refers to, um ...
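The reinforce-or-cancel part is just wave addition. A tiny sketch with made-up numbers (nothing EHT-specific):

```python
import numpy as np

t = np.linspace(0, 1e-6, 1000)            # one microsecond of samples
f = 5e6                                   # a 5 MHz toy signal
s1 = np.cos(2 * np.pi * f * t)
for delay in (0.0, 1 / (2 * f)):          # in phase vs. half a cycle off
    s2 = np.cos(2 * np.pi * f * (t - delay))
    print(f"delay {delay:.1e} s -> combined power {np.mean((s1 + s2)**2):.2f}")
# In phase: power ~2 (reinforcement). Half a cycle off: power ~0 (cancellation).
```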
In this case there were eight radio telescopes at six different sites, taking data together on four different nights, scanning for a few minutes at a stretch. The seven telescopes that could see M87 (the South Pole Telescope can't, since M87 never rises there) give 21 different pairs, that is, 21 different baselines, which would mean 21 different spatial frequencies to combine, except that the baselines change as the Earth rotates. This is good, because it provides samples of more frequencies. There is at least some talk of adding telescopes in low Earth orbit at some later stage, which would provide not only more raw data but more varied baselines, since satellites move faster than the Earth rotates.
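To see how rotation helps, here's a toy sketch of the sampling pattern (the "uv coverage"). The station positions are invented, and I've put the source at the celestial pole to keep the geometry trivial, but the point survives: each pair of telescopes sweeps out an arc of spatial frequencies as the Earth turns, rather than contributing a single point:

```python
import itertools
import numpy as np

# Made-up station positions in meters in an Earth-fixed frame (not real
# EHT sites). With the source at the celestial pole, the projected
# baseline is just the (x, y) part of the rotated baseline vector.
stations = np.array([
    [6.4e6, 0.0, 0.0],
    [0.0, 6.4e6, 0.0],
    [4.5e6, -4.5e6, 0.0],
    [3.0e6, 3.0e6, 4.0e6],
])
pairs = list(itertools.combinations(range(len(stations)), 2))

uv_points = []
for hour in np.arange(0, 24, 0.5):        # sample one full rotation
    theta = 2 * np.pi * hour / 24
    rot = np.array([[np.cos(theta), -np.sin(theta), 0],
                    [np.sin(theta),  np.cos(theta), 0],
                    [0,              0,             1]])
    for a, b in pairs:
        baseline = rot @ (stations[a] - stations[b])
        uv_points.append(baseline[:2])    # one sampled spatial frequency

# Each pair traces an arc in the uv plane instead of a single point,
# so a night of observing samples far more spatial frequencies.
print(f"{len(pairs)} pairs -> {len(uv_points)} uv samples")
```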
As mentioned in the other post, each scope collected hundreds of terabytes (hundreds of trillions of bytes), enough data that it was easier to fly the drives with the data to the site where it would be processed, rather than try to send it over the internet (and the South Pole data had to be flown out in any case).
The actual signal, coming from about 50 million light-years away, is quite weak and noisy. There is a lot of matter at the center of a galaxy, radiating at various frequencies and blocking others. The radio band was chosen to minimize these effects, but there is still noise to contend with, from the atmosphere, from imperfections in the telescopes themselves, and from a variety of other sources.
The various telescopes are looking through different amounts of atmosphere, depending on whether the black hole is overhead from their location or on the horizon, and this also affects how long it takes the signal to arrive at each telescope. It takes about 20 milliseconds longer for the signal to reach a scope that has the black hole on the horizon than it does for one where it's overhead. For a 1 GHz signal, that means a difference of 20 million cycles -- and again, this is varying as the Earth rotates.
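The 20 ms figure is just the light travel time across roughly one Earth radius, the extra path length for a telescope seeing the source on the horizon:

```python
# Rough numbers behind the 20 ms figure.
c = 3.0e8            # speed of light, m/s
r_earth = 6.37e6     # Earth radius, m
delay = r_earth / c
print(f"delay ~ {delay * 1e3:.0f} ms")            # ~21 ms
print(f"cycles at 1 GHz ~ {delay * 1e9:.2e}")     # ~2e7 cycles
```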
All this means there is a lot of calibration to be done to make sure you're lining up the right data from the various telescopes and accounting for all the sources of noise and distortion. In all, there were hundreds of thousands of calibration parameters to work with, though the team ended up using (only) ten thousand or so.
The reconstructed image is meant to represent what an Earth-sized telescope would see, if one could be built, but there are actually infinitely many such images that would match up with the limited data that was collected from a few scattered points on Earth.
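Here's a small demonstration of that underdetermination: anything at all can go into the frequencies that weren't measured, and the result still matches the data perfectly. The sampling mask is made up, and the zero-filling and junk-filling are illustrations, not what the EHT pipelines do:

```python
import numpy as np

rng = np.random.default_rng(0)
image = rng.random((32, 32))                 # any "true" image
spectrum = np.fft.fft2(image)

# Pretend we only measured ~10% of the spatial frequencies (made-up mask,
# symmetrized so that real-valued images stay real-valued).
mask = rng.random((32, 32)) < 0.10
mask |= np.roll(mask[::-1, ::-1], 1, axis=(0, 1))

# Two reconstructions: zero out the unmeasured frequencies, or fill them
# with the (scaled) spectrum of a completely different image. Both match
# the measured data exactly.
other = 100 * np.fft.fft2(rng.random((32, 32)))
recon_a = np.fft.ifft2(np.where(mask, spectrum, 0)).real
recon_b = np.fft.ifft2(np.where(mask, spectrum, other)).real

for name, recon in [("zero-filled", recon_a), ("junk-filled", recon_b)]:
    err = np.max(np.abs(np.fft.fft2(recon)[mask] - spectrum[mask]))
    print(name, "max mismatch with measured frequencies:", err)
# Both mismatches are ~0, yet the two images look nothing alike.
```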
Fortunately, having multiple telescopes means you can cross-check and isolate errors. There are also "closure quantities" that should be the same regardless of the exact details of calibration. I didn't really understand the details of closure quantities, but the idea of looking for quantities that don't depend on hard-to-control parameters is a powerful one. It's sort of similar to checking arithmetic by adding up digits.
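For what it's worth, here's the flavor of one such quantity, the closure phase: each station picks up its own unknown phase error (from the atmosphere above it, say), but the errors cancel when you add up the phases around a triangle of baselines. The numbers below are made up; only the cancellation is the point:

```python
import numpy as np

rng = np.random.default_rng(1)

# True visibilities on the three baselines of a station triangle (1,2,3).
true_vis = {pair: np.exp(1j * rng.uniform(-np.pi, np.pi))
            for pair in [(1, 2), (2, 3), (3, 1)]}

def closure_phase(vis):
    return np.angle(vis[(1, 2)] * vis[(2, 3)] * vis[(3, 1)])

print("true closure phase:     ", closure_phase(true_vis))

# Corrupt each station with its own unknown phase error; baseline (i, j)
# picks up error[i] - error[j].
error = {s: rng.uniform(-np.pi, np.pi) for s in (1, 2, 3)}
corrupted = {(i, j): v * np.exp(1j * (error[i] - error[j]))
             for (i, j), v in true_vis.items()}

# Around the triangle the errors cancel: (e1-e2) + (e2-e3) + (e3-e1) = 0.
print("corrupted closure phase:", closure_phase(corrupted))
# Same number: closure phase doesn't depend on per-station calibration.
```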
The team used two different methods of image reconstruction, and split into four subteams, two for each method. The subteams worked independently for several weeks to reconstruct an image from the raw data. Bouman's own work involved getting the human-tuned method (CLEAN) to work without human tuning.
In addition to the work by the subteams, a good deal of up-front effort went into verifying that the image processing algorithms they were tuning were robust. Much of this involved taking test images, calculating what signals the telescopes would get and then reconstructing the test images from the calculated signals. They did this not only with images of discs and rings that one would typically expect in astronomical images, but also with a variety of non-astronomical images.
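A cartoon of that test loop, with a made-up sampling pattern and a deliberately dumb zero-filling reconstruction standing in for the real pipelines:

```python
import numpy as np

rng = np.random.default_rng(3)

def forward_model(image, mask):
    """What the array 'sees': the image's spatial frequencies at the
    sampled uv points (no noise or calibration errors in this cartoon)."""
    return np.fft.fft2(image)[mask]

def reconstruct(data, mask, shape):
    """Stand-in reconstruction that just zero-fills unmeasured frequencies;
    the real pipelines (CLEAN and the rest) are far more sophisticated."""
    spectrum = np.zeros(shape, dtype=complex)
    spectrum[mask] = data
    return np.fft.ifft2(spectrum).real

# Made-up sampling pattern and simple ring/disc test images.
mask = rng.random((32, 32)) < 0.2
y, x = np.mgrid[:32, :32]
r = np.hypot(x - 16, y - 16)
tests = {"ring": ((r > 6) & (r < 10)).astype(float),
         "disc": (r < 10).astype(float)}

for name, image in tests.items():
    recon = reconstruct(forward_model(image, mask), mask, image.shape)
    # The real tests checked recovered structure (does a ring stay a
    # ring?), not just pixel error.
    print(f"{name}: mean reconstruction error {np.abs(recon - image).mean():.3f}")
```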
The team actively tried to break the image processing algorithms, for example by tuning the parameters to favor a solid disc rather than a ring, and verifying that a ring still came out as a ring. Python fans will be glad to know that at least some of the code involved was written in Python.
The publicly distributed image is a combination of the images from the subteams. Since it's meant to represent only what we can be confident about regarding the actual black hole, it's deliberately a bit blurry. Even so, there are some significant results [I haven't yet re-checked the details here, so what follows may be garbled]:
- There is, in fact, a ring as would be expected from a supermassive black hole.
- The same ring was observable night after night.
- Previously there had been two separate estimates of the mass of the black hole, one from the motions of stars around it (stellar dynamics) and one from the appearance of the gas flowing into the accretion disc. The size of the ring in the image supports the first mass estimate.
- One portion of the ring is brighter than the rest, due to the Doppler shift of matter orbiting around the black hole. This is to be expected, and the position of the bright spot matches up with other observations.
- The position of the bright spot is shifting over time, consistent with a rotation period on the order of weeks (I think this implies that the black hole's axis of rotation is not along the line of sight). Since the black hole in the Milky Way (Sgr A*) is much smaller, its rotation period is expected to be on the order of minutes. Since the observation scans are also on the order of minutes, the image processing for Sgr A* will have to take this into account.
Despite the whole effort taking years, requiring expensive radio telescopes at widely dispersed locations and employing dozens of people at various stages, the movie Interstellar still cost a lot more to make. While its image of a black hole may be prettier (and, as I understand it, is one of the parts of the movie where the science was actually plausible), the EHT image was both cheaper and more informative.
Again, really good stuff.