To comply, stations would strip the colour burst from the TV video sync block before it was broadcast. This infuriated many propeller-head techies and nerds, myself included.
To overcome the problem, the 4.43 MHz colour subcarrier, which remained in the broadcast video (only the burst had been deliberately stripped), was used to reconstitute the colour burst. This was achieved by modifying standard PAL colour TV sets (which weren't that difficult to obtain) with the addition of some subcarrier-extracting filters and appropriate phase-locking/modifying circuitry. This was a bit tricky, as the reference phase was no longer there, and it was complicated further by the fact that it was a PAL (Phase Alternating Line) signal.
In fact, I recall that at the station I was working for at the time, we had a modified TV set in the engineering department working in colour from off-air signals (one of my colleagues was a past master at tweaking up sets this way).
Perhaps a bit of broadcasting history trivia but it sure shows the colour recovery technique in this story wasn't the first effort.
Edit: Incidentally, the same trick was used on source material such as quadruplex videotape that already had the burst stripped at other locations.
Studios filming for TV optimized the colors of their sets for contrast on a B&W TV set. That meant they used ugly colors (the cheapest paint available that worked) which were never meant to be reproduced.
I remember that color movies also sucked in the 1950s. Technicolor had annoying fuzziness around objects; see The Wizard of Oz.
And speaking of chroma subcarrier, yes, you will often see crappy chroma fuzz on a black-and-white image, but the reason for that is that the TV set did not have a chroma filter (exactly what they are talking about in this thread). In fact, as I'm answering that, I'm realising that THAT was probably the reason they had to filter out the colour - because it would look crappy on older black-and-white sets that did not have a chroma filter installed. Bingo!
But I have a couple of circuits from like 30 years ago that converted NTSC or PAL to RGB, and yes, they have the required filter, or you did indeed see the little blockies from NTSC or diagonal fringing on PAL colour transients.
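For the curious, the simplest form of such a chroma filter can be sketched in a few lines (my own toy model, not the actual circuit from those converters): a two-tap comb that averages samples spaced half a subcarrier period apart cancels the subcarrier completely.

```python
import numpy as np

# Assumed sample rate: 4x the PAL subcarrier, so half a subcarrier
# period is exactly two samples.
FSC = 4.43361875e6   # PAL colour subcarrier, Hz
FS = 4 * FSC
t = np.arange(2048) / FS
subcarrier = np.sin(2 * np.pi * FSC * t)

# Two-tap comb: y[n] = (x[n] + x[n-2]) / 2. The taps sit exactly half
# a subcarrier cycle apart, so the subcarrier cancels itself out.
notched = (subcarrier + np.roll(subcarrier, 2)) / 2
print(f"residual subcarrier amplitude: {np.abs(notched).max():.1e}")
```

The price of the filter, of course, is that genuine luma detail near 4.43 MHz is attenuated too, which is exactly the softness/fringing trade-off being described.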
Interesting discussion!
Maybe adding color did decrease the luma bandwidth and hence the horizontal resolution; I'm not sure about that. I think B&W signals just used less bandwidth overall.
But in no way did color decrease the number of lines in the image. Those are defined by the scanning raster and remained the same in color and bw television.
So what does "high frequency luma" mean? It means that the brightness of the signal horizontally along a line goes rapidly from dark to bright and back again. If that happens, it stomps on the colour subcarrier and the colour goes wonky.
S-video is just a cable that puts the two signals on different wires so this doesn't happen.
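A quick numerical sketch of that point (my own illustration, not from the thread): a fine striped luma pattern puts its energy right at the PAL subcarrier frequency, where a composite decoder has no way to tell it from genuine chroma.

```python
import numpy as np

# Sample a stretch of a PAL scan line at 4x the colour subcarrier
# frequency (an assumed, convenient sample rate).
FSC = 4.43361875e6   # PAL colour subcarrier, Hz
FS = 4 * FSC         # sample rate
N = 1024             # samples in the simulated segment

# Luma-only fine detail: two bright then two dark samples per subcarrier
# period, i.e. a stripe pattern repeating at exactly FSC.
fine_detail = np.tile([1.0, 1.0, -1.0, -1.0], N // 4)

spectrum = np.abs(np.fft.rfft(fine_detail))
freqs = np.fft.rfftfreq(N, d=1 / FS)
peak_freq = freqs[np.argmax(spectrum)]

# The brightness pattern's energy lands on the subcarrier bin, so a
# composite decoder would "see" colour that isn't there.
print(f"luma detail peaks at {peak_freq / 1e6:.2f} MHz "
      f"(subcarrier: {FSC / 1e6:.2f} MHz)")
```

On separate S-video wires, that same stripe pattern never reaches the chroma decoder, so no false colour appears.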
So it turns out that the worst things for this are checked or plaid shirts/ties/dresses, tartans, houndstooth jackets, etc. Think about what happened to fashion in the 70s/80s as colour TV became ubiquitous: people on TV started wearing solid colours, because nobody wanted to be the person whose whole body was a crawling mess. And people in the rest of the world started wearing the same sorts of styles. All those 50s/early-60s styles with checks and plaids you see on old game shows are gone, not because of some big change in fashion, but because they could no longer be represented in popular culture.
But it did. If you did not want to see the annoying 4.3 MHz color carrier on your B&W TV, you had to adjust the focus down to about 300 horizontal lines, which affected the vertical focus too.
Super-good B&W TV would have been 625 x 625 x 25 ≈ 10 MHz. The color carrier was at 4.3 MHz. So if you did not want to see the color-shit on your B&W TV, you had to adjust the focus so that fewer than 625 x (4.3e6 / (625 x 625 x 25)) ≈ 275 horizontal lines were visible. TVs did not have a separate adjustment for vertical focus. So all you really had was roughly a 270 x 270 TV.
Except, of course, there never were 10 MHz TV channels. They were below 8 MHz, which was what full color needed. So there was a moment in time when we could enjoy 8 MHz black and white for a year, almost 600 horizontal lines. And then they turned the color on and the party was over.
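Running the comment's own back-of-the-envelope model as a sanity check (note this simplified "lines x lines x frames" model ignores blanking intervals and Nyquist factors, so treat the numbers as rough):

```python
# The simplified bandwidth model used in the comment above.
LINES = 625          # total scan lines per frame (PAL)
FRAMES = 25          # frame rate, Hz
F_COLOUR = 4.3e6     # colour carrier as quoted above (nominally 4.43 MHz)

bw_full = LINES * LINES * FRAMES        # ~9.77e6 Hz, i.e. roughly "10 MHz"
h_lines = LINES * F_COLOUR / bw_full    # horizontal lines that fit below
                                        # the colour carrier: ~275

print(f"full-detail bandwidth: {bw_full / 1e6:.2f} MHz")
print(f"lines resolvable below the colour carrier: {h_lines:.0f}")
```

So the ~275-line figure does follow from the model as stated, even if the model itself is a simplification.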
The other lines are for the vertical retrace, when the video signal is blanked.
With square pixels, the B&W image would have been 576 x 768, which requires a 7.5 MHz analog video bandwidth (@ 50 Hz vertical & 15625 Hz horizontal frequencies).
Most 625-line B&W TV sets could display 576 x 768 images very well and some of the early personal computers with video outputs for TV used this format.
Nevertheless the broadcast TV signal was limited by a low-pass filter to lower horizontal resolutions, corresponding to 5 MHz analog video bandwidth in Western Europe and to 6 MHz analog video bandwidth in Eastern Europe. The reason was to provide space in the TV frequency channel for the audio signal, which used a carrier offset from the video carrier by 5.5 MHz in Western Europe and by 6.5 MHz in Eastern Europe.
So the broadcast B&W signal was worse than what the B&W TV sets could display, corresponding to 576 vertical pixels by about 510 to 620 horizontal pixels (depending on the country).
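A short worked version of the arithmetic above (my own numbers; the ~52 µs active portion of the 64 µs PAL scan line is an assumption):

```python
ACTIVE_LINE = 52e-6   # visible part of the 64 us PAL scan line, seconds

def h_pixels(bandwidth_hz):
    # One cycle of video bandwidth yields two pixels (one light, one dark).
    return 2 * bandwidth_hz * ACTIVE_LINE

def bandwidth_for(pixels):
    # Inverse: bandwidth needed to resolve a given pixel count per line.
    return pixels / (2 * ACTIVE_LINE)

print(f"768 px/line needs {bandwidth_for(768) / 1e6:.1f} MHz")  # ~7.4 MHz
print(f"5 MHz (Western Europe) gives {h_pixels(5e6):.0f} px")   # ~520
print(f"6 MHz (Eastern Europe) gives {h_pixels(6e6):.0f} px")   # ~624
```

These come out close to the figures quoted above (7.5 MHz for the full 768 pixels, and roughly 510 to 620 pixels for the broadcast filters), with small differences down to the assumed active-line duration.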
This happens with Technicolor only when it's processed badly and the registration isn't done with sufficient precision. I agree, this has happened from time to time.
Moreover, you also have to consider where the source material for the Technicolor process originated. Tri-separated B&W negatives were used in the late 1930s, The Wizard of Oz being one notable example and Gone With The Wind the other major one.
Prints from tri-separations can be quite excellent, in fact brilliant, as the colour can be precisely adjusted. Also, colour 'compromises' don't have to be made in the printing, as is intrinsically the case with films that use colour couplers: Eastmancolor (Eastman color negative, its internegative and theatre release/print stock) to name just a few.
(Colour couplers in film emulsions are at best compromises as they have to be compatible with the processing chemistry and many of the best colour dyes and pigments are not. Processes that do not use colour couplers such as Kodachrome and Technicolor are much superior in this regard as stable dyes with the correct (or best) colour can be used. Colour couplers also lower the resolution of an emulsion although in many modern emulsions this isn't a significant problem.)
Nevertheless, if tri-separated B&W originals are used after being stored a long time then shrinkage differences in the three negatives can pose printing/registration issues.
It would be interesting to know the source of your Wizard of Oz, as some years back the DVD version took this into account when the film was remastered: every frame of the tri-separated B&W printing masters was resized to ensure its geometry was identical to the others. I've seen that remastered copy and its registration is excellent.
Incidentally, the very last version of the Technicolor processes of the 1950s was the best colour film system for movies ever devised before they went digital. However, one needs to bear in mind that many so-called Technicolor films are only hybrids, as they use Eastmancolor (or other) film stock for both the original source and for later dupes from earlier Technicolor theatre release prints. They, along with multigeneration copies, often create many issues including low (fuzzy) resolution and muddy cross-colour effects.
When making a claim like you have, it's imperative that you first check a film's manufacturing/printing methods. Tracing its manufacturing provenance is absolutely essential.
Edit: FYI, pre-WWII B&W film emulsions as used in the Wizard of Oz were never as grain-free or as sharp as modern-day equivalents are. You also need to ensure that you aren't drawing any comparison to these much newer products. The Technicolor process should not be blamed for limitations in the source material.
Throughout one recording, the phase shift caused by the distortion of the glass screen is probably approximately the same - and therefore could be learned.
Then for the actual decoding, certain elements of the frame should be of approximately known colours - for example someone's face should be skin colour. That then informs the colours for neighbouring objects, since over a small area phase is consistent.
Applying such techniques repeatedly over the whole video, trying to minimize inconsistencies, I'd bet you can get really good results.
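A toy sketch of that idea (entirely hypothetical; the reference hue value and the whole approach are my assumptions, not an actual recovery tool): estimate the constant phase error from a region of known hue, then rotate the decoded U,V samples back by that amount.

```python
import numpy as np

# Assumed reference hue angle for skin tones on the U-V plane; the
# exact value here is a guess for illustration only.
EXPECTED_SKIN_HUE = np.deg2rad(123)

def estimate_phase_error(decoded_uv):
    """decoded_uv: (N, 2) array of U,V samples from a known-skin region."""
    hues = np.arctan2(decoded_uv[:, 1], decoded_uv[:, 0])
    # Average on the unit circle to avoid wrap-around problems.
    mean_hue = np.angle(np.exp(1j * hues).mean())
    return mean_hue - EXPECTED_SKIN_HUE

def correct_uv(uv, phase_error):
    """Rotate U,V samples back by the estimated phase error."""
    c, s = np.cos(-phase_error), np.sin(-phase_error)
    rot = np.array([[c, -s], [s, c]])
    return uv @ rot.T

# Toy demo: skin-hue samples decoded with a 40-degree phase error.
true_uv = np.column_stack([np.cos(EXPECTED_SKIN_HUE) * np.ones(10),
                           np.sin(EXPECTED_SKIN_HUE) * np.ones(10)])
err = np.deg2rad(40)
bad_uv = correct_uv(true_uv, -err)   # simulate the phase error
est = estimate_phase_error(bad_uv)   # recovers ~40 degrees
fixed = correct_uv(bad_uv, est)      # back to the true hue
```

The consistency-minimisation step over the whole video would sit on top of this: many such local estimates, reconciled so neighbouring regions agree.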