Based on some googling, various sources indicate that 35mm film has a usable resolution somewhere between 4K and 8K video, so we're arguably reaching the limits of what we can extract from it (without consideration for "AI upscaling" and such).
Cinema-grade digital cameras, like the RED V-Raptor[0] (MKBHD behind the scenes[1]), can now shoot 8K footage at 120+ fps with 17+ stops of dynamic range. As far as I can tell, those specs objectively exceed what you can get out of traditional 35mm film. It has taken quite a while for digital to outclass film across the board, but I think we're at that point now, and the results from these cameras are spectacular[2].
At this point, it's probably a question of how much storage you want to use and whether you have enough light in each scene to shoot at high frame rates like that. (120 is an even multiple of both 24 and 30, so you can always produce 'cinematic' frame rates just by throwing away frames, without any stuttering. You also keep the option to remaster at higher frame rates in the future if low frame rates fall out of fashion, and you can easily add slow motion effects in post, as long as the final frame rate is meant to be less than 120.)
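To make the arithmetic concrete, here's a toy Python sketch of that decimation step (the function and the frame-index list are made up for illustration, not any real editing pipeline):

```python
# Toy illustration of 120 fps -> 24/30 fps conversion by dropping frames.
def decimate(frames, source_fps, target_fps):
    """Keep every Nth frame, where N = source_fps / target_fps.

    This only works without judder when source_fps is an even
    multiple of target_fps: 120 -> 24 keeps every 5th frame,
    120 -> 30 keeps every 4th.
    """
    step, remainder = divmod(source_fps, target_fps)
    if remainder:
        raise ValueError("target fps must divide source fps evenly")
    return frames[::step]

# One second of hypothetical 120 fps footage, represented by frame indices.
footage = list(range(120))
cinematic = decimate(footage, 120, 24)   # frames 0, 5, 10, ... (24 per second)
broadcast = decimate(footage, 120, 30)   # frames 0, 4, 8, ...  (30 per second)
# Slow motion comes for free: play all 120 frames back at 24 fps for 5x slow-mo.
```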
I'm far from a videography expert, but it is something I find interesting.
[0]: https://www.red.com/v-raptor#section-vr-tech-specs
Under optimal conditions, maybe you’re right… but I would personally lump the desire for tons of motion blur in with the nostalgia that causes people to use 24 fps in the first place.
It’s not like people originally wanted to shoot at noticeably low frame rates… it’s just what they had to do. Then it became a standard that resisted change. Now people artificially restrict themselves to be bug-for-bug compatible with old technology. In fact, a lot of silent films were shot at 16 fps. Why does no one clamor for the return of 16 fps? Arguably, 16 fps is 33% more cinematic!
There are plenty of reasons that I’m not a professional cinematographer… but for the same reason that no one would prefer to watch a film captured at 10 fps, it follows that 24 fps is not actually “better” than higher frame rates. It’s just what people have been taught to see as better through experience, by contrasting traditional, high-budget films shot at 24 fps with low-budget TV shows broadcast at 60 fps. It’s probably going to be decades before people unlearn this low-frame-rate preference, but I predict people a hundred years from now will be far less impressed with 24 fps footage than some people today are.
I have plenty of other unpopular opinions available too. :P
Regardless, it doesn’t seem beyond belief that someone could combine each group of 5 frames in a 120 fps -> 24 fps conversion into a single “long exposure” frame, producing a motion blur effect similar to one frame taken with a slower shutter. The necessary data is (mostly) all there, if someone takes advantage of it. A well-proven technique along these lines is used in astrophotography to create artificially longer exposures, except there it’s combined with an alignment step to cancel out the blur from the Earth rotating relative to the stars; that’s why astrophotographers stack multiple exposures rather than just extending a single one. Applying the technique to create motion blur would mean skipping the alignment step, at a minimum. The remaining problem is the gaps in motion between the available frames (each of which was likely shot with a shutter speed faster than 1/120 anyway), and smoothing those over is probably one of those things a properly trained neural network could do a reasonably good job with.
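A rough sketch of what that stacking might look like, assuming you already have the decoded frames as arrays in linear light (the function name, the tiny stand-in resolution, and the plain per-pixel averaging are all illustrative simplifications, not a production technique):

```python
import numpy as np

def synthesize_motion_blur(frames, group_size=5):
    """Average each run of `group_size` consecutive frames into one frame.

    frames: float array of shape (n, height, width, channels), e.g. 120 fps
    footage in linear light. Averaging groups of 5 approximates the blur of
    a 1/24 s exposure, except for the gaps between the original (shorter)
    shutter intervals -- the part a smarter method would have to interpolate.
    """
    n = (len(frames) // group_size) * group_size
    grouped = frames[:n].reshape(-1, group_size, *frames.shape[1:])
    return grouped.mean(axis=1)

# One second of stand-in 120 fps "footage" at a deliberately low resolution.
fake_footage = np.random.rand(120, 90, 160, 3).astype(np.float32)
blurred_24fps = synthesize_motion_blur(fake_footage)  # shape (24, 90, 160, 3)
```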