https://cavern.sbence.hu/cavern/
https://github.com/VoidXH/Cavern
The visualizer, which is what I was _most_ interested in (along with software decoding), is written in C#, and the rendering is done in Unity -- both things I valued & thought were cool. In theory, you could build a DIY multi-channel "receiver" with this type of software, given enough audio outputs (and/or by putting something like Dante to use).
I explored it a bit further, but it's relatively cost prohibitive, and if you want to do something like accept HDMI input, it gets messy. AFAICT, at least when I went down this research path a few months back, even finding & getting dev kits/boards with HDMI input (of a semi-recent generation) was non-trivial & pretty pricey.
That is, is it an audio renderer that does player-side mixing into N channels? Dolby Access does that for headphones and for up to 7.1 surround systems.
I'm new to this whole audio format thing and I'm just trying to figure out how things work, as all of the Dolby stuff is very "magic behind a licence fee"
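For what it's worth, here's roughly what I understand "player-side mixing into N channels" to mean, as a toy sketch in Python. The speaker layout, gain law, and all the names are made up for illustration; real renderers (Atmos, Cavern, etc.) use proper panning laws and per-object metadata, so treat this as a conceptual sketch only.

```python
import numpy as np

# Hypothetical 5-speaker layout: name -> (x, y, z) position.
SPEAKERS = {
    "L":  (-1.0,  1.0, 0.0),
    "R":  ( 1.0,  1.0, 0.0),
    "C":  ( 0.0,  1.0, 0.0),
    "Ls": (-1.0, -1.0, 0.0),
    "Rs": ( 1.0, -1.0, 0.0),
}

def render(objects, n_samples):
    """objects: list of (mono_signal, (x, y, z)) audio objects."""
    out = {name: np.zeros(n_samples) for name in SPEAKERS}
    for signal, pos in objects:
        # Crude inverse-distance weighting; real renderers use proper
        # panning laws (VBAP etc.) and per-sample moving positions.
        dist = {n: np.linalg.norm(np.subtract(pos, spk)) + 1e-6
                for n, spk in SPEAKERS.items()}
        weights = {n: 1.0 / dist[n] for n in SPEAKERS}
        total = sum(weights.values())
        for n in SPEAKERS:
            out[n] += signal * (weights[n] / total)
    return out  # one array per output channel, ready for N DACs

# Usage: a 1 kHz tone "object" placed towards the front left.
sr = 48000
t = np.arange(sr) / sr
tone = 0.2 * np.sin(2 * np.pi * 1000 * t)
channels = render([(tone, (-0.7, 1.0, 0.0))], len(tone))
```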
However, I like to own music and that is simply impossible at the moment for most Atmos recordings. I would love to build a library of such recordings, preferably in physical form, and would happily spend quite a lot of money doing so. But Apple Music is basically the only way I can listen to anything.
I can't help but suspect this is entirely deliberate, an attempt to use this innovation to hasten the end of music ownership altogether.
Sadly, I also worry the move to streaming means an awful lot of music is eventually going to be lost forever.
The good thing, though, is that those cheap $10 HDMI audio extractors work well for this use case if you have a playback device that outputs PCM over HDMI. As a side note, those extractors are also a great way of getting 5.1 surround sound from an HTPC running the dcaenc DTS encoder [1] into an old pre-HDMI AVR.
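If anyone wants to play with that idea offline first, here's a hedged sketch of the non-real-time equivalent: encoding 5.1 PCM to a DTS stream with ffmpeg's experimental "dca" encoder (dcaenc and its ALSA plugin do the same sort of thing in real time so the S/PDIF/HDMI output carries DTS). The file names and bitrate are placeholders.

```python
import subprocess

def encode_dts(pcm_in: str, dts_out: str, bitrate: str = "1536k") -> None:
    """Encode a multichannel PCM file to DTS with ffmpeg (paths are placeholders)."""
    subprocess.run(
        [
            "ffmpeg", "-i", pcm_in,
            "-c:a", "dca",              # FFmpeg's native DTS encoder
            "-strict", "experimental",  # the encoder is flagged experimental
            "-b:a", bitrate,
            dts_out,
        ],
        check=True,
    )

encode_dts("movie_5.1.wav", "movie_5.1.dts")
```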
Decoding, distributing, processing, and rendering across all the involved components can take on the order of hundreds of milliseconds. HDMI 1.3 and later include a mechanism for equipment to communicate its internal delays so audio remains time-aligned with the rendered image.
Some devices also have manual overrides for this. If you are experiencing significant drift, something is likely borked in the setup.
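To make the compensation concrete, here's a toy sketch: the display reports how long its video processing takes, and the source delays the audio by the difference so both arrive in sync. The numbers and the EDID/CEC plumbing are assumptions for illustration, not how any particular device does it.

```python
import numpy as np

def align_audio(audio, video_latency_ms, audio_latency_ms, sample_rate=48000):
    """Pad the audio so it lags by (video latency - audio latency)."""
    extra_ms = max(0.0, video_latency_ms - audio_latency_ms)
    pad = int(round(extra_ms * sample_rate / 1000))
    return np.concatenate([np.zeros(pad, dtype=audio.dtype), audio])

# E.g. the display reports 83 ms of video processing, the AVR 10 ms for audio.
audio = np.ones(48000, dtype=np.float32)
delayed = align_audio(audio, video_latency_ms=83, audio_latency_ms=10)
print(len(delayed) - len(audio))  # ~3504 samples of added delay at 48 kHz
```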
Because I'm pretty sure mine doesn't. I hate that kind of lag, and I'm usually the first to point it out or the only one bothered by it. I haven't done anything (other than run Audyssey) to prevent it; it just hasn't been a problem as far as I can tell.
And I know you're exaggerating (anyone would notice 'hundreds of milliseconds'), but still.
The main reason that passthrough is the norm is history: the connection to the receiver was generally S/PDIF, or an HDMI 1.x version with the same capability as S/PDIF, so you had to use Dolby or DTS to get multichannel audio to the receiver. Otherwise you could only do two channels.
Actually, a shocking number of PC motherboards and sound cards of that era have 7.1 channels' worth of analog outputs, but I can't say anyone ever used them. I believe 7.1 analog output was required for Intel HD Audio compliance.
I think the main reason for audio passthrough preference in the home theater crowd is seeing the DD/DTS logos light up on the receiver.
The use of Atmos in music is just plain bad. How many pop recordings are actually mixed for Atmos? I can't imagine that it's as many as Apple is presenting "in Atmos" on Apple Music. So is there some post-processing BS going on, a la "Q-Sound" and other fake surround over the past few decades?
Here's an example of Atmos messing up music. It's too bad it happens, too, because the Atmos versions of songs seem to be less dynamically compressed: https://www.youtube.com/watch?v=xUgfp6mFG2E
If you have long cable runs, I'd use an optical signal or a balanced line signal (this is why professional audio gear has balanced outputs and inputs on 6.3mm TRS or XLR-3 connectors).
There are simple adapters that let you send four balanced audio signals over existing Ethernet cabling. With CAT6 you can easily push balanced signals over a kilometer (far beyond the 100 m threshold of actual CAT6 Ethernet) without any noticeable degradation.
If you have unbalanced signals from weak sources (a vinyl needle?), you should keep the cable runs short, but even if the driver is good it can help to add a balun (passive or active) to run the thing balanced when the cable run is longer than 10 meters or is in a harsh environment (e.g. power cords with bursty loads emitting EMI).
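If it helps to see why the balanced run shrugs off interference, here's a tiny numerical illustration: the same hum lands on both conductors, and the differential receiver subtracts it away. It's idealized (real-world common-mode rejection is finite), and the signal and hum values are made up.

```python
import numpy as np

sr = 48000
t = np.arange(sr) / sr
signal = 0.5 * np.sin(2 * np.pi * 440 * t)   # program material
hum = 0.2 * np.sin(2 * np.pi * 50 * t)       # interference picked up along the run

hot = signal + hum            # conductor carrying the signal
cold = -signal + hum          # conductor carrying the inverted signal
received = (hot - cold) / 2   # differential receiver (or balun/transformer)

print(np.max(np.abs(received - signal)))     # ~0: the hum cancels out
```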
Otherwise it's gripes over finding the ideal combination of TV picture settings AND OS display settings. The TV is an OS of its own, of course. How does one go about tweaking two sets of settings that overlap?
I used to use Kodi, but got tired of endless minor issues and UI skins that haven’t evolved since 2005.
PCs have also been left behind when it comes to HDR, Dolby Vision, and streaming options due to DRM.
This. It even remembers played status across devices. No need to guess which episode to watch when on the phone in bed.
Then buy a good source: an Apple TV for streaming, a Blu-ray player if you like discs, or an OSMC Vero to run Kodi. They should require very few changes and very little setup.
I think the audio is more challenging.
In theory, the recent(ish)ly standardized SMPTE 2098-2 bitstream protocol will allow for 3rd party encoders/decoders of object-based "immersive audio." In practice, 2098-2 is the bastard child of Atmos and DTS:X and I kind of doubt we'll ever see a FOSS decoder.
But anything's possible.