Only if one doesn't understand what those bits mean or what they correspond to.
These bits are important for quantization, the process of mapping each sampled amplitude of an analog signal to a discrete digital value. On a graph, X = time and Y = amplitude. The higher the bit depth, the higher the amplitude resolution.
A 16-bit recording has 2^16 steps (discrete values) available for amplitude (65,536); a 24-bit recording has 2^24, or 16,777,216 steps.
So why is this important? Well, a 24-bit recording can more finely resolve differences in amplitude. Given that 1 bit ≈ 6dB of dynamic range: a regular 16-bit recording already has a dynamic range of ~96dB, while a 24-bit recording has ~144dB. Permanent hearing loss begins at roughly 125-130dB SPL.
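The step counts and dynamic-range figures above fall straight out of the bit depth; a minimal sketch of the arithmetic (the ~6dB-per-bit rule is really 20·log10(2) ≈ 6.02 dB):

```python
# Sketch: quantization steps and theoretical dynamic range per bit depth.
# Dynamic range of an n-bit quantizer = 20*log10(2^n) ≈ 6.02*n dB.
import math

def quantization_stats(bits):
    steps = 2 ** bits                          # discrete amplitude levels
    dynamic_range_db = 20 * math.log10(steps)  # ratio of full scale to one step
    return steps, dynamic_range_db

for bits in (16, 20, 24):
    steps, dr = quantization_stats(bits)
    print(f"{bits}-bit: {steps:,} steps, ~{dr:.1f} dB dynamic range")
```

This prints ~96.3 dB for 16-bit and ~144.5 dB for 24-bit, matching the round numbers above.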
You do not hear the difference because, to actually discern it on a 24-bit-capable system, you would have to listen at levels loud enough to permanently damage your ears. Actually, I believe that already applies at 20-bit, let alone 24-bit.
So why do 24-bit or higher recordings even exist? They are useful for people mixing and working with the raw audio, before it gets mastered down to 16-bit for distribution. At 24-bit resolution you have far more headroom before clipping, so it's easier to work with: the extra bits push the quantization noise floor well below the material you're actually editing.
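The headroom argument can be sketched numerically: if you track a signal 48 dB below full scale (a common safety margin while recording), you've "spent" about 48/6.02 ≈ 8 bits, and what remains is your effective resolution. This is a simplified model, not a full mixing-chain simulation:

```python
# Sketch: effective resolution left after leaving recording headroom.
# Each ~6.02 dB of unused headroom costs one bit of resolution.
def effective_bits(bit_depth, headroom_db):
    return bit_depth - headroom_db / 6.02

for depth in (16, 24):
    print(f"{depth}-bit with 48 dB headroom: "
          f"~{effective_bits(depth, 48):.1f} effective bits")
```

At 24-bit you still have ~16 effective bits after 48 dB of headroom; at 16-bit you'd be down to roughly 8, which is why engineers want the extra bits even though listeners don't.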
This is also assuming your input files are actually 24-bit to begin with. The vast majority of files are 16-bit, because as a consumer there is literally no point in larger file sizes for no humanly audible benefit.
44.1kHz 16-bit files are all that you need as a human consumer of audio. 48kHz has to do with video and is not better than 44.1kHz, because you (a human) cannot hear the difference. 44.1kHz is 22.05kHz x 2. Humans hear sound from 20Hz to 20kHz -at best-, and that assumes perfect hearing with no degradation. Per the Nyquist-Shannon sampling theorem you must sample at more than twice the highest frequency you want to capture, so 44.1kHz covers the full 20kHz range with a bit of headroom left over for the anti-aliasing filters. [2]
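What aliasing actually does can be shown with the standard folding formula: any tone above half the sample rate shows up at a mirrored lower frequency. A minimal sketch:

```python
# Sketch: Nyquist folding. A tone above fs/2 aliases to a lower frequency:
# alias = |f - fs * round(f / fs)|. At fs = 44100 Hz, Nyquist is 22050 Hz.
def alias_frequency(f_hz, fs_hz=44100):
    return abs(f_hz - fs_hz * round(f_hz / fs_hz))

print(alias_frequency(19000))  # below Nyquist: passes through as 19000 Hz
print(alias_frequency(25000))  # above Nyquist: folds down to 19100 Hz
```

This is why the anti-aliasing filter has to remove everything above ~22kHz before sampling; the 2.05kHz gap between 20kHz and 22.05kHz is the transition band that makes that filter practical.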
So I reiterate my initial assumption: flicking a switch from 16-bit to 24-bit should not magically change the quality of audio in any humanly discernible way, assuming the file being played is 24-bit lossless audio in the first place.
> BTW, following your logic there is no point in bying DAC
We're talking about dedicated external equipment vs. an onboard soundcard+amp, which is generally an afterthought. Not -all- onboard audio sucks, of course; the Realtek ALC1220 chip on my mobo seems comparable to or better than entry-level DACs from the specs I'm seeing. That assumes no interference is happening, which is more likely around unshielded electrical components. If you don't believe this is a thing, ask why the audio industry uses thick XLR [shielded AND grounded] cables as standard.
Certain headphones require equipment that can drive them properly, whether it's an onboard soundcard+amp or a DAC+amp. For example, my Sennheiser HD 600s are 300Ω, and some models go up to 600Ω. And yes, the quality of the amp/preamp does make a huge difference.
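The "can this output drive these headphones" question is just power math. A rough sketch, where the ~97 dB SPL per 1 mW sensitivity figure is an assumption for illustration, not a spec I've verified:

```python
# Sketch: power and voltage needed to hit a target SPL.
# Sensitivity figure (dB SPL at 1 mW) is an assumed example value.
import math

def drive_requirements(impedance_ohm, sensitivity_db_1mw, target_spl_db):
    power_mw = 10 ** ((target_spl_db - sensitivity_db_1mw) / 10)
    volts_rms = math.sqrt(power_mw / 1000 * impedance_ohm)  # V = sqrt(P*R)
    return power_mw, volts_rms

p, v = drive_requirements(300, 97, 110)  # 110 dB SPL peaks into 300 ohms
print(f"~{p:.0f} mW, ~{v:.2f} V RMS")
```

Under those assumptions you need roughly 2.4 V RMS for loud peaks, which is well beyond what many unamplified onboard outputs deliver into a 300Ω load; that's the whole case for a proper amp.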
If one can prove that an amp is unable to drive a given load, or that a component is mathematically sub-par, one doesn't exactly need double-blind ABX trials. Those are for questions like "Is Monster's $200 cable really better than <X> standard cable?", or "Is a McIntosh amp better than a $<amount> competitor?".
I don't need to do a double-blind ABX study to realize that Beats headphones perform drastically worse than Sennheiser HD 600s: [3], [4], [5]
[0]: https://www.mojo-audio.com/blog/the-24bit-delusion/
[1]: https://web.archive.org/web/20200202124704/https://people.xi...
[2]: https://en.wikipedia.org/wiki/44,100_Hz#Origin
[3]: https://reference-audio-analyzer.pro/en/report/hp/monster-be...
[4]: https://reference-audio-analyzer.pro/en/report/hp/sennheiser...
[5]: https://reference-audio-analyzer.pro/en/report/hp/audio-tech...