As such, fragmentation is always there; absolute disk sizes don't change the propensity for typical workloads to produce fragmentation. A modern file system is not merely a bucket of files; it is a database that manages directories, metadata, files, and free space. If you mix small and large directories, small and large files, creation and deletion of files, appending to or truncating existing files, etc., you will get fragmentation. When you get close to full, everything gets slower. Files written early in the volume's life that haven't been altered since may remain fast to access, but creating new files will be slower, and reading those files afterward will be slower too. Large directories follow the same rules as large files: they can easily get fragmented (or, if they must be kept compact, time will be spent defragmenting them). If your free space is spread across the volume in small chunks, and at 95% full it almost certainly will be, then the fact that it sums to 1 TB confers no benefit by dint of absolute size.
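To see how ordinary churn shreds the free space, here's a toy allocator simulation. Everything in it is invented for illustration (block count, file sizes, the naive first-fit policy, the delete probability); real allocators are smarter, but the trend is the same:

```python
import random

# Toy model of a volume's block map: a churn of creates and deletes
# leaves the free space scattered across many small runs.
random.seed(1)
BLOCKS = 5_000
volume = [None] * BLOCKS          # None = free block, int = owning file id

def allocate(file_id, size):
    """First-fit into whatever free blocks exist, splitting as needed."""
    placed = 0
    for i in range(BLOCKS):
        if volume[i] is None:
            volume[i] = file_id
            placed += 1
            if placed == size:
                return True
    return False

files, next_id, free_count = {}, 0, BLOCKS
while free_count > BLOCKS // 20:            # churn until ~95% full
    if files and random.random() < 0.4:     # delete a random existing file
        victim = random.choice(list(files))
        for i in range(BLOCKS):
            if volume[i] == victim:
                volume[i] = None
        free_count += files.pop(victim)
    else:                                   # create a new file
        size = random.randint(1, 50)
        if allocate(next_id, size):
            files[next_id] = size
            next_id += 1
            free_count -= size

# A "run" is a maximal stretch of contiguous free blocks.
free_runs = sum(1 for i in range(BLOCKS)
                if volume[i] is None and (i == 0 or volume[i - 1] is not None))
print(f"{free_count} free blocks left, split across {free_runs} runs")
```

By the time the loop exits at ~95% full, the remaining free space is in lots of small runs rather than one big one, which is exactly the state a new large file has to be scattered across.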
Even on SSDs accessed over NVMe, fragmentation would still be an issue, since the file system must still store lists or trees of all the fragments, and those data structures take longer to traverse as they grow. But most NAS setups still use conventional spinning-platter hard drives, where the effects of fragmentation are massively amplified. A 7200 RPM drive takes 8.33 ms to complete one rotation. No improvement in technology has any effect on this number (though faster-spinning 10,000 and 15,000 RPM drives used to be on the market). The denser platters of modern drives improve throughput when reading sequential data, but not random seek times, and fragmentation increases the frequency of random seeks relative to sequential access. Capacity issues tend to manifest as performance cliffs, whereby operations that used to take e.g. 5 ms suddenly take 500 or 5000. Everything can seem fine one day and not the next, or fine on some operations but terrible on others.
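The arithmetic behind those numbers, for the curious. The rotation time follows directly from the RPM spec; the seek time and sequential throughput below are assumed typical values for a modern 7200 RPM drive, not measurements:

```python
# 60,000 ms per minute / 7,200 revolutions per minute = ms per revolution.
rpm = 7200
full_rotation_ms = 60_000 / rpm                    # 8.33 ms
avg_rotational_latency_ms = full_rotation_ms / 2   # ~4.17 ms on average

seq_mb_per_s = 200        # assumed sequential throughput
avg_seek_ms = 8.5         # assumed average seek time

chunk_kb = 64
seq_ms = chunk_kb / 1024 / seq_mb_per_s * 1000     # streaming 64 KiB in place
frag_ms = avg_seek_ms + avg_rotational_latency_ms + seq_ms  # same chunk after a seek

print(f"one rotation: {full_rotation_ms:.2f} ms")
print(f"64 KiB sequential: {seq_ms:.2f} ms; after a seek: {frag_ms:.2f} ms")
```

With these (assumed) figures, reading a 64 KiB chunk that requires a seek costs roughly 40x what reading it in sequence does, which is where the performance cliff comes from once most reads involve a seek.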
Of course, you should be free to (ab)use the things you own as much as you wish. But make no mistake, 5% free is deep into abuse territory.
Also, as a bit of an aside, a 20 TB volume split into 1 GB slabs means there are 20,000 slabs. That's about the same as the number of 512-byte sectors in a 10 MB hard drive, which was the size of the first commercially available consumer hard drive for the IBM PC in the early 1980s. That's just a coincidence of course, but I find it funny that the numbers are so close.
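For what it's worth, the arithmetic (using decimal units for the modern volume and binary units for the old drive, roughly how each was marketed):

```python
# 20 TB volume carved into 1 GB slabs (decimal TB/GB, as drive vendors count).
slabs = (20 * 1000**4) // (1000**3)   # 20,000 slabs

# 10 MB drive in 512-byte sectors (treating 10 MB as 10 MiB here).
sectors = (10 * 1024**2) // 512       # 20,480 sectors

print(slabs, sectors)
```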
Now, I assume the slabs are allocated from the start of the volume forward, which means external slab fragmentation is nonexistent (unless slabs can also be freed). But unless you plan to create no more than 20,000 files, each exactly 1 GB in size, in the root directory only, and never change anything on the volume ever again, then internal slab fragmentation will occur all the same.
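As a deliberately oversimplified sketch of that internal fragmentation, suppose (purely hypothetically) that each file occupied whole slabs to itself; the tail of the last slab is then dead space, and with random file sizes it averages about half a slab per file:

```python
import random

SLAB = 10**9   # 1 GB slab, matching the layout described above

def slabs_needed(size: int) -> int:
    return -(-size // SLAB)   # ceiling division

random.seed(7)
# Invented workload: 1,000 files ranging from 1 MB to 3 GB.
sizes = [random.randint(10**6, 3 * SLAB) for _ in range(1000)]

allocated = sum(slabs_needed(s) * SLAB for s in sizes)
data = sum(sizes)
print(f"data {data/1e9:.0f} GB, allocated {allocated/1e9:.0f} GB, "
      f"waste {(allocated - data)/1e9:.0f} GB")
```

A real file system would of course pack many files into one slab rather than waste the tails, but then the same half-slab-per-file slack just reappears as free space scattered inside partially used slabs, which is the internal fragmentation in question.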