> While the data fork allows random access to any offset within it, access to the resource fork works like extracting structured records from a database.
So, whatever the on-disk structure, the motivation is that, from an OS API perspective, software (including the OS itself) can interact with each file as one "seekable stream of bytes" (the data fork) plus one "random-access key-value store whose values are seekable streams of bytes" (the resource fork).
So not quite metadata vs data, but rather "structured data" (in the sense that it's in a known format that's machine-readable as a data structure to the OS itself) and "unstructured data."
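To make that distinction concrete, here's a minimal sketch (class and method names are invented for illustration, not any real API) of the two-fork abstraction: one seekable byte stream, plus a key-value store of byte streams keyed by (resource type, resource ID), as the classic Resource Manager keyed them.

```python
import io

class TwoForkFile:
    """Hypothetical model of a file with a data fork and a resource fork."""

    def __init__(self, data: bytes, resources: dict[tuple[str, int], bytes]):
        self.data_fork = io.BytesIO(data)   # random access by byte offset
        self._resources = dict(resources)   # random access by (type, ID) key

    def get_resource(self, res_type: str, res_id: int) -> io.BytesIO:
        # Each value is itself a seekable stream of bytes.
        return io.BytesIO(self._resources[(res_type, res_id)])

f = TwoForkFile(b"hello world", {("ICON", 128): b"\x00\x01\x02"})
f.data_fork.seek(6)
assert f.data_fork.read() == b"world"                      # unstructured: seek anywhere
assert f.get_resource("ICON", 128).read() == b"\x00\x01\x02"  # structured: look up by key
```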
The on-disk representation was arbitrary; in theory, some version of HFS could have stored the data and resource forks contiguously in a single extent and just kept an inode property specifying the delimiting offset between the two. Or it could have stored each hunk of the resource fork in its own extent, pre-offset-indexed within the inode, and just concatenated those on read / split them on write whenever the low-level API that exposes resource forks as bytestreams was used.
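The concatenate-on-read / split-on-write idea can be sketched as follows (a toy model, not any actual HFS structure): hunks live separately, and an (offset, length) index is all you need to reassemble or re-split the flat bytestream view.

```python
def pack_hunks(hunks: list[bytes]) -> tuple[bytes, list[tuple[int, int]]]:
    """Concatenate hunks into one blob; return it plus an (offset, length) index."""
    index, parts, pos = [], [], 0
    for h in hunks:
        index.append((pos, len(h)))
        parts.append(h)
        pos += len(h)
    return b"".join(parts), index

def unpack_hunks(blob: bytes, index: list[tuple[int, int]]) -> list[bytes]:
    """Split the flat bytestream back into individual hunks using the index."""
    return [blob[off:off + ln] for off, ln in index]

hunks = [b"MENU data", b"ICON bits", b""]
blob, index = pack_hunks(hunks)
assert unpack_hunks(blob, index) == hunks  # round-trips losslessly
```

The index is what makes the layout interchangeable: the bytestream view and the per-hunk view are just two projections of the same stored data.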
With this in mind, it's curious that we never saw an archive file format that stored the hunks within the resource fork as individual files in the archive beside the data-fork file, to allow for random access / single-file extraction of resource-fork hunks. After all, that's what we eventually got with NeXT bundle directories: all the resource-fork stuff "exploded" into a Resources/ dir inside the bundle.