That's a bad analogy. The weights are much closer to source code, because you can directly modify them (fine-tune, merge, or otherwise) using open-source software that Meta released (torchtune, though there are plenty of other libraries and frameworks).
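To make "directly modify them" concrete, here's a minimal sketch of one such operation, a linear-interpolation merge of two checkpoints. This is a toy illustration with plain Python dicts standing in for real state dicts; the function name and structure are illustrative, not torchtune's actual API.

```python
def merge_state_dicts(sd_a, sd_b, alpha=0.5):
    """Merge two checkpoints by linear interpolation: alpha*a + (1-alpha)*b.

    Both inputs map parameter names to flat lists of weights and must
    share the same keys (i.e., come from the same architecture).
    """
    assert sd_a.keys() == sd_b.keys(), "checkpoints must have matching parameters"
    return {
        name: [alpha * wa + (1 - alpha) * wb
               for wa, wb in zip(sd_a[name], sd_b[name])]
        for name in sd_a
    }

# Toy "checkpoints": parameter name -> flat list of weights.
base = {"layer.weight": [1.0, 2.0], "layer.bias": [0.0, 0.0]}
tuned = {"layer.weight": [3.0, 4.0], "layer.bias": [1.0, 1.0]}

merged = merge_state_dicts(base, tuned, alpha=0.5)
# midpoint of the two checkpoints, parameter by parameter
```

Real merges operate on tensors rather than lists, but the point stands: the weights are ordinary data you can load, transform, and save with open tooling, no special access required.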
Except that doing continued pre-training or fine-tuning of the released weights is the same process through which the original weights were created in the first place; no reverse engineering is required. Meta engineers working on products that need custom versions of the Llama model will use the same processes and tools.