Not all of them are auto-labelled: some are hand-labelled, and some are initially labelled with something like CLIP/BLIP/booru tags and then corrected a bit by hand. The newest thing, though, is using LLMs with image support, like GPT-4, to label the images, which usually does a much better job.
Your understanding of the attack was the same as mine: it injects just the right kind of pixel perturbations to throw off the auto-labellers, misdirecting what they detect so the tags get shuffled around.
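To make that concrete, here is a toy sketch of the general idea (not Nightshade's actual algorithm, which is more sophisticated): a gradient-sign perturbation against a made-up linear "labeller". Everything here (the weights, the epsilon, the dog/cat tags) is invented for illustration.

```python
import numpy as np

# Toy "auto-labeller": a linear classifier that tags an image vector as
# "dog" when w @ x > 0 and "cat" otherwise.  This is a hypothetical
# stand-in for a real tagger like CLIP/BLIP, just to show the mechanism.
rng = np.random.default_rng(0)
w = rng.normal(size=64)              # fixed labeller weights
x = w + 0.1 * rng.normal(size=64)    # an "image" the labeller tags "dog"

def label(v):
    return "dog" if w @ v > 0 else "cat"

# FGSM-style perturbation: step the input against the gradient of the
# score.  For a linear model the gradient w.r.t. the input is just w,
# so subtracting eps * sign(w) pushes the score toward the other tag.
eps = 2.0
x_adv = x - eps * np.sign(w)

print(label(x))      # dog  (original tag)
print(label(x_adv))  # cat  (tag flipped by the perturbation)
```

In the toy case the epsilon is huge; the point of attacks like Nightshade is finding perturbations small enough that a human barely notices while the labeller's (or the trained model's) associations still get poisoned.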
Also, on Reddit today, some Stable Diffusion users are already starting to train on Nightshaded images so they can implement the result as a negative model, which may or may not work; we'll have to see.