>If I were the creator of an LLM, I would forbid the use of 'my' technology for horrific purposes like warfare.
Again. Cool. You'd use a license, right? Or terms and conditions? Jussst like the ones in GPL, Apache, and most non-commercial licenses? Ya know, the ones that were blatantly ignored? Look, I'm very much of the Jeffersonian persuasion that information cannot be "stolen" per se. However, even when I've helped myself to the occasional apple off the tree of knowledge, I don't go around assuming I can build a damn business around it without first securing good-faith terms with the creator. I have historically honored an ethic whereby a creator does in fact have some claim on how they'd like something to be used. Golden rule/moral imperative: treat others as you would have them treat you. I would like someone looking to build a business around something of mine to talk to me first, and at least do me the courtesy of not weaponizing it.
I'm not a consequentialist. I don't cut AI companies an ethical blank check just because they think they're doing something for the public good, particularly when it's very clear they actually aren't.
>It’s fine to 'steal' knowledge to train models, but only if the goal is to build a better world.
Who gets to define "better world"? I might hypothetically think something of yours being weaponized makes for a better world. Am I therefore justified in taking it, free from any recourse on your part? I don't actually think that way, mind you, but do you see why that's easy to say and terrible in practice?