Unlike a person, a large language model is a product built and sold by a company. While I am not a lawyer, I believe much of the copyright argument around LLM training revolves around the idea that copyrighted content should be licensed by the company training the LLM. In much the same way that people are not allowed to scrape the content of the New York Times website and pass it off as their own, so should OpenAI be barred from scraping the New York Times website to train ChatGPT and then selling the service without paying some dollars back to the New York Times.
You're either going to get: it's a technological, infinitely scalable process, and the training data should be considered what it is, which is intellectual property that should be licensed before being used.
...or... it actually is the same as human learning, in which case it's time we started loading these systems up with the other baggage attached to persons, if we're going to accept that it's possible for a machine to learn like a human.
There isn't a reasonable middle ground, given the magnitude of social disruption that a quasi-human, technological replacement for humans, held as chattel, would cause.
Can you help me understand the term "chattel" as you used it? I had never heard the term before I read your post, and I had to Google it: <<
(in general use) a personal possession.
(in law) an item of property other than freehold land, including tangible goods (chattels personal) and leasehold interests (chattels real). >>
Namely: a magic, technologically reproducible box that can be applied almost as effectively as a human hireling, but isn't one, is near-infinitely more desirable in a capitalist system, because the black box is chattel and the hired human is not. Chattel has no natural rights, no claim to self-sovereignty, and is an asset that exists legally only by virtue of being owned.
Chattel flexible enough to do a human's job, without the legal burdens incurred by hiring a human to do it, will naturally be converged upon, thanks to the capitalist optimization function: minimize unit input cost per unit of output, measured in dollars and in potential dollars of legal exposure.
Imagine you had two human-like populations. One is made of plastic and is considered not human but property, i.e. chattel. The other is made of people, with all the baggage that comes with that.
Hiring and employing people is hard, particularly in the U.S. and other jurisdictions where a great deal of the responsibility for implementing regulation, taxation, immigration law, and the like is tacked onto being an employer.
As the capability gap between the chattel population and the human population closes, it makes ever more economic and workload sense for the system to improve the chattel population under our current optimization strategy (given no pre-emptive work to cut off externality dumping). Humans are messy and complicated to work with, and often unpredictable. Chattel are easy to account for, especially when combined with "technical restraints". You fundamentally have to negotiate with another human being to get them on board with working for you. You buy the chattel, and that's that. The chattel has no grounds to refuse service. Socially speaking, we don't even recognize its outputs as carrying any social weight, or its resistance as anything but malfunction.
Economics is the science of using access to resources as a means to get other people to work with you. Being chattel cuts out all of that complexity entirely. You are a resource, not a person.
Unironically, we need an answer to whether we are going to classify a sufficiently complex function-imitator as something above "chattel", or put controls around how we apply it, in order not to self-destruct the economic equilibria in which we purport to exist. All it takes is removing, or sufficiently obstructing, the flow of value down from the individuals who accrete the most of these wunder-chattel to render things so top-heavy that most of the constraints and invariants of our socioeconomic systems as we know them become invalidated.
That does not bode well for anyone.
> The point of copyright is to protect an exclusive right to copy, not the right to produce original works influenced by previous works.
As I understand it, the definition of "the right to produce original works influenced by previous works" has been a slowly moving target during my lifetime. Think about the effects of the album Paul's Boutique by the Beastie Boys. They went wild with sampling and paid very little (zero?) to license those samples. Then a series of US court cases decided that future samplers needed to license their samples from the original authors. Still, the ability to create legal derivative works is usually carefully defined in copyright law. Can you comment on this matter vis-à-vis LLMs?

> If an LLM produces original works that are influenced by the training data, that is not a violation of copyright.
I'm pretty sure that if an LLM creates Paul's Boutique 2.0 in 2025 using an incredible number of samples, then someone cannot sell it (or use it in a YouTube video) without first licensing those samples. I doubt very much someone could just "hide behind" an LLM and claim, "Oh, it is an original, if derivative, work, created by an LLM." I doubt courts would allow that.

exactly how the new work is created is important when it comes to derivative works.
does it use a copy of the original work to create it, or a vague idea/memory of the original work’s composition?
when i make music it’s usually vague memories. i’d argue that LLMs have an encoded representation of the original work in their weights (along with all the other stuff).
but that’s the legal grey area bit. is the “mush” of model weights an encoded representation of works, or vague memories?