All written text, artwork, etc. needs to come imbued with a GPL-style license: if you train your model on this, your weights and training code must be published.
The Tailwind CSS situation is a good example. They built something genuinely useful, adoption exploded, and in the past that would have meant more traffic, more visibility, and more revenue. Now the usage still explodes, but the traffic disappears because people get answers directly from LLMs. The value is clearly there, but the money never reaches the source. That is less a moral problem and more an economic one.
Ideas like GPL-style licensing point at the right tension, but they are hard to apply after the fact. These models were built during a massive spending phase, financed by huge amounts of capital and debt, and they are not even profitable yet. Figuring out royalties on top of that, while the infrastructure is already in place and rolling out at scale, is extremely hard.
That is why this feels like a much bigger governance problem. We have a system that clearly creates value, but no longer distributes it in a sustainable way. I am not sure our policies or institutions are ready to catch up to that reality yet.
The same thing happened (and is still happening) with news media and aggregation/embedding like Google News or Facebook.
I don't know if anyone has found a working solution yet. There have been some laws passed and licensing deals [1]. But they don't really seem to be working out [2].
[1] https://www.cjr.org/the_media_today/canada_australia_platfor...
[2] https://www.abc.net.au/news/2025-04-02/media-bargaining-code...
Aside from the incentive problem, there is a kind of theft, known as conversion: when you were granted a license under some conditions, and you went beyond them - you kept the car past your rental date, etc. In this case, the documentation is for people to read; AI using it to answer questions is a kind of conversion (no, not fair use). But these license limits are mostly implicit in the assumption that (only) people are reading, or buried in unenforceable site terms of use. So it's a squishy kind of stealing after breaching a squishy kind of contract - too fuzzy to stop incented parties.
This won't help tailwind in this case, but it'll change the answer to "Should I publish this thing free online?" from "No, because a few AI companies are going to exclusively benefit from it" to "Yes, I want to contribute to the corpus of human knowledge."
It does not "create value" it harvests value and redirects the proceeds it accrues towards its owners. The business model is a middleman that arbitrages the content by separating it from the delivery.
Software licensing has been broken for two decades. That's why free software isn't financially viable for anybody except a tiny minority. It should be. The entire industry has been operating on charity. The rich mega-corporations have decided they're no longer going to be charitable.
LLMs broke that social contract. Now that product will likely go away.
People can twist themselves into knots about how LLMs create “value” and that makes all of this ok, but the truth is they stole information to generate a new product that generates revenue for themselves at the cost of other people’s work. This is literally theft. This is what copyright law is meant to protect. If LLM manufacturers are making money off someone’s work, they need to compensate people for that work, same as any client or customer.
LLMs are not doing this for the good of society. They themselves are making money off this. And I’m sure if someone comes along with LLM 2.0 and rips them off, they’re going to be screaming to governments and attorneys for protection.
The ironic part of all of this is that LLMs are literally killing the businesses they need to survive. When people stop visiting (and paying) Tailwind, Wikipedia, news sites, weather, and so on, and only use LLMs, those sites and services will die. Heck, there’s even good reason to think LLMs will kill the Internet at large, at least as an information source. Why in the hell would I publish news or a book or events on the Internet if it’s just going to be stolen and illegally republished through an LLM without compensating me for my work? Once this information goes away or is locked behind nothing but paywalls, I hope everyone is ready for the end of the free ride.
Copyright, as practiced in the late 20th century and this one, is a tool for big corps to extract profits from actual artists, creators, and consumers of this art[0] equally. Starving artists do not actually benefit.
Look at Spotify (owned and squeezed by record labels) giving 70% of the revenue to the record labels, while artists get peanuts. Look at Disney deciding it doesn't need to pay royalties to book writers. Hell, look at Disney's hits from Snow White onwards, and then apply your "LLMs are IP theft" logic to that.
Here's what Cory Doctorow, a book author and critic of AI, has to say about it in [1]:
> So what is the alternative? A lot of artists and their allies think they have an answer: they say we should extend copyright to cover the activities associated with training a model.
> And I'm here to tell you they are wrong: wrong because this would inflict terrible collateral damage on socially beneficial activities, and it would represent a massive expansion of copyright over activities that are currently permitted – for good reason!
---
> All written text, artwork, etc. needs to come imbued with a GPL-style license
A GPL-style license has long been known not to work well for artifacts other than code. That's the whole reason for the existence of Creative Commons, the GNU Free Documentation License, and others.
[0] "consumers of art" sounds abhorrent, yet that's exactly what we are [1] https://pluralistic.net/2025/12/05/pop-that-bubble/
How about my books?
I’ve published a few, make maybe $500 a month from them.
Is it fine for the LLMs to rip them off?
I'd be all for forcing these companies to open source their models. I'm game to hear other proposals. But "just stop contributing to the commons" strikes me as a very negative result here.
We desperately need better legal abstractions for data-about-me and data-I-created so that we can stop using my-data as a one-size-fits-all square peg. Property is just out of place here.
Also, if you publish your code in your own server, it will be DDoSed to death by the many robots that will try to scrape it simultaneously.
We don’t own anything we release to the world.
The problem here is enforcement.
It's well known that AI companies simply pirated content in order to train their models. No amount of license really helps in that scenario.
The AI goldrush has proven that intellectual property laws are null and void. Money is all that matters.
Imagine OpenAI is required by law to list their weights on huggingface. The occasional nerd with enough GPUs can now self host.
How does this solve any tangible problems with LLMs regurgitating someone else's work?
GPL died. Licenses died.
Explanation: LLMs were trained on GPL code, too. The fact that all the previously-paranoid businesses that used to warn SWEs not to touch GPL code with a ten-foot pole are now fearlessly embracing LLMs' outputs means that, de facto, they consider an LLM their license-washing machine. Courts are going to rubber-stamp it because billions of dollars, etc.
When a LLM is trained on copyright works and regurgitates these works verbatim without consent or compensation, and then sells the result for profit, there is currently no negative impact for the company selling the LLM service.
a lossy compression algorithm is not "inspired" when it is fed copyrighted input
And I cannot copy-paste myself to discuss with thousands or millions of users at a time.
To me, the clear solution is to require a large payment to each author of material used in training, per model trained, say in the $10k to $100k range.
This feels like the simplest & best single regulation that can be applied in this industry.
I can see this being extremely limiting in training data, as only "compatible" licensed data would be possible to package together to train each model.
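A back-of-the-envelope sketch of what that proposal would cost. The author count below is a pure assumption for illustration; nobody knows how many distinct authors a web-scale corpus contains:

```python
# Rough cost of the proposal above: a one-time payment of $10k-$100k to
# each author of material used in training, per model trained.
# The author count is an assumed figure, purely for illustration.

def total_payout(num_authors: int, per_author_usd: int) -> int:
    """Total one-time payout for training a single model."""
    return num_authors * per_author_usd

# Even a conservative guess of one million distinct authors puts the
# bill per trained model in the tens of billions of dollars.
low = total_payout(1_000_000, 10_000)
high = total_payout(1_000_000, 100_000)
print(f"${low:,} to ${high:,} per model trained")
```

Whether that number is a bug or a feature depends on whether you think fewer, better-compensated training runs is the goal.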
The other way is to argue that LLMs democratize access to knowledge. Everyone has access to everything ever written by humanity.
Crazy impressive if you ask me.
The first one's free.
After you're hooked, and don't know how to think any more for yourself, and all the primary sources have folded, the deal will be altered.
The thing is, copyright law is not really on your side. Viewing copyrighted material without paying for it is not generally something people get fined for. A lot of training falls under fair use that overrides whatever license you come up with. Disney can’t stop me from uploading clips of their movies alongside commentary and review because fair use allows that. LLMs generally aren’t redistributing code, which is the thing that copyright protects.
If I inspect some GPL code and get inspired by it and write something similar, the GPL license doesn’t apply to me.
It has always been the case that if you don’t want other people to apply fair use to your works, your only recourse is to keep those works private. I suspect that now individuals and companies that don’t want their code to be trained on will simply keep the code private.
Now, there have been times where LLMs have reproduced verbatim copyright material. The NYTimes sued OpenAI over this issue. I believe they’ve settled and come up with a licensing scheme unless I’m mixing up my news stories.
Second thing, your issue becomes moot if there exists a model that only trains off of MIT-licensed code, and there is a TON of that code out there.
Third thing, your issue becomes moot if users have agreed to submit their code for training, like what the GitHub ToS does for users who don’t change their settings, or if giant companies with giant code bases just use their own code to train LLMs.
Where I agree with you is that perhaps copyright law should evolve. Still, I think there’s a practical “cat is out of the bag” issue.
Not all, but some kind of IP.
The kind that is created for the sake of creating it and nothing else.
It's come up quite often even before AI when people released things under significantly looser licenses than they really intended and imagined them being used.
You don't need to redistribute the original material, it's enough that you just copied it.
humans learning : machines learning == whale swimming : submarine swimming
It's not the exact 100% same thing. Therefore you cannot base any rights on it.
If you still don't buy it, consider this analogy:
killing a human vs. destroying a machine
Thank god that we're not using your line of thinking here.
The moment Tailwind becomes a for-profit, commercial business, they have to duke it out just like anyone else. If the thing you sell is not defensible, it means you have a brittle business model. If I’m allowed to take Tailwind, the open source project, and build something commercial around it, I don’t see why OpenAI or Anthropic cannot.
It's fine to have a project that generates HTML/CSS as long as users can find the docs for the dependencies. But when you take away the docs and stop giving real credit to the creators, it starts feeling more like plagiarism, and that is what's costing Tailwind here.
The problem is that LLMs are better than people at this stuff. They can read a huge quantity of publicly available information and organize it into a form where the LLM can do things with it. That's what education does, more slowly and at greater expense.
The article does provide a hint: "Operate". One needs to get paid for what LLMs cannot do. A good example is Laravel. They built services like Forge, Cloud, Nightwatch around open source.
Do an experiment: memorize a popular short poem, then publish it under your name (though I suggest checking the laws in your region first, and also consider that it might affect your reputation).
IMO it is the same if ChatGPT memorizes my poem, you ask it for a poem, and then you copy-paste my poem from ChatGPT and publish it as your own.
And as a result of this, the models will start consuming their own output for training. This will create new incentives to promote human generated code.
IMO copyright laws should be rewritten to bring copyright inline with the rest of the economy.
Plumbers are not claiming use fees from the pipes they installed a decade ago. A doctor isn't getting paid by a 70-year-old for having saved them in a car accident at age 50.
Why should intellectual property authors be given extreme ownership over behavior then?
In the Constitution Congress is allowed to protect with copyright "for a limited time".
The status quo of life of the author plus 70 years means works can be copyrighted for longer than many people's entire lives. In effect, unlimited protection.
Why is society on the hook to preserve a political norm that materially benefits so few?
Because the screen tells us the end is nigh! and giant foot will crush us! if we move on from old America. Sad and pathetic acquiescence to propaganda.
My fellow Americans; must we be such unserious people all the time?
This hypernormalized finance engineered, "I am my job! We make line go up here!" culture is a joke.
It is completely against the spirit of information wants to be free. Using that catch phrase in protection of mega corps is a travesty.
But I still need to pay rent.
Information has zero desires.
> It's wild to me seeing the tech world veer into hyper-capitalism and IP protectionism.
Really? Where have you been the last 50 years?
> Plumbers are not claiming use fees from the pipes they installed a decade ago.
Plumbers also don't discount the job on the hopes they can sell more, or just go around installing random pipes in random locations hoping they can convince someone to pay them.
> Why should intellectual property authors be given extreme ownership over behavior then?
The position that cultural artifacts should enter the commons sooner rather than later is not unreasonable by any means, but most software is not cultural, requires heavy maintenance for the duration of its life, and is obsolete, gone, and forgotten well before the time frame you are discussing.
You say that like it's a bad thing...
"LLM Inference Compensation MIT License (LLM-ICMIT)"
A license that is MIT-compatible but requires LLM providers to pay for inference; it would restrict only online providers, not self-hosted models.
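For illustration only, a minimal sketch of how the per-inference compensation in this hypothetical license might be tallied. The flat fee, author names, and ledger shape are all invented; a real scheme would also need per-inference attribution, which is the genuinely hard part:

```python
from collections import defaultdict

# Hypothetical accounting for the "LLM-ICMIT" idea: a hosted provider
# owes each author a fee whenever their licensed material contributes
# to an inference. The rate below is invented for illustration.
PER_INFERENCE_FEE_USD = 0.001

class RoyaltyLedger:
    def __init__(self) -> None:
        # Maps author name -> total USD owed so far.
        self.owed: dict[str, float] = defaultdict(float)

    def record_inference(self, authors_used: list[str]) -> None:
        # Self-hosted models are exempt under the hypothetical license,
        # so only the online provider would ever call this.
        for author in authors_used:
            self.owed[author] += PER_INFERENCE_FEE_USD

ledger = RoyaltyLedger()
ledger.record_inference(["alice", "bob"])
ledger.record_inference(["alice"])
print(ledger.owed["alice"], ledger.owed["bob"])
```

Attributing specific authors to a specific inference is exactly what current architectures cannot do, which is why enforcing such a license would be hard in practice.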
Yeah we’ve got a legal system for it, but it always has been and always will be silly.
Also, that Botox patent should be expiring by now, shouldn’t it?
IP is the final delusion of 19th century thinking. It was crushed when we could synthesize anything, at little cost, little effort. Turns out, the hard work had to be done once, and we could automate to infinity forever.
Hold on to 19th century delusions if you wish, the future is accelerating, and you are going to be left behind.
There are many initiatives in a similar spot: improving your experience of using Next.js would hurt Vercel. Making GitHub Actions runners more reliable, stable, and economical would hurt Microsoft. Improving accessibility to compute power would hurt Amazon, Microsoft and Google. Improving control and freedom over your device would hurt Apple and Google.
Why should we be sympathetic to the middleman again?
If suddenly CSS became pleasant to use, Tailwind would be in a rough spot. See the irony?
"Give everything away for free and this people will leave technology", geohot said something like this and I truly appreciate. Technology will heal finally
CSS is pleasant to use. I know I find it pleasant to use; and I know there are quite a few frontend developers who do too. I didn't pay much attention to tailwind, until suddenly I realized that it has spread like wildfire and now is everywhere. What drove that growth? Which groups were the first adopters of tailwind; how did they grow; when did the growth skyrocket? Why did it not stay as niche as, say, htmx?
With CSS you have to add meaningless class names to your HTML (+remember them), learn complicated (+fragile) selectors, and memorise low-level CSS styles.
With tailwind you just specify the styling you want. And if using React, the “cascading” piece is already taken care of.
(I know many people disagree, which is fair enough.)
So I tried Tailwind and it seemed to help.
But now that Claude Opus 4.5 is writing all my code, it can write and debug CSS better than I can use Tailwind. So, CSS it is.
It is a natural fit with component-based frontend frameworks like React. You keep the styles with the component instead of having to work in two places. And it’s slightly nicer than writing inline styles.
The core CSS abstraction (separating “content” from “presentation”) was always flimsy but tailwind was the final nail in that coffin.
Then of course LLMs all started using it by default.
I wrote this piece on tailwind a few years back, and little seems to have changed https://pdx.su/blog/2023-07-26-tailwind-and-the-death-of-cra...
I am a backend developer. I like being a backend developer. I can of course do more than that, and I do. Monitoring, testing, analysis, metrics, etc.
I can do frontend development, of course I can. But I don't like it, I don't approach it with any measure of care. It's something I "unfortunately have to do because someone who is not a developer thought that declaring everyone should be doing everything was a good idea".
I don't know how to do things properly on the front end, but I definitely can hammer them down to a shape that more or less looks like it should. For me, shit like Bootstrap or Tailwind or whatever is nice. I have to spend less time fiddling with something I think is a waste of my time.
I love working with people that are proper front end developers for that reason, and I always imagined they would prefer things more native such as plain CSS.
So is SQL. To me. But some otherwise rational people have an irrational dislike of SQL. Almost like someone sealing a small bruise with wire mesh because bandaids are hard to rip off. The consequence shows in poorly implemented schema-free NoSQL and bloated ORM tools mixed in with SQL.
But some folks just like it that way. And most reasons boil down to a combination of (1) a myopic solution to a hyper specific usecase or (2) an enterprise situation where you have a codebase written by monkeys with keyboards and you want to limit their powers for good or (3) koolaid infused resume driven development.
Many JSX-era libraries (MUI, styled-components, Emotion) generate styles at runtime, which works fine for SPAs but creates real friction for SSR, streaming, and time-to-first-paint (especially for content-heavy or SEO-sensitive domains).
As frameworks like Next.js, Vue, Svelte, Angular, and now RSC all moved server-first, teams realized they couldn’t scale entire domains as client-only SPAs without performance and crawler issues.
Tailwind aligned perfectly with that shift: static CSS, smaller runtime bundles, predictable output, and zero hydration coupling. It wasn’t about utility classes. It was about build-time certainty in a server-rendered world :)
Not being sarcastic, but this will never be. CSS is a perfectly functional interface, but the only way it becomes less annoying is when you abstract it behind something more user friendly like tailwind or AI (or you spend years building up knowledge and intuition for its quirks and foibles).
We have decades of data at this point that fairly conclusively shows that many people find CSS as an interface inherently confusing.
As a fullstack dev, I couldn't do pixel-perfect CSS 10 years ago, and today I can. That's a lot of progress.
I still feel that the main revolution of AI/LLMs will be in authoring text for such “perfectly functional”-text bases interfaces.
For example, building a “powerful and rich” query experience for any product I worked on was always an exercise in frustration. You know all the data is there, and you know SQL is infinitely capable. But you have to figure out the right UI and the right functions for that UI to call to run the right SQL query to get the right data back to the user.
Asking the user to write the SQL query is a non-starter. You either build some “UI” for it based on what you think is the main usecases, or go all in and invent a new “query language“ that you think (or hope) makes sense to your user. Now you can ask your user to blurb whatever they feel like, and hope your LLM can look at that and your db schema, and come up with the “right” SQL query for it.
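A minimal sketch of that flow, with the LLM call stubbed out by a hypothetical `generate_sql` function and a crude read-only guardrail in front of the database. The schema, sample data, and canned query are all invented for illustration; a real system would send the schema plus the user's question to an actual model API:

```python
import sqlite3

# Sketch of the "user blurbs, LLM writes SQL" flow described above.
SCHEMA = "CREATE TABLE orders (id INTEGER, customer TEXT, total REAL);"

def generate_sql(question: str, schema: str) -> str:
    # Placeholder for the LLM call; returns a canned answer here.
    return ("SELECT customer, SUM(total) FROM orders "
            "GROUP BY customer ORDER BY customer")

def run_readonly(db: sqlite3.Connection, sql: str):
    # Minimal guardrail: never execute anything but a single SELECT.
    if not sql.lstrip().upper().startswith("SELECT") or ";" in sql:
        raise ValueError("only bare SELECT statements are allowed")
    return db.execute(sql).fetchall()

db = sqlite3.connect(":memory:")
db.execute(SCHEMA)
db.executemany("INSERT INTO orders VALUES (?, ?, ?)",
               [(1, "acme", 10.0), (2, "acme", 5.0), (3, "zeta", 7.5)])
rows = run_readonly(db, generate_sql("total per customer", SCHEMA))
print(rows)  # [('acme', 15.0), ('zeta', 7.5)]
```

The guardrail matters because you are now executing model-generated text against your database; a string prefix check is the bare minimum, not a complete defense.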
It's ultimately still driven by matching "random" identifiers (classes, ids, etc.) across semantic boundaries. Usually, the problem is that the result is mostly visual, which makes it disproportionately hard to actually write tests for CSS and make sure you don't break random stuff when you change a tiny thing in your CSS.
For old farts like me: It's like the Aspect-Oriented Programming days of Java, but you can't really do meaningful tests. (Not that you could do 'negative' testing well in AOP, but even positive testing is annoyingly difficult with CSS.)
EDIT: Just to add. It's an actually difficult problem, but I consider the "separate presentation from content" idea a bit of a windmill to tilt at. There will always be interplay, and an artificial separation will lead to ... awkward compromises and friction.
Can you explain this one a bit? I know some folks who would absolutely increase their spend if Actions runners were better. As it stands I've seen folks trying to move off of them.
Do people even know what they can do with CSS these days?
I find the tailwindcss approach inexcusable and unmaintainable.
> <div class="text-{{ error ? 'red' : 'green' }}-600"></div>
—- I find it really crazy that they thought this would be a good idea. I wonder how much false-positive CSS gets added, given their approach of "trying to match classes". So if you use random strings like bg-…, it will add some CSS. I think it's ridiculous, and it tells me that people who use this can't be very serious about it and aren't working in large projects.
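Tailwind's generator scans source files for complete class-name strings, which is why the interpolated form in the quote above never produces a usable class. A toy Python model of that scanning (greatly simplified; the real extractor works differently, but the "whole string or nothing" behavior is the documented rule):

```python
import re

# Toy model of Tailwind's content scanning: the real tool extracts
# candidate class names as whole tokens from your source files and
# only generates CSS for names it finds verbatim.
KNOWN_CLASSES = {"text-red-600", "text-green-600"}

def classes_generated(source: str) -> set[str]:
    # Candidates are unbroken runs of class-name characters.
    tokens = set(re.findall(r"[A-Za-z0-9/:_-]+", source))
    return tokens & KNOWN_CLASSES

# Interpolating inside a class name: the literal strings
# "text-red-600" / "text-green-600" never appear, so nothing matches.
broken = 'class="text-{{ error ? \'red\' : \'green\' }}-600"'
print(classes_generated(broken))  # set()

# Writing both complete names lets the scanner see them.
fixed = "class=\"{{ error ? 'text-red-600' : 'text-green-600' }}\""
print(sorted(classes_generated(fixed)))  # ['text-green-600', 'text-red-600']
```

So the false-positive worry cuts the other way too: the scanner can only over-generate for literal-looking tokens, but dynamically built names silently produce nothing.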
—— > Using multi-cursor editing When duplication is localized to a group of elements in a single file, the easiest way to deal with it is to use multi-cursor editing to quickly select and edit the class list for each element at once
Instead of using a variable and reusing it, you just use multi-cursors. Bad suggestion again.
—-
> If you need to reuse some styles across multiple files, the best strategy is to create a component
But on benefits says
> Your code is more portable — since both the structure and styling live in the same place, you can easily copy and paste entire chunks of UI around, even between different projects.
—-
> Making changes feels safer — adding or removing a utility class to an element only ever affects that element, so you never have to worry about accidentally breaking something on another page that's using the same CSS.
CSS-in-JS fixed this a long time ago.
—-
<div class="mx-auto flex max-w-sm items-center gap-x-4 rounded-xl bg-white p-6 shadow-lg outline outline-black/5 dark:bg-slate-800 dark:shadow-none dark:-outline-offset-1 dark:outline-white/10"> <img class="size-12 shrink-0" src="/img/logo.svg" alt="ChitChat Logo" /> <div> <div class="text-xl font-medium text-black dark:text-white">ChitChat</div> <p class="text-gray-500 dark:text-gray-400">You have a new message!</p> </div> </div>
So many classes you need to learn to use it.
They are still around.
> "Give everything away for free and this people will leave technology"
This is more interesting, although somewhat generally understood (it can be conflated with people seeing "free" as "cheap" and therefore undesirable). It depends on your definition of longevity, but we certainly have a LOT of free software that has, so far, stood the test of time.
Really? To me, Tailwind seemed like the pinnacle of how anyone here would say "open source software" should function. Provide solid, truly open source software, and make money from consulting, helping others use it, and selling custom-built solutions around it. The main sin of Tailwind was assuming that type of business could scale to a "large business" structure as opposed to a "single dev"-type project. By "single dev"-type I don't mean literally one guy, but a very lean, non-corporate, non-company-like structure.
Vercel (and redislabs, mongo, etc) are different because they are in the “we can run it for you” business. Which is another “open source” model I have dabbled in for a while in my career. Thinking that the honest and ethical position is to provide open source software, then offer to host it for people who don’t want to selfhost and charge for that.
Tailwind Labs' revenue stream was tied to documentation visits; that was the funnel. The author's argument is that this revenue stream was destroyed by a slight quality-of-life improvement (having LLMs fill in CSS classes).

Tailwind Labs benefits from: a) documentation visits, and b) the inability to implement a desired layout using the framework (and CSS being unpleasant). There is a conflict of interest between the developer expecting the best possible experience and the main revenue stream. Given that a slight accidental improvement in quality of life and autonomy for users destroyed the initiative's main revenue stream, it would be fair to say it doesn't just "seem like" a conflict of interest.

I definitely disagree with it being the "pinnacle" of how open source should function, but I won't provide any examples because it is beside the point. I will point out that the FSF has been fine for many decades now, and a foundation with a completely different structure, like the Zig foundation, seems to be OK with somewhat proportional revenue (orders of magnitude less influence, adoption, and users; maybe 10-20x less funding).
> Improving accessibility to compute power would hurt Amazon, Microsoft and Google.
Yeah, if they were not competing against each other.
> If suddenly CSS became pleasant to use, Tailwind would be in a rough spot. See the irony?
Honestly, I don't. If people suddenly adopted a healthier lifestyle, doctors, or at least dentists, would be in a rough spot.
See the irony? Well, again I don't.
Tailwind is a handy tool that deserves support and sustainability
What I keep coming back to is this: AI commoditizes anything you can fully specify. [...]
So where does value live now? In what requires showing up, not just specifying. Not what you can specify once, but what requires showing up again and again.
This seems like a useful framing to be aware of, generally.
The internet has always kinda run on the ambiguity of "does the value flow back". A quote liberated from this article itself; all the content that reporters produce that's laundered back out through twitter; 12ft.io; torrents; early youtube; late youtube; google news; apache/mit vs gnu licenses; et cetera..
The earthquake didn’t destroy the building — it stress tested it.
A stress test on a bank doesn't actually erase the revenue and financially jeopardize the bank.
Implementing layoffs is not a stress test.
They already do this sort of thing inside outputs from Deep Research, and possibly outside it too. But the attribution should be less muted, inline, and recessed, and more "I have used Tailwind! Check out how to make it better [HERE](link)".
They should be working with the owners of these projects, especially ones with businesses, and directing traffic to them. Not everyone will go, and that's fine, but at least make the effort. The infrastructure is in place already.
And yes, right now this would not be across-the-board for every single library, but maybe it could be one day?
It's the same problem news sites have been facing for years because of Google News and Facebook. Their solution so far has been country-level bans on it (Canada).
Google and Facebook couldn't provide full news articles because of copyright law. They just showed headline and summary provided by the news websites (and still eventually got sued for showing the summaries).
Maybe that limits the ability for the head of tailwind to run their own business and make more income, but something gotta give.
In the case of CSS, we already have that:
And when we are there, hungry, broken, unemployed and unimpressed with the prometheus who stole our purpose and replaced it with nothing….
it will make a colossal mistake that will destroy faith in it, as well as the institutions destroyed because of its oversight. The mistake will lead to bans, regulatory bodies, auditing, and certification processes… and will result in the final master stroke: it will make AI unusable by the non-ordained, regulatory capture. Math will be a tool governments and institutions wield in secret against their citizenry… and it will be a tool of omnipotence that will lead to enslavement, rebellion, and robot wars.
I've been doing exactly that since AI came out :-D
You absolutely can prompt your way to 3.5 nines of uptime (even more), but you need to know what you're doing and correct it.
Even very well aligned models like Opus will set traps for your infrastructure. For example, you tell it to write a FluxCD implementation of some application in your k8s cluster, following your conventions and the best practices described in some md files. And it does this very nicely. But unless you tell it every detail in advance, it will do something extremely stupid midway through.
For example, let's say you have a database and the model needs to create a new instance of it (via GitOps) for the app. It adds the new DB and it gets created, but instead of using a tool that already exists (or proposing one if it doesn't) to sync the DB access credentials from the DB namespace to the app namespace, it reads the credential, encrypts it, and stores a copy in the app namespace.
What's the problem with that? Well, none, unless you rotate these credentials. In which case your app will stop working, possibly after you tested it and decided it's good enough to use seriously, despite having an HA DB.
There are a dozen things like this in each "AI project". But even so, accounting for the time needed to review everything, it still saves a lot of time.
This is not exclusive to FOSS. It is also the basic model of most non SaaS B2B, often for good reason referred to by the derogatory term 'consulting-ware'.
AI eats into these services, as it commoditizes them. 80%+ of what used to take a specialist for that product can now be handled by a good generalist + AI.
Leaving aside the business model impact for a second, getting rid of obfuscation incentives is intrinsically good thing for a user community.
Dries' solution, offering operations as a SaaS or managed service, is meeting a need AI can't as easily match, but not exactly for the stated reason. What the client is actually buying is making something someone else's problem. And CIOs will always love this more than anything, if they can credibly justify it.
Where AI does impact this is in that latter part. If AI does significantly commoditize operational expertise, then the cost of in-house operating is (sometimes dramatically) lowered, and thus the justification gap on the CIO side for spending outside widens. How much this will drive a decision will be highly variable between businesses and projects.
If you imagine the "business" stack as customers on top, over business, over analysis, over software, over machines/infrastructure... 25+ years ago I thought that DSLs and similar very-high-level, mostly-formalized descriptions would move the line between what-the-product-is (i.e. business analysis) and software/coding toward software, reducing coding's share of the whole stack.
Well, it did not happen; just the opposite. Instead, developers became expected to know everything from infrastructure to software to analysis to the business domain and higher. So the stack became something like customers over business over... software-dev. Requirements, analysis... mostly gone, or done by devs.
Now, if that whole "software-dev" part collapses into zero margin, I see two ways out: either businesses very quickly resurrect business analysis (what the product is) and make that their margin/moat, or customers start making their own software (as throwing away 100 wrong attempts is now affordable), removing the business from the whole transaction...
Imagine I’m a company just big enough to entertain adopting Salesforce for CRM. It’s a big chunk of money, but our sales can absorb the pain.
With GenAI as an enterprise architect, one of the options I’m now recommending is to create a custom CRM for our business and skip the bloated enterprise SaaS platform.
We can gather CRM requirements fast, build fast, and deliver integrations to our other systems incredibly fast using tools like Claude Code. Our sales people can make feature requests and we can dogfood them in a few days, maybe hours.
GenAI development tools are rapidly changing how I think about enterprise software development.
If your core business is software-based offerings, your moat has been wiped out.
A handful of senior engineers can replicate any SaaS, save licensing costs, and build custom applications that were too risky in the past.
The companies that recognize what is happening and adapt will win.
When you decide to pull that in house, you are implicitly burdening yourself with the cost of the buildout as well as ongoing maintenance. True, you could probably knock together an okay v1 of a CRM in-house. But are you really going to get it to, and maintain it at, production-level quality over time at a lower total cost of ownership? I'm skeptical.
The theoretical party you are describing would probably be better served by simply avoiding Salesforce in favor of a next gen CRM that is both more cost effective and easier to customize. In enterprise contexts, even HubSpot is effectively next gen, but there are also products like Attio et al that have a ton of adoption and strong integration ecosystems (albeit not at the Salesforce level).
When you buy enterprise software from a vendor, you are buying more than "just software": you are also hiring a company's services. The inverse is true as well: when you choose to build in-house, you are implicitly choosing to hire a team internally to provide all of the services you would've expected that vendor to deliver.
Certainly, this tradeoff can still make a lot of sense for some companies. The acid test for that, in my opinion, is whether said company could (and would) actually successfully sell the product they build internally on the open market. If the answer to that is "yes", the prospect of turning a cost center into a profit center can potentially bear significant long term ROI to the company.
My point is the decision point has moved. Where five years ago there’d be zero discussion of building internally, those discussions are going to be very different.
I believe many tech oriented companies will pull SaaS capabilities inside and as GenAI developer tools improve, the line will keep moving.
And we don’t need SaaS professional services anymore. Claude Code replaces that entire business model.
I can tell you with near-100% certainty: this isn't what you want to happen. It's a disaster in the making.
Just because you can doesn’t mean you should. There is very little competitive advantage to be gained from this type of effort in most companies.
The motto has always been, "Make it work."
Not, "Make it perfect."
Sounds like even more incentive for "managing" problems and creating business models around them instead of solving them.
It bothers me, too. But, look at the history of the internet. There's no reason to expect we'll be able to fix this problem.
1. Search engines drove traffic to news/content sites, which monetized via ads. Humans barely tolerate these ad-filled websites. And yet, local news went into steep decline, and the big national players got an ever-larger share of attention. The large, national sites were able to keep a subscriber-based paywall model. These were largely legacy media sites (e.g. the NYT).
2. News sites lost the local classifieds market, as the cost of advertising online went to zero (e.g. Craigslist). This dynamic was a form of creative destruction: a better solution ate the business of an older solution.
3. Blog monetization was always tough, beyond ads. Unless you were a big blog, you couldn't make a living. What about getting a small amount of money per view from random visitors? The internet never developed a micro-payment or subscription model for the set of open sites - the blogosphere, etc. The best we got were closed platforms like Substack and Medium, which could control access via paywalls.
All this led to the internet being largely funded through the "attention economy": ads mostly, paywalls & subscriptions some.
The attention economy can't sustain itself when there are fewer eyeballs:
1. Tailwind's docs only have to be added to the training set once for the AI to be proficient in that framework forever. So one HTTP request, more or less, to get the docs, and the docs are no longer required.
2. Tailwind does change, though, so an AI will want to access the docs for the version it's working with. This requires access at inference time, which is more analogous to visiting the site.
Right now it's convenient to look at Tailwind and discuss what they're doing wrong.
But eventually most other business models will be stress tested in the same way - sooner or later.
Of course, now you're opening a whole other can of worms. In the UK, only 1 in 9 Arts Council funding applications is successful.
I agree somewhat but eventually these can be automated with AI as well.
Like sure, there is a bunch of stuff like monitoring and alerting that tells us a database is filling up its disk. This is already automated. It could even have had automated remediation with tech from the 2000s, using simple rule-based systems (so you can understand why they misbehaved, instead of entirely opaque systems that just do whatever).
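A 2000s-style rule engine of the kind described is just an explicit condition-to-action table, which is exactly why its misbehavior is explainable: you can point at the rule that fired. A minimal sketch (the thresholds and remediation actions are hypothetical):

```python
# Each rule is an explicit (name, condition, action) triple. When
# remediation misfires, you can inspect exactly which rule matched,
# unlike with an opaque model. Rules are ordered most severe first.
RULES = [
    ("disk-nearly-full",
     lambda m: m["disk_used_pct"] >= 90,
     "expand volume / prune old WAL segments"),
    ("disk-warning",
     lambda m: m["disk_used_pct"] >= 75,
     "page on-call with capacity forecast"),
]

def evaluate(metrics: dict) -> list[tuple[str, str]]:
    """Return the (rule name, action) pair for the most severe
    matching rule, or an empty list if nothing fires."""
    for name, condition, action in RULES:
        if condition(metrics):
            return [(name, action)]  # take only the most severe match
    return []
```

This is deliberately dumb; the comment's point is that the hard part was never this step.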
The thing is though, very often the problem isn't the disk filling up or fixing that.
The problem is rather figuring out what silly misbehavior the devs introduced, if a PM had a strange idea they did not validate, if this is backed by a business case and warrants more storage, if your upstream software has a bug, or whatever else. And then more stuff happens and you need to open support cases with your cloud provider because they just broke their API to resize disks, ...
And don't even get me started on trying to organize access management with a minimally organized project consulting team. Some ADFS config resulting from that is the trivial part.
Or how Meta downloaded 70TB+ of books and then got law enforcement to nuke libgen and z-lib to create a "moat". When all our tools start dying or disappearing because the developers were laid off while an AI "search engine" just regurgitates their work, THEN and only then will most people understand the mistake this was.
Let's not even begin with what Grok recently did to women on X; completely unacceptable. I really, really wish for the EU to grow a spine and take a stand. It is clear that China is just as predatory as America, and both are willing to burn it all down to get a nonexistent lead in a nonexistent "technology" that snake-oil salesmen have convinced 80-year-olds in government is the next "revolution".
Not to nitpick but if we are going to discuss the impact of AI, then I'd argue "AI commoditizes anything you can specify." is not broad enough. My intuition is "AI commoditizes anything you can _evaluate/assess_." For software automation we need reasonably accurate specifications as input and we can more or less predict the output. We spend a lot of time managing the ambiguity on the input. With AI that is flipped.
In AI engineering you can move the ambiguity from input to the output. For problems where there is a clear and cheaper way of evaluating the output the trade-off of moving the ambiguity is worth it. Sometimes we have to reframe the problem as an optimization problem to make it work but same trade-off.
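The move described here, generating with ambiguity on the output side and then selecting with a cheap evaluator, can be sketched as a best-of-n loop. The generator and scorer below are hypothetical placeholders, not anything from the comment:

```python
import random

def best_of_n(generate, score, n=5, seed=0):
    """Generate-and-select: accept ambiguous outputs from `generate`,
    then keep the candidate a cheap evaluator `score` likes best.
    This only pays off when scoring is much cheaper than specifying."""
    rng = random.Random(seed)  # seeded for reproducibility
    candidates = [generate(rng) for _ in range(n)]
    return max(candidates, key=score)
```

This is the reframing-as-optimization the comment mentions: the spec is replaced by an objective function, and the ambiguity lives in the candidates rather than the requirements.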
On the business model front: [I am not talking specifically about Tailwind here.] AI is simply amplifying systemic problems most businesses just didn't acknowledge for a while. SEO died the day Google decided to show answer snippets a decade ago. Google as a reliable channel died the day Google started Local Services Advertisement. Businesses that relied on those channels were already bleeding slowly; AI just made it sudden.
On the efficiency front, most enterprises could have been so much more efficient if they could actually build internal products to manage their own organizational complexity. They just could not, because money was cheap, so the ROI wasn't quite there; and even if the ROI was there, most of them didn't know how to build a product for themselves. Just saying "AI first" makes the ROI work, for now, so everyone is touting AI efficiency. My litmus test is fairly naive: if you are growing and you found AI efficiency, that's great (e.g. FB), but if you're not growing and the only thing AI could do for you is "efficiency", then there is a fundamental problem no AI can fix.
> if you are growing and you found AI efficiency then that's great (e.g. FB) but if you're not growing and only thing AI could do for you is "efficiency" then there is a fundamental problem no AI can fix.
Exactly. "Efficiency" is nice to say in a vacuum, but what you really need is quality (all-round) and understanding your customer/market.

I want to be clear: it sucks for Tailwind for sure, and the LLM providers essentially found a new loophole (training) where you can smash and grab public goods and capture the value without giving anything back. A lot of capitalists would say it's a genius move.
This is completely wrong. Agents will not just be able to write code, like they do now, but will also be able to handle operations and security, tirelessly checking and improving the systems.
You can't prompt this today, are you suggesting this might come literally tomorrow? 10 years? 30? At that unknown time will your comment become relevant?
But these are MY agents. They are given access to MY domain knowledge in the way that I configured. They have rules as defined by ME over the course of multi-week research and decision making. And the interaction between my agents is also defined and enforced by me.
Can someone come up with a god-agent that will do all of this? Probably. Is it going to work in practice? Highly unlikely.
Please wake me up when Shopify lets a bunch of agentic LLMs run their backends without human control and constant supervision.
Uh, yeah you can. There's a whole DevOps ecosystem of software and cloud services (accessible via infrastructure-as-code) that your agents can use to do this. I don't think businesses that specialize in ops are safe from downsizing.
I also think he's glossing over the fact that one of the reasons why companies choose to pay for "ops" to run their software for them is because it's built by amateurs or amateurs-playing-professional and runs like shit. I happen to know this first hand from years of working at a company selling hosting and ops for the exact same CMS that Dries' business hosts (Drupal, a PHP-based CMS) and the absolute garbage that some people are able to put together in frameworks like Wordpress and Drupal is truly astounding. I'm not even talking about the janky local businesses where their nephew who was handy with computers made them a Wordpress site - big multinational companies have sites in these frameworks that can barely handle 1x their normal traffic and more or less explode at 1.5x.
The business of hosting these customers' poorly optimized garbage remains a big business. But we're entering an era where the people who produce poorly optimized software have a different path to take rather than throwing it to a SaaS platform that can, through sheer force of will, make their lead-weight airplane fly. They can spend orders of magnitude less money to pay an LLM to make the software actually just not run like shit in the first place. Throwing scaling at the problem of hitting 99.95% uptime is a blunt instrument that only works if the person paying doesn't have the time, money, or knowledge to do it themselves.
Companies like these (including the one I work for currently) are absolutely going to get squeezed from both directions. The ceiling is coming down as more realize they can do their own devops, and the floor is rising as customer code quality gets better. Eventually you have to try your best to be 3 ft tall instead of 6.
Hence (as TFA points out) open source code from commercial entities was just a marketing channel and source of free labor... err, community contributions... for auxiliary offerings that actually made money. This basic economic drive is totally natural but creates dynamics that have led to suboptimal behavior and controversy multiple times.
For instance, a favorite business model is charging for support. Another one was charging for a convenient packaging or hosting of an “open core” project. In either case, the incentives just didn’t align towards making the software bug-free and easily usable, because that would actively hamper monetization. This led to instances of pathological behavior, like Red Hat futzing with its patches or pay-walling its source code to hamper other Linux vendors.
Then there were cases where the "open source" branding was used to get market-share, but licenses restricted usage in lucrative applications, like Sun with Java. But worse, often a bigger fish swooped in to take the code, as they were legally allowed to, and repackage it in their own products undercutting the original owners. E.g. Google worked around Sun's licensing restrictions to use Java completely for free in Android. And then ironically Android itself was marketed as "open source" while its licensing came with its own extremely onerous restrictions to prevent true competition.
Or all those cases when hyperscalers undercut the original owners’ offerings by providing open source projects as proprietary Software as a Service.
All this in turn led to all sorts of controversies, like lawsuits or companies rug-pulling their communities with a license change.
And aside from all that, the same pressures regularly led to the “enshittification” of software.
Open Source is largely a socialist (or even communist) movement, but businesses exist in a fundamentally capitalistic society. The tensions between those philosophies were inevitable. Socialists gonna socialize, but capitalists gonna capitalize.
With AI, current OSS business models may soon be dead. And personally I would think, to the extent they were based on misaligned incentives or unhealthy dynamics, good riddance!
Open Source itself will not go away, but it will enter a new era. The cost of code has dropped so much that monetizing it will be hard. But by the same token, having invested far fewer resources creating it, people will be more inclined to release their code for free. A lot of it will be slop, but the quantity will be overwhelming.
It’s not clear how this era will pan out, but interesting times ahead.
Yes, this indirectly hints that during training, GPL-tainted code touches every single floating-point value in the model, making it a derivative work; even the tokenizer isn't immune to this.
A tokenizer's set of tokens isn't copyrightable in the first place, so it can't really be a derivative work of anything.
The only reason user mode is not affected is because there's an exclusion for it, and only via a defined communication protocol; if you go around it or attempt to put a workaround in the kernel, guess what: it still violates the license. Point is: it is very restrictive.
This is correct. If you open-source your software, then why are you mad when companies like AWS, OpenAI, etc. make tons of money?
Open Source software is always a bridge that leads to something else to commercialize on. If you want to sell software, then pick Microsoft's model and sell your software as closed source. If you get mad and cry about making money to sustain your open source project, then pick the right license for your business.
That's one of the issues with AI, though; strongly copylefted software suddenly finds itself unable to enforce its license because "AI" gets a free pass on copyright for some reason.
Dual-licensing open source with business-unfriendly licensing used to be a pretty good way to sell software, but thanks to the absurd legal position AI models have managed to squeeze themselves into, that stopped in an instant.
And, in many cases, you had to produce that value yourself. GPL licensing lawsuits ensured this.
AI extracting value from software in such a way that the creators no longer can take the small scraps they were willing to live on seems likely to change this dynamic.
I expect no-source-available software (including shareware) to proliferate again, to the detriment of open source.