- many, if not most, MLEs who got started after LLMs don't really know anything about machine learning. For lack of clearer industry titles, they are really AI developers or AI devops
- machine learning as a trade is moving toward the same fate as data engineering and analytics. Big companies only want people using platform tools. Some AI products, even in cloud platforms like Azure, don't even expose the evaluation metrics you'd need to properly build ML solutions. Few people seem to have an issue with it.
- fine tuning, especially RL, is packed with nuance and detail: lots to monitor, a lot of training signals that need interpretation and data refinement. It's a much bigger gap than training simpler ML models, which people are also not doing or learning very often.
- The limited number of good use cases means people are not learning those skills from more senior engineers.
- companies have gotten stingy with SME time and labeling
What confidence do companies have in supporting these solutions in the future? How long will you be around and who will take up the mantle after you leave?
AutoML never really panned out, so I'm less confident that platforming RL will go any better. The unfortunate reality is that companies are almost always willing to pay more for inferior products because it scales. Industry "skills" are mostly experience with proprietary platform products. Sure, they might list "PyTorch" as a required skill, but 99% of the time there's hardly anyone at the company who has spent any meaningful time with it. Worse, you can't use it, because it would be too hard to support.
More than once I've just done labeling "on my own time" - I don't know the subject as well but I have some idea what makes the neurons happy, and it saves a lot of waiting around.
I've found tuning large models to be consistently difficult to justify. The last few years it seems like you're better off waiting six months for a better foundation model. However, we have a lot of cases where big models are just too expensive and there it can definitely be worthwhile to purpose-train something small.
In true hacker spirit, I don't think trying to train a model on a wonky GPU is something that needs an ROI for the individual engineer. It's something they do because they yearn to acquire knowledge.
I have avoided fine tuning because the models are currently improving at a rate that exceeds big corporate product development velocity.
I ask a version of this every six months or so, and usually the results are quite disappointing.
This time I had more credible replies than I have had in the past.
Here's my thread with highlights: https://twitter.com/simonw/status/1979254349235925084
And in a thread viewer for people who aren't signed into Twitter: https://twitter-thread.com/t/1979254349235925084
Some of the most impressive:
Datadog got <500ms latency for their natural language querying feature: https://twitter.com/_brimtown/status/1979669362232463704 and https://docs.datadoghq.com/logs/explorer/search/
Vercel run custom fine-tuned models on v0 for Next.js generation: https://vercel.com/blog/v0-composite-model-family
Shopify have a fine-tuned vision LLM for analyzing product photos: https://shopify.engineering/leveraging-multimodal-llms
Even worse, even if you DO get an improvement, you are likely to find it was a waste of time in a month or two when the next upgraded version of the underlying models is released.
The places it makes sense, from what I can tell, are mainly when you are running so many prompts that the cost savings from running a smaller, cheaper model can outweigh the labor and infrastructure costs of getting it to work. If your token spend isn't in the tens (probably hundreds) of thousands of dollars, you're unlikely to save money this way.
If it's not about cost saving, the other reasons are latency and being able to achieve something that the root model just couldn't do.
Datadog reported a latency improvement, because fine-tuning let them run a much smaller (and hence faster) model. That's a credible reason if you are building high value features that a human being is waiting on, like live-typing features.
The most likely cases I've heard of for getting the model to do something it just couldn't do before mainly involve vision LLMs, which makes sense to me: training a model to classify images that weren't in the training set might make more sense than stuffing more example images into the prompt (though models like Gemini will accept dozens if not hundreds of comparable images in the prompt, which can then benefit from prompt caching).
The last category is actually teaching it a new skill. The best examples here are low-resource programming languages: Jane Street and OCaml, or Morgan Stanley and Q, for example.
Jane Street OCaml: https://www.youtube.com/watch?v=0ML7ZLMdcl4
Morgan Stanley Q: https://huggingface.co/morganstanley/qqWen-1.5B-SFT
- PaddleOCR, a 0.9B model that reaches SOTA accuracy across text, tables, formulas, charts & handwriting. [0]
- 3B and 8B models that perform HTML-to-JSON extraction at GPT-5-level accuracy at 40-80x less cost, with faster inference. [1]
I think it makes sense to fine tune when you're optimizing for a specific task.
[0] https://huggingface.co/papers/2510.14528
[1] https://www.reddit.com/r/LocalLLaMA/comments/1o8m0ti/we_buil...
I've played around with doc recognition quite a bit, and as far as I can tell those two are best-in-class.
Our thesis was that fine tuning would be easier for users to adopt than deep learning, because it starts from a very capable base LLM rather than from scratch.
However, our main finding across over 20 deployments was that LLM fine tuning is no easier to use than deep learning.
The current market situation is that ML engineers who are good enough at deep learning to master fine tuning can found their own AI startup or join Anthropic/OpenAI. They are underpaid building LLM solutions. Expert teams building Claude, GPT, and Qwen will outcompete most users who try fine tuning on their own.
RAG, prompt engineering, inference time compute, agents, memory, and SLMs are much easier to use and go very far for most new solutions
Otherwise, you should just use GPT-5
Preparing a few thousand training examples and pressing "fine tune" can improve the base LLM in a few situations, but it can also make the LLM worse at other tasks in hard-to-understand ways that only show up in production, because you didn't build evals good enough to catch them. It also has all of the failure modes of deep learning. There is a reason deep learning training never took off the way LLMs did, despite many attempts to build startups around it.
Andrej Karpathy has a rant that captures some of the failure modes of fine tuning: https://karpathy.github.io/2019/04/25/recipe/
Training an LLM from scratch is trivial - training a good one is difficult. Fine tuning is trivial - doing a good job is difficult. Hitting a golf ball is trivial - hitting a 300 yard drive down the middle of the fairway is difficult.
We have a lot of more capable open source models now. And my guess is that if you designed models specifically for being fine tuned, they could escape many of the last generation pitfalls.
Companies would love to own their own models instead of renting from a company that seeks to replace them.
One annoying part was switching to new and better models that came out literally every week.
I don’t think it substantially changes anything. If anything, I think the release of more advanced models like qwen-next makes things like fp4, MoE, and reasoning tokens an even higher barrier to entry.
Yes, hundreds of thousands of them
Naturally I reached for CLIP+ViT, which got me a ~60% success rate out of the box. Then I created a tiny training script that read `dataset/{slide,no_slide}` and trained a new head on it. After adding ~100 samples of each, the success rate landed at 95%, which was good enough to call it done and circle back to iterate once I have more data.
I ended up with a 2.2 KB "head_weights.safetensors" that increased the accuracy by ~35 percentage points, which felt really nice.
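The approach above (freeze the encoder, train only a tiny head) can be sketched roughly like this. The encoder is stubbed out with synthetic embeddings here, since this is just an illustration of the head-training step; in the real setup each row would be CLIP's embedding of a slide / no-slide frame, and the head would be saved as safetensors.

```python
import numpy as np

# Sketch: logistic-regression head trained on frozen-encoder embeddings.
# The two clusters below are a synthetic stand-in for CLIP ViT embeddings
# of the "slide" and "no_slide" classes.
rng = np.random.default_rng(0)
dim = 512                    # CLIP ViT-B/32 image embedding size
n = 100                      # ~100 labeled samples per class, as above

X = np.vstack([rng.normal(0.3, 1, (n, dim)),    # class 1: "slide"
               rng.normal(-0.3, 1, (n, dim))])  # class 0: "no_slide"
y = np.array([1] * n + [0] * n)

# Plain gradient descent on the head's weights; the encoder never updates.
w, b = np.zeros(dim), 0.0
for _ in range(200):
    p = 1 / (1 + np.exp(-(X @ w + b)))  # sigmoid
    w -= 0.1 * (X.T @ (p - y) / len(y))
    b -= 0.1 * (p - y).mean()

acc = ((X @ w + b > 0) == y).mean()
print(f"train accuracy: {acc:.2f}")
```

A head this small (one weight vector per class) is why the resulting weights file can be a couple of kilobytes.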
I discuss "LoRA Land", a large-scale empirical study of fine-tuning 7B models to outperform GPT-4, and in the discussion section give some arguments making the case for the return of fine-tuning, i.e. what has changed in the past six months.
For training data, I was thinking you could just put all the stuff into context, then give it some prompts, and see how the responses differ over the baseline context. You could feed that into the fine tuner either as raw prompt and the output from the full-context model, or as like input="refactor {output from base model}", output="{output from full-context model}".
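The second variant of that data scheme could be sketched like this; `base_output` and `full_context_output` stand in for real model calls, and the "refactor" framing is the hypothetical prompt format from the comment above, not an established recipe.

```python
import json

# Sketch: turn (base-model answer, full-context answer) pairs into
# fine-tuning examples that teach the model to rewrite the base model's
# attempt into what the full-context model produced.
def build_pair(base_output, full_context_output):
    return {
        "input": f"refactor {base_output}",
        "output": full_context_output,
    }

pairs = [build_pair("def fetch(): ...",
                    "def fetch(retries=3): ...")]

with open("distill_pairs.jsonl", "w") as f:
    for p in pairs:
        f.write(json.dumps(p) + "\n")
```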
My understanding is that LoRA are composable, so in theory MCPs could be deployed as LoRA adapters. Then toggling on and off would not require any context changes. You just enable or disable the LoRA adapter in the model itself. Seems like this would help with context poisoning too.
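The intuition behind that composability is just linear algebra: each LoRA adapter is a low-rank delta `B @ A` added to a frozen base weight, so enabling or disabling one means including or leaving out its delta. A toy numpy illustration, with made-up shapes and adapter names:

```python
import numpy as np

rng = np.random.default_rng(0)
d, r = 16, 2                     # model dim, LoRA rank (toy sizes)
W = rng.normal(size=(d, d))      # frozen base weight, never modified

def lora_delta():
    A = rng.normal(size=(r, d))
    B = rng.normal(size=(d, r))
    return B @ A                 # rank-r update

# Hypothetical adapters, one per "MCP"-like capability.
adapters = {"mcp_search": lora_delta(), "mcp_files": lora_delta()}

def effective_weight(enabled):
    # Compose whichever adapters are switched on; toggling is just
    # adding or omitting a term, no change to the base weights.
    return sum((adapters[n] for n in enabled), np.zeros_like(W)) + W

x = rng.normal(size=d)
y_base = effective_weight([]) @ x
y_both = effective_weight(["mcp_search", "mcp_files"]) @ x
```

In practice inference stacks keep `W` fixed and apply the low-rank terms on the fly, which is what makes per-request toggling cheap.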
There is growing emphasis on efficiency as more companies adopt and scale with LLMs in their products.
Developers might be fine paying GPT-5-Super-AGI-Thinking-Max prices to use the very best models in Cursor, but (despite what some may think about Silicon Valley), businesses do care about efficiency.
And if you can fine-tune an 8b-parameter Llama model on GPT-5 data in < 48 hours and save $100k/mo, you're going to take that opportunity.
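A back-of-envelope version of that trade-off; every price and volume here is an assumption for illustration, not a real rate card.

```python
# Assumed prices and traffic, for illustration only.
frontier_cost_per_mtok = 10.00   # $/1M tokens on a frontier API
small_cost_per_mtok = 0.30       # $/1M tokens self-hosting a tuned 8B
monthly_mtok = 12_000            # 12B tokens/month of traffic

frontier_bill = monthly_mtok * frontier_cost_per_mtok
small_bill = monthly_mtok * small_cost_per_mtok
saving = frontier_bill - small_bill
print(f"monthly saving: ${saving:,.0f}")
```

Under these assumptions the saving is on the order of $100k/mo, which is the regime where a 48-hour fine-tuning effort pays for itself almost immediately.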
Together with speed and cost, this is, from my point of view, the only "case" for the return of fine-tuning here. And it can be handled by context management.
With growing context sizes, first RAG replaced fine-tuning, and later even RAG was replaced by just good-enough prompt preparation for more and more usage patterns.
Sure, speed and cost are important drivers. But as with FPGAs vs. CPUs or GPUs, the development costs and delivery time for high-performance solutions eliminate the benefit most of the time.
It requires no local GPUs, just creating a JSONL file and posting it to OpenAI
Curious to hear others’ thoughts on this
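That workflow is roughly this: write chat-format examples to a JSONL file and hand it to OpenAI's fine-tuning API. The sketch below only writes the file; the upload and job-creation calls are shown as comments, and the example content is made up.

```python
import json

# Each line is one training example in OpenAI's chat fine-tuning format.
examples = [
    {"messages": [
        {"role": "system", "content": "You classify support tickets."},
        {"role": "user", "content": "My invoice is wrong"},
        {"role": "assistant", "content": "billing"},
    ]},
]

with open("train.jsonl", "w") as f:
    for ex in examples:
        f.write(json.dumps(ex) + "\n")

# Then, roughly (requires the openai package and an API key):
#   client.files.create(file=open("train.jsonl", "rb"), purpose="fine-tune")
#   client.fine_tuning.jobs.create(training_file=file_id, model=...)
```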
I don't think anyone thought fine tuning was dead.
The main claim was that new models were much better than anything you could get your hands on to fine tune.
IMO, intuitively that never made sense. But I never tested it either.
In any case, platforms like tinker.ai support both SFT and RL.