Beyond that, they do a lot of things with the web while having very little moat as a company. By that I mean they're involved in a lot of front-end libraries, articles, projects, etc., much of which they incorporate into their platform. That's why everyone praises them for their ease of use, but anybody should know there's a reckoning that comes after the honeymoon period, when all of this has to be maintained. And that maintenance gets very expensive.
Which would be fine if they were Google with a really wide moat but they're not. They're a thin layer above the big three cloud providers. It's too easy for a dev to just pack up and move to AWS where they're not paying for the overhead once the project becomes serious. It also doesn't help that they're not seen as a serious solution because of things like their poor customer service.
The big cloud providers' “solutions” around this all fall short, introducing mindless complexity to upsell you more services, often with half-baked integration.
You almost never need a serverless runtime when you're starting out. If you're building a SaaS, then your need to scale will be proportional to your revenue and you can easily afford to vertically and horizontally scale within your VPS provider using a small fraction of your subscription revenue. By the time you need to go serverless, you can afford to pay someone else to do it.
they have effectively acquihired React (Zuck's mistake). no moat?
(not to mention some genuinely great cloud DX and other platform features eg with their Edge Streaming)
I honestly don't even know what you're saying. Is it that Vercel hiring the React folks makes Vercel untouchable somehow?
I see a complete disconnect between the two things. If anything, the "acquihire" could well lead to those React folks doing a job search because Vercel doesn't have a high switching-away-from cost and keeps dropping the ball PR-wise.
How is that a moat? They could completely own React and it still wouldn't prevent me deploying my Next.js project anywhere but Vercel.
> (not to mention some genuinely great cloud DX and other platform features eg with their Edge Streaming)
Vercel didn't invent CDNs.
The Image/img fiasco really pulled the covers off Vercel for me. I have migrated all my work off the platform.
NextJS’ lint strategically dissuades you from using the img tag in favor of the NextJS Image component. If you make the mistake of heeding this advice and migrating to it, you can’t use static site generation—which means you are stuck using their hosting.
Here’s one of the most PR-shameful threads I’ve ever read in OSS:
* yes, not 100% silent, there is a small gray indicator from GitHub that it has been edited, but one would not expect a full-on replace with that.
Deceit is indeed the right word to describe Vercel's behavior.
Has this changed?
You could also modify the linting rules to exclude the Image rule.
Not saying vercel is awesome. I no longer use their products. I just didn’t find this particular matter to be problematic.
My advice is to just turn off the linting error by adding this rule to ".eslintrc.json", like this:

  {
    "rules": {
      // Other rules
      "@next/next/no-img-element": "off"
    }
  }

There should be a warning that the Image component won't work with SSG and is only for SSR, but there are also many features in Next.js that are SSR-only.
I set `cache-control: public,max-age=2592000,immutable` on my SPAs' assets as they're hashed and should be immutable.
But Netlify somehow doesn't atomically swap in a new version: say my index.html previously referenced /assets/index.12345678.js and is updated to reference /assets/index.87654321.js instead. There's a split second where Netlify can serve the new index.html referencing /assets/index.87654321.js while /assets/index.87654321.js itself returns 404! So users accessing the site in that split second may get a broken site. Worse still, the assets directory is covered by the _headers rule adding the immutable, 30-day cache-control header, and Netlify will even add it to the 404 response... The result is that the user's page is broken indefinitely until I push out a new build (possibly bricking another set of users) or they clear their cache, which isn't something the average Joe should be expected to do.
I ended up having to write a generator program to manually expand /assets/* in _headers to one rule for each file after every build. And users still get a broken page from time to time, but at least they can refresh to fix it. It really sucks.
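The generator described above might be sketched along these lines: a minimal post-build step, assuming a Vite-style dist/ layout with hashed asset filenames and Netlify's _headers format. All paths and file names below are illustrative, not from the original post.

```typescript
import { mkdirSync, readdirSync, writeFileSync } from "node:fs";
import { join } from "node:path";
import { tmpdir } from "node:os";

// Build a tiny fake dist/ to demonstrate (hypothetical file name)
const dist = join(tmpdir(), "headers-demo");
mkdirSync(join(dist, "assets"), { recursive: true });
writeFileSync(join(dist, "assets", "index.12345678.js"), "");

// Emit one explicit rule per hashed asset instead of an /assets/* wildcard,
// so a 404 response can never inherit the immutable cache header.
function generateHeaders(assetsDir: string): string {
  return readdirSync(assetsDir)
    .map((f) => `/assets/${f}\n  Cache-Control: public,max-age=2592000,immutable`)
    .join("\n");
}

const headersFile = generateHeaders(join(dist, "assets"));
writeFileSync(join(dist, "_headers"), headersFile);
```

Run after every build so the rule list always matches exactly the files that exist in the deploy.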
https://kevincox.ca/2021/08/24/atomic-deploys/
In this case it is possible that Netlify could have avoided the issue where the new HTML loads the old asset, but that would just make the other problem worse: the old HTML gets a 404 because the new asset has already been swapped in.
At the end of the day, hashed assets are a great idea, but you need to keep multiple versions around.
Regardless, I say the solution is fat index files. Is there any tangible benefit to the long held tradition of separating the structure from the functionality from the styling? Seems to me like that’s just asking for trouble.
In addition, fat index files are really bad for multi-page apps.
If I'm deploying to my server, the structure would look like:
/srv/example.com/prod -> /srv/example.com/versions/1
/srv/example.com/versions/1/index.html
/srv/example.com/versions/1/assets/index.12345678.js
/srv/example.com/versions/2/index.html
/srv/example.com/versions/2/assets/index.87654321.js
A new version is atomically swapped in by changing the prod link from versions/1 to versions/2. If you request index.html and get the updated version, there's no scenario where assets/index.87654321.js could 404. Serving an updated index.html but a 404 for a later request for assets/index.87654321.js is not reasonable. Of course, distributed systems are harder, but that's their problem to solve.

Note that with a naive web server and the layout above, one could get an old index.html but no assets/index.12345678.js by the time the .js file is requested, but that's less problematic and could be covered by some lingering cache. Or I could simply include the last build's assets in the new version, as there's no conflict potential.
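A minimal sketch of that swap, assuming a POSIX filesystem (where rename(2) atomically replaces the destination), with Node's fs standing in for a deploy script. Paths are illustrative:

```typescript
import { mkdirSync, symlinkSync, renameSync, readlinkSync } from "node:fs";
import { join } from "node:path";
import { tmpdir } from "node:os";

// Set up a demo layout like the one described above
const root = join(tmpdir(), `deploy-demo-${Date.now()}`);
mkdirSync(join(root, "versions", "1"), { recursive: true });
mkdirSync(join(root, "versions", "2"), { recursive: true });
symlinkSync(join(root, "versions", "1"), join(root, "prod"));

// Point a temporary symlink at the new version, then rename() it over the
// prod link. rename() is atomic, so any request resolves either entirely
// against versions/1 or entirely against versions/2, never a mix.
function deploy(root: string, version: string): void {
  const tmp = join(root, "prod.tmp");
  symlinkSync(join(root, "versions", version), tmp);
  renameSync(tmp, join(root, "prod"));
}

deploy(root, "2");
```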
You recently broke Next.js with AsyncLocalStorage by putting it in globalThis, breaking any runtime that's not Vercel's. There was no specification, and other runtimes scrambled to reverse-engineer your implementation: https://twitter.com/ascorbic/status/1616811724224471043 and https://twitter.com/lcasdev/status/1616826809328304129
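For context, the pattern being criticized looks roughly like this (a sketch of the general technique, not Vercel's actual code): a Node-specific API is exposed on globalThis, and framework code then assumes the global exists, so any non-Node runtime has to shim it to stay compatible.

```typescript
import { AsyncLocalStorage } from "node:async_hooks";

// Expose a Node-only API as an ambient global (the contested pattern)
(globalThis as any).AsyncLocalStorage ??= AsyncLocalStorage;

// Framework code elsewhere can then blindly rely on the global:
const storage = new (globalThis as any).AsyncLocalStorage();
const result = storage.run({ user: "alice" }, () => storage.getStore().user);
```

Any runtime without node:async_hooks now has to reverse-engineer and provide a compatible global, which is exactly the interoperability complaint above.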
Decades of working in this industry have taught me to value interoperability, modularity and standardization. I love the idea of a framework that makes SSR easy, and the idea of static site generation, and a router that uses the pages approach where each file is a route is handy in some cases (though very clumsy in others), but all of those are different things and aren't really something I need a single monolithic framework for. I may want one or some or none of those things based on what project I'm working on. If NextJS makes it harder to pick and choose what I want to use, or makes some features contingent on using other features I don't really care about, I'm going to start looking really hard for an option that gives me more choices.
Recently (I would say, since the latest investment round), as a developer I see lots of new features that seem rushed, incomplete, or implemented with no thought for the community as a whole.
- API Middlewares are not working as expected [1]
- The new pages layout (i.e. app/) is super weird, implemented in a completely different way from pages/, with 2 incompatible API sets: one for pages/ (which I presume will soon be declared legacy and unmaintained) and a shiny new one that's still experimental.
- The Images API, just as others pointed out, benefits some tiny subset of developers who have a bunch of static images locally. For most projects it is not useful at all in its current implementation.
- Your release versioning does not make sense for a mature, stable product. When you release a new major version (e.g. v13), you stop supporting the older v12.
- Could you give balazsorban44 a bigger team? Next-Auth needs more love to be a great product.
Simply “here’s what we have in mind for the next release”. An open framework should not be developed the way you folks do it. Look at how Python or Django handle their releases for better examples.
You have RFCs, but most of them are internal and don’t reach the public space. Why?
AppDir is a good example. Yeah you have an extremely high level table of what’s planned vs in progress but there’s nowhere public where these discussions are being held. We’re not able to see nor contribute to the decisions. We’re not able to chime in when bad decisions are obviously being taken. Some companies view this as an asset because they want the power to silently take decisions that are good for them but bad for their users.
When a major version bump with a host of breaking changes drops, we'd prefer to be able to stay on the superseded version and still receive bug fixes for at least some time, with expectations set for what that timeline looks like.
The last releases of v11 (v11.1.1~v11.1.4) lack release information/changelogs, and CI still appears to be failing for Mac and Windows on v11.1.4 without this being acknowledged.
The final release of v12 was 1 month after v13.0.0. The last few releases of v12 similarly lack release info or changelogs. You could say "just look at the git history", but the Next.js git log really doesn't lend itself well to that (unless you're already a dev on the Next.js codebase or used to code auditing, I guess).
For "foundational" software (that's what you aim for it to be, right?) like Next.js, users should be able to have expectations of versioning, releasing, and documentation similar to those Vercel relies on for Node.js.
Younger devs follow what the leaders in the ecosystem do, which is how trends and norms rise and change. If Vercel changed its approach here it could contribute to setting a good example instead of showing that maintenance as an afterthought is nbd bro, why don't you update already!
We go through a cycle of: new feature -> patch fix -> minor fix -> improvement -> finally works -> replaced by something else OR it broke again.
For something like NextJs that has been running for years maybe more focus on polishing the existing features would be a good start.
Open:
- Choose any UI framework you want (React/Vue/Solid/...)
- A lot more flexible than Next.js (e.g. i18n and base assets configuration are fundamentally more flexible)
- Keep architectural control (vite-plugin-ssr is more like a library and doesn't put itself in the middle of your stack)
- Use your favorite tools. And manually integrate them with vite-plugin-ssr (for full control without surprises).
- Deploy anywhere (and easily integrate with your existing server/deploy strategy)
- Open roadmap
- Ecosystem friendly
The upcoming "V1 Design" has been meticulously designed to be simple yet powerful. Once nested layouts, single route files, and typesafe links are implemented vite-plugin-ssr will be pretty much feature complete.
Note that, with vite-plugin-ssr, you implement your own renderer, which may or may not be something you want/need/like to do. Built-in renderers are coming and you’ll then get a zero-config DX like Next.js (minus extras like image processing as we believe they should be separate libraries).
Web dev isn't a zero-sum game - a vibrant and healthy ecosystem of competing tools can co-exist. (I'm close to being able to make a living from sponsors.)
The vision is to make a truly open and collaborative foundation for meta frameworks.
Let me know if you have any questions.
https://github.com/brillout/vite-plugin-ssr/blob/main/LICENS...
I hit a similar issue when using Cloudflare and the Date header, where I was signing some parts of the response including the Date header. The problem was that if the request hit Cloudflare at just the right^W wrong time, the signature would be invalidated because their Date header value would be different than the original.
They didn’t see it as an issue, even though IIRC the HTTP spec states that a proxy server must not overwrite the Date header if it was set by a prior actor.
Took days of debugging to determine why some requests were producing invalid signatures.
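A sketch of why this class of bug is so confusing, assuming a hypothetical HMAC scheme that covers the Date header (not the actual scheme from the comment): if a proxy rewrites Date in flight, the signature computed at the origin no longer verifies.

```typescript
import { createHmac } from "node:crypto";

// Hypothetical signing scheme: the signature covers the Date header + body,
// so any intermediary that rewrites Date invalidates it.
function sign(body: string, date: string, key: string): string {
  return createHmac("sha256", key).update(date + "\n" + body).digest("hex");
}

const key = "secret";
const sig = sign("payload", "Tue, 01 Jan 2030 00:00:00 GMT", key);

// A proxy shifts the Date header by one second; verification now fails,
// even though the body is untouched.
const stillValid = sig === sign("payload", "Tue, 01 Jan 2030 00:00:01 GMT", key);
```

Intermittent one-second skews at the proxy boundary are exactly what makes this "fails for some requests only" behavior so hard to debug.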
I'm very much starting to distrust these huge companies with infinite product/feature lists and generic marketing-lingo websites.
"Vercel is the platform for frontend developers, providing the speed and reliability innovators need to create at the moment of inspiration."
Seriously?
I want serverless providers that tell me the 4-5 products they offer (Compute, maybe a KV store, maybe a database, maybe some pubsub, maybe a queue?), give me the pricing, and leave me the Hell alone.
I don't want to feel locked into a system promising end-to-end whatever, ones that heavily push a certain framework, and most importantly ones that look like the homepage was designed by a team of sales people instead of a team of engineers.
It's the difference between the Cloudflare Workers website and the Vercel website: Vercel looks like the new-age big-brother con artist, while Workers looks like a utility.
Sorry, what were we talking about? A runaway bill?
Vercel strips them because (1) at the time this RFC didn't exist and (2) most of the time, we found, customers don't want to cache on the browser side or in proxying CDNs, which makes purging and reasoning about cache staleness very difficult.
Another example there is the default `cache-control: public, max-age=0, must-revalidate`. Without that, browsers have very unintuitive caching behavior for dynamic pages.
Customers want to deploy and see their changes instantly. They want to tell their customers "go to our blog to see the news" and not have to second-guess or fear that the latest content won't be there.
I appreciate Max's feedback and we'll continue to improve the platform.
I've seen a lot of customers get burned by sending `max-age` as a way of getting their CDN to cache, not realizing they're inadvertently caching on users' machines. Sometimes it's a seemingly harmless "5 minutes", but that can be damaging enough for rapidly changing pages (imagine breaking news on a homepage).
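The distinction at issue: `max-age` applies to browsers and shared caches alike, while `s-maxage` targets shared caches (CDNs) only. A minimal illustration with the standard Headers API; the directive values are made up:

```typescript
// To cache at the CDN without caching in users' browsers, combine a
// shared-cache TTL (s-maxage) with browser revalidation (max-age=0):
const headers = new Headers({
  "cache-control": "public, s-maxage=300, max-age=0, must-revalidate",
});

// A CDN honoring s-maxage caches for 300s; browsers revalidate every request,
// so a purge at the CDN takes effect for all users immediately.
const directives = headers
  .get("cache-control")!
  .split(",")
  .map((d) => d.trim());
```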
Regarding Vercel, they do have quite poor support so it doesn't feel rock solid and dependable. They are a great start though, but then ideally you should just switch to bare metal on Hetzner or something when you are earning serious money from your business.
They have exercised this discretion repeatedly for significant bandwidth users, usually in the form of "You need to upgrade to the enterprise plan or we will terminate services for your site." One of my sites got the enterprise "offer" after serving single digit TB in a month. Running on a real CDN from the start would have been cheaper than inevitably getting extorted to a large <contact sales> price for <contract term> or terminated with little to no time to migrate things.
[0] Section 2.6 https://www.cloudflare.com/terms/ [1] Section 2.8 https://www.cloudflare.com/terms/
My issue with using Vercel for "real workloads" is the pricing though. 100GB of Bandwidth for $40 is a blocker.
I love how easy the experience is to throw up an app and test it in the real world, the dashboard is great, build times are excellent, but I can't see myself paying that high of a premium.
https://twitter.com/ms_nieder/status/1626995266619420675?s=4...
Is Vercel a business or a scam masquerading as a tech company?
If a company needs to stoop to this level of billing shenanigans to make money, I have my doubts...
Many stories are emerging where it’s clear that trusting Vercel is a risky strategy.
Which is a shame as it makes it harder to learn things like AWS and Google Cloud (or for that matter, Vercel) in my spare time, and perhaps they might even work out cheaper for low-traffic hobby projects, but ultimately the risk is too great.
The lack of other voices on the internet with the same issues led me to believe that I was going insane. Now I'm starting to think going back to a good old VPS might not be such a bad idea.
We need to talk about how we do software architecture and technology choices today. In the time you played around with CDN stuff, you could easily have built a company listing, fixed your footer and header links, thought about pricing, built an apply form, fixed/built job notifications, ...
Maybe the author intentionally set things up so they can have a low-stakes system to learn about these technologies and improve their knowledge. This post is about Vercel and their poor practices. Let's stay on topic and actually ask the author why they chose such a complex setup, instead of assuming you know and speaking off the cuff with criticism.
I'd go a step further and say the question of whether the author may or may not be overengineering their own app is beside the point.
I'm not saying asking out of curiosity would be a problem, but it shouldn't be construed as at all relevant to the very valid points OP is making regarding Vercel's service
Granted, CloudFront isn't terribly hard to use. It's nice to have all resources in one place; however, it's probably worth sticking to the more mature products for things like content delivery.
You probably don’t even need a CDN at all.
I very very much believe in some DIY & doing things ourselves but I also recognize a ton of value in using CDNs. Glad both are options. DIY is hard.
I'm really sorry we weren't able to get to a resolution faster. I've concluded it's not an issue with the Vercel Edge Network based on the reproduction he provided and pushed documentation and example updates (see below). I know Max spent a lot of time on this with back and forth, so I just want to say again how much I appreciate the feedback and helpfulness by providing a reproduction.
Here are the full details from the investigation:
- Beginning Context: Vercel supports Astro applications (including support for static pages, server-rendered pages, and caching the results of those pages). Further, it supports caching the responses of "API Endpoints" with Astro. It does this by using an adapter[1] that transforms the output of Astro into the Vercel Build Output API[2]. Basically, Astro apps should "just work" when you deploy.
- The article states that to update SWR `cache-control` headers you need to use the `headers` property of `vercel.json`[3]. This is for changing the headers of static assets, not Vercel Function responses (Serverless or Edge Functions). Instead, you would want to set the headers on the response itself. This code depends on the framework. For Astro, it's `Astro.response.headers.set()`[4]. This correctly sets the response SWR headers.
- Vercel's Edge Network does respect `stale-while-revalidate`, which you can validate here[5] on the example I created based on this investigation. This example is with `s-maxage=10, stale-while-revalidate`. Vercel's Edge strips `s-maxage` and `stale-while-revalidate` from the response. To understand if it's a cache HIT/MISS/STALE, you need to look at `x-vercel-cache`. I appreciate Max's feedback here the docs could be better—I've updated the Vercel docs now to make this more clear[6].
- I've started a conversation with the Astro team to see how we can better document and educate on this behavior. In the meantime, I updated the official Vercel + Astro example to demonstrate using SWR caching headers based on this feedback[7].
- The reproduction provided by Max[8] does not show the reported issue. I was not able to reproduce, which is the same result that our support team saw. It sounds like there were some opportunities for better communication here from our team and I apologize for that. I will chat with them. Free customer or not, I want to help folks have the best experience possible on Vercel. If Max (or anyone else) can reproduce this, I am happy to continue investigating.
[1]: https://docs.astro.build/en/guides/integrations-guide/vercel...
[2]: https://vercel.com/docs/build-output-api/v3
[3]: https://vercel.com/docs/concepts/projects/project-configurat...
[4]: https://docs.astro.build/en/reference/api-reference/#astrore...
[5]: https://astro.vercel.app/ssr-with-swr-caching
[6]: https://vercel.com/docs/concepts/edge-network/caching#server
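The header-setting step from the second bullet can be sketched with the standard Web Response API standing in for the framework call (`Astro.response.headers.set()` is the framework-specific equivalent; this standalone version is only illustrative):

```typescript
// Set SWR caching headers on a server-rendered response rather than in
// vercel.json, which only affects static assets. Directive values match
// the example from the investigation above.
const res = new Response("<h1>hello</h1>", {
  headers: { "content-type": "text/html" },
});
res.headers.set("cache-control", "s-maxage=10, stale-while-revalidate");
```

To see whether the edge actually cached it, check the `x-vercel-cache` response header (HIT/MISS/STALE) rather than the stripped `cache-control` value.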
I was surprised and I have to admit a bit dismayed to watch you throw yourself into the fray on a Sunday. The technical issues, which in fact persist, are at this point an aside to the way Vercel has handled this issue.
Vercel’s definitely in a weird place, trying to be the home for innovation while also offering more traditional support. While the support experience you had was less than ideal, you’re also failing to recognize that you are a bit on the bleeding edge here.
Your reply here makes it really hard to take any of what you’ve done in good faith. Lee has been incredible to work with and I commend his efforts here
Then why does [3] say "This example configures custom response headers for static files, Serverless Functions, and a wildcard that matches all routes." if it isn't for changing the headers on Serverless Functions? (EDIT: my bet would've been that it's adding the headers outgoing from the CDN, not from the function, but your claim above contradicts that too)
Most of the time folks using Vercel aren't actually using these Functions manually, but instead having framework-defined infrastructure[3] that generates the functions based on their framework-native code (e.g. `getServerSideProps` in Next.js)
[1]: https://vercel.com/docs/concepts/functions/serverless-functi...
[2]: https://vercel.com/docs/concepts/functions/edge-functions/ed...
[3]: https://vercel.com/blog/framework-defined-infrastructure
The first time I encountered it must have been 3 years ago. I have a feeling vercel doesn’t care.
Vercel and any other company in their space follow the same old playbook:
They play open source to attract users and build nice stuff developers like (not necessarily what they need) to win market share and developers' minds and hearts.
When they are above the competition, thanks to the free contributions of the community, they reveal their true nature and start playing greedy.
Developers get upset and start ranting on HN.
How many times do I need to see developers playing out this movie? It's the same shit over and over and over again.
https://twitter.com/eigenseries/status/1645515739280064512?s...
I know Cloudflare doesn't. I thought Vercel did but apparently not.
Yeah, there is no cap on spend (which cloud services do this? None, afaik), but if you're really worried about getting DDoSed, then put Cloudflare in front of Vercel.