While the general trend today is to back serverless environments with JavaScript runtimes (Cloudflare runs its edge on top of V8, Netlify uses Deno, most other serverless runtimes use Node.js), I'm optimistic that WebAssembly will eventually take over this space, for a few reasons:
1. Running a WASM engine in the cloud means running user code with all the usual security controls, but at a fraction of the overhead of a container or Node.js environment. Even the existing JavaScript runtimes come with WebAssembly execution support out of the box, which means these companies can launch WASM support with minimal infra changes.
2. It unlocks the possibility of running a wide range of languages, so there's no lock-in to the language the serverless provider mandates.
3. Web pages from as far back as the early 90s still render today in the most modern browsers, because the group behind the web standards strives for backward compatibility. WebAssembly's specifications are driven by those same folks, which means WASM is the ultimate format for any form of code to exist in. Basically, a WASM binary is future-proof by default.
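The out-of-the-box support mentioned in point 1 is real: WebAssembly is a global in Node.js, Deno, and V8 isolates, so running a WASM module needs no extra infrastructure. A minimal sketch (the byte array is a hand-assembled module exporting a single `add` function):

```javascript
// A minimal hand-assembled WASM binary exporting add(a, b) -> a + b.
const bytes = new Uint8Array([
  0x00, 0x61, 0x73, 0x6d, 0x01, 0x00, 0x00, 0x00,       // magic "\0asm" + version 1
  0x01, 0x07, 0x01, 0x60, 0x02, 0x7f, 0x7f, 0x01, 0x7f, // type section: (i32, i32) -> i32
  0x03, 0x02, 0x01, 0x00,                               // function section: one func of type 0
  0x07, 0x07, 0x01, 0x03, 0x61, 0x64, 0x64, 0x00, 0x00, // export section: "add" -> func 0
  0x0a, 0x09, 0x01, 0x07, 0x00,                         // code section header
  0x20, 0x00, 0x20, 0x01, 0x6a, 0x0b,                   // local.get 0; local.get 1; i32.add; end
]);

// No dependencies needed: WebAssembly is a built-in global in these runtimes.
const mod = new WebAssembly.Module(bytes);
const instance = new WebAssembly.Instance(mod);
console.log(instance.exports.add(2, 3)); // 5
```

In production you'd load the bytes from a compiled `.wasm` file and use the async `WebAssembly.instantiate`, but the point stands: the engine is already there.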
I've published my (ranty) notes on why Serverless will eventually replace Kubernetes as the dominant software deployment technique, here - https://writer.zohopublic.com/writer/published/nqy9o87cf7aa7...
At Wasmer [1] we have been working actively towards this future. Lately more companies have also been doing awesome work on these fronts: Lunatic, Suborbital, Cosmonic (WasmCloud) and Fermyon (Spin). However, each of us has a different take/vision on how to approach the future of computation at the edge. I'm very excited to see what each approach will bring to the table.
I mean, only in theory or when looking at it from the right angle, right? Or are you only comparing against JavaScript (unclear)? WASM is still much slower than native code. Containers spend most of their time executing native code; the "overhead" of containers is at the boundaries and is minor compared to the slowdown by moving from native code to WASM. In the future WASM may approach native performance, but it's not there now. I'm 100% certain that transitioning my native-code-in-containers workloads to WASM would be slower, not faster.
Anybody know if a common API for serverless components is being worked on?
Relevant: https://xkcd.com/927/
We are opening up early access to our connection pooling features in the next couple of weeks, which will allow FaaS platforms like Netlify, Cloudflare, etc. to create large numbers of ephemeral connections without impacting your origin database, as well as reducing connection latency significantly.
For example, I write a simple single-responsibility piece of code in Go, `add_to_cart.go`, build it, deploy it, and somehow map it to some network request. Dot-slash, pass args, and return the result?
No need to have containers or runtime?
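For illustration, most FaaS platforms today don't take a bare binary; they expect a handler with a platform-defined signature, and the platform does the request mapping. A hedged sketch of that shape in Lambda-style Node (the field names and payload are made up for the example):

```javascript
// Hypothetical Lambda-style handler: the platform parses the incoming network
// request into an `event` object and serializes whatever the handler returns.
const handler = async (event) => {
  const { item, qty } = JSON.parse(event.body || "{}");
  // ...the actual add-to-cart logic would live here...
  return {
    statusCode: 200,
    body: JSON.stringify({ added: item, qty }),
  };
};

exports.handler = handler; // the platform invokes this entry point by name
```

A Go function deployed the same way would export an analogous handler rather than a `./binary args` interface.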
A caveat is that most non-trivial applications need something more than running a function.
You might need secrets management, ephemeral and non ephemeral storage, relational databases, non relational databases, dependency management, AAA capabilities, observability, queues/async, caching, custom domains... That's where said offerings differ.
EDIT: actually most FaaS offerings take code as input, not binaries. I'm not sure if that was the relevant part of your question. If it was, then yeah I don't know of such service.
AWS still needs the container/runtime to stop your code getting access to other things on the same physical computer.
Appreciate any suggestions or feedback.
I love the promise of WASM, but every time I look at it I get lost in a sea of acronyms, and my optimistic ideas of using language X with library Y on runtime Z are dashed because there is some missing piece somewhere.
If anything, the "any language" thing creates a giant matrix of potential pitfalls for the programmer.
In comparison, the combination of JS/TS, the browser API and a solid std lib looks pretty good for some problems.
Go's support is pretty good (with TinyGo offering a tiny runtime more suited to this application). Rust appears to support compiling directly to WebAssembly, and there are some smaller languages like AssemblyScript and Lua with support. I'm guessing plain C works fine. Then there are projects that compile the runtimes of interpreted languages to WebAssembly, so you can theoretically run things like Python.
Nobody is writing applications in C or AssemblyScript, so that leaves Rust or Go. If you're using one of those languages, though, you can just (cross-)compile a binary and copy it to a VM on some cloud provider's free tier, so this isn't really easing any deployment woes. It was already this easy with native code, so WebAssembly isn't adding much here. (The isolation aspect was interesting in the days before Firecracker, but now every computer has hardware virtualization extensions, so you can safely run untrusted native code in a VM at native speeds.)
Anyway, I always wanted WebAssembly for two things: 1) To compile my React apps to a single binary. 2) To use as a plugin system for third-party apps (so I don't have to recompile Nginx to have OpenTracing support, for example). The language support hasn't really enabled either, so I'm a little disappointed. (Disappointed isn't really fair. I've invested no effort in this, and I can't be disappointed that someone didn't make a really complicated thing for me for free. But you know what I mean.)
As far as I can tell from the outside, that's still "WASM-called-by-Javascript", and many of their JS optimizations don't work the same way. E.g. if a Worker calls JS `fetch` and returns that `Response`, they recognize that and remove the JS from the data path; same is not true for WASM at this time.
To be honest, on the server side of things containers are so nice because 99% of the time they include all the dependencies you need to run the app.
How would that work? Don't these tend to facilitate cloud lock-in or at least be cloud-only in the sense that they make it hard to operate your own metal infrastructure?
Cloud Functions is literally code you're running in the cloud. And the moment you approach their limit(ation)s, you will see the same "rising cloud deployment costs"
> Running a WASM engine on the cloud means ... a fraction of the overhead of a container or nodejs environment
You do realise that there are other languages than JavaScript in Node.js? That there are other environments than cloud functions? And that you can skip that overhead entirely by running a different language in a different environment? Or even run Rust in AWS Lambda if you so wish?
> so there’s no lock-in with the language that the Serverless provider mandates.
And at the same time you're advertising a runtime lock-in. This doesn't compute.
> Web pages that are as ancient as the early 90s are perfectly rendered even today... Basically, it means a WASM binary is future proof by default.
It's not future proof.
Web pages from the 90s are not actually rendered perfectly today, because browsers didn't agree on standard rendering until the late 2000s, and many web pages from the 90s and 2000s targeted a specific browser's feature set and rendering quirks. Web pages from the 90s are rendered well enough (and they had few things to render to begin with).
As the web's standards approach runaway asymptotic complexity, their "future-proofness" is also questionable. Chrome broke audio [1], browsers are planning to remove alert/confirm/prompt [2], some specs are deprecated after barely seeing the light of day [3], some specs are just shitty and require backtracking or multiple additional specs on top to fix the most glaring holes, etc.
> I've published my (ranty) notes on why Serverless will eventually replace Kubernetes as the dominant software deployment technique
"Let's replace somewhat unlimited code with severely limited, resource constrained code running in a slow VM in a shared instance" is not a good take.
[1] https://www.usgamer.net/articles/google-chromes-latest-updat...
[2] https://dev.to/richharris/stay-alert-d
[3] https://chromestatus.com/feature/4642138092470272 and https://www.w3.org/TR/html-imports/
Maybe I don't get the idea (and honestly I was too lazy to put in the legwork), but when I hear something like "serverless" I imagine some p2p, federated, decentralized JavaScript beast where the shared state is stored through magic and tricks with the users' clients, and there is literally no server anywhere to be found.
Instead it seems like a buzzword (?) for a weirdly niche way of running things that someone with a 4 Euro/Month nginx instance that hosts 10 websites will probably never understand.
Maybe I also don't need to understand, because I know how to leverage static content, caching, fast Rust reverse proxy services and client-side JavaScript to develop fast web stuff that gets the job done.
This could have been a great story, but then tons and tons of VC money came in, and now you'd have to think of ways to make the valuation worth it and make the product sticky: so now we have edge Deno-powered functions, lambda-esque applications, form-embedded HTML and so many other features used by the long tail of their customer base, while they changed their pricing to charge by Git committers and have had daily short downtimes of 1 to 5 mins for the past month (monitored by external services, as they wouldn't reflect that in their status page).
Soon, they’ll sell the company to some corp like Akamai or similar “enterprise” outfit leaving us high and dry.
There is a lot of money in building businesses that do boring stuff that just makes people's lives easier. But when you take VC money, you need to build a moat to fend off cloud providers from the bottom, capture the value from developers at the top, and everything in between.
Chime in if you’d like to be one of the first few customers. If there’s enough interest here’s how I’d play it:
1. I won’t raise VC money. I know how to build a SaaS business without it—I bootstrapped Poll Everywhere from $0 to $10m+.
2. My motivations these days are to build low-complexity products. Ideally they're "evergreen", meaning I can ship a core feature set that I know will be the same in 10 years. The feature I'm selling there is stability.
3. I like to price things in a way that makes them accessible to as many people as possible while being sustainable for the business so it can operate for a long time with the support it needs for customers.
It fits within their goal of a 'heroku for frontend websites', for easily deploying sites.
No experience from the Netlify of old to compare with, though.
It seems that the free plan is 3M invocations/mo, Starter is 15M/mo, and Business is 150M/mo, but there's no self-serve way to increase those limits (Business says to contact them for higher limits).
Personally I'd prefer true pay-as-you-go without hard limits, even if it's a bit more expensive. To me the point is to sign-up-and-forget-it without having to worry if I'm within those limitations.
Relevant quote from the article outlining the policy changes:
> For sites connected to private Git repositories on Pro and Business teams, Git contributors will need to be team members in order to trigger builds.
> Teams will only be billed for the number of team members. Currently, Git contributors are people who trigger builds on your team’s site(s). Moving forward, in order to trigger builds, Git contributors who aren’t Team Members, such as people in the ‘Contributors via Git’ section, Reviewers, or people not on the team entirely, will need to have their deploy approved by a team Owner.
> Once their deploy is approved, they’ll be invited to become a Team Member and can deploy without approval from then on. If their deploy is rejected, their build won’t run and they will not be added as a team member to your monthly bill.
> This change does not apply to sites linked to public repositories or sites on Starter or Open Source plan teams.
So it sounds like you could limit your costs by limiting your team Owners.
This pricing doesn't seem like a good value proposition to me. I see Netlify as a web host and CDN which has products very comparable to some of Cloudflare's products. In those spaces billing is generally based on usage, not number of seats.
What you get from Netlify doesn't scale with the number of seats you pay for.
If I have 1 member on the Business plan I'll pay $99/mo and get 1.5TB of bandwidth per month. If I have 5 members on the Business plan, I'll pay $495/mo and still only get 1.5TB of bandwidth. Hardly seems fair or reasonable.
[1] https://answers.netlify.com/t/upcoming-changes-to-netlify-pl...
Please, a hard no to that. That's the worst aspect of AWS, Azure and all those new huge hosting centers - it's hard to calculate the real cost and set a budget.
I don't know about Netlify, but the old Linode (before it got acquired) was flexible with its "hard" plan limits - for example, if your site got slashdotted / Digged (or was that dug?) and suddenly saw a spike in resource usage that exceeded the limits, they were quite accommodating in not charging their users for the unexpected extra usage. Linode even wouldn't mind an occasional surge in resources a few times a year. But if it happened more frequently, they would recommend that you upgrade to a more suitable plan. They earned a lot of goodwill that way from clients, who really appreciated that their server / site wasn't unexpectedly taken offline because of a resource crunch they hadn't paid for and / or anticipated.
I would much rather pay overage fees than have my site go down due to a hard limit, but I would also like the option to choose the opposite.
That way you appeal to both sides of the scalability-vs-predictability crowd.
The marginal cost of a request is probably negligible, hence the tens of millions of requests included, but there is a cost associated with each user making use of their platform because it includes a lot more than just compute, and that's the value they're charging for.
I think if you're looking for a compute provider that offers pay as you go billing in order to minimise your costs, then Netlify probably isn't the platform for you, and you'd be better off using their service provider directly (in this case, Deno, but many Netlify alternatives use Lambda, Cloudflare Workers etc.).
This has been one of the big knocks on AWS: a poor little old lady can set up a "free" AWS account, then when her website (and accompanying Lambda function) goes viral she gets hit with a $100k bill from uncle Jeff.
I don't understand this way of thinking. One of the main benefits of serverless is scalability, peace of mind for precisely when you go viral.
If you're doing something good, especially if you're selling something good, all you want is to go viral. And if you went viral, you don't mind paying the AWS costs, which should be tiny compared to your revenue. You just need to care about your unit economics.
Sure, if you can set a max budget. Otherwise, you'd constantly have to worry about the unbounded cost.
It seems the only way to have control over this is to write your own Cloudflare Workers. There must be a better way? I can't imagine this is an infrequent problem for people at scale.
For anything you can serve at build time as static HTML pages, we already strip query parameters from cache keys.
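A minimal sketch of that kind of normalization (hypothetical, not Netlify's actual implementation): all URLs that differ only in their query string collapse to one cache key.

```javascript
// Hypothetical cache-key normalization: drop the query string so that
// /page?utm_source=a and /page?utm_source=b hit the same cache entry.
function cacheKeyFor(rawUrl) {
  const url = new URL(rawUrl);
  url.search = ""; // strip ?utm_source=... style tracking parameters
  return url.toString();
}

console.log(cacheKeyFor("https://example.com/page?utm_source=newsletter"));
// https://example.com/page
```

The trade-off is that pages whose content genuinely depends on query parameters can't be cached this way, which is why it only applies to static build output.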
You're experiencing friction trying to use something in a way that it's supposed to not be used. (I.e., click-tracking by junking up URLs.) You could look for an answer, or you could take a step back, evaluate your expectations, and then decide not to do what you're trying to do.
As more and more front-end frameworks start leaning in on running part of their code at the edge, we felt it was important to champion an open, portable runtime for this layer vs a proprietary runtime tied to a specific platform.
export default async (request: Request, context: Context) => {
return context.rewrite("/something-to-serve-with-a-rewrite");
};
I'm surprised that the function is async but context.rewrite() doesn't use an await. Is that because the rewrite is handed back off to another level of the Netlify stack to process? Using async for functions that do not use await is still a good idea, because thrown errors are converted to rejected promises.
`return await` can be useful because it signals that the value is async, causes the current function to be included in the async stack trace, and completes local try/catch/finally blocks when the promise resolves.
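The try/catch point is the one that changes observable behavior. A small sketch: without `await`, a rejected promise returned from inside a try block escapes the local catch; with `await`, it's caught.

```javascript
// A promise that always rejects, to show the difference.
const fails = () => Promise.reject(new Error("boom"));

async function withoutAwait() {
  try {
    return fails(); // rejection happens after we've left the try block
  } catch (e) {
    return "handled"; // never runs
  }
}

async function withAwait() {
  try {
    return await fails(); // rejection is thrown here, inside the try
  } catch (e) {
    return "handled"; // runs
  }
}

withoutAwait().catch(() => console.log("unhandled")); // "unhandled"
withAwait().then((v) => console.log(v));              // "handled"
```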
The isolate hypervisor at the core of our cloud platform is built on parts of Deno CLI (since it has a modular design), but each isolate isn't an instance of Deno CLI running in some kind of container.
Isolate clouds/hypervisors are less generic (and thus less flexible) than containers, but that specialization allows novel integration and high density/efficiency.
I do not recommend them anymore. We will move somewhere else.
Almost every few days we get a report that some customers can’t access our site from where they are. Our US east engineers can confirm that their POP is down.
Netlify’s status page says everything is working, but in reality it’s not.
Netlify as a CDN has failed for us on its core promise.
How to use them: drop JavaScript or TypeScript functions inside an edge-functions directory in your project.
Use cases: custom authentication, ad personalization, content localization, intercepting and transforming requests, split testing, and more.
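The "intercept and transform" case can be sketched in the same style as the rewrite example above. This is a hedged sketch, not verbatim Netlify API: it assumes a `context.next()` that forwards the request upstream and resolves to the origin `Response`.

```javascript
// Hypothetical edge function: fetch the origin response via context.next(),
// then rewrite its body before it reaches the client (e.g. localization).
const localize = async (request, context) => {
  const response = await context.next();
  const text = await response.text();
  return new Response(text.replaceAll("Hello", "Hei"), {
    status: response.status,
    headers: response.headers,
  });
};
```

Because the function runs at the edge, the transformed body is produced close to the user, but note it still pays the round trip to the origin for the untransformed response.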
> A bunch of server functionality
Why is that the use case? I don't see how an edge function can be faster than a centralized server endpoint if it has to reach out to literally any other component of the system involved in auth / persistence
You can pre-parse and pre-process JSON responses to minimize the payload size and customize it for your frontend needs. It makes dealing with client secrets and configuration easier too, I believe. I didn't want to rewrite a bunch of backend code, so this was one of the simplest solutions.
Netlify Edge Functions are still in beta and don't have all of the same optimizations yet, but we're going to be working with Netlify over the next few months to enable these optimizations to Netlify Edge Functions too.
CF always seems so cheap compared to alternatives, if you ever expect to scale beyond the developer plans.
They could host their own infra at large enough scale when that makes sense, the same way AWS decided after many years to make its own chips (Graviton), but that is not their core identity, just like AWS is not a chip manufacturer.