That's a bold claim. I'd expect Akamai, Google and Amazon to be peered at as many locations, maybe even more. Or have they become that big?
Another day another dubious Cloudflare marketing claim.
Participating in the most IXs does not mean you have the most BGP peers. Simply participating in an IX does not mean that every other participating network at that IX automatically agrees to peer with you. Using AMS-IX as an example:
"Every member or customer at AMS-IX is in the position to peer with any or all other connected ISP's although they are not required to"[1]
And this is pretty standard across all IXs.
For instance, Facebook might be far down that chart in terms of the number of IX fabrics they peer at, but if 99% of the participants at those IXs agree to peer with them, then they are likely "more peered" than you.
Trade-offs I see:
* Cloudflare is probably slightly closer to customers, but CloudFront is still very close
* Lambda@Edge should be cheaper given Amazon's scale and ecosystem
* CloudFront itself is very expensive compared to Cloudflare; almost every project can integrate Cloudflare and use this, whereas small projects don't really make sense for CloudFront
I suspect price will have more to do with the tech's underlying ability to scale to lots of (separately-sandboxed) customers at lots of edge locations. Our tech is pretty different from Amazon's so it will be interesting to see how that shakes out.
The issue with game servers is that you probably need to make sure all the players in the same game instance hit the same worker. There won't be any way to do that with workers in v1. But, this is definitely something we've thought about, and as a big gamer myself I would like to see it happen someday. I have some particular ideas for a different kind of worker (not a Service Worker) that serves this use case. But it's probably a year out.
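To make the routing problem concrete, here's a minimal sketch of what "all players in one game instance hit the same worker" would require: a stable hash of the instance ID mapped onto a worker index. This is purely illustrative; Workers v1 exposes no such primitive, and the function names and FNV-1a choice are my own assumptions.

```javascript
// Hypothetical sketch: route every request for the same game instance to
// the same worker, via a stable hash of the instance ID. Not a real
// Cloudflare Workers API -- just illustrates the routing that's missing.

// FNV-1a: a simple, deterministic 32-bit string hash.
function fnv1a(str) {
  let hash = 0x811c9dc5;
  for (let i = 0; i < str.length; i++) {
    hash ^= str.charCodeAt(i);
    hash = Math.imul(hash, 0x01000193) >>> 0; // keep it an unsigned 32-bit value
  }
  return hash;
}

// Pick one of `workerCount` workers for a given game instance.
function workerForInstance(instanceId, workerCount) {
  return fnv1a(instanceId) % workerCount;
}
```

The key property is determinism: every request carrying the same instance ID lands on the same worker, without any coordination between edges.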
Maybe different tariffs based on the percentage of reads that you can serve locally, the average latency or something similar, in order to avoid having to replicate every piece of data to every edge?
As an example, I would assume the worker makes requests with an internal view of the site, but cannot have an internal view of other sites, or security problems would ensue. So what happens when two of my sites have service workers fetching something from each other on each request?
If a request bounces back such that the same worker script would need to run twice as a result of a single original request, then it fails with an error. There's nothing else we can do here: we can't let the request loop, but we also can't let it skip your script after it's bounced through a third-party script.
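One way to picture the bounce-back detection is a marker carried along with subrequests: if a request arrives already bearing the marker for this script, it fails rather than looping. To be clear, the header name and the simplified function shape below are invented for illustration; Cloudflare tracks this internally, not via a user-visible header.

```javascript
// Illustrative only: detecting "same worker would run twice for one
// original request" by tagging subrequests. Header name is hypothetical.
const LOOP_HEADER = 'x-worker-already-ran';

// `headers` is a plain object of request headers (simplified from the
// real fetch-event API).
function handleRequest(headers) {
  if (headers[LOOP_HEADER]) {
    // The request bounced back to the same script: fail with an error
    // instead of looping or silently skipping the script.
    return { status: 409, body: 'worker loop detected' };
  }
  // Forward the subrequest with the marker set, so a bounce back is caught.
  const subrequestHeaders = { ...headers, [LOOP_HEADER]: '1' };
  return { status: 200, forwardedHeaders: subrequestHeaders };
}
```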
Does this hold true even for subrequests to your own zone, where you say here that they go directly to the origin server?
Very interesting, especially since it allows for a Cloudflare ESI system.
Wonder if many will take advantage of it
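An ESI-style worker could be surprisingly small: scan the HTML for include tags and splice in fetched fragments. The sketch below is synchronous with an injected `fetchFragment` callback standing in for the Worker's async `fetch()`; the function names and the exact tag syntax handled are assumptions, not Cloudflare's API.

```javascript
// Rough sketch of ESI-style edge includes: replace each
// <esi:include src="..."/> tag with a fragment supplied by a callback.
// In a real worker, fetchFragment would be an async fetch() to the origin
// or another zone; a sync callback keeps the sketch self-contained.
function expandESI(html, fetchFragment) {
  return html.replace(
    /<esi:include\s+src="([^"]+)"\s*\/?>/g,
    (_, src) => fetchFragment(src)
  );
}
```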