This blog post is a nice example: 30 requests (uBlock Origin blocked another 12; with those enabled, the time to load increases to a whopping 28 seconds), 2.5 MB transferred, 7 seconds load time. And all that for a 4 KB payload plus some images.
The number of connections to one host isn't relevant in HTTP/2. The ~20 connections uBlock is blocking go to different hosts, and connecting to a different host in HTTP/2 is no different than in HTTP/1.1.
If your HTTP/2 is terminating at MANY boxes within your infrastructure, you are failing to understand how HTTP/2 works. Streams within a single TLS/TCP/IP connection are free; new TLS/TCP/IP connections cost exactly as much as before.
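A rough sketch of that cost model (the round-trip time is assumed purely for illustration; TLS 1.2 without session resumption):

```shell
# Assumed: 100 ms round trip to the host.
RTT=100
NEW_STREAM_RTTS=0          # a new stream on an already-open HTTP/2 connection costs no extra round trips
NEW_CONN_RTTS=$((1 + 2))   # a new host: TCP handshake (1 RTT) + full TLS 1.2 handshake (2 RTTs)
echo "new h2 stream:      $((NEW_STREAM_RTTS * RTT)) ms extra"
echo "new TLS connection: $((NEW_CONN_RTTS * RTT)) ms extra"
```

So consolidating assets onto one host is still worth something; consolidating them onto fewer connections to the *same* host is what HTTP/2 already gives you for free.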
I currently work as a dev at an SEO agency (in Austria), and we never believed this hypothesis - so we tested it once with a bunch of our sites:
When moving sites with a German-speaking audience to a VPS in America, your rankings on google.de/google.at will decrease (slightly - the effect is not that big); moving the other way around, your rankings will improve (slightly).
However - even if your rankings would improve by moving to America, I would recommend keeping your sites hosted in Europe: the increase in rankings will not offset the decrease in user satisfaction, and therefore the decline in your conversion rates.
A bit off-topic, but out of curiosity, have you run any other interesting experiments like this? I would love to read a blog post about them.
And here is the article on crawl budget: https://webmasters.googleblog.com/2017/01/what-crawl-budget-...
Although it doesn't help for all types of requests, it has its uses.
I would also assume that Google is smart enough to take the physical location of your server into account when calculating how much penalty to apply in which searches. Sites that load fast in Germany should have higher ranks in searches from Germany.
Is this the initial handshake which understandably introduces latency?
After that, times should be similar. What could be killing users far away is requiring multiple handshakes, because multiple things that each require a handshake are being introduced at the same time.
For reference: I'm physically located in China, so requests have to go through a bunch of filtering-oriented routers. I get 150-180ms from the US, 200ms from Japan, 180ms from Singapore (yay geography), and around 200-250ms from Europe - these are SSL requests, and not from a connection hub like Shanghai or Shenzhen close to the domestic exit points. Double to triple these times for the first handshake.
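That "double to triple" figure is roughly what the handshake arithmetic predicts. A sketch using the Europe number above (TLS 1.2 and no session resumption assumed):

```shell
RTT=200                 # ms round trip to Europe, as measured above
COLD=$((4 * RTT))       # TCP (1 RTT) + full TLS 1.2 handshake (2 RTTs) + the request itself (1 RTT)
WARM=$((1 * RTT))       # an established connection pays only the request's round trip
echo "first request: ${COLD} ms, warm connection: ${WARM} ms"
```

With TLS 1.3 or session resumption the handshake drops to one round trip, which is where the lower end of "double to triple" comes from.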
From a VPS in Sydney, with a Good Enough bandwidth:
root@sydney:~# speedtest-cli 2>&1 | grep -e Download: -e Upload:
Download: 721.20 Mbits/s
Upload: 117.89 Mbits/s
… doing the request through Railgun is "quite bearable":
root@sydney:~# ./rg-diag -json https://www.theregister.co.uk/ | grep -e elapsed_time -e cloudflare_time -e origin_response_time
"elapsed_time": "0.539365s",
"origin_response_time": "0.045138s",
"cloudflare_time": "0.494227s",
Despite our "origin" server being quick enough, the main chunk of time is really "bytes having to travel half the world". Why does Railgun help? Because this is what a user would get otherwise; the "whitepapers" site is hosted in the UK and doesn't use Cloudflare or Railgun – it only uses Cloudflare for DNS:
./rg-diag -json http://whitepapers.theregister.co.uk/ | grep elapsed_time
"elapsed_time": "0.706277s",
… so that's ~200ms more, and _on http_. How much would https add, if it were done without Cloudflare's https and Railgun? That's easy to check, as the whitepapers site has TLS (although admittedly not http/2):
root@sydney:~# ./rg-diag -json https://whitepapers.theregister.co.uk/ | grep elapsed_time
"elapsed_time": "1.559860s",
that's quite a huge chunk of time that Cloudflare HTTPS + Railgun just shaves off for us. Recommend it highly!

That would be interesting.
root@sydney:~# ./rg-diag -json https://thereglabs.com/ | grep -e elapsed_time
"elapsed_time": "0.863677s",
So that's from Sydney to the UK, with https served by Cloudflare. The webapp serving that isn't the sharpest knife in the drawer, but when tested on localhost it replies in 0.015s – the rest is time taken moving bytes across the world.
root@sydney:~# time curl -sH 'Host: thereglabs.com' -H 'Cf-Visitor: {"scheme":"https"}' http://THE_ORIGIN_SERVER/ -o/dev/null
real 0m0.821s
… and this is plain HTTP to the origin server: the free plan is great for offloading HTTPS at basically no cost in time added.

We've got another domain on the business plan… so let's try that one.
This is an _image_ request, which is _cached by cloudflare at the edge_:
root@sydney:~# ./rg-diag -json https://regmedia.co.uk/2016/11/09/hypnotist_magician_smaller.jpg | grep elapsed_time
"elapsed_time": "0.239641s",
Lovely, the "local caching" of their CDN helps a ton!

… compared to if we were to request the same file from the ORIGIN_SERVER over HTTP:
root@sydney:~# ./rg-diag -json http://ORIGIN_SERVER/2016/11/09/hypnotist_magician_smaller.jpg | grep elapsed_time
"elapsed_time": "0.704458s",
… but our "origin server" is _also_ likely to have the image in its "memory cache"… and that image was likely in their cache too; so… let's add a parameter so they _will_ have to ask the origin server:
$ pwgen 30 2
Eehacoh2phoo1Ooyengu6ohReWic2I Zeeyoe8ohpeeghie3doyeegoowiCei
There you go… two new randomly generated values…
root@sydney:~# ./rg-diag -json 'https://regmedia.co.uk/2016/11/09/hypnotist_magician_smaller.jpg?Eehacoh2phoo1Ooyengu6ohReWic2I=Zeeyoe8ohpeeghie3doyeegoowiCei' | grep elapsed_time
"elapsed_time": "1.198940s",
Yup, took quite a bit longer than the 200ms it took when the image URL was fully in their cache.

All in all, from the point of view of being able to _easily_ serve people on the other side of the world with a "good enough" (not great, mind you!) response time, "standard" Cloudflare, the "pro" offering, _and specifically_ the "business" offering are just effin AWESOME.
It's a bit older, but here's some info, much of it is still valid: https://istlsfastyet.com/
Without OCSP stapling, the browser makes a slow request to the CA, but it caches the result for a long time, so the slow request doesn't happen often.
With OCSP stapling enabled, more data is transferred between client and server on each TLS handshake.
The main proponents of OCSP stapling are the CAs, because it saves them bandwidth/hardware.
I'm not certain how session resumption plays into this either. If OCSP is skipped for resumed sessions as well (which would be my guess), you'd probably not take that small bandwidth hit all that often.
As an aside, OCSP stapling improves your users' privacy quite a bit as well, by not giving your CA a list of all the IP addresses connecting to a domain.
It's probably controversial, but I'd love to see a yellow security icon in browsers when sites are using well known https relays that can see plaintext (or are doing other obviously bad things, like running software with known zero day exploits, etc)
Most websites are on virtual servers (hardware in general) that are not owned by them. For example, Amazon could easily let the NSA look into your AWS server directly. IMO, the URL lock should just be an encryption auditor: the end website is using acceptable algorithms and has a currently valid certificate? That's good enough.
Almost any HTTPS site can be forged/"broken" (unless it's using preloaded HPKP) if the attacker has root certificates (or even just a bug in a CA website), which the NSA certainly does.
Nation-state adversaries just aren't really within the typical TLS threat model. I do concede that it makes agencies' jobs much harder if used correctly, however.
CloudFlare's "Flexible SSL" offering means a CloudFlare "https://" site is quite likely to not even have that level of security though. They send supposedly HTTPS data unencrypted and unauthenticated across the open Internet; if that doesn't warrant a yellow/red icon then I don't know what does.
Hm. Good idea - why not go a step further and turn the 'no server signatures' advice on its head: full disclosure, server signatures on; in fact, list each and every component in the stack so that end users can (through some plug-in) determine whether or not a site is safe to use.
Of course nothing bad could ever come from that. /s
I'm all for making the use of, for instance, Cloudflare less transparent so that users know who they are really talking to, but I'm confused about how you'd want to establish what a site is running without giving a potential attacker a lot of valuable information.
FWIW, my personal website uses let's encrypt, so it would be yellow or worse.
Anyway, I like the idea of tying the security color in the url bar to an attacker model, since it at least gets people to think about attack models.
Does anyone remember a few years ago when Google found out through leaks that the govt was wiretapping its private traffic between datacentres?
What makes you so naive as to think the govt isn't sniffing every single page on Cloudflare?
The risk here is real, but it's much more pervasive than one data handler.
If your risk profile is outside the boundaries of normal internet use then you likely already know what to do - and we now have a multitude of tools for more private communications.
This analysis seems flawed. If you care about mass surveillance, you want their top-tier security and legal teams working for you.
In an ideal world you'd want Youtube's (Google's) legal teams working for you, protecting you against DMCA abuses and alike... but they don't. Not unless you're a top-tier YouTuber and even then it's laughable dice roll as to whether they feel the bad press is worth their time to do anything.
And I don't believe from a bottom-line perspective the shareholders believe it's worth their time to do anything more than provide a platform (no matter how problematic) and market it.
Will Cloudflare do everything they can to keep your content accessible? Sure. Anything above and beyond that? lol. Good luck with that...
The nice part about Cloudflare, though, is that they can use anycast to determine location and then send the closest server IPs. For sub-$200/mo you're not able to do that; you'd have to find a provider that could do it for you, and I'm not sure anyone offers country-based anycast DNS alone.
EDIT: Looks like easyDNS enterprise may be able to do it, https://fusion.easydns.com/Knowledgebase/Article/View/214/7/... for about $12.75/mo too. Might be a decent way to brew your own mini caching CDN for fairly cheap.
Anycast doesn't determine location or send the closest IPs; it's all the same IP address, announced using BGP (Border Gateway Protocol) so that traffic automatically routes to the closest (in network terms) server.
Among other reasons: not encrypting traffic gives bad actors an opportunity to replace content in transit to your end users when those users are on compromised connections, such as rogue "free" wifi networks in airports or coffee shops, or even legitimate networks which have in some way been compromised - e.g. the ISPs of the world who decide to inject other content, such as their own ads, into unencrypted traffic.
The next question is usually "what could they possibly do, change a few pictures?"
They could inject malicious payloads, and for all your users would know, it would appear to them that it came from your site.
> I can't use LetsEncrypt with my hosting provider
Consider switching. For a static site, consider Gitlab; they do a good job of permitting LetsEncrypt.
---
I sincerely appreciate the question, though. I have marketing people ask me this question all the time in private who hesitate to do so in public because quite a few security types berate them for not doing something "obviously" more secure. It's not at all obvious to most of the world's web designers and content creators that a static site should be TLS'd until it's framed (heh) in this manner. The fact that you asked brings about a massive educational moment.
Anyway, consider switching hosts. :)
don't know the answer myself here.. there are good technical reasons, I agree..
but it is a logical fact that if Google search were always 100% accurate, there would be no need for AdWords and site ads...
The Internet is not a safe place. We should aim for HTTPS EVERYWHERE.
They're not as easy to get away from as you think.
You should think about https for sites like yours the way you think about vaccines. SSL everywhere makes everyone safer, even though it doesn't have a tremendous impact on your own site.
Also, shameless plug, if you want really easy SSL you can use our new startup: https://fly.io. I'm not sure what country you're in, but we have a bunch of servers all over to help make it fast. :)
The second is more moral. Making https the default means more and more of the web will be encrypted and authenticated. This is a good thing.
First, almost every firewall out there right now supports https snooping via MITM. Example: https://www.paloaltonetworks.com/features/decryption
Second, I just got back from rural China, where most unblocked American webpages take between 5 and 15 seconds to load on my mobile phone, and many take upwards of a minute to load fully. This seems to be a fun combo of network latency, smaller-than-expected bandwidth, and pages using javascript with a series of different load events to display content. That DOMContentLoaded -> XMLHttpRequest -> onreadystatechange chain can add some serious time on a 500ms round trip, and that's without talking about the css, the images, and the javascript.
I forgot to pay my electric bill before I flew out, and it took me nearly an hour to log in, pay my bill, accept the terms, and confirm the payment. I was not a happy camper.
It seems to me that while https is a very good thing, in some cases http and low-bandwidth solutions might be worth implementing, and that you might actually want to tailor this to your audience. No one in their right mind is going to waste 5 minutes loading your web page; if they are so desperate that they need to wait, they are going to hate you every minute they do it.
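The sequential-dependency chain mentioned above compounds brutally at high latency. A back-of-the-envelope sketch (the stage count is assumed for illustration; real pages vary):

```shell
RTT=500      # ms round trip, as in the comment above
STAGES=4     # assumed chain: HTML -> render-blocking CSS/JS -> an XHR it fires -> assets that response pulls in
echo "minimum sequential wait: $((STAGES * RTT)) ms, before counting any TLS handshakes"
```

Each stage can only start after the previous one finishes, so at 500 ms per round trip even a short chain eats multiple seconds; every extra handshake in the chain adds whole round trips on top.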
Seems prudent to mention that this requires the cooperation of the client being MITMed. Specifically, the client needs to install a root certificate.
That sucks but I don't see how having a site where you may have to enter payment information on an unsecured connection would be a solution.
You forgot about the Great Firewall of China playing merry MITM with your connections.
I wonder if it would be lower latency to open a single websocket tunnel on page load and download assets over the tunnel. Although at that point I suppose you're just replicating the functionality of http/2.
$ curl -o /dev/null -s -w "@time-format.txt" http://rest.ably.io/time
time_namelookup: 0.012
time_connect: 0.031
time_appconnect: 0.000
time_pretransfer: 0.031
time_total: 0.053
$ curl -o /dev/null -s -w "@time-format.txt" https://rest.ably.io/time
time_namelookup: 0.012
time_connect: 0.031
time_appconnect: 0.216
time_pretransfer: 0.216
time_total: 0.237
(as measured from my home computer in the UK, so connecting to the AWS eu-west region)

Luckily it's not that much of an issue for us: when using an actual client library (unlike with curl) you get HTTP keep-alive, so at least the TCP connection doesn't need to be renewed for every request. And most customers who care about low latency are using a realtime library anyway, which just keeps a websocket open, so it sidesteps the whole issue. Certainly not enough to make us reconsider using TLS by default.
Still, it's a bit annoying when you get someone who thinks they've discovered with curl that latency from them to us is 4x slower than to Pubnub, just because the Pubnub docs show the http versions of their endpoints whereas ours show https, even though we're basically both using the same set of AWS regions...
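For anyone wanting to reproduce those measurements: `@time-format.txt` is just a file of curl `-w` write-out variables. The exact file isn't shown, but a version producing output shaped like the above would look roughly like:

```
   time_namelookup:  %{time_namelookup}\n
      time_connect:  %{time_connect}\n
   time_appconnect:  %{time_appconnect}\n
  time_pretransfer:  %{time_pretransfer}\n
        time_total:  %{time_total}\n
```

`time_appconnect` is the point at which the TLS handshake completed, which is why it reads 0.000 for the plain-http request; the ~0.2s gap between it and `time_connect` in the https run is exactly the handshake cost being discussed.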
The Cloudflare Railgun is an interesting solution, and one that could be implemented in the context of an SPA over a websockets connection. Or conceivably some other consumer of an API.
https://tools.ietf.org/html/draft-thomson-http-bc-00, and Ericsson's article on it https://www.ericsson.com/thecompany/our_publications/ericsso...
Why? Did it hurt user engagement? Were people complaining the site was slow?
If it’s no, then clearly we should improve.
To their credit, this post talks about improving the performance, instead of just using it to complain about how they can't use https because of a difference in a metric that may or may not actually cause end users any pain.
I understand that by using the generic CF free cert, https terminates at CF and the connection CF->Origin is over unencrypted HTTP. Is this why there is latency overhead? Because CF cannot connect to origin via https so it cannot open a persistent tunnel? Or is it because the overhead of keeping an open https tunnel per origin server is prohibitively expensive to maintain for every free customer?
I assume that even though there is no persistent tunnel, CF must still use persistent TCP connections?