What's particularly great is that there's no configuration of any kind for WordPress authors or their readers. Like them, we should always default to secure.
http://captive.apple.com/hotspot-detect.html
I just have it bookmarked now. It's the one bookmark I use.
It would be nice to have a more formal standard (e.g. supplying an authentication URL along with the DHCP response) but to be honest, this emerging de-facto standard is perfectly serviceable.
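The convention is simple enough to sketch in a few lines (a hypothetical helper; the probe URL and the "Success" body are Apple's observed behaviour rather than a published spec):

```python
from urllib.request import urlopen

PROBE_URL = "http://captive.apple.com/hotspot-detect.html"

def behind_captive_portal(body: str) -> bool:
    # Apple's probe page is a tiny document containing "Success".
    # Anything else means something (e.g. a portal login page) intercepted
    # the plain-HTTP request.
    return "Success" not in body

def network_is_open() -> bool:
    # Fetch the probe over plain HTTP so a portal can intercept it.
    with urlopen(PROBE_URL, timeout=5) as resp:
        return not behind_captive_portal(resp.read().decode("utf-8", "replace"))
```

This is essentially what the OS does behind the scenes before popping up the login sheet.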
Along with the 100 million in grants they write yearly, they provide a valuable social service for those less fortunate.
There are numerous approaches to making TLS-for-all work, but they take more engineering effort. And some introduce security problems, like CloudFlare, which decrypts your traffic and then (optionally!) re-encrypts it between their servers and yours. Even the best encryption scheme with a CDN like CloudFlare still means the CDN has full access to unencrypted payloads. That is an insane amount of trust to give away to a third-party company.
There is also an impact for large organizations that run everything through a local cache (like Squid). You can't share a cache between multiple clients when TLS is used. This push to encrypt everything means we're at the end of the era where such caches are useful. Does BuzzFeed really need to be encrypted?
One of the largest gaps in encryption in general is still performance. You hear all the time that TLS adds almost zero overhead these days. This is simply false for most of us. Not everyone is Google, with their own custom low-level encryption implementation. Those of us using off-the-shelf software continue to pay the price in CPU overhead.
I wouldn't say it's no longer possible. You have to do your caching before the TLS termination, and you'll have to pay the encryption overhead for every resource.
> This push to encrypt everything means we're at the end of the era where such caches are useful.
You can still do this kind of caching if you install your own CA certificate on all your devices. Almost all enterprise deployments will do this anyway so that their security appliances can scan TLS traffic. Squid supports this.
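For reference, a rough sketch of what that looks like in squid.conf (assumes Squid 3.5+ built with SSL support; the paths are placeholders and the exact peek/bump directives vary by version, so check your docs):

```
# Serve certs signed on the fly by our private CA. myCA.pem must be
# installed as trusted on every client device.
http_port 3128 ssl-bump cert=/etc/squid/myCA.pem generate-host-certificates=on

# Peek at the TLS ClientHello first, then bump (decrypt) the connection.
acl step1 at_step SslBump1
ssl_bump peek step1
ssl_bump bump all
```

Once bumped, the decrypted responses go through Squid's normal caching rules.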
> Does BuzzFeed really need to be encrypted?
Yep! Otherwise, that free WiFi at your favourite cafe, or your ISP, might inject malicious code or ads.
> You hear all the time how TLS adds almost zero overhead anymore. This is simply false for most of us. Not everyone is Google who has their own custom low-level encryption implementation. Those of us using off-the-shelf software continue to pay the price of CPU overhead.
Obligatory reference to https://istlsfastyet.com/
There are two aspects here: handshakes and encryption performance. Handshakes are expensive; you can do roughly 500-1,000 handshakes per core per second with a 2048-bit RSA key. This is usually no big deal in production because you can use session resumption: once you've completed a handshake with a user, you can skip that part until the cache expires (people tend to use one day as the lifetime).
The second aspect, raw encryption performance with a symmetric cipher, is something you don't get to complain about unless you're Netflix. Any server CPU with AES-NI can push at least 500 MB/s per core, probably more. You'll run into some other bottleneck long before TLS performance becomes an issue. Unless you're Netflix, of course[1].
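If you want to sanity-check both numbers on your own hardware, OpenSSL ships a benchmark for each (the figures above are ballpark expectations, not guarantees):

```shell
# Asymmetric: RSA-2048 private-key signs bound your full-handshake rate
openssl speed rsa2048

# Symmetric: bulk AES-GCM throughput, using AES-NI where available
openssl speed -evp aes-128-gcm
```

On anything with AES-NI the second command's numbers dwarf anything your NIC or disks can deliver.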
like Squid
I think the answer to that is that the days of local caches being useful are long gone. The top traffic on many networks ends up being things like YouTube, and very few people are going to see the value in attempting to cache that sort of data locally. Even if served over HTTP, tonnes of traffic is uncacheable anyway due to the increased use of dynamic resources. Even this basic-looking HN page will end up uncached by Squid because I'm logged in, regardless of HTTPS being in the picture.
As for performance, I've played around with benchmarks using Goad on a micro tier EC2 instance and never found an appreciable difference between http and https.
Why is it insane to give that trust to a carefully selected business partner, but not the random 3rd parties with access to plain HTTP content? Also, your CDN has access to unencrypted payloads anyway, whether it's encrypted elsewhere or not.
2. Hey Joe, I see you've consulted Wikipedia pages about unionizing. How about we give you plenty of free time to unionize?
2. Have you ever downloaded a pic of a sexualized person? Do you have his/her ID card on file? If not, does she "look like" she's under 18? I don't mean whether she actually is, but what a jury of average blokes (who don't care about the truth and just want the trial over soon) would vote for if they had to. Boom, you're jailed as a child pornographer, even though the model was professional and over 25. That's the beauty of blackmail. This could happen as soon as you have professional secrets, or you're annoying to a competitor or a colleague; in other words, as soon as you do anything meaningful in this world.
This.
I can flip the switch for default HTTPS on Neocities in a day. The hard part is figuring out how to not break users' sites in that process. Ideas welcome.
I believe it's breaking podcast feeds served from WordPress.com, because iTunes doesn't support Let's Encrypt certificates.
https://www.dominicrodger.com/2016/02/29/lets-encrypt-itunes...
This may not affect a lot of customers (since WordPress.com doesn't support PowerPress for feed generation), but I know some podcasters create feeds by hand or with other apps.
This issue will cause at least some podcasts to disappear from iTunes without warning unless you can coordinate with Apple to fix it.
Asking because that's the problem I currently see on my site (https://groni50.org). In this case I'll just upload the external images to our site, and I'll brief our users. But I wonder whether this couldn't be caught or prevented in the WYSIWYG editor.
This gets harder to implement correctly depending on what kind of content you allow on your sites (i.e. does your CMS only permit sanitized HTML, or are users allowed to do basically anything?), so it's not a perfect solution for everyone, but it might work here.
0. http://developer.wordpress.com/docs/photon/
1. https://wordpress.org/plugins/jetpack/
Disclosure: I am the author of go-camo.
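For anyone curious how camo-style proxying avoids mixed content: HTTP image URLs are rewritten to an HTTPS proxy URL signed with a shared HMAC key, so the proxy will only fetch URLs your server vouched for. A minimal sketch of the URL-signing side (the host and key are placeholders):

```python
import hashlib
import hmac

CAMO_HOST = "https://camo.example.com"  # hypothetical proxy host
CAMO_KEY = b"shared-secret"             # hypothetical shared HMAC key

def camo_url(image_url: str) -> str:
    """Rewrite an http:// image URL to a signed HTTPS camo proxy URL."""
    # HMAC-SHA1 over the target URL proves the rewrite came from our server.
    digest = hmac.new(CAMO_KEY, image_url.encode(), hashlib.sha1).hexdigest()
    # The target URL itself travels hex-encoded in the path.
    return f"{CAMO_HOST}/{digest}/{image_url.encode().hex()}"
```

The proxy recomputes the HMAC before fetching, so attackers can't use it as an open relay.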
Growing pains. I think that will at least make people on the web more aware of the HTTPS "revolution".
Just wondering about the metaphor of "growing pains". In humans it's something that happens; for some it's painful but has to happen, for others it isn't painful, but the process that causes the pain goes on regardless. Is this an accurate metaphor in this case?
https://en.blog.wordpress.com/2016/04/08/https-everywhere-en...
As for their reasoning... maybe performance, but more likely laziness.
If a user gets their credentials hijacked and a hacker makes a bunch of unauthorized purchases with their saved credit card, who's the customer going to call? AliExpress, or their bank, to mark the purchases as fraudulent and refund the money?
To them, they're merely supplying the vehicle to do business. It's the payment processing companies, the banks and third-party vendors who handle the money, so it's their responsibility to notice the charges and shut the account down.
Like last week, I got a call from my bank asking if I was making purchases in Belgium, Norway and France. I was like, "Uhhhhhhhhhhh no, that's fraud." They blocked the purchases first and THEN called to confirm with me. Based on my banking behavior it was pretty obvious this was out of the norm, and it was immediately flagged. It wasn't the travel site's fault they let it happen; it would've been my bank's problem if they had let those purchases go through.
I'm glad they have an incredible fraud detection system. This is the second time they've flagged something on my account and shut these down before any damage could be done.
User logs in with HTTPS, gets redirected to HTTP site and the MitM throws up the "Incorrect password try again" page. User types their password and transmits it over HTTP or JS steals it etc. etc.
eBay does it because they aren't sufficiently interested in protecting against MitMs.
The web isn't ready for HTTPS only yet but it will happen over time.
https://bestcrabrestaurantsinportland.wordpress.com/ works fine
https://www.bestcrabrestaurantsinportland.wordpress.com/ displays a certificate warning
Unfortunately I don't think there's a good solution for this. Humans are gonna www- things.
The same humans who incorrectly add "www." when not told to are also unlikely to add "https://". So have the www. version redirect to the correct URL with HTTPS.
For that matter, since certificates with Let's Encrypt support arbitrarily many SubjectAltName (SAN) values, you can include the www variant in the certificate, so that your redirect can use HTTPS and HSTS.
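Concretely, the www-to-bare-domain redirect with HSTS might look like this in nginx (a sketch; the domain and cert paths are placeholders, and the cert must list both example.com and www.example.com as SANs):

```nginx
server {
    listen 443 ssl;
    server_name www.example.com;

    # Same SAN certificate as the canonical host
    ssl_certificate     /etc/letsencrypt/live/example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;

    # Pin browsers to HTTPS before redirecting to the bare domain
    add_header Strict-Transport-Security "max-age=31536000; includeSubDomains" always;

    return 301 https://example.com$request_uri;
}
```

With this in place, even users who type "www." by habit end up on the canonical HTTPS origin.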
Somewhat related: if you have a mapped/custom domain on WordPress.com, even though we are strongly no-www[0], the "www" will work over HTTPS, assuming that DNS for the subdomain points to our servers[1].
The "wildcard" certs' relevant RFCs are worded such that *.*.example.com isn't valid: you're restricted to a single star, in the leftmost label only.
There's an extension you can put into a cert called "name constraints" that has a syntax that lets you say ".example.com", which covers things such as "foo.bar.example.com". It's only valid on CA certs (oddly), which means you could get a cert that acts as a CA for just your domain and all its subdomains. It'd be incredibly useful.
But no CA that I know of will issue them, and of the major browsers, only Firefox supports them. The whole Let's Encrypt thing makes it mostly a moot point though, since with Let's Encrypt you'd just obtain a non-CA cert for each specific domain.
(It'd still be useful, I think, to see it implemented, if only for restricting CAs to certain public suffixes, when/if that's appropriate.)
https://en.wikipedia.org/wiki/SubjectAltName
http://wiki.cacert.org/FAQ/subjectAltName
https://www.openssl.org/docs/manmaster/apps/x509v3_config.ht...
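The name-constraints syntax lives in OpenSSL's x509v3 extension config. A sketch of the extensions such a constrained CA cert would carry, for a hypothetical example.com (illustrative only, since no public CA will actually issue this):

```
# Extensions for a hypothetical CA cert constrained to one domain
basicConstraints = critical, CA:TRUE
keyUsage        = critical, keyCertSign, cRLSign

# ".example.com" matches subdomains; the bare name needs its own entry
nameConstraints = critical, permitted;DNS:.example.com, permitted;DNS:example.com
```

Any leaf cert this CA signed for a name outside example.com would be rejected by clients that enforce the constraint.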
Certificate authorities charge extra for it, of course they do. DigiCert brands this as a "Multi-Domain (SAN) Certificate" and charges nearly $300/yr, while my provider of choice, sslmate.com, offers the same for $25/yr.
And now, with $0 certificates from Let's Encrypt, I'm sad to see sslmate.com's business hurt, as they were the first to provide no-bullshit, sysadmin-focused CLI tools to get the job done. I'd be glad to see DigiCert.com and others like it go bankrupt, however.
I don't see any reason why Honest Achmed's request to be a CA was denied by Mozilla, https://bugzilla.mozilla.org/show_bug.cgi?id=647959 at least he is honest about his business model.
https://www.reddit.com/r/dredmorbius/comments/3hp41w/trackin...
https://www.chromium.org/Home/chromium-security/marking-http... https://developer.apple.com/library/ios/releasenotes/General...
https://www.eff.org/https-everywhere/atlas/
https://github.com/EFForg/https-everywhere/tree/master/src/c...
https://chrome.google.com/webstore/detail/kb-ssl-enforcer/fl...
Rate limits aren't much of an issue in that scenario unless someone has more than 20 separate subdomains set up as a WordPress.com blog under the same domain. Even then, you could theoretically get 20 * 100 subdomains covered every week if you're smart about which domains you combine on a single SAN certificate.
* 100 Names/Certificate (how many domain names you can include in a single certificate)
* 5 Certificates per Domain per week
* 500 Registrations/IP address per 3 hours
* 300 Pending Authorizations/Account per week
It seems to me that WP.com could reach at least one of those... So I was curious to hear how they were doing that.
And yes, I was wondering if they would replace the *.wp.com wildcard. I guess not...
https://nonstop.qa/projects/387-hacker-news
(Free because I'm applying the GitHub model: free public projects, will eventually charge for private ones.)
By the way, the cert on WordPress.com is issued by GoDaddy, and so are all the examples I could come up with. I guess it's a gradual rollout.
You can find more information about upcoming and completed features here:
Neither WordPress nor Let's Encrypt has any way to modify global server settings in a shared hosting environment. Slapping in an SSL certificate doesn't make a site secure; properly configuring the services that use the cert is what makes it secure.
GoDaddy isn't going to let Company Xyz rebuild Apache or configure ciphers server-wide...
In the end, while this is a move in the right direction, I fear it will give false confidence to many web providers that don't have enterprise experience with security fundamentals.
So it won't break servers or shared hosts.
I wonder if Squarespace will follow suit in this endeavor.
I hope this move by Wordpress will push Squarespace to support https for custom domains as it's a very frequently requested feature.
However, they could have shelled out a couple hundred bucks for a wildcard cert before.
PHP, mostly dynamic everything, an unmoderated cesspool of plugins, themes, etc. where you just drop in code, predictable URLs and pages to brute-force... I could go on.