After much teeth gnashing and research, we determined that a large segment of our user base was still using WinXP and the encryption protocols we offered weren't available to them.
We didn't think this would be a problem because the current version of the software wasn't compatible with WinXP any longer.
There was some debate internally whether the better fix was to include the legacy encryption protocols or just leave the HTTP version of the site running and use Strict-Transport-Security to move capable browsers to HTTPS.
In the end we had to include the legacy protocols so those customers could use our online store.
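The HSTS approach considered above can be sketched as a small helper; this is a hypothetical Python snippet, not the poster's actual configuration, and the `max_age` values are illustrative:

```python
def build_hsts_header(max_age: int = 31536000, include_subdomains: bool = True) -> str:
    """Build a Strict-Transport-Security header value.

    Browsers that see this header over HTTPS will refuse plain HTTP for
    `max_age` seconds. Clients that can never complete an HTTPS handshake
    (e.g. XP-era TLS stacks) are unaffected by it, which is why the legacy
    protocols still had to be kept for those customers.
    """
    value = f"max-age={max_age}"
    if include_subdomains:
        value += "; includeSubDomains"
    return value

# Example: a short trial policy while testing a rollout
print(build_hsts_header(max_age=300))  # max-age=300; includeSubDomains
```

The key property is that HSTS only upgrades browsers that have already managed one successful HTTPS visit, so it moves capable clients without breaking incapable ones.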
The logic that was communicated to them was that as a service provider, security is a prime concern for us (as it should be for them as well), so we can't keep lagging on this forever. Currently, we have $single_digit merchants we're still waiting on to make the switch.
It's made the whole switch process much easier and made customers actually appreciate our proactiveness in this! :)
The scanning of the server logs occurred to us in hindsight as well.
They're admittedly few, though, and their moral high ground is debatable considering that there are self-hosted FOSS alternatives around nowadays.
> There was some debate internally whether the better fix was to include the legacy encryption protocols or just leave the HTTP version of the site running and use Strict-Transport-Security to move capable browsers to HTTPS.
Where can I read about this? Is there any way to display a special "Your browser is outdated" page for the users on WinXP?
Sorry if these seem like basic questions. I am just curious and would like to hear some expert advice.
https://browser-update.org/ is a great service that does this.
For the case where SSL was broken, unfortunately that wouldn't help at all, because they'd never be able to load the webpage.
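One workaround for that chicken-and-egg problem is to detect likely-affected clients server-side, on the plain-HTTP page they *can* load, and show the outdated-browser notice there. A hypothetical sketch of the User-Agent check (the UA string below is just an example):

```python
import re

def looks_like_windows_xp(user_agent: str) -> bool:
    # Windows XP identifies itself as "Windows NT 5.1";
    # XP x64 (and Server 2003) report "Windows NT 5.2".
    return re.search(r"Windows NT 5\.[12]", user_agent) is not None

ua = "Mozilla/5.0 (Windows NT 5.1; rv:52.0) Gecko/20100101 Firefox/52.0"
print(looks_like_windows_xp(ua))  # True
```

UA sniffing is famously unreliable, but for a "please upgrade" banner a false negative just means the user sees the normal page.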
"Oh no, this isn't a Mac, it's Windows"
This is a user of a highly secure system, containing user PII, who expected to use it on a 5-year-old browser with XP.
~bangs head~
If it's alright for you to answer:
1. What would be the best cross-platform way to proceed? We now have separate agents for Windows and Mac, which causes maintenance hell.
2. Is Chrome Remote Desktop's way of streaming desktop images as video better than images + diffs?
3. Is there any open-source mirror-driver kind of thing in Linux?
You can support HTTP and the occasional knowledgeable person will suggest you should upgrade. Or you can force TLS with SSLv3 enabled, and suddenly you'll hit a flood of people letting you know you're about to be hacked, based on online scanners. Often complete with requests for a bug bounty.
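What those scanners are flagging is protocol version, which is configurable in most TLS stacks. As a sketch of "force TLS with SSLv3 disabled" using Python's `ssl` module (assuming Python 3.7+; note that recent OpenSSL builds drop SSLv3 support entirely, and raising the floor this way is exactly the trade-off against the XP users discussed above):

```python
import ssl

# Server-side context: negotiate TLS only, and refuse anything below
# TLS 1.2 -- SSLv3 (POODLE) is what online scanners typically complain about.
ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
ctx.minimum_version = ssl.TLSVersion.TLSv1_2
```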
IIRC, Chrome and Firefox for XP support SNI because they bundle their own TLS libraries, rather than using a system library.
You ought to have more confidence in your writing. BRB stealing all your servers.
I was chatting with a non-engineer friend about why it's often hard to estimate how long tasks will take, and this seems like a prime illustration: the dependencies are endless.
I also love the Easter egg:
"The password to our data center is pickles. I didn’t think anyone would read this far and it seemed like a good place to store it."
I told them all estimates go up by 2 years since we would need to reimplement everything. It ended up being unblocked a week later.
All roads lead to Stack Overflow these days for programming problems.
People do incredibly stupid things. I've seen customer data dumps on web forums.
You need to find a new job.
https://securityheaders.io/?q=https%3A%2F%2Fstackoverflow.co...
As the headers go, here's my current thoughts on each:
- Content-Security-Policy: we're considering it, Report-Only is live on superuser.com today.
- Public-Key-Pins: we are very unlikely to deploy this. Whenever we have to change our certificates it makes life extremely dangerous for little benefit.
- X-XSS-Protection: considering it, but a lot of cross-network, many-domain considerations here that most other people don't have, or don't have as many of.
- X-Content-Type-Options: we'll likely deploy this later, there was a quirk with SVG which has passed now.
- Referrer-Policy: probably will not deploy this. We're an open book.
Expect-CT is one to look at as well.
Basically just tells the browser that Certificate Transparency should be available through the provider (DigiCert in this case).
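Pulling the headers discussed in this thread together, here is a hypothetical sketch; the values and report URIs are illustrative placeholders, not Stack Overflow's actual policy (and `Expect-CT` has since been deprecated by browsers):

```python
# Illustrative values only; max-ages and report URIs are placeholders.
SECURITY_HEADERS = {
    "Strict-Transport-Security": "max-age=300",  # short trial value
    "Content-Security-Policy-Report-Only": "default-src https:; report-uri /csp-report",
    "X-Content-Type-Options": "nosniff",
    "X-XSS-Protection": "1; mode=block",
    "Expect-CT": 'max-age=86400, report-uri="https://example.com/ct-report"',
}

def with_security_headers(headers: dict) -> dict:
    """Return a copy of `headers` with the security headers merged in."""
    merged = dict(headers)
    merged.update(SECURITY_HEADERS)
    return merged
```

Starting with report-only CSP and a short HSTS max-age, as described above, keeps rollback cheap while you watch the reports.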
Is it possible to pin to your CA's root instead of to your own certificate? That would make rotating certs from the same CA easy but changing CAs hard (but changing CAs is already a big undertaking for big orgs).
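Per RFC 7469, a pin may match the SubjectPublicKeyInfo of *any* certificate in the chain, so pinning a CA's intermediate or root key is possible and does make leaf rotation easy, at exactly the CA-switching cost you describe. The pin value itself is just a hash; a sketch of computing one (the input bytes here are a dummy stand-in for a real DER-encoded SPKI):

```python
import base64
import hashlib

def pin_sha256(spki_der: bytes) -> str:
    """HPKP pin: base64 of the SHA-256 of the DER-encoded SubjectPublicKeyInfo."""
    return base64.b64encode(hashlib.sha256(spki_der).digest()).decode("ascii")

# A real SPKI would be extracted from a certificate; this just shows the shape.
print(len(pin_sha256(b"dummy-spki-bytes")))  # 44 (base64 of 32 hash bytes)
```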
Also, I see your five minute HSTS header ;)
Do you have references to back this up?
> Referrer-Policy is a matter of choice. It's a useful information for the target site as long as the referrer doesn't contain sensitive information. IMO, most sites shouldn't set this header.
Exactly. I think its primary use is when the original site's URL contains user-supplied input, like a Google Search results page.
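To make the leak concrete: by default the full URL, query string included, goes out in the Referer header. A sketch of roughly what a policy like `Referrer-Policy: origin` reduces it to (function name is mine, for illustration):

```python
from urllib.parse import urlsplit, urlunsplit

def origin_only(url: str) -> str:
    """Roughly what `Referrer-Policy: origin` sends: scheme + host, no path or query."""
    parts = urlsplit(url)
    return urlunsplit((parts.scheme, parts.netloc, "/", "", ""))

print(origin_only("https://www.google.com/search?q=private+query"))
# https://www.google.com/
```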
Wonder what the point is then.
What's the argument behind Let's Encrypt not doing that? Extended Validation stuff?
But it boils down to there being no practical way for Let's Encrypt to automatically validate that a wildcard certificate is safe to issue.
https://meta.stackoverflow.com/questions/348223/stack-overfl...
There are network bits we'd have to evaluate heavily as well, e.g. firewall rules - basically the very limited benefits don't make it a priority, yet. When things change there, we'll do it.
For example, if instead of having hundreds of domains serving millions of users with tons of user-generated content you're just serving static content from a single server on a small site, the entire process for you might actually be as simple as just running `certbot-auto` on the production server.
I suspect the difficulty of switching for most sites will fall somewhere between these two extremes.
That's exactly what we experienced migrating a bunch of sites to https. There were so many things that we didn't anticipate.
Why wouldn't they use split horizon DNS for this? Seems like the perfect use case
We'd consider it for a .local, once the support is properly there in 2016. Even subnet prioritization is busted internally, so that's a bit of an issue. Evidently no one tried to use a wildcard with dual records on 2 subnets before (we prioritize the /16, which is a data center) and it's totally busted. Microsoft has simply said this isn't supported and won't be fixed. A records work, unless they're a wildcard. So specifically, the *.stackexchange.com record, which we mirror internally at *.stackexchange.com.internal for that IP set, is particularly problematic.
TL;DR: Microsoft AD DNS is busted and they have no intention of fixing it. It's not worth it to try and work around it.
https://nickcraver.com/blog/2013/04/23/stackoverflow-com-the...