Meraki still thinks of themselves as a startup, but they have these "uh-oh's" all the time. Random bad firmware that turns off the 5 GHz radio on the MR42s. A DPI "upgrade" that blocked ALL SSL traffic (which at this point is basically all traffic). Their solution was always to try "beta" firmware... in production... in the middle of state-mandated online testing.
I was a huge advocate for them, but at some point it's gonna be hard for me to keep recommending them. They're so excited about new features but really fail at 1) fixing bugs and 2) ensuring robustness. The "fail fast, fail often" mentality really shouldn't apply to critical infrastructure.
I have heard similar stories to yours about Meraki, and that's what swayed the decision to just go Ubiquiti, since it's less expensive.
That's terrible to hear that their crap blows up. Everybody says to upgrade away from D-Link and Asus to the Ubiquiti stuff to get a rock-solid, pro-quality home network.
Sounds like I'm just as bad off with the consumer stuff.
I also have no love for how some features are only available via command line while others are only available in the UI. This also differs depending on what product line you're using. Pick one strategy and stick to it.
Consider: each time you introduce a new device that has local, physical access to the place your data lives, that's one more thing that could Halt and Catch Fire at just the wrong time, or be swapped out for a USB Killer or a DMA cryptolocker device via social engineering. If it involves data center operators you don't know, that's more people you have to trust not to break whatever they touch, and not to have been paid off to steal your corporate secrets. Etc.
Sure, the probabilities are small, but so is the probability of the great data fortresses crumbling to ash and you being the Last Best Hope for your data. Hypothetical ameliorations of sub-lightning-strike probabilities often have failure modes that are more likely than the events they're meant to guard against.
Note that a backup should never make things worse; it should only ever make things better.
Right. So don't do that. Put it somewhere else, and configure the original device to push to it rather than giving the new device access to the original. If you use a service that implements the S3 API, you don't even need to install anything new on the original; just configure an extra endpoint. Also, encrypt before pushing (that goes for S3 too).
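The pushing side really doesn't take much. A minimal sketch, assuming boto3 plus the cryptography package, a hypothetical S3-compatible endpoint and bucket, and that the key lives somewhere other than the device being backed up:

    import boto3
    from cryptography.fernet import Fernet

    # Encrypt locally first, so the backup service only ever sees ciphertext.
    key = Fernet.generate_key()  # keep this key off the device you're backing up
    with open("backup.tar.gz", "rb") as f:
        ciphertext = Fernet(key).encrypt(f.read())

    # Any S3-compatible service works; the endpoint and bucket names are placeholders.
    s3 = boto3.client("s3", endpoint_url="https://s3.backup-provider.example")
    s3.put_object(Bucket="offsite-backups", Key="backup.tar.gz.enc", Body=ciphertext)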
Think of it from a statistical perspective: what is the probability of you setting up this backup system correctly vs. them?
- account compromised, wiped out
- operator error
- malicious employee
All of these have happened to companies that I have worked with, so no, I won't do a better job of backing stuff up compared to Google, MSFT, etc., BUT I would rather have some get-out-of-jail-free card if any of the above should happen and suddenly, where there used to be data, there is nothing.
You should approach this from a cost-benefit perspective, not from a skills perspective.
Of course, even that doesn't prevent you from fucking up - your datastore will do exactly what you tell it to. Nobody can prevent you from doing the equivalent of rm -rf on your S3 store, or accidentally deleting the only copy of that movie your client's been working on for the last four years, and nothing can protect you from it except a decent backup.
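And it really is that easy to do. A sketch of the S3 equivalent of rm -rf, assuming boto3 with credentials already configured and a hypothetical bucket name:

    import boto3

    # A couple of lines and every object in the bucket is gone.
    # "client-projects" is a made-up bucket name.
    boto3.resource("s3").Bucket("client-projects").objects.all().delete()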
Cost of resources aside, a person could run hourly full backups all day every day and have just as good a backup regime as a billion-dollar company. Time-to-restore is something that the aforementioned expertise factors into, but a good backup is the linchpin, and it can still be restored by whatever means are at hand.
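Crude, but it illustrates the point. A sketch of that kind of brute-force regime, assuming made-up paths and that the data fits on the backup disk:

    import shutil, time
    from datetime import datetime

    # Hourly full backups, forever. /srv/data and /mnt/backup are placeholders.
    while True:
        stamp = datetime.now().strftime("%Y%m%d-%H%M%S")
        shutil.make_archive(f"/mnt/backup/full-{stamp}", "gztar", "/srv/data")
        time.sleep(3600)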
If you have your data in some cloud (either directly or as a backup) as well as in your really crappy backup solution that has a 10% failure rate, you are still ten times less likely to lose your data than by just keeping it in the cloud.
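Back-of-envelope version of that, with a made-up number for the provider's loss rate and assuming the two failures are independent:

    p_cloud = 1e-6          # chance the provider loses your data (made-up figure)
    p_backup_fails = 0.10   # your crappy backup fails 10% of the time
    p_lose_both = p_cloud * p_backup_fails  # 1e-7, i.e. ten times less likely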
Just to be clear, if you're using AWS, GCP, or Azure to host your own applications, managing disaster recovery is on you, at your own peril. Those companies make doing that much easier than managing your own DC, and yes, the reliability is going to be better than DIY (but the risk of failure is still never zero). I think you mean this more for SaaS applications, or anything that "phones home" data to back it up, right?
We're going to start seeing more business continuity audits of SaaS players, akin to a BBB rating for the company's ability to maintain service levels. I thought I came across a website that actually has started doing this, but I can't recall which it was.
Their support team is amateur at best; at one point I had 6 Meraki engineers working on a DHCP problem (yeah... DHCP), and their recommendation after several weeks of troubleshooting was to do a factory reset.
I have dozens of stories...don't even get me started.
I would think that if you lost your data, then unless they have restored it, the "issue" is still very much occurring for you as a customer.
Wouldn't remediation be that they have recovered your lost data?
Errrrrrr... so the issue was limited only to data I would actually care about then? Or did I misread?
That is a frankly extraordinary use of weasel words.
It haunts me to think about how many people are using these services as their single source of data. Fat fingers melt through 9s.