In my experience, the larger organizations will have a "security" questionnaire required of their vendors, and the person administering it is a droid, incapable of evaluating whether the questions, originally written in the mid-00s and only updated for buzzword compliance since, are applicable to modern security practice today, or to the particular product/service/vendor in question. And no firewalls or routers would be massive, disqualifying red flags on such a questionnaire.
Never mind that a KISS setup tends to bring security because of its minimized attack surface. In the minds that write and administer those questionnaires, security only comes from sufficient amounts of the right kinds of complexity.
I'm sure it can be done. IIRC, Cloudflare doesn't use any firewalls, and they do some big business. It just isn't easy to get past the droids programmed to ensure that all pegs shall be properly square, IME.
Yes, certainly.
We frequently fill out very detailed checklists and questionnaires related to our quality policy, standards, internal policies, etc.
We're also very honest about how we approach these issues:
https://www.rsync.net/resources/regulatory/pci.html
... and they generally appreciate the honesty.
[0] https://www.rsync.net/resources/regulatory/PCI_usw-s005_repo...
EDIT: It’s marked as "PASS" though, so it’s all fine, just funny.
Is there a modern, no-nonsense guide to filling these out honestly without telling the person doing the checkbox checking that their form is dumb?
I realize there’s a lot of rent seeking and money to be made by consultants in this space - I’m looking for the GitHub published guide or wiki to help smaller no-nonsense shops navigate the phrasing and map these vendor security questionnaires to “modern” technology.
I do security, and I'd title this "Most secure platform in the world."
We initially had some troubles navigating these waters in the financial sector, but once we were able to convince 1 big customer to try our system on a trial basis, everyone else started to play along really nicely. No one wants to be the first one to try a new thing and get burned by it.
In 2021, you can sometimes leverage things like technological FOMO to make a business owner believe that they are going to lose out on future business value relative to the competition, whom you might frame as being willing to take on a bigger technological risk. And indeed, smaller clients in our industry are willing to overlook certain audit points (at least temporarily) in order to compete with bigger players.
Some might not like it, but being able to engage in the sales process and bend some rules occasionally is absolutely required to play in the big leagues. Once you are in, it's a lot easier to move around. No one has a perfect solution and everyone knows it. It's just a matter of who is the better sales person at a certain point.
Could we have argued with them during the sales process? Only if we wanted to lose the sale. The Fortinet was cheap compared to the value of the contract.
I know you mentioned 2000s, but it's funny that these contractually obligated boxes might introduce more worry: https://www.bleepingcomputer.com/news/security/fortinet-fixe...
And in many cases on the vendor side it's some dude from sales filling it out... so it's pretty noisy on both ends.
This is a little disingenuous because their product is a modern firewall. It drops packets and conditionally allows sessions to your backend.
They may even be aware; they are just bound by their company's ruleset...
You just described my workplace. We have some rules that nobody understands and nobody remembers where they come from, but we have to follow them blindly. For example, they require that any access to the web services should go through a VPN, which might be fine, except that:
- The VPN doesn't actually work.
- The servers already use TLSv1.3, all the services require user authentication, and there are 3 layers of firewalls and an integrated virus scanner in front of the services.
- We are an international project with people from 10 different organizations in 6 countries on 2 continents, and it's really difficult to impose this kind of rule.
So for example, I'm managing a GitLab instance that I can't use myself. I can only SSH login from a very specific computer to manage it, but I can't upload my own code from my office computer.
And I don't want to go into their blind devotion to the firewall and their concept of one way connections...
So I'm just letting time go by, until everybody is so angry they are finally forced to change. It doesn't help that this is Japan, the epitome of rigidity and "even if it is broken, don't fix it".
Security is also about depth. You should assume breaches can happen and have another level of defense.
That does increase the attack surface, but it's a much better approach for imperfect beings.
On the other hand, a firewall is an explicit declaration of the ports you want open and who you want them open to, which seems like, at the very least, a useful thing to do. If nothing else it seems like defense in depth. I'm not sure I buy that a system designed around "default deny" is an increase in security complexity; it's certainly complexity that could hurt availability, but is it complexity that hurts security?
Either way, the real security comes from monitoring the reality of what ports are actually open/listening and verifying a person's assumptions about their systems.
Higher complexity = larger attack surface.
For example, if they used a firewall with one of Cisco's infamous backdoors.
https://www.zdnet.com/article/cisco-removed-its-seventh-back...
In fact he mentioned port knocking. You pretty much need some kind of host firewall for that.
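Not from the thread, but for anyone curious: host-firewall port knocking can be sketched with iptables' `recent` match - the ports, list names, and timeouts below are invented for illustration, and a real setup would also clear the lists on out-of-order knocks.

```shell
# Illustrative only: a client that hits tcp/7000 and then tcp/8000 within
# 10 seconds gets a 30-second window to open an SSH connection.
iptables -A INPUT -p tcp --dport 7000 -m recent --name STAGE1 --set -j DROP
iptables -A INPUT -p tcp --dport 8000 -m recent --name STAGE1 --rcheck --seconds 10 \
         -m recent --name STAGE2 --set -j DROP
iptables -A INPUT -p tcp --dport 22 -m recent --name STAGE2 --rcheck --seconds 30 -j ACCEPT
iptables -A INPUT -p tcp --dport 22 -j DROP
```

To a port scanner, 22 looks closed unless the (unlikely-to-be-guessed) sequence was sent first.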
I like that. I like that a lot. That's a very enviable practice. I think I know what I'll be experimenting with this next week.
Did not expect to read that article and have the most stand out thing be a routine change I'd want to copy. You never know.
On the laptop-front, I find myself drifting towards a similar setup to John. I have a hefty workstation laptop but the battery life is dire and it weighs a ton, so I pretty much just run it as a headless machine next to my server now. I'm planning on picking up a Pinebook Pro as an "outdoors" machine to just remote in. I also find myself extremely unwilling to arse about swapping multiple machines on my monitors so being able to keep my work machine separate and secure but operate it from my desktop is a nice compromise.
I've been using them in a small but important-to-me way continuously since 2008, and I have occasionally forgotten the service needed maintaining at all - at one point I forgot to pay them for an embarrassingly long time after a credit card expired, and they kept my storage going for me until I finally got myself in order. Please don't try that.
(My first contact with them was in 2007, to ask whether they supported pushing directly from git - the answer was no, though they added the feature a few years later - a bit ironically, I've never used it)
We just added git-lfs / LFS support. So now, when you do things like:
ssh user@rsync.net "git clone --mirror git://github.com/LabAdvComp/UDR.git github/udr"
... you can successfully pull over LFS assets, etc.

So instead I have to use restic, which re-implements many features of ZFS, and this also feels wrong.
We support encrypted zfs[1][2][3] and raw-send, etc.
The pricing is the same but there is a 1TB minimum because we need to give you your own VM (bhyve) and we have to burn an ipv4 address for you, etc.
[1] https://www.rsync.net/products/zfs.html
[2] https://arstechnica.com/information-technology/2015/12/rsync...
[3] https://www.servethehome.com/automating-proxmox-ve-zfs-offsi...
I really am enjoying the developer Q&A interviews that console.dev is putting out.
They're very much like the "usesthis"[1] profiles but more in-depth and with more interesting details ...
I did a usesthis a little while ago. https://usesthis.com/interviews/matt.lee/
"I want no local storage anywhere near me other than maybe caches. No disks, no state, my world entirely in the network. Storage needs to be backed up and maintained, which should be someone else's problem, one I'm happy to pay to have them solve."
> Now everything isn't connected, just connected to the cloud, which isn't the same thing. And uniform? Far from it, except in mediocrity. This is 2012 and we're still stitching together little microcomputers with HTTPS and ssh and calling it revolutionary.
ChromeOS was great for 'switch to a new machine and log in' use... but it's so much more complex than that now. Only with the most locked-down managed devices would you not have to worry about anything left behind on a device before abandoning it.
At this rate, short of a native package manager and repo for installing applications to run on ChromeOS (instead of a containerized Linux distro on top), ChromeOS might as well be considered a distro in and of itself. Especially with CloudReady being installable on non-Google-sanctioned hardware.
Thank you for your great product and support, John!
javascript:(function(){w=window;d=w.document;b=d.querySelector('body');s=d.createElement('style');s.innerHTML='body{margin:0 auto!important;width:100%;max-width:120ch;font:normal 18px/1.5 Helvetica,Calibri,Arial,sans-serif;color:#333;background:#f9f9f9;}main,#main{margin:0 auto!important;}';b.setAttribute('style','');b.append(s);})();
It's pretty basic - changes body width to 120ch, centers body and main, and updates body font. Despite some issues, it works great as a one-click fix.

Any web designer who doesn't, in 2021, understand and make allowances for 4k/5k ultrawide monitors as well as phone-sized screens in portrait mode isn't doing their job right.
The problem here is not the monitor shape or the user's browser window width; it's the CSS (and maybe HTML) and the lack of understanding of how to use it properly (or, more sympathetically, perhaps a conscious choice on the part of the people paying for the website to not allocate enough budget to cover all of their competent webdev's suggestions?)
http://motherfuckingwebsite.com/ looks great on an ultrawide. It's pathetic that so many websites don't.
I was going over the lines with my finger to see if I could feel them. Thought it was internal. Wasn’t until it switched to another app that I saw them move.
Scared the shit out of me.
Does this make anyone else a bit uncomfortable?
I don't think MacOS is still receiving security updates on that hardware. I'm all for using old hardware for as long as it keeps working, but I would never browse the internet with a vulnerable OS on a vulnerable processor (spectre etc...)
Or am I missing something?
Yes, one minor thing ...
Although you are correct that Apple is not officially supporting the latest versions of OSX on that hardware, there is a trivially easy hack of the system that will allow you to load newer versions of OSX.
So, like many of you, I am not running Catalina but I am running an updated, patched version of OSX.
There's a simple patcher you can use for these old macbooks:
although i use Windows, i do have Catalina installed [and Debian for the triple boot]. also using open core. I'm pretty sure i downloaded a copy of osx from one of their repositories 0.o I'm super lazy, it's really not that hard.
my average cost for hardware since i bought my Mac is now less than 400/year CDN. is it worth it? while I'm slightly concerned about the security [I'm probably the biggest risk anyways since I'm not confident in my knowledge of secops], i get 95 fps playing pubg, can edit in 4k, run 100+ tracks in Cubase, and run 3 different OSes or as many vms as you'd like [which i think can also run bare metal vm on the 144 firmware upgrade]. on top of that the case still looks good and I've kept at least 50+lbs of ewaste out of landfills or whatever... seems pretty worth it [hopefully no one ever tries to steal pictures of my cats]
[we could also get into a discussion about the right to repair bill in the EU, talking this way]
do you game? i feel like that might have been intentionally left out of the interview?
what info would you keep unencrypted on your servers?
how much does a colo cost for a 2u server typically? how about back in 06?
is rsync a good solution for video files backup? what are the benefits over say, running a home server and keeping physical backups at your friends house or iron mountain or something?
can rsync use 'live' encrypted data? in other words, how do you encrypt/decrypt on the fly? say for streaming an mp3 or something? [not that you would do this if you were paying per GB...]
please excuse my ignorance. I'm not a real sys admin, just an old wanna be hacker that could never get his shit together.
You might be paranoid. I've been browsing on a few 2008/2009 obsolete Macs for a while, on the highest OS that they will run.
Eventually they'll be a pain to use because of browser incompatibility, pages will get even more bloated and these machines will run them even slower.
Yeah, I get it, people love their Macs... but the company that produces them actively undermines your ability to continue using perfectly good hardware past what they feel is "profitable". This leads to huge efforts to hack/reverse the updaters, or alter newer OS versions to trick them into installing, etc.
I'd personally jump over to some system that doesn't hate its users nearly as much. But, that's just me.
I can't agree more with the "no firewalls" approach to things, though I prefer to call it "host based" firewalls as it scares people less! I'm glad you've had no compliance/audit pushback on that, I architect things similarly and have had success pushing back on the requirement as well.
I'm very surprised by the L2 switches and actually choosing to run completely unmanaged switches. I assume you're running all 10G or more? Maybe I'm overthinking the complexity of your network, but I would be lost without SNMP counters on my switches, and running switches+networking in fully L3 mode has some great isolation benefits, especially if you want full switch-level redundancy.
Do you have some more details on your data architecture? I'm very curious how do you do data direction/redundancy/sharding and balancing customer data across servers. I'm not trying to pry for things you consider secret but I think you have a very similar architectural mindset and I'm curious how you solve these things.
The benefits are tremendous, however, and go beyond day to day operations. A dumb switch has no credentials to protect and there is almost zero attack surface.
Further, if our switch dies we can immediately replace it with any other dumb switch that just happens to be lying around.
If you read failure studies - like those in the excellent Charles Perrow book _Normal Accidents_[1] - you see that in many cases there is a very special component that fails and everything goes to hell when they can't find a replacement for it.
So, while I can't encourage everyone to use dumb, unmanaged switches (because not everyone can) I can encourage everyone to remove as many very special components as they can.
In the right situation it's doable and potentially highly desirable due to the simplicity, but requires a lot of discipline by everyone involved, and the right conditions to make it work.
It was a design I supported and thought it was a great idea for the right situation, but I also was hesitant to introduce it to anyone but the 'right customer'.... who probably already knew what they needed to know about it.
Scrolling through the cert pages, though, 2015 seems to be in the future?
> We personally toured every single major datacenter in Hong Kong and Zurich to choose the facilities that best met our old-fashioned standards for datacenter and telco infrastructure. The same will be true of our upcoming Montreal location in Q4, 2015. https://www.rsync.net/resources/regulatory/sas70.html
The only exception is special purpose backplane networks that are designed explicitly to be isolated. These are basically data busses for clusters, not user-facing networks.
If you have everything on one host, I'd say your overall setup on that host becomes much more complex, because you only need to get hit by one successful exploit chain and all logs on that host can no longer be trusted.
In the past, the benefits of a firewall were more clear-cut, but these days I think that it’s reasonable to have “defense in depth” without using a firewall as part of your solution.
I came to the same conclusion. I was aware of rsync.net and tarsnap, and have checked their prices in the past, but for raw storage it's simply not competitive. Some of the other features they offer might make it worth it though if you need those.
Personally I just need a place to dump a backup of my family photo albums and documents. A full backup is around 1TB (deduplicated, somewhat larger raw), and for that there are much cheaper solutions.
Self-hosting at home (or in the office) is a great option for some if you’re not worried about needing an offsite backup. For those that do care about this sort of thing, though, the extra you pay to have someone else manage the thing is well worth it.
Calculation: Pi 40eur, 1TB external disk 50eur, typical lifespan of a disk 5 years (excluding infant mortality, which falls under the mandatory 2-year warranty for new electronics), ~8W power draw is ~€15/year. Let's say you also need to replace the Pi after 5 years just for good measure. That's 15+(90/5)=€33/year for 1TB, which gets cheaper per terabyte with bigger or multiple drives.
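For what it's worth, the arithmetic checks out - a tiny sketch using the figures from the comment above (all euros, all assumptions theirs):

```shell
# Figures from the comment: EUR 40 Pi + EUR 50 disk, amortized over 5 years,
# plus ~EUR 15/year of electricity for an ~8W draw.
hardware=$(( 40 + 50 ))            # EUR 90 of hardware, replaced every 5 years
per_year=$(( hardware / 5 + 15 ))  # EUR 18/year amortized + EUR 15/year power
echo "~EUR ${per_year}/year for 1TB"
```

The per-terabyte figure falls further with larger or multiple disks, since the Pi and power costs stay roughly flat.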
DIY is often cheaper if you ignore the cost of “doing it yourself” - and securely storing an offsite server is more than just the cost of a disk.
Native ZFS is also a feature for those who can use it.
One part concerned me, though: the interview mentions "we own (and have built) all of our own platform," yet it fails to mention a few critically important parts of a storage platform, the first being encryption. How are personal files being handled? Is encryption being used? Are you able to access this data using a shared key?
As well as contingency, what happens if critically important data is stored on your platform. On your website you mention:
"We have a world class, IPV6-capable network with locations in three US cities as well as Zurich and Hong Kong"
however, it fails to mention whether replication is done across these locations. If technology (drives) is stolen from your datacenter, or mechanical failures beyond your control happen, how will you be able to recover from physical failure if you only appear to be serving from a single location?
Excuse me if I'm wrong, but I couldn't find anything concrete in either the interview or your website. The premise of the platform seems quite well aligned with keeping the UNIX philosophy alive, and reminds me of Tarsnap.
Either way, well made interview and interesting approach to a storage platform.
As a sidenote, what keyboard are you using? It seems really interesting and you failed to mention it in the interview :)
EDIT: It appears that you offer a Geo-Redundant Filesystem as a separate product; maybe you would want to make this a bit more visible on your website beyond just the FAQ and order pages. Either way, that still leaves the topic of encryption. As mentioned, traffic is encrypted using SSH of course, but is the data itself encrypted on your platform?
We give you an empty UNIX filesystem. So, if you push up files over rsync or sftp, they will sit here unencrypted.
However, there are now excellent "tools like rsync that encrypt the remote result with a key rsync.net never sees" - chief among them being 'borg'[1]. Other options include duplicity and restic - all of which transport over SFTP.
So it's up to you and you have total control. If you want ease of use and you want to browse into your account (or one of your immutable daily snapshots[2]) and grab a file over SFTP you probably don't want to encrypt everything on this end.
On the other hand, if you want a totally secure remote filesystem that is nothing but encrypted gibberish from our standpoint, you should use 'borg'.
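Not from the original reply, but for anyone curious what that looks like in practice, a minimal sketch (the repository name and paths are invented; rsync.net's borg documentation covers the exact flags they recommend):

```shell
# Illustrative only: the encryption key is generated and kept client-side,
# so the server only ever stores ciphertext.
borg init --encryption=repokey user@rsync.net:backups
borg create user@rsync.net:backups::{hostname}-{now} ~/Documents ~/Photos
borg prune --keep-daily 7 --keep-weekly 4 user@rsync.net:backups
```

Restores work the same way in reverse: borg decrypts locally, and the remote end never needs anything beyond plain SSH.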
"Are you able to access this data using a shared key?"
We are running stock, standard OpenSSH and you can, indeed, use an SSH keypair to authenticate with. In fact, you have a .ssh/authorized_keys file in your account so you can specify IP restrictions and command restrictions as well ...
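To make that concrete, a single `authorized_keys` entry can pin a key to a source address and a fixed command - the IP, key material, and forced command below are all made up for illustration:

```shell
# Entry in the account's .ssh/authorized_keys (standard OpenSSH option syntax);
# this key only works from 203.0.113.7 and can only run the forced command.
from="203.0.113.7",command="rsync --server -logDtpr . backups/",no-pty,no-port-forwarding ssh-ed25519 AAAAC3...example laptop-backup
```

Even if the key leaks, an attacker can't use it from elsewhere or run arbitrary commands with it.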
" ... how will you be able to recover from physical failure if you only appear to be serving from a single location?"
A standard rsync.net account has no replication. We are the backup and your account lives in, and only in, the specific location you choose when you sign up. However, for 1.75x the price (i.e., not quite double) we will replicate your account, nightly, to our Fremont, CA location.[3]
"As a sidenote, what keyboard are you using?"
It is a Keytronic E03600U2.
[1] https://www.borgbackup.org/
[2] We create and rotate/maintain snapshots of your entire account that are immutable/readonly - so you have protection against ransomware/mallory.
[3] ... which happens to be the core he.net datacenter - one of the nicest and most operationally secure datacenters I have ever been in.
If you are interested, I would be more than happy to have an extended discussion with you going over implementation options, and updating the client side script to make it work better with your service. (https://www.snebu.com, https://github.com/derekp7/snebu, and the tarcrypt extensions to tar are described at https://www.snebu.com/tarcrypt.html).
I'd be happy with a socket/pipe to 'zfs recv zpool/benlivengood/data' that I could throw send-stream data at once a day or so.
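Something close to that can already be approximated from the client side - a minimal sketch, assuming an encrypted local dataset `tank/data` and a remote dataset `data/backup` (both names invented); `zfs send -w` ships the raw, still-encrypted stream, so the key never leaves your machine:

```shell
# Illustrative one-shot; a real setup would use incremental sends
# (zfs send -i) after the first full send.
zfs snapshot tank/data@2021-02-18
zfs send -w tank/data@2021-02-18 | ssh user@rsync.net "zfs recv data/backup"
```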
As well, I mean no offense; the entire platform seems very sturdy, though it leaves some questions which aren't immediately apparent (which may just be me)
If I weren't content with my current backup solution I would seriously consider yours, and I wish you guys the best of luck. You're one of the few keeping simplicity as a key value.
and cross-geographic-region replication to protect against natural calamities (earthquakes, tornadoes, floods, etc.).
It also conjures a managed service with object-level (volume, directory, file) metadata, versioning and strong identity access management capabilities.
rsync.net doesn't seem to do any of these and charges 0.5 cent more per GB/month. What's the secret advantage I'm not seeing?
Notably, their website only claims transfer encryption, not encryption at rest. You can of course encrypt your files yourself with your own keys.
Personally, I feel like if you're going to encrypt your data, you should be encrypting it on your end, before sending it to some backup provider who may or may not be keeping your data secure.
What does that mean exactly? Is your IP provider quintuple-homed? Or are you running a bit more complicated setup than you explain but the gist is that you have no particular routing mechanisms?
What does that say regarding your high availability? If one of your locations is down, then it's definitely down until it's fixed?
Anyway, that was interesting, just curious about the fact of having no router at all. Thanks!
So we have a dumb switch in our rack, but they have routers.
In 2021 that's a weird bandwidth product and a weird setup but in 2001 it was "normal" and we just stay with that setup out of inertia (and the fact that we can't connect to he.net in San Diego).
A similar setup exists for us in Zurich with init7.
However, you are correct and we need to edit that FAQ language: our geo-redundant site in Fremont does not work that way.
(I will note that it has been 11 years since we put that location in place (he.net in Fremont) and it has had zero minutes of downtime)
A tremendous amount of complexity and attack surface is eschewed by living with that setup, and we're always looking for new ways to make that tradeoff.
[1] Castle Access datacenter on Aero Drive. It is now a KIO-managed datacenter.
We were there at a similar time - I probably saw the rsync.net servers.
Not lack of physical memory, but lack of ability to address it as the UFS2 tools, like fsck, were not written to handle billions of inodes ...
We really can't thank Kirk M.[1] enough - he wrote custom patches to ufs and fsck just for our (dirty) filesystems and, as I mention in the article, eventually gave us the push to migrate to ZFS.
> I’m down with Bill Gates
> I call him “money” for short
> I phone him up at home
> And I make him do my tech support
If you had to do it all over again, what would you do different (if anything)?
E.g. product/positioning/tech-stack/employees/business-decisions
In terms of product / tech-stack I don't think I would change anything.
In terms of marketing and word of mouth I think we should have given away hundreds of free accounts in the early years (2006-2010) rather than trying to chase them down as paying customers. I believe we had a lot of decent word of mouth but I don't think I appreciated the power of influencers and their ability to amplify a message.
As for business decisions, I continue to wonder how much business we miss due to not having a Canadian location and we have considered deploying in Montreal for years now but have not pulled the trigger. I don't know if a Canadian location (but still a US company) solves the regulatory requirements of Canadian customers.
Even though I love free plans I think it’s better for small startups to grow organically with “cheap and easy to cancel” instead. Or offer credits for new users.
Is this a ( lack of ) capital issue or simply an uncertain sustainable revenue stream issue?
Having said that, I do think the site would really benefit from a new paint job. A good UXer can make it so much more aesthetically pleasing, while still retaining its simplicity and quick load time. It doesn't have to be fancy. Just static HTML with elegant styling and a few minor tweaks.
For example, I was really surprised to see big name clients such as Disney, 3M, ARM & ESPN hidden a few clicks away (behind a button which wasn't very informative, from what I remember). Same for being in business for 20 years. A good UX/product person will tell you to put this front and center in your landing page, and rightfully so.
@rsync: I love what you're doing, but please get a UX person involved :)
You said: This might seem odd, but consider: if an rsync.net storage array is a FreeBSD system running only OpenSSH, what would the firewall be? It would be another FreeBSD system with only port 22 open. That would introduce more failure modes, fragility and complexity without gaining any security.
You seem to suggest the big firewalls do not bring any value to the table. I always thought they had more "intelligence" - dropping sessions based on some bad patterns, guarding against DDoS (to some extent), etc.
Are you saying BSD is as good as these expensive boxes? Does it apply to SSH only or HTTP(s) and some other traffic as well?
- No nonsense description of what they do
- Clear and simple pricing
- Simplicity as a core feature
Big fan. Look forward to using your services in the future.
Their support people confirmed it doesn’t work (though they didn’t seem to understand why it would be fine for them to support it as advertised...) yet 6 months later they still advertise that they support it, even when I have e-mailed to remind them (and it still doesn’t work either) :(
This doesn’t make sense given that the specific invocation of “rclone serve restic --stdio” doesn’t open any network sockets, it’s no less safe than e.g. “tar”
However, in 2005 or 2006 when we spun out of JohnCompanies[1] and incorporated under the name "rsync.net" I requested, and was given, explicit permission to use the name and domain by the maintainers of rsync.
[1]:https://www.rsync.net/products/borg.html [2]:https://medium.com/@mormesher/building-your-own-linux-cloud-...
Test your backups.
https://messengergeek.wordpress.com/2018/03/09/backblaze-rev...
Most people don’t really think about this and expect that any backup is also an archive. You can get burned by this and have to use B2 or whatever it is instead.
I have a LILYGO that I coded up a time-tracking app for, which basically creates an event log whenever I tap it, wherever I go - and when Internet is available, it squirts the log over to some text files that live on rsync.net ..
Pretty neat to be able to do this without much of a desktop or mobile phone in the way, I have to say. I wonder if there are more opportunities for this kind of IoT service out there .. it sure was fun to get this working without REST ..
Simple file system interface to all devices first, then any further software interfaces on top only if desired.
Thanks for making the option available for remote storage John!
However, if you want a backup process then you will, indeed, need to find some way to run 'borg' or 'restic' or 'rclone' on Windows.
I've never used WSL so I can't comment, unfortunately ...
If anyone has a recommendation for backing up Windows servers I'd love to hear it.
If you can get command line access to rsync.net with openssh and either CMD or Pwsh, then robocopy can forklift your stuff. This is without even getting into the weeds of the fact that WSL exists...
I am also seeing that some documentation exists for pointing Veeam at it, which is my preference. I don't run any metal computers that aren't hypervisors and using that to back up my VMs, be they windows or linux, is my preference.
Though if the description of the service on their web page does not make you salivate, perhaps it's not for you.
But rsync.net is a backup product and OneDrive is a large file sharing drive.
Attempt to use that 6TB in backup situations and you may experience issues.
For me it was in 2004, also using 3Ware controllers. I was running on RedHat (before RHEL) and XFS before it was common on Linux, and similarly had memory issues when trying to repair filesystems.
> I start the day with a short walk outdoors. I don’t want the first thing my eyes see to be print, and I don’t want the first thing my body does to be sitting. So I walk a bit.
This is a smart move!
I have a copy running in my NAS to always have a copy available, one in my laptop, one in my desktop, and I was thinking about having one in my phone to run only when I'm charging (so I don't kill my battery).
My setup to do this is that I run my own nextcloud server, which handles the computer and phone etc. syncing, then nightly that's backed up to a small computer in my house (I just use rsnapshot for that), which then backs itself up to rsync.net (using plain old rsync.)
I do a simple rsync of my precious but not too sensitive data, daily.
and for the more sensitive stuff, gpg before sending daily as well, the copies will add up but I prefer it that way.
10/10 great business
Simple stuff.
Guess you don't need a firewall when you have no open ports?
Haha yes! Guess I'm not the only one...
https://blogs.cisco.com/manufacturing/the-top-5-reasons-to-a...
Especially since they're running the boxes that it's connected to.
They can do resiliency, network segmentation, and monitoring on their platform.
What's a Cisco box going to do for them?
Managed switches typically have ACL support. I get the KISS principle, but this setup seems to be trading security for simplicity.
> Disadvantage #1 – Open ports on unmanaged switches are a security risk
Why? Is there something that would prevent an attacker with physical access from unplugging an existing cable? Does the average managed switch config have MAC limits and auto shutdown if a link is lost for just a few seconds? MAC limits are easily bypassed, even without (permanently) disconnecting the legitimate device, by inlining an active device, maybe with some MAC spoofing.
I don't include 802.1x or automatically shutting down a port that loses an uplink as a "simple and effective security precaution"; it would be a right pain for many situations. Is the latter even a feature? I certainly haven't come across it (unlike normal port security like limiting the number of MAC addresses, which just adds overhead with limited effective security).
> Disadvantage #2 – No resiliency = higher downtime
If my device has one ethernet cable into one switch, how does that help? If my unmanaged switch goes pop, I have a spare that I can put in and be back running in a minute. My managed cisco edge switches take 10+ minutes just to reboot.
If my device has two ethernet cables, one into one unmanaged switch, one into another, losing that switch isn't a problem.
> Disadvantage #3 – Unmanaged switches cannot prioritize traffic
Correct, they can't. Neither can a managed switch without QoS set up. If your switch is dropping packets, you don't have enough bandwidth. I've seen packet loss when sending 500 Mbit down a 1 G uplink on managed switches, even on QoS'd traffic; indeed, I've seen higher-priority traffic drop while lower-priority traffic didn't. QoS isn't trivial.

Ultimately, whether your packet gets through comes down to how big your buffers are, so your application should cope with some loss, and if you get too much loss you need more bandwidth. If you have 48 devices connected at 1 Gbit each, each firing 100 Mbit of traffic every second, all bang on the second, into a 10 Gbit uplink, then on paper you only need 4.8 Gbit of uplink. In practice you'll also need a 600 MB packet buffer, and you should expect a lot of delay on your packets, whether managed or unmanaged, QoS or no QoS.
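The arithmetic behind that worst case, assuming all 48 bursts land in the same instant and ignoring what drains during the burst:

```shell
# 48 devices x 100 Mbit each = 4800 Mbit arriving at once;
# 4800 Mbit / 8 bits-per-byte = 600 MB of packet buffer.
echo "$(( 48 * 100 / 8 )) MB"   # -> 600 MB
```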
> Disadvantage #4 – Unmanaged switches cannot segment network traffic
Correct, but then if I have 8 desktops in a cluster, why wouldn't I pop in a desktop switch with 8 1G ports? I want them all on the same VLAN anyway.
> Disadvantage #5 – Unmanaged switches have limited or no tools for monitoring network activity or performance
They don't, but again do I want that for a specific use case?
If I want a managed switch (which I usually do), then I'll spec a managed switch. It's unlikely it will be cisco. If my requirements don't need features of a managed switch then I won't bother.
I find it interesting that there's no mention of preventing broadcast storms, or IGMP snooping - both of which are far more useful for a typical edge switch than qos.
Personally, I tend to use managed switches -- indeed I just bought a couple of 24-port TP-Link PoE switches for an event I'm planning. I'm not 100% sure I'd go for an unmanaged switch in rsync.net's case, but from your list:
1) Doesn't apply -- servers are in a secure location
2) Doesn't apply -- servers are either single connected (so need a physical visit, and replacing an unmanaged switch is far quicker and easier than a managed switch), or they're dual connected to two different switches
3) If they're doing in-band management then you might want to carve out a small part of your uplink to avoid being DoSed by a dodgy server (if a server is saturating your uplink and your SSH session can't establish, that could be an issue; if you've got OOB access on a separate link, though, it's not a problem, and clearly they don't have that issue)
4) Doesn't matter -- they don't want different vlans
5) They presumably measure the bandwidth use of each of their servers. The question then is "does the ISP give me logs I can rely on for the WAN?" Personally I wouldn't rely on that, but I can see the idea
Spanning tree: Secure network, they aren't going to connect one port to another to cause a storm
IGMP: They presumably aren't using multicast for anything major so bitrates would be very low even if they were there
Reasons to use a firewall or a switch with an ACL in this specific case that I can think of:
1) 2 points of control -- a zero-day in FreeBSD's firewall could open a port to an unintended source which was listening but blocked by the host firewall (FreeBSD's equivalent of iptables). If you had a non-BSD firewall in front, it's unlikely the same zero-day would affect both
2) Port 22 is only open to a specific IP range; again, suppose there's a zero-day, and the TTL of outbound packets is high enough to establish a session
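As a sketch of case 2, the upstream control could be as small as this pf.conf fragment (interface name and management range are invented; pf is assumed as the packet filter):

```
ext_if   = "em0"              # external interface (invented)
mgmt_net = "203.0.113.0/24"   # allowed SSH source range (invented)
block in on $ext_if proto tcp to port 22
pass  in on $ext_if proto tcp from $mgmt_net to port 22
```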
Reasons to use a managed switch even ignoring firewalling:
1) Reliable traffic stats -- you could guess at these by summing the uplinks of all the connected devices although some packets will be dropped and some may be going to other devices on the network
Reasons to use QOS on a managed switch:
To allow in-band management if something goes wrong. A separate iLO/IPMI/KVM connection would be better for that, though.
I don't think they'd need features like SPAN ports (I personally use them all the time, along with fibre taps, but I have a different use case which is UDP-heavy and loss-intolerant)
> If your switch is dropping packets, you don't have enough bandwidth.
this isn't true; there are more bottlenecks than just bandwidth. E.g. try sending 10-byte packets instead of 1500-byte packets and watch as your switch starts dropping due to CPU exhaustion
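The packet-rate blowup is easy to quantify: at a fixed bit rate, shrinking the frame multiplies the packets per second the switch has to handle (using 64 bytes as the minimum Ethernet frame here, with headers and inter-frame gaps ignored for simplicity):

```shell
# Packets per second needed to fill a 1 Gbit/s link:
echo $(( 1000000000 / (1500 * 8) ))   # 1500-byte frames -> 83333 pps
echo $(( 1000000000 / (64 * 8) ))     # 64-byte frames   -> 1953125 pps
```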
> Ultimately it comes down to how big your buffers are whether your packet gets through or not
not really; traffic prioritisation is about deciding which packets you drop when you hit your limits (or get close to them), not about making sure you never drop anything
obviously if you're never hitting any bottlenecks: the prioritisation does nothing
https://www.hetzner.com/en/storage/storage-box
Access via rsync/sftp/scp
Hetzner throttles bandwidth once traffic exceeds ~5x the storage capacity, while rsync.net doesn't seem to. Hetzner also supports only a small total number of snapshots, while rsync.net supports more per day.
I don't think Hetzner and rsync.net are really competing with each other. rsync.net's focus is more on business customers, while Hetzner targets private customers.
I also like that their Europe location is in Switzerland. I think it's useful for a number of reasons to store critical data in more than one jurisdiction.
* he appears unaware of the role of hardware firewalls in mitigating DDoS by efficiently handling a large number of active TCP sessions (they have specialised hardware for this purpose)
* he describes, in great detail, a lot of information that a phisher or other type of attacker could use to target him