At the very least, it is nice to make the acquaintance of at least one BSD, because it will probably expand your knowledge of Linux in ways you won't be able to anticipate.
For example, FreeBSD got me into kernel development, full system debugging, network stack development, driver development, and understanding how the whole kit fits together. Those skills transferred back and forth to Linux with reasonable fidelity, and for me, jumping into Linux development cold would have been too big a leap... especially in terms of confidence and developing a mental model.
For my personal infrastructure, I tend to use FreeBSD because in many ways it is simpler and less surprising, especially when accounting for the passage of time. ifconfig is still ifconfig, and it works great. rc.d is all I need for my own stuff. I like the systematic effects of things like tunables and sysctl for managing hardware and kernel configuration. The man pages are forever useful to new and old users. The kernel and userland APIs are extremely stable, akin to commercial operating systems and unlike Linux.
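Roughly what that looks like day to day, in case it's unfamiliar (the knob and value below are only illustrative, not a tuning recommendation):

    # runtime kernel state: query and set with sysctl(8)
    sysctl hw.model hw.ncpu                                # read hardware info
    sysctl -d kern.ipc.somaxconn                           # every knob carries its own description
    sysctl kern.ipc.somaxconn=1024                         # change it at runtime
    echo 'kern.ipc.somaxconn=1024' >> /etc/sysctl.conf     # persist it across reboots

    # boot-time tunables and module loading live in /boot/loader.conf
    echo 'zfs_load="YES"' >> /boot/loader.conf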
There are warts. There are community frictions. The desktop story and some developer experiences will be perpetually behind Linux due to the size of the contributor base and user base. The job market for BSD is very limited compared to Linux. But I don't think it's an all or nothing affair, and ideally in a high stakes operation you would dual stack for availability and zero-day mitigation (Verisign once gave a great talk on this).
That sounds very appealing to me. I have to keep a small number of servers running, but it's not my main focus and I would like to spend as little time on it as possible.
I have started using Alpine Linux for servers (not for my desktop, yet) because it is light and simple. Maybe BSD will be the next step.
They also have things like `rpm` in ports that you can install. Why? Because you can enable linux binary compatibility[0] and run linux binaries on it (this implements the linux kernel interface, it's not a VM/emulator). It's also backward compatible with its own binaries back to FreeBSD 4 (circa 2000).
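If I remember right, turning it on is only a few commands; treat this as a sketch, since the base package name changes between releases:

    sysrc linux_enable="YES"       # load the Linux ABI modules at boot
    service linux start            # load them now and mount linprocfs/linsysfs
    pkg install linux_base-c7      # a Linux userland under /compat/linux (CentOS 7 based here)

After that, many ordinary Linux ELF binaries just run under the FreeBSD kernel.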
Though you may not need that as the ports/packages collection is pretty comprehensive.
It also comes with some nifty tools built-in for isolation (similar to but predating cgroups/containers) as "jails". It also has a hypervisor built in (bhyve) for virtualization if you do need to run any linux VMs or anything for any reason.
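A minimal jail, to give a flavour of it (names, paths and addresses made up; tools like bastille or ezjail wrap all of this for you):

    # /etc/jail.conf
    www {
        path = "/usr/local/jails/www";
        host.hostname = "www.example.org";
        ip4.addr = "192.0.2.10";
        exec.start = "/bin/sh /etc/rc";
        exec.stop = "/bin/sh /etc/rc.shutdown";
        mount.devfs;
    }

    sysrc jail_enable="YES"    # start jails at boot
    service jail start www     # start this one now
    jexec www /bin/sh          # drop into a shell inside the jail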
The way I usually sum up the difference to people is that FreeBSD is designed while Linux is grown. FreeBSD feels much more like a cohesive whole than Linux.
Really, the only reason I'm not running it everywhere is that the industry has kind of settled on linux-style containers for... absolutely everything, and the current solution for that on FreeBSD is basically "run linux in a VM".
I've migrated it through system upgrades and security fixes, but nothing else needed to change. Usual uptime is about 3 years between major release updates.
FreeBSD is an awesome server platform.
Check it out: https://www.linuxfromscratch.org/
The CON is coming up, https://freebsdfoundation.org/news-and-events/event-calendar...
Fall 2024 FreeBSD Summit, November 7-8, 2024, San Jose, CA
But I don't think we talk enough about the joy of not being surprised by updates. I'm about to do an upgrade from 13.2 to 14.1 this weekend and I am very confident that I won't have to worry about configuring a new audio subsystem, changing all my startup scripts to work with a new service manager paradigm, or half my programs suddenly being slow because they're some new quasi-container package format.
I mean, in addition to what kev009 mentioned, FreeBSD has so many great things to offer: For example, a full-featured "ifconfig" instead of ip + ethtool + iwconfig. Or consistent file-system snapshots since like forever on UFS (and ZFS, of course). I never understood how people in a commercial setup could run filesystem-level backups on a machine without that, like on Linux with ext4. It's just asking for trouble.
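For instance, this is roughly how I'd take a consistent backup off a live machine (dataset, mount point and host names are only examples):

    # ZFS: the snapshot is atomic, then back up the frozen view at leisure
    zfs snapshot zroot/var/db@nightly
    zfs send zroot/var/db@nightly | ssh backuphost zfs receive backup/db

    # UFS: dump -L takes a filesystem snapshot first, so the dump is consistent
    dump -0 -L -a -f /backup/var.dump /var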
So, I'm happy to see this thread about FreeBSD here! Maybe we can make the Open Source scene a bit more diverse again with regards to operating systems…
If you are a typical SaaS provider, the complexity may be beyond your capabilities and budget... an analogy would be a local delivery business choosing to build a long-term relationship with a single vehicle manufacturer and dealer.
If you are a high-stakes service provider, you need to start thinking about how to get out from under a single vendor and market flux and plot your own destiny... an analogy would be a national shipping carrier sourcing vehicles from multiple manufacturers and developing long-term relationships with them to refine the platform.
It really just fragments my knowledge to be honest.
Say "I gotta get things done".
Get me to a terminal. You've got Mac OS command line flags, GNU, BSD. Great.
Then it's some kind of asinine config to interact with some piece of software, all to achieve "generally the same thing", just a different way/flavor.
I really don't see the benefits.
For me, that deeper knowledge is an advantage. It helps me quickly evaluate tradeoffs between databases, debug at the OS level, or dismiss a library still relying on select if I expect heavy load. This insight saves time and increases efficiency.
As someone who admins a lot of btrfs, it seems very unlikely that this was unrecoverable. btrfs gets itself into scary situations, but also gets itself out again with a little effort.
In this instance “I solve problems” meant “I blow away the problem and start fresh”. Always easier! Glad the client was so understanding.
As someone who used it all day every day in my day job for 4 years, I find it 100% believable.
I am not saying you're wrong: I'm saying experiences differ widely, and your patterns of use may not be universal.
It's the single most unreliable untrustworthy filesystem I've used in the 21st century.
I stayed away for a while but have it again on a Garuda install. I never completely give up on a technology, I hope they get it together.
I think the “experiences differ widely” point makes sense with this comment too. Synology uses btrfs on the NAS systems it sells (there’s probably some option to choose another filesystem, but this is the default, AFAIK). If it were to be “the most unreliable untrustworthy filesystem” for many others too, Synology would’ve (or should’ve) chosen something else.
Around that time, SLES made btrfs their default filesystem. It caused so many problems for users that they reversed that decision almost immediately.
As for the stories, AFAICT the reason is often that the user didn't know they could get their data back - or they are stories from many years ago when btrfs was buggy, but AFAIK those issues have long been solved (I think some specific case with some RAID setup still has issues, but this is hearsay, and AFAICT from the same hearsay that setup isn't really necessary with btrfs in the first place).
Using btrfs is more complicated than using ext or something similar, especially since most tools that deal with files/filesystems are made only with ext-like features in mind - to the point where sometimes I wonder what the point is and I consider switching to ext3 or ext4, but then I remember snapshots and I'm like, nah :-P.
If you look for benchmarks comparing databases on Linux/BSDs you'll find lots of nuance in practice and results going both ways depending on configuration and what's being tested.
The goal of the talk and the article is not to urge people to migrate all their setups, but simply to share my experience and the results achieved. To encourage the use of BSDs for their own purposes as well. It’s not to say that they are the best solution; there is no universal solution to all problems, but having a range of choices can only be positive.
Bloke is not acquainted with Keynesian economics.
https://www.youtube.com/watch?v=9OhIdDNtSv0
https://www.youtube.com/watch?v=NO_tTnpof_o
All a man needs is food in his stomach and a place to rest at the end of the day. Everything else is vanity.
What proportion of global GDP is dedicated to fulfilling our basic material needs?
It is mostly unnecessary. In spite of the huge productivity gains made since the seventies, the current generation of young Americans is poorer than their parents and grandparents were at their age.
So what does all the IT optimization bring? Just more wealth for the owners and redundancies for their employees, including Joe Bloggs here.
It is time people in IT came to understand this. In the long term their activities are not going to improve their wealth. They are one of the few professions whose job is to optimize themselves out of a living, unless they own the means of production they are optimizing, which they don't.
It is their employers that do.
I understand it alright, but I'm trapped. Closer to 50 than to 40, I've got a family to run. I could be interested in another profession, but our daily lives & savings would tank if I stopped working to learn another profession. Also, there's no other profession that I could realistically learn that would let me take home nearly the same amount of money every month. If someone lives alone, they could adjust their standard of living (-> downwards, of course); how do you do that for a family?
Furthermore, there is no switchover between "soulless software job for $$$" and "inspiring software job for $". There are only soulless jobs, only the $ varies. Work sucks absolutely everywhere; the only variable is compensation -- at best we can get one that "sucks less".
When I was a teenager, I could have never dreamt that programming would devolve into such a cruel daily grind for me. Mid-life crisis does change how we look at things, doesn't it. We want more meaning to our work (society has extremely decoupled livelihood from meaning), but there's just no way out. Responsibilities, real or imaginary, keep us trapped. I'd love to reboot my professional life, but the risks are extreme.
FWIW, I still appreciate interesting tasks at work; diving into the details lets me forget, at least for a while, how meaningless it all is.
The houses of the 50s were shit tier and spread around the entire US. You can go buy them today for cheap in the 98% of locations people don’t want to live in.
Sounds like he understood it just fine. He owns the means of production.
There is a reason for the "you will own nothing and you will be happy" ideology being promoted by the "PTB", i.e. subscriptions for ink cartridges, heated seats and advanced suspensions in newer cars.
Corporations now want a continuous income stream from the services provided by the physical products they have "sold", but they don't want their employees and subcontractors earning some of that income stream.
Some IT administrators have been known to schedule regular "downtimes" on perfectly performing systems just to ensure their users and bosses don't take their service for granted.
I also recall a problem with mmap(), but I'm not sure if it was related to Java or something else.
I've barely touched the BSDs and it's been a few years since I last used Solaris so I can't make much of a comparison as a user myself.
If my needs for storage were more complicated, I would probably use FreeBSD ZFS, but UFS suffices for my rather modest needs.
I use OpenBSD for desktop, web and mail services. There are some limitations, but none that are serious enough to warrant dealing with running another BSD, or Linux distribution.
As mentioned in the article, it also serves as a decent set of instructions, assuming the actual dockerfile(s) for the services and dependencies are broadly available. You can swap out the compose instance of PostgreSQL for your dedicated server with a new account/db, relatively easily. Similar for other non-app centered services (redis, rabbitmq, etc). You can go all in, or partly in and in any case it does serve as self-documenting to a large degree.
That said, we had routine XFS losses on SGI boxes. It was a very well-known scenario: write constantly to a one-page text file, say every few seconds, then power-cycle the machine. The file would be empty afterwards. This doesn't happen on Linux; I vaguely recall discussing this with someone some years ago (maybe here on HN), and something was changed at some point, maybe when SGI migrated XFS to Linux, or shortly after.
XFS is originally from SGI Irix and was designed to run on higher end hardware. SGI donated it to Linux in 1999 and it carried a lot of its assumptions over.
For example, on SGI boxes you had "hardware raid" with cache, which is essentially a sort of embedded computer with its own memory. That cache had a battery backup, so that if the machine had a crash or sudden power loss the hardware raid would live on long enough to finish its writes. SGI had tight control over the type of hardware you could use, and it was usually good quality stuff.
In the land of commodity PC-based servers this isn't often how it worked. Instead you just had regular IDE or SATA hard drives. And those drives lied.
On cheap hardware the firmware would report back that it had finished writes when in fact it hadn't, because that made it seem faster in benchmarks. And consumers/enterprise types looking to save money with Linux mostly bought whatever was the cheapest and fastest-looking on benchmarks.
So if there was a hardware failure or sudden power loss, there could be several megs of writes still in flight while the file system thought they were safely written to disk.
That meant there was a distinct chance of data loss when it came to using Linux and XFS early on.
I experienced problems like that in early 2000s era Linux XFS.
This was always a big benefit to sticking with Ext4. It is kinda dumb luck that Ext4 is as fast as it is when it comes to hosting databases, but the real reason to use it is that it has a lot of robust recovery tools. It was designed from the ground up with the assumption that you were using the cheapest, crappiest hardware you can buy (personal PCs).
However modern XFS is a completely different beast. It has been rewritten extensively and improved massively over what was originally ported over from SGI.
It is different enough that a guy's experience with it from 2005 or 2010 isn't really meaningful.
I have zero real technical knowledge of file systems except as an end-user, but from what I understand FreeBSD uses UFS, which uses something like a "WAL" or "write-ahead log"... where it records the writes it is going to do before it does them. I think this is a simpler but more robust solution than the sort of journalling that XFS or Ext4 uses. The trade-off is lower performance.
As far as ZFS vs Btrfs... I really like to avoid Btrfs as much as possible. A number of distros use it by default (OpenSUSE, Fedora, etc), but I just format everything as a single Ext4 or XFS partition on personal stuff. I use it on my personal file server, but it's a really simple setup with a UPS. I don't use ZFS, but I strongly suspect that btrfs simply failed to rise to its level.
One of the reasons Linux persists despite not having something up to the level of ZFS is that most of ZFS features are redundant to larger enterprise customers.
They typically use expensive SANs or more advanced NAS appliances with proprietary storage solutions that provided ZFS-like features long before ZFS was a thing. So throwing something as complicated as ZFS on top of that really provides no benefit.
Or they use one of Linux clustered file system solutions, of which there is a wide selection.
I don't know about its multi-disk story (I do use ZFS for that personally), but for single-disk setups it is great. You get so many of the ZFS benefits (snapshots, rollback, easily create and delete volumes, etc) with MUCH lower memory usage (at least in my own experiments to try this out).
[0] https://lwn.net/Articles/824855/ [1] https://arstechnica.com/gadgets/2021/09/examining-btrfs-linu...
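For what it's worth, the single-disk snapshot/rollback workflow mentioned above is roughly this (subvolume paths invented for the example):

    # create a subvolume and take a cheap read-only snapshot of it
    btrfs subvolume create /data/projects
    btrfs subvolume snapshot -r /data/projects /data/.snapshots/projects-pre-upgrade

    # roll back by swapping in a writable copy of the snapshot
    mv /data/projects /data/projects.broken
    btrfs subvolume snapshot /data/.snapshots/projects-pre-upgrade /data/projects
    btrfs subvolume delete /data/projects.broken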
Most of the SGI machines I've used, of various sizes, did not have hardware raid. In my experience, you were more likely to run into hardware raid on a PC than on traditional SGI or Sun servers (I don't have much experience with AIX or HP-UX), unless the unix server was in a SAN environment.
ZFS is used heavily on Linux and runs well, though there are some limitations which are being addressed over time in the OpenZFS project. It is used across all areas that Linux serves, whether laptop, desktop, home server all the way to enterprise. https://openzfs.org/wiki/Main_Page
I think you're describing UFS soft updates? I think that's more or less for metadata updates, not the data itself. It's been a while since I reviewed it, but it gets you nice things like snapshots and background fsck, so after an unclean restart your system can get back to work immediately and clean up behind the scenes. There is some sort of journalling that's fairly new, but my experience from 10 years ago was that soft updates and background fsck just worked; and if you wanted better, ZFS was probably what you wanted, if you could afford copy-on-write.
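If you want to see what a given UFS volume is actually using, tunefs will tell you; a sketch (device name is an example, and IIRC the flags can only be flipped while the filesystem is unmounted or read-only):

    tunefs -p /dev/ada0p2         # print current settings, incl. soft updates / journaling
    tunefs -n enable /dev/ada0p2  # enable soft updates
    tunefs -j enable /dev/ada0p2  # enable journaled soft updates (SU+J)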
#1 reason we chose Ansible over other tools was support for the BSDs.
>my priority is solving my clients’ specific problems, not selling a predefined solution.
>It’s better to pay for everything to work than to pay to fix problems.
>computing should solve problems and provide opportunities to those who use it.
>The trend is to rush, to simplify deployments as much as possible, sweeping structural problems under the rug. The goal is to “innovate”, not necessarily improve — just as long as it’s “new” or “how everyone does it, nowadays.”
>Some people are used to thinking that the ideal solution is X — and believe that X is the only solution for their problems. Often, X is the hype of the moment
>When I ask, “Okay, but why? Who will manage it? Where will your data really be, and who will safeguard it?”, I get blank faces. They hadn’t considered these questions. No one had even mentioned them.
>We’ve lost control of the data. For many, it’s unnecessary to complicate things. And with every additional layer, we’re creating more problems.
Hopefully someday more people will wake up.
I don't really see Kubernetes as being a game changer. The biggest pro is that it makes it easier to onboard both development and operations personnel, since there's a quasi-standard for how a lot of things like scheduling and application networking work.
But it also seems to come with a great deal of accidental and ornamental complexity. I would say the same about microservices versus, say, figuring out your repository, language, and deployment pipelines to provide a smooth developer and operator experience. Too much of this industry is fashion and navel-gazing instead of thinking about the core problems and standing behind a methodology that works for the business. Unless Google moves its own infrastructure to Kubernetes, then maybe there's something to be had that couldn't reasonably be done otherwise :)
We went from a virtualized server model to managed Kubernetes and costs have escalated considerably. The additional complexity and maintenance overheads of Kubernetes are not trivial and required additional staff hires just to keep things ticking. I think the cost so far from moving from two cages in separate datacentres running blades to AWS is approximately a 6x multiplier including staff. This was all driven on the back of "we must have microservices to scale", something we have failed entirely to do. It's a complete own goal.
If you look at most enterprises today you will see it deployed everywhere.
And most of the complexity has been abstracted away by the cloud providers so all you're left with is a system that can handle all manner of different applications and deployment scenarios in a consistent way.
Kubernetes solves administration of a cluster of Linux machines, as opposed to administering a single Linux machine. It abstracts away the concept of a machine, because it automates application scheduling, scaling across different machines, rolling updates of applications, and adding/removing machines to the cluster, all at the same time. There are no other instruments like that for applications; the closest are things like Spark and Hadoop for data engineering tasks (not general applications).
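To make that concrete, the day-to-day of that abstraction is a handful of commands, none of which mention a specific machine (names and images here are placeholders):

    kubectl create deployment web --image=nginx:1.25 --replicas=3   # run 3 copies somewhere in the cluster
    kubectl scale deployment/web --replicas=10                      # spread more copies across machines
    kubectl set image deployment/web nginx=nginx:1.27               # rolling update, no machine named
    kubectl rollout status deployment/web
    kubectl drain worker-3 --ignore-daemonsets                      # empty one node for maintenance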
Microservices are also used to solve a very specific problem – independent deployments of parts of the system. You can dance with your repository and your code directories as much as you want, if you're not in a very specific runtime (e.g. BEAM VM), you will not achieve independent deployments of parts of your service. The ability to "scale independently" (which tbh is mostly bullshit) is an accidental consequence of using HTTP RPC for microservice communication, which is also not the only way, but it allows reuse of the HTTP ecosystem.
> I’m the founder and Barista of the BSD Cafe, a community of *BSD enthusiasts
Did the original article change its title (currently "I Solve Problems"), or did the submitter editorialize it?
There's probably more collective writing about the various tradeoffs between Debian and FreeBSD in their forums and communities than anywhere else on the internet.
Personally I love ZFS and ZFS on root so much I can never go back to not having it. It's a shame more cloud providers like DigitalOcean/AWS/etc. don't offer it natively.
Huh, were they running persistent Docker containers and modifying them in place? If that's the case, they were missing the best part of Docker - the Dockerfile and "containers are cattle". The power of Docker is that no ad-hoc system customization is possible; it's all in the Dockerfile, which is source-controlled and versioned, and the artifacts (like built images) are read-only.
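i.e. the whole lifecycle is rebuild-and-replace rather than edit-in-place, something like this (registry, tag and port invented):

    docker build -t registry.example.com/app:1.4.2 .   # image built from the versioned Dockerfile
    docker push registry.example.com/app:1.4.2
    docker rm -f app                                    # throw the old container away
    docker run -d --name app -p 8080:8080 registry.example.com/app:1.4.2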
To go from this to the all-manual "use bastille edit TARGET fstab to manually update the jail mounts from 13.1 to 13.2 release path." [0] seems like a real step back. I can understand why one might want to go to BSD if they prefer this kind of workflow, but for all my projects, I am now convinced that a functional-like approach (or at least an IaC-like one) is much more powerful than manually editing individual files on live hosts.
[0] https://bastille.readthedocs.io/en/latest/chapters/upgrading...
I am personally on board with using the various BSDs when it makes sense (though maybe just pick FreeBSD and stick with it, as opposed to fragmenting the install base, the same way I've settled on Ubuntu LTS wherever possible; it's not ideal but it works), except the thing is that most job ads and such call for Linux experience in particular, same with tooling like Kubernetes and OCI/Docker containers and such. Ergo, that's where most of my time goes; I want to remain employable and also produce solutions that will be understandable to most people, instead of getting my DMs pinged whenever someone is confused by what they're seeing.
To give but one example, I recently reported a bug when FreeBSD didn't boot after an upgrade from 13 to 14. Worse, the disk format was somehow altered, so when the reboot tried to boot off 13 due to the zfs bootonce flag (supposedly a failsafe), it refused to boot for the same reason. I believe it's due to a race condition in geom/cam. The same symptoms were reported 6 years ago, but the bug report has seen no activity. Making your system irrecoverable without a rescue image and console access strikes me as pretty serious. He waxes lyrical about zfs, but it's slower and more resource-hungry than its simpler competition, and it's not difficult to find numerous serious zfs bug reports over the years. (But not slower than FreeBSD's UFS, oddly. It's impressively slow.) Another thing that sticks in my mind is a core zfs contributor saying its encryption support should never have been merged.
This sounds too disparaging because the simplicity and size of FreeBSD has its own charms, but the "it's all sunshine and roses" picture he paints doesn't ring true to me. While it's probably fair to say stable versions of FreeBSD are better than the Linux kernels from kernel.org, and possibly Fedora and Ubuntu, they definitely trail behind the standard Debian stable releases.
Comparing FreeBSD to Debian throws up some interesting tradeoffs. On the one hand, FreeBSD's init system is a delight compared to systemd. Sure, systemd can do far, far more. But that added complexity makes for a steep learning curve and a lot of weird and difficult-to-debug problems, and as FreeBSD's drop-dead-simple /etc/rc.conf system proves, most of the time the complexity isn't needed to get the job done. FreeBSD's jails just make more intuitive sense to me than Linux's equivalent, which is built on control groups. FreeBSD's source is a joy to read compared to most I've seen elsewhere. I don't know who's responsible for that - but they deserve a medal.
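To illustrate the rc.conf point: an entire server's network and service configuration can be as small as this (values invented):

    hostname="web1.example.org"
    ifconfig_em0="inet 192.0.2.5 netmask 255.255.255.0"
    defaultrouter="192.0.2.1"
    sshd_enable="YES"
    nginx_enable="YES"
    ntpd_enable="YES"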
On the downside - what were they thinking when they came up with the naming scheme under /dev for block devices? (Who thought withering device names was a good idea, so that /dev no longer reflects the state of attached hardware?) And a piece of free advice - just copy how Linux does its booting. Loading a kernel + initramfs is both simpler and far more flexible than the FreeBSD loader scheme. Hell, it's so flexible you can replace a BIOS with it.
The combination of the best parts of Linux and the BSDs would make for a wonderful world. But having a healthy selection of choices is probably more important, and yes - I agree with him that if you are building an appliance that has an OS embedded in it, the simplicity of FreeBSD does give it an edge.
Sorry, could you clarify what this means? I'm not super familiar with freebsd and don't understand what withering means here.
The /dev naming complaint is about how FreeBSD handles block device aliases. Like Linux, FreeBSD creates a number of aliases based on the block devices' labels and UUIDs. My favourite Linux alias is missing on FreeBSD - bus path (which is how you unambiguously get to the device you just connected via a cable). On Linux these aliases are just symlinks to the real device, which means all it takes is "ls -l" to see the relationship between devices and aliases. Simple, elegant, and it means all devices have one true name, so in error logs and so on you always know which device it's talking about.
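For example, on Linux:

    # every alias is just a symlink back to the one true device node
    ls -l /dev/disk/by-uuid/ /dev/disk/by-label/ /dev/disk/by-path/
    # e.g. /dev/disk/by-path/pci-0000:00:17.0-ata-1-part1 -> ../../sda1  (illustrative output)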
Under FreeBSD these aliases are device nodes, so there is no single true name. The real device an alias maps to is not at all obvious. Worse, it doesn't have the same device major or minor, and worse still, the aliases behave differently. So for example, if the OS mounts a CDROM using its label alias (which would be /dev/iso9660/label on FreeBSD), you can't eject it because the alias device doesn't understand the eject ioctl. But then you may not be able to get to the real device at all, because it's been withered away.
Complicating the issue still further is zfs. It wants to take over the roles of /etc/fstab and /sbin/mount. This gets particularly interesting when you boot off zfs: there is no /dev yet, so there are no aliases, so it has no obvious way of figuring out what those path names you gave to zpool meant. They kludged their way around that somehow, but it doesn't always work - which I think is the trigger behind the boot failures I mentioned earlier. It took me days to figure out a workaround, which was to turn off some of the aliasing.
It's also a legal nightmare for the hoster if something goes wrong.
How do people on the Internet come to such random conclusions when there is no way you could have known the full terms of the contract between the author and their client?
Yes. I also always let my customers sign off when I change the libraries I use. Completely sane approach.
The major providers such as GCP, AWS etc share very few details about their underlying infrastructure with their customers. They change all sorts of things all the time.
As has been said, it varies case-by-case, and the OP believes they have a relationship with their client such that they didn’t need to provide notice for this, and they’re probably right. But most people doing this would send out a “maintenance is occurring on this date and some downtime may occur” email.
It was a time when sending regular mail to different countries could take weeks and cross-country phone calls cost between $2 and $20 per minute, and here was FidoNet, which promised to allow communication across the globe with only 1-4 days of delay and basically for free.
My 15-18 year old self was instantly sold. I spent countless hours reading through the "forums" on there. So much knowledge just at the tip of my fingers.
Of course some time later it was more or less replaced (for me) by email, usenet and IRC, but the memory still remains.
... This site and this directory were driving up server load averages, which causes instability for all users on this server. Our block on this site appears to have blocked over 150,000 connections in about half an hour. This is simply more than is appropriate for a shared server.
You may want to review your domain's traffic logs to see what kind of traffic this site is getting. The logs for today will be distributed at midnight EST tonight. If the traffic to this site is legitimate, you should look into a dedicated or a virtual private server. If the traffic is not legitimate, you should block it. I can provide assistance with that if you need it.
This domain and this directory will need to be disabled until traffic dies down, traffic is blocked, or your account is upgraded from shared hosting. If you need access to the directory so that you can make any necessary changes, please let me know.
and that should be the title of this post too.
I like that the blog post shares the slides, not just the video.
Yeah. That guy should not be allowed anywhere near the production workloads. "I solve problems", my ass.
Otherwise, I'd never dare to do something like that.
And I'm not so crazy as to do such an operation without the appropriate tests and foundations. Of course, when I started, I had all the conditions to be able to do it, and I had already conducted all possible tests. :-)
The VM change was enough to alter the runtime of a task by several times. This is NOT a small, inconsequential change.
You _have_ to warn your clients when you do stuff like this.