I've heard about this, but my understanding was that when this happens, performance becomes extremely poor. While that may be quite bad, it's still worlds apart from losing data.
There's also the fact that the user may have partitioned the drive in such a way as to prevent it from ever filling up. Even root can't fill a partition beyond its size. Here, you have to go out of your way to make sure the partition doesn't fill up, or else you have a bad time. Shit happens, so this does look like an FS bug to me, much more than PEBCAK.
Except that if you keep overwriting a flash-based storage device, at some point the flash cells wear out (even with wear leveling). You can absolutely cause this by keeping a near-full filesystem on flash, since the few remaining free blocks get rewritten over and over. Mechanical hard drives and partitions don't suffer from this issue.
Perhaps the issue occurs more quickly on btrfs, I don't know, but it could happen on any filesystem. On the other hand, you should have backups. Personally, I use ZFS on two of my machines, with snapshots.
Wearing out is yet a different thing. I've had this happen on an SD card. It would refuse to write anything new, although it reported being mostly free. But the stuff that was already on it was readable.
I've had SD cards that got full. They didn't lose any data, and once I'd moved the things off them, they became usable again.
Granted, this was with a digital camera, so it was FAT32 at the time; no fancy FS.
Exactly, a swimming pool should never explode if you overfill it; however, it's the user's responsibility to turn the water off to prevent "data/water loss".
That's why we made filesystems: to preserve and organize data, and to tell the user/system when they can't take any more.
In the ordinary cases, btrfs behaves the same as other filesystems when full: it fills up, and you can still delete files. Keep in mind that deleting a file on any CoW filesystem is itself a write that consumes free space before the delete frees anything; there is reserved space for exactly this. If you hit an edge case (which would be a bug), well, there are currently no known data-eating bugs. But it's not always obvious to the user that their data hasn't been eaten if the filesystem won't mount; to them, that's indistinguishable from data loss. It's still a serious bug, so if you have a reproducer it needs to be reported.
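The usual way out of a full btrfs can be sketched as a few shell commands (the mount point `/mnt` and file name are hypothetical; run as root). The idea is to check how space is actually allocated, truncate a large file in place so its extents are freed with almost no new metadata, and then reclaim empty data chunks:

```shell
# Inspect how space is allocated vs. used; plain "df" can mislead on btrfs.
btrfs filesystem usage /mnt

# Truncating a big file in place frees its extents while needing almost no
# new metadata, so it can succeed even when a plain "rm" would hit ENOSPC.
truncate -s 0 /mnt/some-big-file   # hypothetical file name
rm /mnt/some-big-file

# Reclaim fully empty data chunks back to unallocated space.
btrfs balance start -dusage=0 /mnt
```

The reserved space mentioned above shows up as "Global reserve" in the `btrfs filesystem usage` output; deletes draw on it when the filesystem is otherwise full.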
But hey, how about a quota behind the scenes? You know, like ZFS? AFS? ReFS? So the filesystem tells the user "sorry, can't take any more" before it really can't take any more? That would be some crazy enterprise-level stuff...
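For reference, ZFS does expose this directly as dataset properties; a minimal sketch, assuming a hypothetical pool/dataset `tank/home`:

```shell
# Cap how much space the dataset and all its descendants/snapshots may use.
zfs set quota=50G tank/home

# refquota limits only the live data, excluding snapshots and clones.
zfs set refquota=40G tank/home

# Verify the limits; writes past them fail with EDQUOT ("disk quota exceeded"),
# so the pool itself never hits the pathological 100%-full state.
zfs get quota,refquota tank/home
```

Setting such a quota (or a small reservation on a sibling dataset) is a common way to keep a CoW pool from ever reaching truly full.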
You know, a filesystem that immediately stops writing and instead cares more about the data that's already on the platter?
BTW: it was a DC hard disk.
"Can't boot" is also vague. I've had data loss with XFS in 2002 or so (didn't have backups); I couldn't mount the filesystem anymore. Thanks to help from the devs on IRC, I got almost all the data back. I've been able to recover data from a dying Deathstar, too. And then there's the RAID5 write hole (which can be mitigated), and the RAID5 issues on btrfs (which are well known). For all we know you were using RAID5, *shrug*. Anyway, did you file a bug report? Did you contact the devs?
Not true with ZFS and XFS; you are trying to defend an ill-designed filesystem... in typical Linux fashion ;)
It's S#it but at least we "invented" it.