I'd like to see FOSS do designs like that [minus the interface & API]. The labor advantage might lead to something pretty awesome. Meanwhile, they keep building complexity on top of the UNIX architecture, with a few stragglers on better OS architectures. Not much hope there. Fortunately, CHERI (capabilities) and SAFE (tags) are each building on mechanisms the System/38 used for software reliability and security. An academic team or company might build something great on top of it. Still holding out some hope!
[1] http://homes.cs.washington.edu/~levy/capabook/Chapter8.pdf
I guess I buy the claim that you're better off starting with a log-structured file system that uses new APIs to allocate and free chunks on the SSD than using a conventional file system that the SSD then tries to optimize underneath.
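To make the idea concrete, here's a toy sketch of log-structured allocation aligned to SSD erase units. The `_chunk_alloc`/`_chunk_free` names are hypothetical stand-ins for the kind of allocate/free API the paper envisions; the point is that writes always append and space is only reclaimed a whole chunk at a time, which matches how flash actually erases.

```python
CHUNK_BLOCKS = 4  # blocks per erasable chunk (toy size)

class LogStore:
    def __init__(self):
        self.chunks = []   # each chunk: list of (key, data), or None once freed
        self.live = {}     # key -> (chunk_idx, slot) of its newest copy

    def _chunk_alloc(self):
        # ask the device for a fresh erased chunk (hypothetical API)
        self.chunks.append([])

    def _chunk_free(self, idx):
        # whole-chunk erase: the only granularity the device can reclaim at
        self.chunks[idx] = None

    def put(self, key, data):
        # writes always append to the tail chunk; never rewrite in place
        if not self.chunks or self.chunks[-1] is None \
           or len(self.chunks[-1]) >= CHUNK_BLOCKS:
            self._chunk_alloc()
        tail = len(self.chunks) - 1
        self.chunks[tail].append((key, data))
        self.live[key] = (tail, len(self.chunks[tail]) - 1)

    def get(self, key):
        idx, slot = self.live[key]
        return self.chunks[idx][slot][1]

    def gc(self):
        # reclaim full chunks whose entries have all been superseded
        for idx, chunk in enumerate(self.chunks):
            if chunk is None or len(chunk) < CHUNK_BLOCKS:
                continue
            if all(self.live.get(k) != (idx, s)
                   for s, (k, _) in enumerate(chunk)):
                self._chunk_free(idx)
```

Because the file system itself decides what lives in each chunk, the device never has to guess lifetimes the way an FTL under a conventional file system does.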
[0] https://www.usenix.org/system/files/conference/inflow14/infl...
https://en.wikipedia.org/wiki/Memristor#1808
"They can potentially be fashioned into non-volatile solid-state memory, which would allow greater data density than hard drives with access times similar to DRAM, replacing both components.[14] HP prototyped a crossbar latch memory that can fit 100 gigabits in a square centimeter,[8] and proposed a scalable 3D design (consisting of up to 1000 layers or 1 petabit per cm3).[66] "
They seem to be here.
Is that not the case?
But I'm not really sure how this by itself solves the problems of modern SSDs. Those problems are caused by two things: the mismatch in segment/page sizes between the upper layers (OS/filesystem/database/etc.) and the flash, and the need to rewrite static portions of the SSD in order to wear-level the entire address space (and, to a lesser extent, the need to update small portions in the middle of existing pieces of data).
It's the second part that seems to be forgotten by a lot of people suggesting alternatives to the current storage stack. What's really needed is a higher-level communication protocol between the flash and the filesystem, one that gives the SSD metadata about the sizes of individual "objects" so that they can be kept together as atomic units. That way the flash layer knows whether a write belongs to a larger existing object or is a new object that can be managed separately.
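A rough sketch of what such an object-aware write command might look like. The field names here are made up, but the idea is similar in spirit to NVMe's multi-stream/directive hints, which tag writes so the device can group data with similar lifetimes into the same erase blocks:

```python
from dataclasses import dataclass
from collections import defaultdict

@dataclass
class ObjWrite:
    object_id: int    # which logical object this write belongs to
    object_size: int  # total-size hint, so the device can reserve space up front
    offset: int       # offset within the object
    data: bytes

class ObjectAwareFlash:
    """Toy device model: each object's data goes into its own erase-block
    group, so deleting the object invalidates whole blocks and requires
    no copy-forward garbage collection."""
    def __init__(self):
        self.blocks = defaultdict(list)  # object_id -> that object's writes

    def write(self, cmd: ObjWrite):
        self.blocks[cmd.object_id].append((cmd.offset, cmd.data))

    def delete_object(self, object_id: int):
        # the whole block group is erased in place: zero data movement
        return self.blocks.pop(object_id, None)
```

With lifetime metadata like this, the wear-leveling problem gets easier too: the device knows which block groups hold cold, intact objects and can migrate them wholesale instead of guessing page by page.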
A little off topic from the article: I think the biggest tipping point is the decreasing cost of cloud storage. Large companies like Google do so much of their time-critical processing with data in RAM (spread over many servers) that these servers can repurpose a lot of their disk I/O capacity to serve low-cost cloud storage.
Re: your comment, I'm curious to see how the I/O capacity of those servers could be distributed for cloud storage. I would imagine that even with 10Gbps if those machines are really cranking, they could have a difficult time provisioning network I/O for a storage service.
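A quick back-of-envelope check on that concern. All numbers below are illustrative assumptions (a generic 12-disk server), not measurements of any real fleet:

```python
# Does the NIC or the disks cap a storage service on one box?
disks_per_server = 12
disk_seq_mb_s    = 150   # assumed sustained sequential MB/s per HDD
nic_gbps         = 10

# aggregate disk bandwidth in Gbit/s
disk_bw_gbps = disks_per_server * disk_seq_mb_s * 8 / 1000
print(disk_bw_gbps)  # 14.4
```

Under those assumptions the disks can stream ~14.4 Gbit/s against a 10 Gbit/s NIC, so the network, not the spindles, is the ceiling, which supports your point that provisioning network I/O is the hard part.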
re: I/O capacity: I was commenting on how Google, etc. can provide such inexpensive cloud storage, not access speed.
Their performance special sauce is using the minimum amount of flash needed to provide a performance boost over standard HDDs.
They also have very nice analytics.
They are good for VMware/virtualisation. For general file storage, unless they've changed recently, they're pretty mediocre.