* What does the Nitro accelerator look like to the host? Does the Nitro accelerator present as NVMe devices to the host OS, or is there something more custom it presents as? Does the Nitro accelerator use SR-IOV or something else to present as many different PCIe adapters (a PCIe function per drive), as a single PCIe device, as no PCIe device at all, or as something else entirely (and if so, what)? Are there custom virtio drivers powering the VMs? How much change has gone into these interfaces in the newest iterations, or have these interface channels remained stable? (There's a small sketch of the guest-visible side after this list.)
* What is the over-the-wire communication? Related to the above: ultimately the VMs see NVMe, & how far down the stack/across the network does that go? Is what's on the wire NVMe-based, or something else; is it custom? What trade-offs were there, and what protocols inspired the teams? Originally at launch it seemed like there was a custom remote protocol[1]; has that stayed? What drove the protocol's evolution & change over time? What's new & changed? (A generic NVMe-over-fabrics-style sketch follows the list.)
* What do the storage arrays look like; are they also PC-based? Or do the flash arrays connect via accelerators too? Are these FPGA-based or hard silicon? Are standard flash controllers in use, or is this custom? How many channels of flash does one accelerator have connected to it? How much has the storage-array architecture changed since Nitro was first introduced? Do the latest-gen Nitro & older EBS storage backends share the same implementation, or are the newer ones evolving more freely now?
* On a PC, an SSD is really an abstraction hiding dozens of flash channels. There have been efforts like Open-Channel SSDs and now zoned namespaces to give hosts more direct access to the individual channels. Does the Nitro accelerator connect to a single "endpoint" per EBS volume, or is the accelerator fanning out, connecting to multiple endpoints or multiple channels & doing some interleaving itself? (Striping sketch below.)
* What are some of the flash-translation optimizations & wins the team (or teams) have found? (Toy FTL sketch below, to show the kind of machinery I mean.)
And simply: * How on earth can hosts have so much networking/Nitro throughput available to them?! It feels like there have got to be multiple 400 Gbit connections going to hosts today, all connected via Nitro accelerators? (Back-of-envelope below.)
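To make the first question concrete, here's a minimal sketch of how you can poke at the guest-visible side from a running Nitro instance. The sysfs paths are standard Linux NVMe attributes; the "Amazon Elastic Block Store" model string, Amazon's 0x1d0f PCI vendor ID, and the volume id living in the serial number are what I understand gets exposed, so treat the expected values as assumptions rather than gospel:

```python
# Sketch: list the NVMe controllers a guest sees and their identity strings.
# On a Nitro instance each EBS volume is expected to show up as its own
# controller (my reading of the public docs, not inside knowledge).
from pathlib import Path

for ctrl in sorted(Path("/sys/class/nvme").glob("nvme*")):
    model = (ctrl / "model").read_text().strip()        # e.g. "Amazon Elastic Block Store"
    serial = (ctrl / "serial").read_text().strip()       # expected to carry the EBS volume id
    vendor = (ctrl / "device" / "vendor").read_text().strip()  # 0x1d0f = Amazon
    print(f"{ctrl.name}: model={model!r} serial={serial} pci_vendor={vendor}")
```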
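On the wire-protocol question, I obviously don't know what AWS actually ships; the sketch below is only meant to pin down what "NVMe on the wire" would mean generically, NVMe-oF style: the standard 64-byte submission queue entry shipped as a capsule over whatever transport, with the data following. Field offsets are from the NVMe spec; everything about how this would map to EBS is my assumption.

```python
# Not AWS's protocol: just a generic NVMe-oF-style command capsule, i.e. the
# standard 64-byte submission queue entry serialized for a fabric transport.
import struct

def nvme_read_sqe(cid: int, nsid: int, start_lba: int, num_blocks: int) -> bytes:
    sqe = bytearray(64)                               # SQEs are 64 bytes in the NVMe spec
    struct.pack_into("<BBH", sqe, 0, 0x02, 0, cid)    # opcode 0x02 = NVM Read, flags, command id
    struct.pack_into("<I", sqe, 4, nsid)              # namespace id
    struct.pack_into("<Q", sqe, 40, start_lba)        # CDW10/11: starting LBA
    struct.pack_into("<I", sqe, 48, num_blocks - 1)   # CDW12: 0-based block count
    return bytes(sqe)

capsule = nvme_read_sqe(cid=1, nsid=1, start_lba=2048, num_blocks=8)
print(len(capsule), capsule.hex())
```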
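On the fan-out question, this is what I mean by the accelerator "doing some interleaving itself": if a volume were striped across N backend endpoints, the routing could be as simple as the sketch below. The stripe size and endpoint count are made up; this is speculation about what the question implies, not a claim about how EBS actually works.

```python
# Hypothetical RAID-0-style routing of a volume LBA across backend endpoints.
STRIPE_BLOCKS = 16          # stripe unit in logical blocks (made-up number)
NUM_ENDPOINTS = 4           # backend endpoints/channels (made-up number)

def route(lba: int) -> tuple[int, int]:
    """Map a volume LBA to (endpoint index, LBA within that endpoint)."""
    stripe = lba // STRIPE_BLOCKS
    endpoint = stripe % NUM_ENDPOINTS
    local_lba = (stripe // NUM_ENDPOINTS) * STRIPE_BLOCKS + lba % STRIPE_BLOCKS
    return endpoint, local_lba

for lba in (0, 15, 16, 17, 64, 65):
    print(lba, route(lba))
```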
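And by flash-translation optimizations I mean things in the spirit of the textbook log-structured page map below: redirect every logical write to the next free physical page and garbage-collect the stale pages later. Purely a toy to illustrate the kind of machinery I'm asking about; nothing EBS-specific.

```python
# Toy log-structured FTL: logical writes always land on the next free
# physical page; the previous mapping becomes garbage for later collection.
class TinyFTL:
    def __init__(self, num_pages: int):
        self.l2p = {}              # logical page -> physical page
        self.next_free = 0
        self.num_pages = num_pages

    def write(self, logical_page: int) -> int:
        if self.next_free >= self.num_pages:
            raise RuntimeError("out of space; a real FTL would garbage-collect here")
        phys = self.next_free      # append-only: always write at the head of the log
        self.next_free += 1
        self.l2p[logical_page] = phys
        return phys

ftl = TinyFTL(num_pages=1024)
print(ftl.write(7), ftl.write(7), ftl.write(42))  # rewriting page 7 lands on a new physical page
```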
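For the throughput question, here's the back-of-envelope behind my "multiple 400 Gbit links" guess; both the link count and the ~4 GB/s per-volume maximum are assumptions of mine, not anything AWS has stated about hosts.

```python
# Back-of-envelope only: aggregate EBS traffic a pair of 400 Gbit links could carry.
links, link_gbit = 2, 400                   # assumed host links
host_gb_per_s = links * link_gbit / 8       # GB/s of raw link bandwidth
per_volume_gb_s = 4.0                       # roughly one big EBS volume running flat out
print(f"{links} x {link_gbit} Gbit/s ≈ {host_gb_per_s:.0f} GB/s "
      f"≈ {host_gb_per_s / per_volume_gb_s:.0f} max-throughput volumes")
```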
It's just incredibly exciting stuff; there's so much super interesting work going on, & I am so full of questions! I was a huge fan of the SeaMicro systems of yore, an early take on integrated, network-attached device accelerators. Getting to work at such scale, building such high-performance, well-integrated systems, seems like it has so many fascinating subproblems to it.