For the last few years, I managed the Container Runtime group at Facebook. My experience has been:
1. `if (has_capability(..., X)) { ... }` gets put into code pretty haphazardly in a way that's not necessarily super well structured. Once it's there, it's ABI, and you're screwed if you want to iterate on it. That's why cap_sys_admin is /almost/ root.
2. If you wanted to do the right thing from the jump (e.g. for bpf itself), you'd have to add a new capability. This is a heavy lift for something that might not actually get any traction. It requires changing a bunch of common tools, and you likely end up breaking a bunch of applications.
3. Debugging capability failures is a pain in the ass. We ended up building and deploying capability tracing infrastructure just to figure out what people are actually using.
4. For gradual rollouts of enforcement/changes, you need the flexibility to warn first, enforce second. We did large-scale monitoring of all such changes to make sure we didn't break the workloads.
5. Even if you nail all of the above, the ability to make finer-than-capability-grained decisions (e.g. binding to port 20 or 80 is okay but not port 22) is really valuable.
I'm all for kernel abstractions that just work and solve all problems for all people, but I think the overwhelming trend has been towards kernel interfaces that provide a lot of flexibility, plus more opinionated libraries/tools on top that kind of let us have our cake and eat it too (io_uring => liburing, bpf => libbpf, btrfs => btrfs-progs).
https://fly.io/blog/bpf-xdp-packet-filters-and-udp/
An interesting fact is that packet filtering as a problem domain has been dominated by in-kernel virtual machines going back to the 1980s; it's an idea that comes all the way from Xerox.
https://github.com/xdp-project/xdp-tutorial
It's a good thing, I think! Compared to loading new unmanaged C code into the kernel, BPF is a really nice way to add functionality to Linux.
Happy to answer any questions.
As for the question: How are you looking to make money?
The Future of Networking? Networking is not only Linux. eBPF is Linux-only. Everyone else uses the more secure variant, dTrace, which even has widespread user-space support, so you can trace across the kernel, processes, and their extensions/scripts. And has been able to for decades.
Future of Security? eBPF is insecure. User-accessible arrays in the kernel can never be secure. dTrace did not do that for a reason; eBPF was already compromised by the Spectre-like attacks, and the face-saving fixes were laughable at best.
Linux might be advised to do better (or is it just NIH?), but advertising Worse as Better was fashionable in the '80s only.
Comparing dTrace and eBPF is definitely a very interesting question. I've actually asked Brendan Gregg in the Q&A of his keynote at eBPF summit this year how he compares dTrace and eBPF these days. Here is his answer (jumps right to the specific question): https://youtu.be/jw8tEPP6jwQ?t=4618
I doubt that eBPF will remain a Linux-only technology. A port to FreeBSD already seems to be underway [0], and Microsoft has declared an intent to invest in eBPF [1]. I'm not sure what that means for the timeline of eBPF availability on Windows, though. There are also several user-space implementations of eBPF, which could become interesting as a universal programmability approach across traditional kernels like Linux, microkernels like Snap, and application kernels like gVisor.
[0] https://papers.freebsd.org/2018/bsdcan/hayakawa-ebpf_impleme... [1] https://twitter.com/markrussinovich/status/12830391539203686...
https://github.com/solana-labs/rbpf
Unlike the more common Rust + LLVM + WASM toolchain, Solana smart contracts use Rust + LLVM + eBPF.
They appear to be running some kind of "open security test"[1] but are only paying out their own imaginary funny money. I'd suggest you run for the hills as fast as you can instead of considering Solana.
0: https://github.com/solana-labs/rbpf/blob/f7007d6ae8728e61401... 1: https://forums.solana.com/t/tour-de-sol-stage-1-details/317
We’re currently moving to Kubernetes for our infrastructure at the Berkeley OCF (https://ocf.berkeley.edu/), and picked Cilium for all the networking things.
It’s good to see that there’s a company backing it now!