One of the highlights of working on OpenTitan was the amount of interest we got from the academic community. Work they did could actually be factored into the first-generation silicon, making it stronger. Ordinarily chips like this are kept deeply under wraps, and by the time the wider security community gets a look at them, development has long since completed, so anything they find can only affect generation 2 or 3 of the device.
Academic collaboration also helped us get ahead on post-quantum crypto. The first-generation chip has limited capabilities there, but thanks to multiple academics using the design as a base for their own PQC work, there is plenty to draw on for future designs.
I'm no longer at lowRISC so I don't know where OpenTitan is going next but I look forward to finding out.
> Ibex® is a small and highly configurable open-source RISC-V embedded processor available under an Apache 2.0 licence. It is formally verified and very well validated, and it has excellent toolchain integration, which has led many companies to use it in their commercial SoCs.
> [...]
> Ibex is the main CPU in the OpenTitan® root of trust, which has brought the quality of the design and documentation to new heights.
So that's neat.
But for almost all people this is shifting from one kind of "trust me bro" to... another. We're not going to be able to formally prove that the chip conforms to some (Verilog?) model, has no backdoors, no side channels, you name it. We're in the same place we were, with the same questions. Why do we trust this and its downstream developments? Because we do.
I know people who worked on Cryptech, and I definitely trusted their work and their personal commitment to what they did, but that's "who you know" trust. The non-transitive quality of this kind of trust is a huge limitation.
To be more critical: my primary concern is how deployment of this hardware will be joined by significantly less benign design choices like locked bootloaders and removal of sideloading. To be very clear, that's a quite distinct design choice, but I would expect to see it come along for the ride.
To be less critical: will this also mean we get good persistent on-device credentials, so we can do things like X.509 certs for MAC addresses and have device assurance on the wire? Knowing you are talking to the chipset that signed the certificate request you attested to before shipping is useful.
This is basically the best we can hope for until we get nanofabs at home and can build our own secure enclaves in our garages.
Trust decision theory goes like this: if it were possible for the manufacturer to fully control the device, then competitors would not use it, so e.g. wide industry adoption of OpenTitan would be evidence of its security in that respect. And if devices had flaws that allowed them to be directly hacked or their keys stolen, demonstrating it would be straightforward, and egg on the face of the manufacturer who baked their certificate onto the device.
Final subject: 802.1X and other port-level security is mostly unnecessary if you can use mTLS everywhere, which is what ubiquitous hardware roots of trust allow. Clearly it will take a while for the protocol side to catch up, but I hope that eventually we'll be running SPIFFE or something like it at home.
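The "mTLS everywhere" idea boils down to one configuration step: the server must demand and verify a client certificate, not just present its own. A minimal sketch using Python's stdlib `ssl` module (the file paths are illustrative placeholders, and a real deployment would pin the CA to the device vendor's attestation root):

```python
import ssl

def make_mtls_server_context(ca_path=None, cert_path=None, key_path=None):
    """Build a server-side SSLContext for mutual TLS: the server presents
    its own chain AND requires a valid client certificate. All paths here
    are hypothetical placeholders."""
    ctx = ssl.create_default_context(ssl.Purpose.CLIENT_AUTH, cafile=ca_path)
    # This line is the "mutual" in mTLS: clients without a certificate
    # chaining to ca_path are rejected during the handshake.
    ctx.verify_mode = ssl.CERT_REQUIRED
    if cert_path is not None:
        ctx.load_cert_chain(cert_path, key_path)
    return ctx
```

With a hardware root of trust holding the client key, this handshake proves device identity at the transport layer, which is what makes port-level schemes like 802.1X largely redundant.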
https://arxiv.org/pdf/2303.07406
Like, for example, the Boachip-1x MCU.
https://www.cnx-software.com/2026/03/04/dabao-board-features...
A justifiable concern, given sentences like "strongest possible security guarantees that the code being executed is authorized and verified" and "can be used across the Google ecosystem and also facilitates the broader adoption of Google-endorsed security features across the industry"
Sure you can. Get together as a group, purchase a large lot of chips, select several at random, shave them down layer by layer, and image them with an SEM. You now have statistical confidence that the lot hasn't been broadly tampered with.
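The confidence you get from destructive sampling can be quantified with a standard hypergeometric argument (this calculation is mine, not from the comment above):

```python
from math import comb

def p_detect(lot_size: int, n_bad: int, n_sampled: int) -> float:
    """Probability that a uniformly random sample of n_sampled chips
    contains at least one tampered chip, given n_bad tampered chips in
    the lot (hypergeometric sampling without replacement)."""
    if n_sampled > lot_size - n_bad:
        return 1.0  # sample is too large to consist only of good chips
    p_all_good = comb(lot_size - n_bad, n_sampled) / comb(lot_size, n_sampled)
    return 1.0 - p_all_good
```

The numbers matter: in a 10,000-chip lot where 1% are tampered, imaging 5 chips detects the tampering only about 5% of the time, while roughly 300 random samples pushes detection above 95%. So the scheme works, but "several at random" needs to mean hundreds, not a handful.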
Physical security aside, I share your concerns about the abusive corporate behavior that widespread deployment of such hardware might enable.
> Knowing you are talking to the chipset which signed the certificate request you asserted to before shipping is useful.
Can't an fTPM with a sealed secret already provide that assurance? Or at least the assurance you actually care about: that the software you believe to be running actually is. At least assuming we stop seeing fairly regular exploits against the major CPU vendors.
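The core of what a sealed secret buys you can be sketched as a challenge-response: the verifier sends a fresh nonce, and only a device holding the secret can produce the right answer. This is a deliberately simplified symmetric analogue (a real fTPM uses asymmetric attestation keys bound to PCR state; all names here are illustrative):

```python
import hmac
import hashlib
import os

def attest(device_secret: bytes, nonce: bytes) -> bytes:
    # Device side: prove possession of the sealed secret by MACing a
    # verifier-chosen nonce. A fresh nonce per challenge prevents replay.
    return hmac.new(device_secret, nonce, hashlib.sha256).digest()

def verify(known_secret: bytes, nonce: bytes, response: bytes) -> bool:
    # Verifier side: recompute the expected response and compare in
    # constant time to avoid a timing side channel.
    expected = hmac.new(known_secret, nonce, hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)
```

The weak point is exactly the one raised above: if a CPU exploit can extract `device_secret` from the sealed environment, the attestation says nothing.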
The entire push seems to be motivated by actors that want to deny users access to their own devices, under thinly veiled promises of "security".
It's basically like handing a company your house keys on nothing more than "trust me bro".
And it's the complete opposite of how it should be: it should be my device, on which I give the vendor's app a limited sandbox that I can fully inspect, not the other way around.
Worked with the OT team at Google years ago and am glad to see this stuff finally taped out.
> both individual IP blocks and the top-level Earl Grey design have functional and code coverage above 90%—to the highest industry standards—with 40k+ tests running nightly
This is definitely not "to the highest industry standards". I've worked on projects where we got to 100% on both for most of the design. It is a decent commercial standard, though - way above most open-source verification quality.
Having spent several years working on OT, I can tell you that most of the gaps are things that should be waived anyway. Getting waiver files reliably integrated into that flow has been problematic because those files are fragile: alter the RTL and they typically break, since they refer to things by line number or expect a particular expression to be identical to when the waiver was written.
This has all been examined and the holes have been deemed unconcerning. Yes, ideally there'd be full waivers documenting this, but as with any real-life engineering project, you can't do everything perfectly! There is internal documentation explaining why the holes aren't a problem, but it's not public.
Yeah, last time I did this we used regexes, but I really don't like that solution. I think the waiver should go in the RTL itself. I don't know why nobody does that - it's standard practice in software. SV even supports attributes for exactly this sort of thing. The tools don't support it, but you could write a tool to parse the files and convert the attributes to TCL. I've done something like that using the Rust sv-parser crate before. Tedious but not impossible.
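The "waiver in the RTL" idea can be sketched as a tiny generator: scan the source for a waiver attribute and emit one tool waive command per hit. The attribute name `lint_waive` and the `waive` TCL command are both invented for illustration (real tools have their own waiver syntax), and a production version would use a proper SV parser such as sv-parser rather than a line scan:

```python
import re

# Hypothetical convention: the waiver lives next to the code it waives, e.g.
#   (* lint_waive = "unreachable: covered by formal proof" *) logic q;
WAIVE_ATTR = re.compile(r'\(\*\s*lint_waive\s*=\s*"([^"]*)"\s*\*\)')

def attrs_to_tcl(sv_source: str, filename: str) -> list[str]:
    """Emit one (made-up) TCL waive command for each lint_waive attribute."""
    tcl = []
    for lineno, line in enumerate(sv_source.splitlines(), start=1):
        for reason in WAIVE_ATTR.findall(line):
            tcl.append(
                f'waive -file {filename} -line {lineno} -comment {{{reason}}}'
            )
    return tcl
```

The point is that the line numbers are recomputed from the source on every run instead of being stored in a separate fragile file, so the waiver travels with the code it justifies.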
Also we found the formal waiver analysis tools to be very effective for waiving unreachable code, in case you aren't using those.
Congrats on the silicon anyway!
“Open source” has a very different meaning when it comes to silicon.
It's intended to be integrated into a larger SoC and used for things like secure boot, though you could certainly fab it with its own RAM and GPIO and use it standalone.
So: Google, Samsung, take your pick. User-controlled ones? Nah, we can't trust the user.
Not something I would want to touch.