They sell servers, but as a finished product. Not as a cobbled-together mess of third-party stuff where the vendor keeps shrugging if there is an integration problem. They integrated it. It comes with all the features you'd want if you were building your own cloud.
Also, they wrote the software. And it's all open source. So no "sorry, but the third-party vendor dropped support for the BIOS". You get the source code. Even if Oxide goes bust, you can still salvage things in a pinch.
Ironically this looks like the realization of Richard Stallman's dream where users can help each other if something doesn't work.
One of the more interesting discussions I had during my tenure at Google was about the "size" of the unit of clusters. If you toured Google you got the whole "millions of cheap replaceable computers" mantra. Sitting in Building 42 was a "rack" which had cheap PC motherboards on "pizza dishes" without all that superfluous sheet metal. Bunches of these in a rack and a festoon of network cables. What are the "first class" elements of these machines? Compute? Networking? Storage? Did you replace components? Or a whole "pizza slice" (which Google called an 'index' at the time). Really a great systems analysis problem.
FWIW I'm more of a "chunk" guy (which is the direction 0xide went) and less of a "cluster" guy (which is the way Google organized their infrastructure). A lot of the people associated with 0xide are folks I worked with at Sun in the early days, during the period of the first hints of "Beowulf" clusters vs. "supercomputers", and the question of whether memory was one thing (UMA) or varied from place to place (NUMA). I have a paper I wrote from that time about "compute viscosity", where the effective compute rate (which at the time largely focused on transactional databases) scaled up with resources (more memory, more transactions/sec, for example) and scaled down with viscosity (higher latencies to get to state meant fewer transactions/sec). Sun was invested heavily in the TPC-C benchmarks at the time, but they were just one load pattern one could optimize for.
These guys have capitalized on all that history and it is fucking amazing! I just hope they don't get killed by acquisition[1].
[1] "Killed by Acquisition": a technique where people who are invested in the status quo and have resources available use those resources to force the investors in a disruptive technology to sell to them, and then quietly bury the disruptive technology.
One of the most obvious examples of the problem with this approach is that they're shipping previous-generation servers on Day 1. One can easily buy current-generation AMD servers from a number of vendors.
They will also likely charge a significant premium over decoupled vendors that are forced to compete head-to-head for a specific role (server vendor, switch vendor, etc).
Their coupling approach will most likely leave them perpetually behind and more expensive.
But there are advantages too. Their stuff should be simpler to use and require less in-house expertise to operate well.
This is probably a reasonable trade-off for government agencies and the like, but will probably never be ideal for more savvy customers.
I don't know how truly open source their work is, but if it is, they'll most likely find themselves turned into a software company with an open-core model. Other vendors that are already at scale can almost certainly assemble hardware better than they can.
Also, it has little to do with the cloud; it is yet another hyperconverged infra.
Weirdly, it is attached to something very few people want: Solaris. This relates to the people behind it who still can't figure out why Linux won and Solaris didn't.
They sell rack-as-compute.[0] Their minimum order is one rack: You plug in power and network, connect to the built-in management software (API), and start spinning up VMs.
[0] With built-in networking and storage.
Proxmox and a full rack of Supermicro gear would not be as sophisticated, but the end result is pretty much the same, with, I imagine, far better bang for the buck.
I like it, but it doesn't seem like a big deal or revolutionary in any way.
> They sell servers, but as a finished product. Not as a cobbled together mess of third party stuff where the vendor keeps shrugging if there is an integration problem. They integrated it.
I’m actually extremely impressed. I want one. I haven’t worked in a data center in years, but I’d be tempted to do it again just to get my hands on one.
Is this true? Can you set your own root of trust for the firmware signing key and build and deploy it yourself?
How is it ironic?
That's only true if you think that "users" means "people who operate cloud computers", which is about as far from understanding what Stallman is talking about as is possible. Someone who makes SaaS and runs it on an Oxide computer is no less of a rentier capitalist than someone who makes SaaS and runs it on AWS.
- DC build-out cost and effort
- Power and cooling requirements
- Dark fiber bandwidth requirements
- New headcount to support all this
No thanks, I have widgets to make and sell and a business to run.
As a technologist, I really appreciate what they have done. Impressive work, high quality, however I don't understand who this is for.
The meaningful market for data center hardware is pretty well defined in two clusters: people that build/make custom gear (such as hyperscalers) and people that buy HP/Cisco/IBM/Dell... (blades or hyperconverged). To scale, you obviously want your DCs as standardized as possible.
Until this company has a certain size and scale, no one serious will trust their black boxes at any type of scale.
Beyond the tech, how would support services really work? We can have a technician from any of the large vendors on-site in less than 2 hours. In some of our DC clusters we actually have vendor support personnel 24x7 on-site with vendor paid spare parts inventory. How would they provide that level of service?
Maybe I am not the target audience for this offering.
I think the motivations for why people would want to own their own are probably a mix of financial (at a certain scale there's a tipping point and it gets cheaper), and regulatory/compliance/whatever, like if it's healthcare data, or defense, etc.
- Why would anyone buy the Framework laptop, they don't have nearly the support/pedigree that Dell, HP, etc. has?
- Why would anyone use iPhones in the enterprise/IT world, they don't have nearly the support/pedigree that Blackberry, Microsoft, etc. has?
- Why would anyone use Google Fiber, they don't have nearly the network or support that AT&T, Spectrum, etc. has?
- Why would anyone ever use Linux (in enterprise, let's say), compared to the support and adoption that Microsoft/Windows offers?
- ...
I'm purposely picking different examples with varying degrees of success or adoption. I am not claiming that Oxide will be an instant category-dominating success. I don't think Oxide expects to replace HP/Cisco/Dell/etc. overnight, and I don't think a business has to launch with that ambition from the start, to prove that it's worth launching.
But this take is so repetitive as to be bordering on cliche -- I don't know if you're self-aware enough to realize, you are literally just a living embodiment of the "Innovator's Dilemma" right now...
One thing that Bryan understands is that you can "lock" the customer in with great products and services, as well as continuing development, while also making the customer feel secure in having a way out should you turn into a company that treats locked-in customers as cash cows. The open source strategy (it is a strategy for Bryan and Oxide) is there precisely to do this: make the customer feel they can leave you, and then give them no reason to.
For your deeply technical staff, having source code access is a big deal too, since it enables them to better understand the products they use.
How big is the market of sufficiently-vendor-lock-in-averse customers? I don't know -- that's not my remit. But there's the size of that market right now, and whether Oxide (and any other companies with similar visions) can grow that market by sheer willpower. I make no predictions.
What if Oxide can get the next Netflix to use their stuff instead of a public cloud?
"Hey subdivision A, could we buy a few Oxide racks and move your workload there from AWS? It looks like they would have all the storage and compute you need. Yes? Ok, in 36 months we'll pay your current IT department employees a bonus of 50% of whatever it has saved us vs your current AWS budget."
How many times has computing hardware changed in response to the economics of the parts and the economics of the businesses buying hardware?
There are downsides to new models, but money solves a lot of problems
So I don’t know about Oxide in particular, but it seems short sighted to bet on stagnation
Also Oxide is doing what Google did 20 years ago, and Facebook open sourced ~10 years ago, so it’s not exactly unproven
Since many of those use cases probably already run extensive on-prem infrastructure, this could appeal to them. AWS Outposts talks about industries like healthcare, telecom, media and entertainment, manufacturing, or highly regulated spaces like financial services. I've heard of media companies that process footage from things like IMAX cameras that comes to tons of TBs of data, sometimes for just 5 minutes of footage. That would simply be too cost-prohibitive - in bandwidth alone - to try to move around in the cloud, and you don't want to have to wait for things like AWS Snowball or whatever.
While I think the space is "niche" those niche spaces are not small. Big companies with big budgets.
Their marketing and story is supposed to convince you that you could save money running their things rather than Dell's. And instead of paying for VMware you get open source software for most of it.
> Until this company has a certain/size and scale, no one serious will trust their black boxes at any type of scale.
I guess that's a risk they are willing to take. Some customers might wait a few years until they see Oxide being big enough.
Other customers might be sick of HP/Dell and take a risk on a smaller company.
Since they seem to have some customers, some organizations are willing to take the risk to get away from Dell and friends.
So I think you are the target audience, but you are not willing to risk it until they are larger, less likely to fail, and have a good story in regards to support. I assume they have a support story of some kind; no idea what it is beyond 'Contact Sales'...
In terms of trusting they will continue to exist, all they can do is survive for a few years until they are pretty established; then more people will be willing to buy their product. And hopefully in that time their existing customers will rave about how amazing the product is.
Let's hope they don't go bust because all potential customers are just waiting. Then again, you can't blame anybody for not buying from a startup.
The current state of the art is a fucking train wreck. Choose any layer of abstraction and start picking at it and you’ll find mostly gaffer tape. Scaling up sucks. Rewriting old monoliths to micro services wasn’t a panacea except maybe for cloud vendor profits. You said the hardware market is well defined, which is another way of saying ossified. Particularly when you start comparing it to the expected pace of software. How’s Open19 going? How much liquid cooling do you have in customer racks?
In my opinion this is ultimately a software offering that happens to come on vertically integrated hardware. It offers a complete, highly polished API to a minuscule-scale DC. If they can find market fit and make a little bit of money, the next step is to start making deep improvements to things behind the abstraction. From where you sit, you are well aware that you could make huge improvements inside the DC demarc if you could do it without disrupting customers. But you’re probably limited by the terrible, terrible APIs you would be forced to use, that don’t offer the capabilities you need, from vendors who would be happy to chat but would provide timeframes in the 6-18 month range for even a modest improvement.
So this, IMO, is about defining that interface to a DC on a single hardware platform with vertical integration, and then scaling up from there. The current hyperscalers will adopt the suite of APIs and capabilities directly, make lower quality copies, or die.
Of course all assuming they’re successful enough to get the revenue machine going in the first place which seems likely to me, given the absolute dogshit state of the cloud world today, the trend towards multi cloud, and the business case for moving certain loads back on prem or to a bare metal colo++ offering.
This gets to the crux of my first thoughts when I read the marketing copy: can they deliver (reliability)?
They do admit very clearly that what they're doing is hard and that at many points during development they were reluctant to be too ambitious (for obvious reasons), but at each stage they did just that: proceeded with the most ambitious option. That takes a huge amount of self-belief that might be warranted or might be hubris. As you point out, ultimately the success of Oxide won't only hinge on the ability to deliver on that self-belief, but more on their ability to convince prospective customers that they have the competency to do what no one else can.
I fully support every part of their approach in theory, but my wizened, well-traveled self thinks it smells a little too good to be true.
I expect the oxide supporters to have a hard few years ahead of them of finding bugs in high throughput environments. But at the end of the day it will be worth it just to have another competitor in a pretty boring playing field.
Forgive me for being naive, but that sounds like a great way to differentiate their offering.
E.g. the famous Maytag Repairman, who sits at his desk doing nothing because Maytag washers are so reliable that he has nothing to do.
> In a time in which the laundry appliances of major manufacturers had reached maturity, differing mostly in minor details, the campaign was designed to remind consumers of the perceived added value in Maytag products derived from the brand's reputation for dependability.
I can't think of a lot of examples right now, but I can already imagine one type of customer for such a product: universities wanting high-performance computing. At my alma mater, the HPC cluster/server was in a slightly distant location. Using something like AWS wouldn't be liked by almost any uni admin, and running a server on premises isn't a great idea in a place that gets the occasional (but rare) power or internet cut. Outsourcing some of these responsibilities might have been nice for our admin.
The problem is that the industry hasn’t noticed this yet. The hyperscalers have, and are printing money as a consequence.
A lot of people didn’t understand the need for the iPad mini or Mac mini or myriad other products without an obviously existing market. I think it is extremely keen of Oxide to find the borders between these offerings and fit very snugly between them.
My guess, someone who buys hyper-converged infrastructure today (e.g. Nutanix, VCE, etc) ... but that market is getting smaller and smaller by the day.
Congrats to the team on reaching this big milestone and (in my eyes at least) just as much congratulations on doing it in a way that has been unique and sticking to values that seem to drive a strong positive culture (at least from the outside looking in).
Shipping products is hard, and only getting harder. IMHO, one of the big drivers of that is just how complex every market has become. Building and selling software alone is so much more multi-disciplinary than it was 10 years ago, and adding hardware to the mix ups that by a huge factor. As I look around, I see so many companies struggle to build teams that can handle the huge range of required tasks. To see a company like Oxide that (once again, from the outside at least) seems to have things together on so many fronts, especially while sticking strongly to some core values, is pretty inspiring.
Not to get overly cynical, but I don't think it is an extreme opinion to say that current start-up culture feels like you have to make big compromises in what you believe in order to be successful. Whether that be open-source, how you value and pay employees, or even just rushing things to deliver that aren't ready.
While I acknowledge Oxide has some well-connected, experienced founders that I am certain enabled them to get the resources and trust to do things their way, I really hope they kill it so that other founders and builders can learn that you still can build not just financially successful products, but great organizations that truly care about their values.
I've worked in startups most of my life, but it's tough feeling like you've got to take a bunch of shortcuts ("MVP"-it before you run out of runway, etc. etc.), so to see a startup just put a ton of real engineering prowess into a great product is a sight to behold.
Also, while there are obviously amazing products out there in the world, it's so often a "race to the bottom" that it feels like decently made products are just prohibitively expensive. I'm really into antique espresso machines, and there is a reason that seeing rebuilds of beautiful old lever espresso machines get so many oohs and ahhs online - they truly don't make them like they used to, so it's so cool to see these beasts brought back to life. For some reason I have a similar feeling looking at these Oxide racks - they just didn't cut corners to build an awesome machine. Again, not qualified to say whether that makes economic sense long term, but it's still a thing of beauty regardless.
And shipping full cabs is harder!
Semi-sort-of related... there was an automation company, Bedrock Automation, that went defunct about a year ago. Their PLC hardware ideas were dope, but I always felt like they were missing the boat a little by supporting stale PLC programming languages. I used to wonder if supporting Rust and Ada on these PLCs would have been a good idea to diversify/specialize into complex control system domains. Also, IIRC, Bedrock didn't support EtherCAT, which I felt was a mistake.
Anyhow... one of these days it would be cool for a forward thinking company like Oxide to tackle what a modern complex, distributed control system hardware/software stack should look like using Rust/Hubris.
Operational/shop floor technologies (OT) are treated like mechanical equipment: "When it fails, we'll swap it out for a new one." Well, this isn't a motor; it has programming, and it interfaces with I/O sensors and devices.
The main challenge is the lack of knowledge and skills in modern technologies among technical staff and decision makers in industrial organizations.
As an aside, in my early days I hopped on a 5-hour flight and drove 6 hours to replace a failed hard drive in a Windows NT machine used as an HMI. Then a year later, I replaced it and all the other clients with a vSphere stack. The local resources, both internal and external, were too intimidated to touch it.
I'd be in favor of a "reset" in automation, it feels like fighting city hall.
“Stale PLC programming languages” might be stale and in need of a rework, but “rewrite it in Rust” entirely misses the value proposition. Memory safety is important, sure, but these systems aren’t usually ever allocating memory or taking actions that aren’t deterministic. PLCs usually operate in strict realtime environments: each operation takes a known number of clock cycles and is timed to match physical “stuff” moving in the real world. To rewrite in Rust, you’d need a flavor of Rust where you can deterministically know the timing of ops and calibrate the timing of each branch of any control path.
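To make that concrete, here is a minimal sketch of the scan-cycle model in hosted Rust (the I/O functions are stubs I invented). A sleep-based loop like this only pins the scan period; a real PLC additionally bounds the worst-case execution time of every instruction, which is exactly the guarantee plain Rust doesn't give you:

    // Sketch of a PLC-style fixed scan cycle; illustrative only.
    use std::time::{Duration, Instant};

    const SCAN_PERIOD: Duration = Duration::from_millis(10); // 100 Hz scan

    fn read_inputs() -> u16 { 0 }        // stub: would latch the input image
    fn solve_logic(i: u16) -> u16 { i }  // stub: the control logic
    fn write_outputs(_o: u16) {}         // stub: would flush the output image

    fn main() {
        let mut next = Instant::now();
        loop {
            let inputs = read_inputs();
            let outputs = solve_logic(inputs);
            write_outputs(outputs);
            // Hold the period steady; this bounds the cycle, not each branch.
            next += SCAN_PERIOD;
            std::thread::sleep(next.saturating_duration_since(Instant::now()));
        }
    }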
That said, there is a market for softer-realtime PLCs, and companies like Rockwell sell Windows IoT hardware - but they still contain a strict-realtime companion controller. Arduino is now selling a mountable PLC that can be programmed using their tools, but even they support traditional PLC programming languages.
Spec-wise, some low-power systems like an Intel NUC, LattePanda Sigma, or ZimaBoard. You could fit 3-4 of them in a single 1U with a shared power supply. They could even offer a full 1U with desktop-grade chips on the same sleds.
I have thought about building one myself, but it's a large investment of time that I can't seem to find lately.
https://canonical.com/blog/jumpstart-training-with-the-orang...
https://arstechnica.com/information-technology/2014/06/hands...
If you want to play around with their Hubris OS: “You wanna buy an STM32H753 eval board. You can download Hubris, and then you’ve got – you’ve got an Oxide computer. You have it for 20 bucks.”
I’ve not personally used it, but their stack of software is open source, and according to some commenters in the thread, super high quality.
Don’t know if oxide would want or be able to compete in the low cost market but a bigger a more expensive desktop/workstation as a mini homelab cloud could be a great option to get people trained on the oxide platform.
Do they actually assemble and build their own racks of hardware, or is that outsourced? Somewhere, there must be an assembly plant. If this stuff actually exists. It's hard to even find pictures of their products. Do they have production installations?
First rack shipped to a customer Jul 1, 2023: https://twitter.com/oxidecomputer/status/1674901883130114048
Here is a picture of an incomplete rack: https://pbs.twimg.com/media/FfT7MHoUoAE90QZ?format=jpg&name=...
You can find other pictures on twitter and other places.
Certainly odd but not out of the ordinary for a small Bay Area start-up where the Founders have a ton of cash and the focus is mostly on what they personally want to do. Posts like these are b/c the Founders want to hire 1-3 people who fit exactly into alignment with them.
The most recent few episodes have been about corporate open source and they've had excellent guests, like Kelsey Hightower. Definitely the best computer-related podcast out there. Bryan and Adam are great hosts and their humor is always a delight.
In a similar vein, Embedded.fm is a great podcast for embedded SW. Though I bet Oxide's take on embedded rust is a lot different than the hosts of that show!
https://softiron.com/hypercloud/
https://softiron.com/blog/run-bmc-why-we-decided-to-build-ou...
I would expect nothing less from that crew. They did amazing things at Sun Microsystems, Inc. (RIP), and they continue to do even greater things now.
Can't wait to read its source code, I was curious since reading this thread[2].
[1] https://github.com/oxidecomputer/helios (if this is not found for you, we're in the same boat)
https://github.com/oxidecomputer/hubris
And, that one underscore-delimited folder name in this repo just catches the eye, huh?
So, Humility is like MC/DC?
https://en.wikipedia.org/wiki/Modified_condition/decision_co...
I'm kidding, mostly. The engineer in me is bothered by it, but the part of my brain that cares more about design and branding understands that 0x1de would cause inconsistencies in the branding elsewhere. (E.g. 0xDocs: https://docs.oxide.computer)
We did manage to get a neat PCI vendor ID: https://pcisig.com/membership/member-companies?combine=01de
from https://bcantrill.dtrace.org/2019/12/02/the-soul-of-a-new-co...
"One size fits all" will never work.
Here are the main headaches that someone who actually runs a data center will run into:
- How many power circuits does the DC provide you? What voltage? AC or DC?
- How many amps do the circuits have? Is the PDU provided by the DC, or do you provide it?
- How many upstreams will you have? Dual core routers? BGP? Static routing? Failover?
- Upstream provided via Fiber? Copper?
- Given that for the last couple of years it has been a challenge to re-buy a typical server model (say, Supermicro), CPU, or storage config for more than a couple of weeks out - and given that rack hardware is supposed to run for years - how can I be sure I'll be able to scale?
- What happens if that Oxide startup goes bust 6 months from now?
- Would any sane DC operator really buy "we just built our own switch"? I wouldn't. Those who have been in that business for 20 years still f*ck it up on a regular basis. I want a proven vendor for my mission critical stuff.
I am sorry, I don't see them addressing ANY of the real-life pain points you have operating your own rack. Unless they ship a couple of data center engineers inside the box.
The whole cloud thing is simply designed for managers who believe that, in a job market where it's hard to find DC/server engineers, things will be magically done in the background. With this, it's worse - it pretends that "stuff just works" if you ship it in a single box. And that's simply not the case. A DC installation not constantly maintained by skilled engineers will go down in flames within months, one way or another.
EDIT: "it's not for me" - I'm not working with organizations that have that sort of need directly. Re-reading that phrase, it came across as a bit dismissive of what they've built (which is undoubtedly impressive).
And yeah it's totally chill: this product is not for everyone. No slight taken!
If you take some time and read around their site you'll see that they're offering a ready to run (turnkey) server. They have everything packed together, they've integrated everything that is needed (CPU, disk, networking...) into a nice looking cabinet, with not too many wires and they're selling it as a complete package.
If you're in need of a server (cloud computer) and you don't want to buy separate components and unpack them and connect them yourself, then this looks like a possible solution.
Just my perception given the front page. I agree it's a bit vague.
The key difference is the price structure -- IBM leases the hardware wherever they can get away with it, and uses a license manager to control how many of the machine's resources you can access (based on how much you can bear to pay). This, however, is like a mainframe you own.
The main USP: you own it, you're not renting it. You're not beholden to big cloud's pricing strategies and gotchas.
Secondary USP, why you buy this rather than DIY / rack computer vendor: it's a vertically integrated experience, it's preassembled, it's plug and play, compatibility is sorted out, there's no weird closed third party dependencies. Basically, the Apple experience rather than the PC experience.
Dimensions (H x W x D): 2354mm (92.7") x 600mm (23.7") x 1060mm (41.8")
Weight: up to ~2,518 lbs (~1,145 kg)
It's a rack. BTW, they seem to follow a no-pictures policy; I found only a few pictures of boards but no picture of the product as a whole.
It's the iPhone of hyperconverged infrastructure.
(Sorry; three sentences.)
* Physical space - The servers themselves require a certain amount of room and depending on the workloads assigned will need different dimensions. These are specified in rack "units" (U) as the height dimension. The width is fixed and depths can vary but are within a standard limit. A rack might have something like 44U of total vertical space and each server can take anywhere from 1-4U generally. Some equipment may even go up to 6U or 8U (or more).
* Power - All rack equipment will require power so there are generally looms or wiring schemes to run all cabling and outlets for all powered devices in the rack. For the most part this can be run on or in the post rails and remains hidden other than the outlet receptacles and mounted power strips. This might also include added battery and power conditioning systems which will eat into your total vertical U budget. Total rack power consumption is a vital figure.
* Cooling - Most rack equipment will require some minimum amount of airflow or temperature range to operate properly. Servers have fans but there will also be a need for airflow within the rack itself and you might have to solve unexpected issues such as temperature gradients from the floor to the ceiling of the rack. Net heat output from workloads is a vital figure.
* Networking - Since most rack equipment will be networked there are standard ways of cabling and patching in networks built into many racks. This will include things such as bays for switches, some of which may eat into the vertical U budget. These devices typically aggregate all rack traffic into a single higher-throughput network backplane that interconnects multiple racks into the broader network topology.
* Storage - Depending on the workloads involved storage may be a major consideration and can require significant space (vertical Us), power, and cooling. You will also need to take into account the bus interconnects between storage devices and servers. This may also be delegated out into a SAN topology similar to a network where you have dedicated switches to connect to external storage networks.
These are some of the major challenges with rack-mounted computing in a data center, among many others. What's not really illustrated here is that since all of this has become so standardized, we can now fully integrate these components directly rather than buying them piecemeal and installing them in a rack. This is what Oxide has to offer. They have built essentially an entire rack that solves the physical space, power, cooling, networking, and storage issues by simply giving you a turn-key box you plant in your data center and hook power and interconnects to. In addition, it is a fully integrated solution, so they can capture a lot of efficiencies that would be hard or impossible in a traditional design.
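To put toy numbers on the space and power budgeting above (every figure here is invented, purely for illustration):

    // Back-of-envelope rack planning; every number is invented.
    fn main() {
        let total_u = 44;       // vertical budget of the rack
        let switch_u = 2;       // top-of-rack switching
        let pdu_u = 2;          // power distribution
        let server_u = 2;       // each server is 2U
        let server_watts = 800; // per-server draw under load

        let usable_u = total_u - switch_u - pdu_u;  // 40U left for compute
        let servers = usable_u / server_u;          // 20 servers
        let kw = servers * server_watts / 1000;     // ~16 kW in...
        println!("{servers} servers, ~{kw} kW in, ~{kw} kW of heat out");
    }

Note how the power and cooling figures are two sides of the same number: every watt the rack draws is a watt the facility has to remove.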
As someone with a lot of data center experience I am very excited to see this. It is built by people with the correct attitude toward compute, imo.
The Oxide machines seem to be aimed at 2020.
[0]: https://github.com/cbiffle
[1]: https://cliffle.com/p/dangerust/
Can we please stop with the "the only way to get our pricing is by booking a sales call with us." This is a 100% surefire way for me to never pay for your product, and instead go to competitors who provide straightforward, no-nonsense pricing on the website.
This is ironic given the amount of self-praise in this article about how much they care about shipping something you can buy once instead of renting from the traditional cloud. Great, so tell me how much it costs...
I worked in photojournalism in university, and it really strikes me when people talk about how cool their novel interconnect is and how clean the design is, but their long writeup doesn't feature any pictures?? Would love to see what it looks like.
(emphasis mine)
I'm having a hard time getting past the first sentence of this blog post.
Maybe I'm missing the obvious.
I feel like the "running your own datacenter" complexity of on-prem is a big part of why companies go to the cloud in the first place - but a good amount of cloud lock-in is actually in the software/service stack and reliance on it. I know there is a competent open source option for most of these services (and in fact most are repackaged open source software...), but they do seem pretty disparate and require a fair bit of knowledge and expertise to deploy and run at scale.
Secondary question in the same thread: What about "accounting"? Running infrastructure at large-scale companies (such that would look at a rack-level solution) usually entails budgeting and accounting of resources - how does oxide simplify this for infrastructure ops trying to keep the various IT consumers honest?
On accounting: yes! Tenancy is something we have designed abstractions for from the start, with our silo and project abstractions[0], and we are actively adding resource allocation and monitoring; stay tuned!
[0] https://docs.oxide.computer/guides/key-entities-and-concepts
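For those who don't click through: the linked guide describes silos that isolate groups of projects, with projects owning the actual resources. A rough sketch of how accounting can roll up over that hierarchy (the types and field names here are mine, not Oxide's API):

    // Rough model of the documented tenancy hierarchy; types are mine.
    struct Silo { name: String, projects: Vec<Project> }
    struct Project { name: String, instances: Vec<Instance> }
    struct Instance { name: String, vcpus: u32, memory_gib: u32 }

    // Accounting can then roll up per project and per silo.
    fn project_vcpus(p: &Project) -> u32 {
        p.instances.iter().map(|i| i.vcpus).sum()
    }

    fn silo_vcpus(s: &Silo) -> u32 {
        s.projects.iter().map(project_vcpus).sum()
    }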
It's high time the datacenter takes a more holistic approach. For years I have been thinking that it's silly to stick a bunch of tiny fans in individual servers when one large impeller pulling air from an enclosed rack would be more efficient. Imagine large fans with ducts that come down to each rack or row of racks. Flow in == flow out; so a large-diameter fan (5 feet) coupling down to a small duct (10 inches) means the air moving through the small end will be really moving fast through those components!
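The speed-up falls out of the continuity equation for (roughly) incompressible flow, A1*v1 = A2*v2, so velocity scales with the square of the diameter ratio. Quick check of the numbers above:

    // A1*v1 == A2*v2 for incompressible flow, so v2/v1 = (d1/d2)^2.
    fn main() {
        let d_fan_in = 60.0_f64;  // 5 ft impeller, in inches
        let d_duct_in = 10.0_f64; // 10 in duct
        let speedup = (d_fan_in / d_duct_in).powi(2);
        println!("air in the duct moves {speedup}x faster"); // 36x
    }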
Old-school CIOs, who make enterprise decisions without developer support, are typically the buyers of hyperconverged infrastructure (or utility companies who struggle with CapEx vs OpEx).
I wish them luck. There's a bunch of really good folks over there.
Does this "lock" you into the Oxide platform and you just buy into the whole thing instead of buying some server from Dell, then running some Proxmox like software and a Docker host?
My dad, who worked for IBM through the 90s and 00s, always points out how amusing this is. We started with the cloud. Then went to PCs. Then are going cloud again.
Doesn’t mean this is wrong. It’s just amusing.
If we do it properly, it should be far more optimal than all the local computers not doing anything most of the day.
It's a company run by people who care and have cared for a long time, and they have all my faith.
Over the decades the separation between "software" people and "hardware" people has grown. With "cloud" people have grown comfortable papering over poor basic performance by abstracting away even your visibility into how a system is running. "You don't need to worry about that! It's ~serverless~." But you'll worry when that bill comes due at the end of the month.
With systems like Oxide, you are allowing users to once again actually see what they get, and ensure they get what they paid for, from a high-end cloud server.
It should be putting other systems providers on-notice that the days of flaky, non-performant, and poorly-integrated components are behind us. People want beasts from their servers.
And software designers, this is also your wake-up call that you can't just put lousy-performing software with poor CPU utilization and memory hogging on big metal and hope that it's "good enough." You really need to design your software to run ~efficiently~ in such systems.
Lol. Bro.
This 100%!
Not everything needs to be a subscription. Sure, there are running costs for operating that cloud computer as well as ongoing software development costs. But having photo albums in the cloud shouldn't cost me $30/month just because of storage - let me buy that hard drive (and maybe the compute, why not) and pay $10/yr for operational costs.
How many engineers and how many years does it take to build the infrastructure, write all the software, and deal with all the hardware to provide a photo service that "just works"? How many customers do they need before they break even? And obviously they can't raise VC because a business model that is predicated on making basically no money per customer can never have an exit. How would you bootstrap such a thing?
It feels strange that physical storage costs almost nothing on amazon but the storage attached to a cloud machine is expensive. But run the numbers and you'll see that running a cloud service isn't about hardware costs at all.
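A back-of-envelope version of those numbers (ballpark prices from memory, not quotes):

    // Ballpark figures, not quotes: object storage at ~$0.023/GB-month
    // vs. a consumer hard drive at roughly $20/TB, bought once.
    fn main() {
        let library_tb = 2.0_f64;       // a 2 TB photo library
        let rent_per_gb_month = 0.023;
        let hdd_per_tb = 20.0;

        let rent_monthly = library_tb * 1000.0 * rent_per_gb_month; // ~$46/mo
        let buy_once = library_tb * hdd_per_tb;                     // ~$40 once
        println!("rent ~${rent_monthly:.0}/mo vs buy ~${buy_once:.0} once");
        // The gap is the redundancy, power, bandwidth, and people
        // you're paying for, not the disk itself.
    }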
It seems that Oxide's product is rather tied to AMD's CPU offerings for the time being (which perhaps will serve them very well for now), and given what they've accomplished so far, I imagine it would take quite a bit of effort on the platform initialization side (and other lower level stuff) to get things working with Intel CPUs instead. Surely the ability to have vendor diversity for many of their components should be an advantage for Oxide (and their customers downstream as well), so maybe there's something to think about there. Of course, this is all interesting only once Intel actually gets competitive again on the datacenter CPU side of things, where they seem to have really dropped the ball in recent years.
On the other hand, their networking switch hardware uses Intel components (Tofino), so maybe that'll be the extent of Intel's integration in their rack for the foreseeable future?
AMD recently purchased Pensando. I'm not sure if their network chips are similar to Tofino, but they seem to be P4-compatible DPUs, so they might be a good choice when migrating off of the dead Tofino platform.
So if anything, Oxide will be moving further away from Intel!
I would love to work on this kind of stuff, not sure how to get started to end up working somewhere like Oxide..
It seems the only reason people are getting so excited is the team members and the name.
What if this venture becomes yet another VC pump-and-acquire scam to get you to purchase a massive room heater?
They better not get themselves acquired like the rest of the hyped-up VC scams out there.
Best case, this gets acquired by HP, IBM, or Dell and will die out because talent will leave.
I know a 4bn/y revenue company that could greatly benefit from this but they will never even consider buying from a company like this.
From their press release:
> Oxide customers include the Idaho National Laboratory as well as a global financial services organization. Additional installments at Fortune 1000 enterprises will be completed in the coming months.
* https://www.bloomberg.com/press-releases/2023-10-26/oxide-un...
From an interview:
> And what was the reaction of lucky rack customer #1? “I think it’s fair to say that the customer has appreciated the transformationally different (and exponentially faster!) process of going from new rack to provisioned VMs.”
> The same day Cantrill appeared as a guest on the Software Engineering Daily podcast.[1] (“The crate, by the way? Its own engineering marvel! Because to ship a rack with the sleds, it’s been a huge amount of work from a huge number of folks…”) Cantrill wouldn’t identify the customer but said that “Fortunately when you solve a hard problem like this and you really broadcast that you intend to solve it… Customers present themselves and say, ‘Hey, we’ve been looking for — thank God someone is finally solving this problem’.”
* https://thenewstack.io/in-pursuit-of-a-superior-server-oxide...
Niches can be profitable. Not every company has to be "web scale".
> I know a 4bn/y revenue company that could greatly benefit from this but they will never even consider buying from a company like this.
What does "like this" mean? Small? Every company starts out that way.
I think it's cool that someone is trying to do startup hardware things, as in recent years it seems like it's mostly software. I don't have a use for the product in my area of IT, but I wish them luck.
https://www.asrockind.com/en-gb/4X4-7840U-1U
Also a former employer who was a bit below the hyperscaler level also had what they called "roll-in racks", though I believe they were just taking standard servers and networking gear and integrating them somewhere (presumably with cheap labor) and then bringing them into the DC as needed.
So I don't think it's a particularly new concept, but I agree it looks like it has some potential as a product.
You're not buying a bag of peanuts and they will likely price it differently if you're buying 1x or 500x.
All that really matters is whether Oxide's guesses about who'll buy and what they'll pay are right.
Let's say that Oxide is guessing that their lead customers are telco and oil/gas: conservative industries, regulated, very strict about uptime and data sovereignty, etc. Oxide might spend 1 to 2 years running the gamut of all the people in those types of customers that can say no: ops, security, network teams, etc. But if Oxide can reach the finance team and get them to believe that a one-time chunk of $$ is better than opex in the current interest rate environment, then it really doesn't matter what the tech stack is. (And bear in mind that financial models are f(t), so things might turn upside down as the economic picture changes over time.)
Assuming they have done this math (surely a yes, or else VCs would not be giving them money), Oxide must need sufficient funding for a long sales cycle, and plausible enough bench strength to support customers who are going to hold them to ruthless SLAs and penalty clauses. All of that takes serious coin up front; e.g., there's no outsourcing your 24x7 support to some kid reading from a script.
The Cloud is not "a computer". The Cloud is a public utility. You don't rent the transformers at an electric utility company - because why the hell would you? It's just one component of a much larger system that is valueless to the consumer without all the other components.
What these people are selling is more like a battery that you can purchase that plugs into the electric grid. But why would you want to purchase a battery? It degrades over time, losing value, and eventually needs to be recycled, and somebody has to deal with all that. Renting is the perfect way to push all of that time-consuming and complex maintenance out to someone else who can lower cost with volume.
The idea that renting servers is not sustainable would suggest that somehow computers are not like housing, cars, shop vacs, or any of the million other things you can rent. A computer is an expensive commodity like any other, and renting makes perfect sense a lot of the time.
If you need to purchase, for economic, load, tax, regulatory, or other reasons, there are already ready-made computers with (or without) service contracts and supported OSes that can be plopped into any colo, and colos sell dedicated machines already. This pitch doesn't include anything new other than buzzwords. Yeah I'm sure they did a bunch of engineering - that nobody will ever need. I'm calling shenanigans.
Didn't that still come with the weeks of overhead that compelled people to adopt cloud in the first place?
I'm not sure what "weeks of overhead" you're referring to, specifically, as it could mean a few different things. Happy to elaborate if you will :)
When opex is a primary benefit of the cloud, that's specifically for start-ups and other businesses that have little working capital. The actual primary benefit of the cloud for most cloud customers is flexibility - prior to AWS adoption, getting a server provisioned could be a multi-week if not multi-month affair negotiating between devs or operators and on-prem infrastructure workers to get the servers through capacity planning and provisioning. AWS made provisioning so simple, that you could start to set up auto-scaling, because you had extraordinarily high confidence that the capacity would be immediately available when your autoscaler tried to scale up.
But opex is not a benefit of the cloud for heavy/established businesses. Cloud opex is a financial expenditure every month that you can't get rid of and counts directly against your profits. Indeed, the desire for capex in the cloud is so high that businesses routinely purchase Reserved Instances and other forms of committed usage, which allows accountants to treat the cost of the RI as capex and then discount the expenditure through depreciation (to zero, since there will be nothing to sell when the RI expires) over the lifetime of the RI. It is normal and frequent for businesses to make capital expenditures to reduce their operational expenditures over time, thus increasing their monthly/quarterly profits.
Oxide's unique value proposition is to give customers, particularly those with high monthly cloud bills that they have difficulty reducing, the operational flexibility of cloud computing with the profit-improving benefits of capital expenditures.
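Toy numbers to make the RI mechanics above concrete (all figures invented):

    // Invented figures: $1,000/mo on-demand vs a 3-year reserved
    // instance bought up front at a 40% discount.
    fn main() {
        let on_demand_monthly = 1_000.0_f64;
        let months = 36.0;
        let ri_upfront = on_demand_monthly * months * 0.6; // $21,600 capex

        // Depreciated straight-line to zero over the term:
        let depreciation_monthly = ri_upfront / months;    // $600/mo
        let pnl_delta = on_demand_monthly - depreciation_monthly;
        println!("monthly P&L improves by ~${pnl_delta}"); // ~$400
    }

The same accounting logic applies even more strongly to a rack you own outright, which is the wedge Oxide is aiming at.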
My google-fu failed to find any articles for or against my statement that weren’t paid advertising or lightweight tech summaries. StackOverflow will have to do. https://stackoverflow.blog/2023/02/20/are-companies-shifting...
For a regional hospital for example, there might be a desire for in-house network and hardware - but perhaps the system for digital patient journals run on Kubernetes and managed databases.
I'd rather have $1 in capex than $10 in opex
1. Are there integrator companies that take these building blocks and build datacenters for other companies?
2. Are these racks intended to be bought in large quantities and put into new datacenters?
3. Is liquid cooling an option?
4. Do you sell datacenters as well as just the racks? Do you have any of your own datacenters built out?
5. Tough question: if these are so competitive and good for your customers, why aren't you also making lots of them for yourself and also selling cloud compute?
2. Eventually, right now we are just starting, so working closely with customers and in small quantities, but eventually that is the idea.
3. No. We are already doing very well with fans, the design means a very tiny power draw. The racks are quite quiet.
4. No.
5. Selling hardware and running a private cloud are two different businesses. The public clouds don’t also sell servers. We don’t plan on running a cloud, even if doing so would be a nice experience, we have to focus on what we’re doing.
Virtually all the value we get from cloud is in their managed offerings of complex and difficult to admin systems like Kubernetes, Postgres, and managed storage layers.
That’s where all the value is but it’s also where all the cost is. If you just want compute, storage, and bandwidth all those can be had at commodity prices. Look at Hetzner, Hivelocity, FDCServers, DataPacket, OVH, or VPS providers like Vultr. You can get dozens of cores, terabytes of SSD, and gigabits of unmetered bandwidth for a few thousand dollars or less. It’s very cheap.
Without the high level managed stuff we would be off cloud five minutes from now.
I’m sure there is a market for this among people who run on premise data centers, but I think they would have a much larger market if they went further up the software stack. Right now they just look like they are competing with Dell and Supermicro, not Amazon or Google Cloud.
https://azure.microsoft.com/mediahandler/files/resourcefiles...
The difficulty is that HP/Dell/NetApp/etc. are well established, and maybe if you're an SME you can appreciate the benefits of an end-to-end integrated solution, but most of the people making purchasing decisions largely don't. It's easy to market on CapEx but difficult to market on OpEx - you're asking customers to (likely) pay more for a similar-performance, invariably less featureful solution from a small company which they're then locked into, largely leaning on the promise it'll be easier to operate (where is your GPU integration, object storage, shared filesystem, integrated Kubernetes, etc.?). They then have to convince all the incumbent software vendors (e.g. SAP/Oracle/etc.) to support their software on this specialized hardware.
Do you think people want to design games with unreal engine on the cloud? Is there no desktop application that's safe? I don't fully accept this premise and I wonder if I'm the crazy one sometimes.
> TODay WE aRE aNNOUNCING tHe
It's actually much weirder than that, as some of the capitals in my quote are actually big lowercase letters, not capitals (i.e. the opposite of smallcaps).
Open it in a private window and see if you can reproduce. By default, a private window uses no extensions though I prefer to have uBO enabled by default there, too.
Another peculiar detail I saw on the landing page is the screenshot of the management interface. It must be old, as it shows various VMs including FreeBSD 12.1 and Ubuntu 20.10. Ubuntu 20.10 was EOLed long ago and also wasn't an LTS release. Not something you wanted running in an enterprise 3 years ago, never mind recently.
My favorite is the automated CIO repo: https://github.com/oxidecomputer/cio
I'd love to see a video exploring the rack and showing the first time setup.
> While the software is an essential part of the Oxide cloud computer, what we sell is in fact the computer. As a champion of open source, this allows Oxide a particularly straightforward open source strategy: our software is all open.
But their homepage says this:
> As soon as power is applied to an Oxide rack, our purpose-built hardware root of trust – present on every Oxide server and switch – cryptographically validates that its own firmware is genuine and unmodified.
That sounds like tivoization to me. The value of their software being open source is heavily diminished by their hardware refusing to run anything but their "genuine and unmodified" software.
Can it run Linux or Windows Datacenter? As ex-Sun folk, they seem to have loved the last open source release before Oracle, but what it runs now is the still-community-developed variant of SunOS???
I would imagine the system architecture is different enough that running Linux as the base OS would take some work, for example it doesn't have an AMIBIOS.
Would be interested to hear your experience.
So far, I’ve seen mixed reviews. Some say that you can end up using unsafe a lot and so it’s better to stick with C or even use Zig.
Wondered if there was any merit to this.
Using unsafe in Rust isn't inherently bad. If you're doing embedded work, using unsafe is mandatory once you get to the level of interacting with the hardware.
That doesn't mean the code is literally unsafe, just that the interactions are happening in a way that the compiler can't guarantee. That's completely expected when you're poking at hardware registers.
You still get most of the benefits of Rust, the language. I've had good success with it.
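The usual pattern is to confine the unsafe to a tiny, documented wrapper so everything above it stays in safe Rust. A generic sketch, with a made-up register address and bit layout:

    // MMIO sketch; the address and bit are invented for illustration.
    const GPIO_OUT: *mut u32 = 0x4002_0014 as *mut u32; // hypothetical register

    /// Safe wrapper: callers never touch the raw pointer.
    fn set_led(on: bool) {
        // SAFETY: GPIO_OUT is a valid, aligned, device-owned register on
        // this (imaginary) board, and nothing aliases it as normal memory.
        unsafe {
            let cur = core::ptr::read_volatile(GPIO_OUT);
            let new = if on { cur | 1 } else { cur & !1 };
            core::ptr::write_volatile(GPIO_OUT, new);
        }
    }

The invariants live in one place, get audited once, and the rest of the codebase calls set_led() like any other safe function.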
Specs of the switch are available here: https://oxide.computer/product/specifications
There doesn't seem to be a way to provision a bare metal operating system, so HPC is out, and the networking is previous-generation, so there are two opportunities for progress right there.
Now that they're VC-funded I expect an OEM to snap them up before either opportunity can be pursued.
I remember a time when a Series A was rarely above $10m. Feels like at that capital level they should be further into the alphabet.
[PDF] https://web.archive.org/web/20050325213243/http://www.egener...
The same hardware and software using kvm would be much more appealing. Not being able to run nested virt or GPUs on such a powerful and expensive rack is pretty brutal. Perhaps that is planned for the future though.
It's going to be tough sledding getting these into enterprise accounts.
Best of luck.
This feels like the Rackspace approach
But yes, if there are multiple teams and they have direct access to this rack to provision VMs, maybe yes.
Also - as a developer you don't provision VMs on a daily basis. You start a project, you need those resources at the very outset, and that's about it?
Or maybe it'll be a curious unintended match instead, with whatever "streaming" devices Microsoft pushes next being connected to one's own "cloud computer" instead...
I can imagine SaaS businesses offering their product as a "box": making on-prem significantly easier. Deliver your software along with the hardware, hook it into their network.
Done.
Anyone with more experience have thoughts on if this will potentially become a common option for SaaS?
Similar hardware for home labs and for personal computers with open firmware would be great. I'd also love to see servers with hardware which can be upgraded (rather than completely recycled).
How is this different than colo space that has been predominant for years? Perhaps simply that it can be purchased more easily and standardized like a typical cloud VM?
Reinvent OpenStack, slap it on some 8Us and storage arrays, shove it in a rack, and ship it to a colo with a professional installer. So basically one of the larger server vendors but with integrated "cloud" software, minus the 2-hour service turnaround and spare parts.
The fact that they're writing the software from scratch is going to add years of lead time until they reach parity with other solutions. My guess is they're hoping they can get sticker price or TCO low enough that it outweighs the lack of functionality and uncertainty of a brand new everything. If you just need some VMs in a lab in the office closet, might work.
They only have $44M to build that company? I hope the next funding round is better :/
You all have arrived. Wishing you decades of prosperity and good fortune.
If users wanted to save some money, why not just implement a hybrid cloud system?
All text seems to indicate the former while the logo seems to spell the latter?
If they manage to do get an IBM mainframe level after-sales support, they will rock.
Does someone know how much the lowest level box costs?
I absolutely dread that future. I want all my computation back to be local, doesn't matter to me if I own the cloud computer or not as long as I also don't own the environment it is embedded in. If the connection to it isn't mine and neither is its source of power then I don't really control "my" cloud computer.
Pricing??
Also "Facilities" link in footer: 404
"A disk cannot be added or attached unless the instance is creating or stopped." "A network interface cannot be created or edited unless the instance is stopped."
really?
I know it's a brilliant team, and since ZFS/OpenSolaris/Dtrace I've known about Bryan. The product is a nicely looking product too. One thing I don't get is the target market. Who is it? Banks? Cloud vendors? Insurance companies? I think there's a part of the computer business that I don't get yet.
One guy in one of the previous HN threads here said that it's convenient to run 1 RFP for 1 box costing $1M-$2M vs running 30 RFPs for isolated components to put on the rack. Ok, I can see that. That's some argument.
But I know that Supermicro has rack assembly service. https://www.supermicro.com/en/products/rack and I'm sure Dells and HPEs have the same. Does it mean that it's impossible to call them and tell them: "Excuse me, I want 2 racks, 15 systems each, top of the line AMD CPU with RAM, NVMe storage, all maxed out, and whatever fastest Juniper switches you can find. Put me VMWare vFusion on it. Ship this thing to Infomart in Texas. I have $2M here with your name on it". They won't do this for me?
It's either this, or Oxide is going to, e.g., Bank of America, and along with the RFP the bank is asking: "Show me complete supply chain records, including the source code to your boot loader, drivers, and any firmware that any component on this motherboard is running". And maybe that's... impossible these days?
I'm a startup CTO, so not the consumer base, but after DHH started posting his thing about cloud exit, I decided to explore this, and I walked the floor of 3 DCs (in SV/TX). People who walk you around the DC don't appear to care about cables, fans, noise, etc. If it's longer than 1 hour on the floor, they'll put AirPods in and they're done. I've seen cages that look unified, pristine, with love and affection put into the cable layout, and I've seen cages that look like a total mess. It's your cage - you only go there if things break.
My last guess is that maybe the Oxide rack will end up being sort of what an Apple MacBook is among cheap HP/Acer laptops from Walmart. And it'll be a shiny toy of bold bearded IT dudes who work for GEICO IT by day, but by night scavenge eBay for used server deals and get excited about the idea of running their own private rack in their basement. They can't afford it at home, but at work they'll want their own Oxide. If you're reading this, know that I know who you are.
It seems they only support up to virtual machines, but this is still miles ahead of VMware, Nutanix, or Cisco ACI to my eyes. This honestly looks simple enough that a developer/DevOps team could manage it. With the other 'hyperconverged' infrastructure offerings you will still need a dedicated team of trained ops professionals to manage it.
Man, I would love to work for a company like that. I don't know why more people don't set up their startups like that.
> Cloud computing is the future of all computing infrastructure.
Oh god no!
Anywhere I can donate to have this not happen? I want my computing on premise, preferably under my table.
But well, if it _HAS_ to happen, 0xide is probably a lesser evil.
Let's start with their beliefs: 1. Cloud computing is the future of all computing infrastructure.
Yeah, no. There's still gonna be things like mobile phones which will do some stuff locally. And probably also desktops. And probably also on-prem servers.
Which brings us to their belief no. 2: The computer that runs the cloud should be able to be purchased and not merely rented.
If you buy a computer and run virtualized loads on it, then you're not doing cloud computing. As simple as that.
Basically, as others have commented already, this is just something like a blade server system or a hyperconverged system like for example Nutanix is offering. The only real difference seems to be the open source approach.