Look at who oxide is selling to and for what reasons.
It's about compute + software at rack scale. It doesn't matter whether it's good; it matters that it's integrated. Gear at this level gets sold with a service contract, and "good" means you don't have to field as many calls (keeping the margins up).
> Everything we hear about Oxide sounds like an impressive green field implementation of a data center, but is that enough?
Look at their CPU density and do the math on power. It's fairly low density. Look at the interconnects (100 Gb per system). Also fairly conservative. It's the perfect product to replace hardware that is aging out, as you won't have to re-plumb for more power/bandwidth, and you still get a massive upgrade.
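To make "do the math on power" concrete, here's a back-of-the-envelope sketch; every number in it is an illustrative assumption, not a published Oxide spec:

```python
# Back-of-the-envelope rack power math. All figures are illustrative
# assumptions for the sake of the calculation, not Oxide's actual specs.

sleds_per_rack = 32      # assumed number of compute sleds in a full rack
watts_per_sled = 450     # assumed draw per sled under load

rack_kw = sleds_per_rack * watts_per_sled / 1000
print(f"rack draw: {rack_kw:.1f} kW")  # rack draw: 14.4 kW

# Many legacy enterprise racks are provisioned for roughly 10-15 kW, so a
# rack in this range can replace aging gear without re-plumbing power/cooling.
legacy_circuit_kw = 15   # assumed existing per-rack power budget
assert rack_kw <= legacy_circuit_kw
```

The point of the exercise: a conservative density target keeps the rack inside the power envelope the facility already has.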
I think for the internet to break out of walled gardens, high-quality independent datacenters need to exist -- nobody wants to manage their own datacenters, and nobody wants to rely on Google/Amazon/Microsoft's platforms or (even worse) business products. I hope this continues.
Our stack is open source.
> what difference does it make if its on-prem or offsite?
The difference is not where it runs, it's that you own our racks, rather than rent them. In the traditional cloud, you're renting. Other vendors who sell you hardware will still have you paying software licensing fees, so it never feels like you truly own it. We don't have any licensing fees.
Because data must be on-prem, banks are stuck in legacy infra paradigms. The whole org suffers, innovation is stifled, yada yada…
An on-prem cloud product (hardware+software) is a game changer for these companies, IMO.
My question to Oxide: how easy is it to integrate external hardware into the cloud? For example: a bunch of GPUs or a bunch of next-gen hardware like SambaNova.
Oxide has been remarkably transparent about the development and architecture of critical system components. We can only hope they succeed and inspire others to follow their transparency lead.
You mean all the huge companies that ran multiple datacenters before the cloud was even a thing?
>We decided to do something outlandishly simple: take the salary that Steve, Jess, and I were going to pay ourselves, and pay that to everyone. [https://oxide.computer/blog/compensation-as-a-reflection-of-...]
Does everyone at Oxide have the same equity grant?
I thought I saw this question answered in a previous thread and the answer was basically “no”, but the question has been avoided a lot.
Aspects of equity compensation would inherently need to be different over time due to valuation, fundraising stage, and so on. However I always thought it was strange that they made a big deal about paying everyone the same base salary but then were silent on the equity comp strategy. Everyone knows that in a job like this the total comp is important.
The old Oxide compensation discussions were interesting. There was discussion about how they considered candidates asking about compensation to be something of a negative signal, because they wanted people who weren't in it for the money, basically. I heard this from a now ex-employee of Oxide who was describing how to navigate their hiring process, so take it with a grain of salt.
EDIT: I checked their website again. The compensation link goes to a blog post ( https://oxide.computer/blog/compensation-as-a-reflection-of-... ) which has this section about equity:
> Some will say that we should be talking about equity, not cash compensation. While it’s true that startup equity is important, it’s also true that startup equity doesn’t pay the orthodontist’s bill or get the basement repainted. We believe that every employee should have equity to give them a stake in the company’s future (and that an outsized return for investors should also be an outsized return for employees), but we also believe that the presence of equity can’t be used as an excuse for unsustainably low cash compensation. As for how equity is determined, it really deserves its own in-depth treatment, but in short, equity compensates for risk – and in a startup, risk reduces over time: the first employee takes much more risk than the hundredth.
Which doesn't answer the question.
I've heard this from multiple hiring managers and C levels. The cognitive dissonance is amazing.
Do you know why I show up and work? Because I am paid for it, and in this country, medical is also gatekept by employment.
If I wasn't paid, I wouldn't work for them.
But somehow, I'm supposed to not care about money while at the same time caring about money.
It's not an unusual way of thinking, but every time I see it, it seems bizarre to me. If the candidate were to propose any project once hired, I'm sure these folks would want them to think about costs and benefits.
This policy selects either for people unable to reflect on their life with the same wit they apply to work; or people who will front about their motivations. Both seem like poor outcomes.
I can't say anything of Oxide because I don't really know the people involved other than reading their writing.
This is an attitude a lot of industry veterans have. Most of them were in software pre-2015 and have long made their money and paid their debts. They saw the heyday of job hopping, massive salaries, massive equity, and "rockstars" (that never were). The people I know that joined software post-2015 and onward are not in the same financial position.
When it comes time to actually cash out, or you notice some animals are more equal than others, the excuses start. The drama happens. No one knows what the equity is worth. If you try to advocate for yourself, you'll be told the equity is actually worthless.
From here the 200k salary seems fair. Just don't live in SF. Chicago, Philly, Pittsburgh, Cleveland; quite a few cities are perfectly livable on 200k.
In fact, in any of the above metros you will be living nice.
Most valuable people know they’re valuable, and do (and should!) negotiate compensation.
I think companies avoiding discussions about compensation is a negative signal, because they’re only in it for my labor.
> equity compensates for risk – and in a startup, risk reduces over time: the first employee takes much more risk than the hundredth.
I think that paragraph answers the question pretty clearly. As an Oxide employee you will get equity. It will generally be less than what the people who came before you got, but more than what people who come after you will get. So it's obviously not the same for everyone.
This is unlike an RSU from a public company, where you can sell the value of your shares as they vest and add that to your income with minor risk of price volatility.
It’s intellectually dishonest. I look forward though to their “in depth” follow up post.
> Since originally writing this blog entry in 2021, we have increased our salary a few times, and it now stands at $207,264. We have also added some sales positions that have variable compensation, consisting of a lower base salary and a commission component.
Employees apart from the very early ones are likely really getting fucked on equity because oxide explicitly treats candidates asking about equity as a red flag.
Companies structured like this end up as a combination of rich people who don’t need money and are just interested in the problem space, and mediocre people who can’t get a better offer. (If you are good, remote work with high TC is definitely available.)
Unless Oxide is giving out huge equity bonuses for good perf (which would make their comp post hypocritical), it’s going to have a continuous talent decline as it grows. There aren’t enough independently wealthy high performers interested in what Oxide is doing to sustain a talent pool.
But how good do you have to be at things other than the work itself (edit: and how lucky as well) to land one of those jobs with higher compensation? For someone outside of high-cost-of-living tech hubs, Oxide's fixed salary could be life-changing all by itself, even with minimal equity. The fact that they take that deal doesn't necessarily mean that they're mediocre.
However I’d also suggest that the concentration of tech in the last decade is partly due to startups chronically stiffing their employees on equity. The difference in compensation potential naturally forces a talent split where talent joins larger and much better-compensated firms.
Are you being snarky and suggesting that employees deserve the same upside that founders deserve?
How is it silly if they already do that for salaries? Co-ops with equal ownership isn't unheard of, and isn't silly at all.
The founders are excluded from the employee compensation discussion. They own the company because they founded it. Nobody thinks they just put the equity into a structure that nobody owns.
The question is whether all employees are compensated equally, which is a very important detail. Giving everyone the same salary is very different than giving everyone the same total compensation.
The founders aren’t really taking on that much more risk than the rest of the early team; it’s the VC’s money, not theirs.
I absolutely don’t agree with the idea that employees deserve the same upside as founders (because I think initiative and persistence against adversity/inertia is insanely rare and valuable and should be rewarded immensely), but it is not an insane proposition.
It’s especially popular among people who think the actual work output is more important than the leadership initiative. Both are obviously essential, and founders do both, while employees do only the first (or they’d be founders themselves).
Plus I predict more companies will exit the cloud once they realize how thick the margins have become or want better guarantees over sovereignty.
There are of course benefits to using cloud-based VMs, and I use them as well. But you are paying a very steep premium for what is a pitiful amount of compute and memory. There's a lot of wiggle room for price decreases, and the only thing preventing that is a lack of competition. There's a reason Amazon is so rich: nobody seems to challenge them on AWS pricing.

There's value in having them do all the faffing about with hardware, of course. That's why companies use them. I'm on GCP, but same principle. I don't want to have to worry about replacing hard disks in the middle of the night, dealing with network routers that are misbehaving, cooling issues, etc. That's why I pay them the big bucks. But I'm well aware that it's not that great of a deal.
I used Hetzner a decade ago and paid something like 50 euros per month for a quad-core Xeon with RAID 1 disks, 32 GB of RAM, etc. Bare metal, of course. But also: 50 euros. We had five of those. Forget about getting anything close to that with modern cloud providers for anything resembling a reasonable price. Your first monthly bill might actually add up to enough to buy your own hardware. Very tempting. They have beefed up their specs since then; you now get more for less. And they also do VMs now.
With on-demand pricing the pod would pay for itself (and the cost to manage it) within a year.
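The break-even claim can be sketched with a toy calculation; all the numbers below are hypothetical placeholders, not real quotes:

```python
# Toy break-even sketch: owned rack vs. renting equivalent on-demand VMs.
# Every figure here is a made-up placeholder for illustration.

rack_capex = 600_000                  # hypothetical purchase price
annual_support = 0.15 * rack_capex    # assumed support-contract percentage
ops_cost = 150_000                    # assumed cost to manage it for a year

cloud_monthly = 90_000                # hypothetical on-demand bill for the
cloud_annual = 12 * cloud_monthly     # same amount of compute and RAM

first_year_owned = rack_capex + annual_support + ops_cost
print(first_year_owned)  # 840000.0
print(cloud_annual)      # 1080000

# Under these assumptions the rack pays for itself (plus the cost to
# manage it) within the first year.
assert first_year_owned < cloud_annual
```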
That’s the only thing I can tell is useful about “the cloud”
I can build racks and servers easily, but the challenge is availability and getting past everyone’s firewalls
So the real win is any service that allows for instant DNS table updates and availability of DNS whitelisting.
This is why Google, MSFT, etc. win in email: they have trusted endpoints.
Alternative routes with self-signed DKIM etc. are more or less blocked by default, forcing you onto a provider.
We need more Cloudflare Tunnel-style access and local hosting via commercial ISP routes, and fewer new centralized data centers.
I've also been doing web-related work for a living for over 25y and yours is the most spot-on take I've seen in this discussion.
But at the high end, I think the market is literally infinite. Every large company should want this, and want it now. Cloud providers are extremely expensive and, outside of the top tier where prices are really outrageous, they perform poorly and often offer little support.
This really feels like the future.
Now a lot of the things that were done pre-cloud were done in bad ways, and I'm not saying that we were right about those things. Having APIs for provisioning and monitoring are far better than submitting a request to some queue and having your VM provisioned manually 1 week later by someone who gets a key detail wrong. APIs and granular permissions are how this should be done, and "the cloud" taught everyone that very early. But a lot of companies are really stuck in the cloud mindset now, and won't let go of it.
I think companies like Oxide and product lines like theirs are going to start becoming common. Microsoft, of course, completely fumbled the ball with Azure Stack, and I've never even heard of anyone deploying AWS Outpost, both for the same reason: the costs for these are absolutely insane for what they provide.
What most folks really want is their own infrastructure running their own stuff using APIs that are either written in-house or provided by some vendor. Oxide is betting that they can sell you a working scalable system for less money than it would take to hire a team to write the APIs that would allow a company to do the same with off-the-shelf hardware. I think that they're probably right about that.
Anyone can throw together a bunch of parts and software to run Internet-facing services from a closet. That doesn't mean that you're safe from issues that Oxide aims to solve, especially at that small scale.
My homelab (which hosts my blog and a couple of other things) runs off a Topton N17 micro-ATX motherboard ordered on AliExpress, featuring an AMD Ryzen 7 7840HS. Yes, that's a mobile CPU shoehorned onto a desktop platform with a funky mounting bracket to take AM4/AM5 coolers.
Anyways, I wanted to run SmartOS on it, but this system is so janky that the Illumos kernel couldn't find any PCIe devices at all. After spending an afternoon reconfiguring PCIe bridges by hand with the kernel debugger in an attempt to troubleshoot PCIe initialization, I gave up and installed Proxmox.
Admittedly, as far as janky hardware goes, this takes the cake, but the point stands. To paraphrase Bryan, buggy firmware is the sysadmin's worst enemy.
You can "just" migrate by exporting and importing the VMs.
> Our system delivers all the hardware and software you need to run cloud…
To "run cloud?" Does this mean treating your own servers like "serverless?" Does it mean running Kubernetes? Is this primarily for people who want to self-host LLMs? I'm literally so old that I write programs that run on a server and never think about infrastructure.
I agree that's a bit awkwardly phrased, let me send in a patch for that.
> Does this mean treating your own servers like "serverless?" Does it mean running Kubernetes? Is this primarily for people who want to self-host LLMs?
Not exactly any of that. We let you treat an entire rack as a single pool of resources, spinning up virtual machines that our control plane manages for you. Think "VPS provider but you own it." There's an API, but if you want to see what our console looks like, you can poke around with a demo here: https://console-preview.oxide.computer/
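For a rough feel of what "give me a new VM with these specs" looks like against the control plane, here's a sketch. The endpoint path and field names are my assumptions from skimming the public API docs, so treat them as illustrative and check the real reference before relying on them:

```python
import json

# Sketch of an instance-create request to the rack's control plane.
# The path and field names below are assumptions, not a verified API.
instance = {
    "name": "web-01",
    "ncpus": 4,
    "memory": 8 * 1024**3,  # bytes (8 GiB)
    "hostname": "web-01",
}

# The real call would be an authenticated POST, roughly:
#   POST https://<rack>/v1/instances?project=<project>
# (not executed here)
print(json.dumps(instance, indent=2))
```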
Machine/system/service deployment via API.
See §Essential Characteristics
> On-demand self-service; Broad network access; Resource pooling; Rapid elasticity; Measured service
* https://en.wikipedia.org/wiki/Cloud_computing
None are generally applicable to standalone pizza boxes managed individually (as opposed to being herded by, e.g., OpenStack).
Sadly, when I look at their posted jobs I don't see much that would line up with my skillset, but I keep an eye on them just on the off chance.
And such a clear value statement.
If I could, I'd invest. Sure, they might fail, but they're shooting their shot, and to me it has a clear differentiator that would improve the market for most of their users (if not the incumbents).
Folks seem to be moving toward liquid cooling[4] either to the rack/chassis[5] or even to the chip[6].
[1] https://oxide.computer/blog/how-oxide-cuts-data-center-power...
[2] https://www.youtube.com/watch?v=4vVXClXVuzE
[3] https://www.youtube.com/shorts/hTJYY_Y1H9Q
[4] https://blogs.nvidia.com/blog/blackwell-platform-water-effic...
[5] https://datacentremagazine.com/data-centres/top-10-liquid-co...
Incremental improvements from things like more efficient fans and reducing the number of power conversions are great, but the power drawn by the CPUs or GPUs is on another level.
https://signalsandthreads.com/the-thermodynamics-of-trading/
There are many things that suggest free software and the movement for software freedoms might be on its way to a historical footnote. This is absolutely not one of them.
Hey Bryan, one day when you’re very successful market-wise (you and your team have already obviously been massively successful from an engineering standpoint) and aren’t in crash-priority-override mode to get cash flow, please consider a project to build SME stuff that reaps the security and integration benefits of your big enterprise stuff but is affordable for end users like entrepreneurs and home hobbyists, like Ubiquiti does. I’d love a lil’ $5-10k homelab unit, and I bet a number of smaller universities and organizations would go for stuff in the low-5-figure, 2-3 kW range. Obviously your bread and butter comes from companies that size their orders by number of racks, but if you never go downmarket then thousands of us hackers that love what you’re doing will never get to touch Oxide stuff except at a job in a megacorp.
> Our thesis was that cloud computing was the future of all computing; that running on-premises would remain (or become!) strategically important for many; that the entire stack — hardware and software — needed to be rethought from first principles to serve this market; and that a large, durable, public company could be built by whomever pulled it off.
Very clear and logical, stating from their first principle world view what the result could be if they succeed.
I thought that Oxide was based on OpenCompute, which is basically the rack designs that Facebook open sourced, after hiring some Google employees to build their custom data centers. This project started in 2011:
https://en.wikipedia.org/wiki/Open_Compute_Project
https://github.com/opencomputeproject
Google definitely rebuilt the stack from first principles -- the data center was basically a huge embedded system, from power to racks to CPUs/memory/disk/network to kernel to user space to cluster software. (And no, they did not use Kubernetes.)
I have no idea how active the OpenCompute project is -- is Facebook still the main contributor, and are they still releasing their new rack designs?
And I also wonder how much Oxide has diverged from it? At least on the hardware side. On the software side, I guess the Illumos-derived parts and Rust parts are completely different.
Well, when they say they did their own:
- board designs
- microcontroller OS
- platform enablement software
- host hypervisor
- switch
- integrated storage service
- control plane
Then yeah, it seems like maybe only the board designs and the switch COULD have either come from or been influenced by OpenCompute, but maybe those didn't either. (I have no idea tbh.) Maybe they only got the mechanical and power stuff from OpenCompute? i.e. the parts that change more slowly.
So in the end, even the mechanical and power didn't come from OCP. We clearly build on other components (we didn't build our own rectifiers!), but we absolutely built the machine from first principles.
The ultimate aspirational homelab setup, ngl.
Would love to see their growth result in more versatile options, like quarter-rack or industrial deployments someday. In the meantime, congrats on the successful fundraising!
https://rfd.shared.oxide.computer/
Start from RFD 1 ;)
[0] https://oxide-and-friends.transistor.fm/episodes/rfds-the-ba...
I'm a big proponent of the "writing is thinking" mantra. Unfortunately, in my experience, not all technical leaders value grassroots proposal processes like the Oxide RFDs.
Edit: some popup on their page links to https://github.com/oxidecomputer/rfd but it's a 404
Edit2: it's at https://github.com/oxidecomputer/rfd-site
USIT... what a cryptic website! Is it government-related (like In-Q-Tel) or private? Have no idea...
I think it brings an interesting point actually.
"Who will buy these?" The obvious answer is anyone with a need, but the "standard pizza box" server is ubiquitous for the same reason that x86 and mini PCs outcompeted mainframes.
((controversial take warning))
Mainframes are objectively better at high uptime and high throughput than rube-goldberging a bunch of semi-reliable x86 boxes together, yet the ubiquity of cheap x86 hardware meant that the lion's share of development happened on them.
People could throw a Pentium 2 PC in a corner and have it serving web traffic, and when things started growing too much you could add more P2 machines or even grab a four-socket Xeon machine later down the line.
This isn't possible with mainframes, and thus, people largely don't mess with them.
The annoying thing is that this kind of problem has some kind of stickiness effect. If you need a server, and then another, you buy them as you need them, and if you're already 20 pizza boxes in, it's a pretty big ask to rip them all out and move to a different vendor entirely rather than staging replacements one after another.
So I guess their target audience is the "we don't want to touch cloud" organisations that have a good IT spend that are willing to change vendors?
I don't think I've worked for any of those.
(FD: I'm actually a fan of the Oxide team, and the concept, and I would buy into the ecosystem except I have needs that are at most 3 servers at a time)
They can migrate to an Oxide "cloud" without too much difficulty, as opposed to procuring, installing, and maintaining the rube goldberg machine you mentioned.
They also attract interest among the "we don't want to touch cloud" organizations where trying out $1M in hardware is a rounding error, but I don't know how much traction they'd end up getting.
Also, I'm given the impression that Oxide prioritizes user experience - their website shows off a clean UI and they presumably have modern, easy-to-use APIs. Mainframes, in contrast, seem like a whole different world - if I convinced my company to move to a mainframe, who would even operate it? I know modern mainframes are closer to "normal" servers than their old reputation, but still, I'd imagine it's pretty esoteric stuff, and IBM is famous for not being the cheapest to work with.
I do find it pretty funny that their business model seems to be reinventing mainframes, but I feel like there are important distinctions too. Hopefully they do well (I'd also love to have access to this stuff, but yeah, same "needs that are at most 3 servers" deal).
Companies do modernization/migration projects from time to time; I guess one way to solve the audience issue is to find companies that have such a planned event and try to market a “better” alternative.
While I’m also a fan of Oxide; my primary concern is whether they can actually get companies to ignore the marketing that comes out of cloud services.
The market is complex. Those who will buy will be those who find that the existing yin doesn’t snap perfectly into their own business’ yang. They’ll be at the margins first (the post references a lab, for instance, not a booming tech company), then over time less so.
In exchange for your own hardware purchase cost, which in practical terms is also a license.
> We did our own integrated storage service, allowing the rack-scale system to have reliable, available, durable, elastic instance storage without necessitating a dependency on a third party.
In exchange for an unbreakable dependency on the first-party solution.
I'm being overly aggressive because I do lurve your product and the Sun/Apple way of vertical integration is especially valuable for security ... things break at the interfaces and since you have absolute control over that, you can be actually good. Then there's the improved UX that comes with an integrated product. The root of trust work you've done is especially noteworthy, in a sea of also especially noteworthy efforts across the entire vertical product.
But I'm leery that with the absolute lock-in, and VC pressure, you might succumb to squeezing your customers a la AVGO.
(random superficial comment: the ascii art + high fidelity ui combination on the oxide landing page is chef's kiss)
> An oxide rack has a minimum cost of something like 600k not including all the infra you need to run a rack, maintenance, and then needing to upgrade
So maybe not super useful for people who are looking to do some home-lab stuff or even setups for SMBs.
Edit: Another thread: https://news.ycombinator.com/item?id=36552556
> My guess is that a base config would be somewhere between $400K to $500K but could very definitely go up from there for a completely "loaded" config.
(other people's) previous guesstimates were 500k - 1M. It does look like you can order it partially provisioned (compute wise I'm guessing) and expand later.
Here's what I can knowingly share. We sell full-rack and half-rack SKUs today, coming in at 2048/32 TiB and 1024/16 TiB of CPU cores and RAM respectively. Disk varies depending on the drive size. On top of the cost of the rack you'll pay an annual percentage for support, and that's it. No software licensing or other weird fees.
just want to give a shout out to my old boss from the mid 80's who had researched it (mgt consultant) and told me at that time that cloud computing was the future of all computing because the economics were inescapable.
RIP.
It is always a pleasure to follow up with Oxide on their podcast, their technological decision to keep the Solaris lineage alive, and all the places across the infrastructure where they have been using Go and Rust as well.
In 2027-2030, we will have 256-core Zen 7 CPUs with PCIe 6.0 or 7.0 SSDs and networking. If liquid cooling ever comes to Oxide, we are looking at 5-10x the compute power of its current hardware.
Somewhere along the line a single Oxide rack would offer enough compute and storage for 95% of customers. And whenever I think about having Solaris in every rack, it just puts a smile on my face.
It's more of that we're a young company, so we can only do so much, so quickly. Plus, because we do so much custom work, that all takes time. As we grow and scale up, we'll be able to move forward more quickly on updates on the hardware front.
The on-prem cloud is definitely the holy grail for many of us, but the justification for the capex and the ongoing commitment to owning the hardware would be that it has just as easy an upgrade path.
We'll ultimately ship a next-generation rack to take advantage of changing constraints (e.g., physical form factor, power requirements) and we'll provide a mechanism for upgrading to that generation of rack as well. Likely, in the form of multi-rack connectivity to move workloads or perhaps even financial incentives to purchase the latest and greatest.
I don't work on the hardware side at Oxide so I don't have further details to share. Great question!
Took them 3 months before a "we're not interested" email was sent. No reasons, either.
I probably should have just used an LLM to generate good sounding garbage. Probably the same chance to get even a stage 1 interview.
I didn’t realize it at the time, but Oxide’s application process was the best form of interview prep I’ve done. The process forced me to thoroughly document my values and career accomplishments. In later non-Oxide interviews, I effectively recited what I had written in my materials. In that way, it has felt less one-sided than every other company application process I’ve gone through. I was able to take away an artifact from the experience, versus being filtered out via a coding challenge. It’s also been rewarding to reflect on my submission from years ago to see how my mindset and skills have evolved.
If you have any interest in working in the pediatric telemedicine space, I encourage you to email me your application. We accept Oxide materials. I’m happy to provide feedback as a hiring manager. My email and our company website are in my bio.
We don't do "stage 1 interviews", we only do interviews as the very final step in the process, when we've narrowed things down to a handful of people. That initial packet is like 85% of the process.
Yeh this is really really tough. They have an excellent RFD that explains the hiring process and it contains a section on this here: https://rfd.shared.oxide.computer/rfd/0003#_rejection_of_non...
I ported their Hubris kernel to RISC-V and ARMv6 before applying for a role to work on it, so thought I had reasonably strong materials, but got rejected.
I fully understand and empathise with their reasons for not providing feedback, but it does mean you're left not knowing whether you were a hair's breadth away from getting in but were behind a better candidate and it would be worth trying again in the future, or rejected for other reasons that no improvement in materials will make up for. As a fully signed-up member of imposter syndrome club I'll obviously lean towards the latter ;)
With this news I hope they plan on expanding hiring for frontend devs soon, would love to work with such leadership once in my life.
Also, is your username something interesting? It feels familiar but a quick `echo -n '' | md5sum` didn't yield anything.
The flat salary structure at a generous level (from my perspective, anyway) is a breath of fresh air. Everyone getting caught up on the equity is a bit hard to understand, given the clarity of the message from the company.
I will be applying for one of the open positions. Kudos to this company for their approach to business, and congrats on the success.
I was happy to learn about Sweater Ventures https://www.sweaterventures.com/our-story (and their kind), which are opening up access to investments and also helping ensure the entry price is quite low (for example, I invest $50 a month with them).
It is my hope that in the future you will be able to order a micro fraction of any company as easily as you could a starbucks.
And IMO part of that equation might be to have the state start to work against contracts which restrict your rights over your own property (essentially, contracts that restrict when you can sell your shares, usually not until an IPO or a company-organized liquidity event).
Even if all of the laws lined up just right, it’s unlikely that a company like Oxide would be interested in collecting a lot of little investors and then maintaining all of the obligations that go along with serving those investors.
> With growing customer enthusiasm, we were increasingly getting questions about what it would look like to buy a large number of Oxide racks. Could we manufacture them? Could we support them? Could we make them easy to operate together?
i.e. they need the capital in order to be able to satisfy large orders on sane timeframes - but that's very expensive when you're a hardware business.
https://artemis.sh/2022/03/14/propolis-oxide-at-home-pt1.htm...
(The author of this blog post now works for Oxide!)
Disclosure, I work in the software automation space.
Also love the Oxide and Friends podcast! What strikes me most lately about Oxide since falling into their hefty episode backlog, is their book club culture. I really appreciate the ability to get a fly on the wall experience of it, I learn a lot!
Yes.
> Then how does it bring cloud scalability?
You buy an entire (or half) rack at a time, and then can treat the entire thing as one elastic pool of resources; you do not need to manage the individual sleds within the rack yourself. Your interface to the rack is similar to a VPS provider's: it's "give me a new VM with these specs."
Happy to elaborate if you have more questions.
It should be trivial to create my own S3 bucket that stores data on my laptop.
I’m really glad folks like Oxide are moving in that direction.
But it seems when there’s enough money in something, openness goes out the window.
#1: Don't expect to see it until after AMD or someone else makes a product that actually manages to compete against Nvidia. Nvidia is pretty hostile to not running their low-level software stack, and Oxide is all about the legacy of Solaris.
#2: AMD MI400 or a relative will be an extra chipset on future server motherboards (not a separate PCIe card). Simultaneously, the boundary between "CPU vector processing" and "GPU used for transformers" will blur, and the chipsets will slowly merge into chiplets in one package.
#3: AMD MI400 and similar AI accelerators will primarily be sold as full racks with their own custom "networking" (UALink switches), and the host CPUs on those devices will be lower-spec, relegated mostly to setup and metrics for the AI work, much like how storage and networking appliances are built; the AI workload will not even pass through the host CPU. I'm not sure Oxide can compete in that world. The "business logic CPUs" will reside in a different rack.
Isn't this just an iGPU? Those generally have much lower performance than GPUs sitting on a dedicated card.
Could be a need for unclassified workloads also… but I'm curious whether this is a defense and intelligence community venture fund backing this next round with my tax dollars.
USIT > U.S. Innovative Technology (“USIT”) is an investment firm that backs growth-stage commercial companies with critical technologies relevant to the national interest, including artificial intelligence, future of compute, new industry, space & communications, bio & healthcare, and defense tech. USIT was founded by American technologist and visionary investor Thomas Tull.
joined by: Riot Ventures, Eclipse, Jane Street
https://oxide.computer/remote-access
> It seems like this is only tailored to large enterprise sales.
Since we sell half or full racks at a time, it's true that we're geared towards larger companies, as small shops just don't need that much computer.
> The only other option is just using them as your AWS replacement.
That's the use case, yep!
I just hope they can survive in a world dominated by relatively inexpensive, general-purpose Apple Watches.
So what is Oxide and why would I use that over OpenStack?
Still releasing software twice a year:
* https://en.wikipedia.org/wiki/OpenStack
Available as open source so you can self-deploy, or go with a vendor if you want some hand-holding / support:
* https://www.openstack.org/marketplace/distros/
Perhaps they were "everywhere" when they were newer and needed to get their name out there, but nowadays they're more established and so will come up in search and such as an option for those that want to run cloud (either for their internal needs, or as the basis of a public product offering).
$100M is a lot of money that investors will want to see returned, on top of the almost $200M raised before it.
Remember, Oxide is VC-backed, so there must be some strings attached here. Is the exit an IPO or an acquisition, or do they just stay private?
"Our thesis is [...] and that a large, durable, public company could be built by whomever pulled it off."
So the goal is an IPO.
Also, what they do is very capital-intensive, so I’m not surprised that they’re raising more money.
In the midst of everyone making a land grab for GB200s, does anyone have time to evaluate their alternative OS?
That's the high level, happy to elaborate if you have more questions.
It’s a 7-foot-tall rack with compute sleds, storage sleds, and network sleds. It has its own BMC design, its own power design, its own physical connectivity design, its own virtual networking design, its own hypervisor (although it’s based on bhyve), its own OS (although it’s based on illumos), and its own API to spin things up and configure them.
As I think of it, it’s basically a private cloud in a box and The Register wasn’t far off.
Cool 1999 aesthetics though
Do new internet users not know that the landing page usually contains information about the product a company talks about in its blog posts? In 99% of cases you can find what you're looking for on the landing page. It took me all of 30 seconds to get there; writing this comment took longer.
And then as a follow-up you have to explain that
- they sell an entire rack full of servers
- they're the only ones that do this. Normally you have to buy all the pieces and put them together yourself, or pay someone to put them together for you, but the parts are all designed independently by different companies, so it's kind of a mess. Oxide makes one big rack with like 16-32 servers, all designed by them, and it just works: you plug it in and you have servers you can put a bunch of VMs on. Oh, and they're huge; each one literally costs like a million dollars
Company name / single-line headline
Problem / our solution
Gives you the entire company premise and product description.
Seems pretty clear to me.
The “iPhone” of the data center.
The founders are uber geeks but have never done anything successful on their own. I predict this company will be another zombie.
I would love to know what you consider success then ...
Oxide is kind of like Sun Microsystems, building polished vertically integrated monster machines, e.g. Sun E10k.