Mind you, this XROS idea came after Oculus was reorged into FB proper. It felt to me like there were FB teams (or individuals) that wanted to get on the AR/VR train. Carmack was absolutely right, and after the reorg his influence slowly waned, for the worse.
There's some value in a huge company funding this kind of research, as long as it doesn't affect the practical, real for-profit projects.
"To make something really different, and not get drawn into the gravity well of existing solutions, you practically need an isolated monastic order of computer engineers."
As a thought experiment:
* Pick a place where cost-of-living is $200/month
* Set up a village which is very livable. Fresh air. Healthy food. Good schools. More-or-less for the cost that someone rich can sponsor without too much sweat.
* Drop a load of computers with little to no software, and little to no internet
* Try reinventing the computing universe from scratch.
Patience is the key. It'd take decades.
What problem are we trying to solve that is not possible right now? Do we start from hardware at the CPU ?
I remember an ex-Intel engineer once saying: you could learn about all the decisions that go into modern ISA and CPU uArch design, along with GPUs and how it all works together, but by the time you've done all that and could implement a truly better version from a clean sheet, you're already close to retiring.
And that assumes you have the professional opportunity to learn all of this, to implement, fail, make mistakes, relearn, etc.
Parts of Africa and India are very much like that. I would guess other places too. I'd pick a hill station in India, or maybe some place higher up in sub-Saharan Africa (above the insects)
> What problem are we trying to solve that is not possible right now?
The point is more about identifying the problem, actually. An independent tech tree will have vastly different capabilities and limitations than the existing one.
Continuing the thought experiment -- to be much more abstract now -- if we placed an independent colony of humans on Venus 150 years ago, it's likely computing would be very different. If the transistor weren't invented, we might have optical, mechanical, or fluidic computation, or perhaps some extended version of vacuum tubes. Everything would be different.
Sharing technology back-and-forth a century later would be amazing.
Even when universities were more isolated, something like 1995-era MIT computing infrastructure was largely homebrew, with fascinating social dynamics around things like Zephyr, interesting distributed file systems (AFS), etc. The X Window System came out of it too, more-or-less, which in turn allowed for various types of work with remote access unlike those we have with the cloud.
And there were tech trees built around Lisp-based computers / operating systems, Smalltalk, and systems where literally everything was modifiable.
More conservatively, even the interacting Chinese and non-Chinese tech trees are somewhat different (WeChat, Alipay, etc. versus WhatsApp, Venmo, etc.)
You can't predict the future, and having two independent futures seems like a great way to have progress.
Plus, it prevents a monoculture. Perhaps that's the problem I'm trying to solve.
> Do we start from hardware at the CPU ?
For the actual thought experiment, too expensive. I'd probably offer monitors, keyboards, mice, and some kind of relatively simple, documented microcontroller to drive those. As well as things like ADCs, DACs, and similar.
Zero software, except what's needed to bootstrap.
It's seriously not something you want to do if you want to get anywhere.
Then again, it's a lot of fun, maybe imagining where it could be some day if you had an army of slave programmers (because it still won't make money, lol).
It's a contradiction very much at the core of the idea: should I expect that the operating system my monastic order produces be able to play Overwatch, or to open .docx files? I suspect not; but why? Because they didn't collaborate with stakeholders. So they might need to collaborate with stakeholders; yet that was the very thing we were trying to avoid by making this an isolated monastic order.
Sometimes you gotta take the good with the bad. Or, uh, maybe Microsoft should just stop using React for the Start menu, that might be a good start.
Agreed, but again worth pointing out the obvious. I don't think anyone is actually against React per se, as long as M$ could ensure React renders all their screens at 120fps with no jank, 1-2% CPU usage, minimal GPU resources, and little memory usage, all that at least 99.99% of the time. Right now it isn't obvious to me this is possible without significant investment.
What could it mean for a "tech" town to be born, especially with the techniques and tools we have today? While the dream hasn't really been borne out yet (especially at the village level), I would argue we could do even better in middle America with this thinking: small college towns. While that's a bit of an existing gravity well, you could make a focused effort to get a flywheel going (redoing mini Bell Labs around the USA, solving regional problems, could be a start).
Yes, it takes decades. My only thought on that is that many (dare I say most) people don't even have short-term plans, much less long-term plans. It takes visionaries with nerves and will of steel to stay on the path and make things happen.
Love the experiment idea.
To kick-start, give them machines with Plan 9, ITS, or an OS based on Lisp / Smalltalk / similar? Or just microcontrollers? Or replicate 1970s-era university computing infrastructure (where everything was homebrew)?
Build out coursework to bootstrap from there? Perhaps scholarships for kids from the developing world?
In a few decades they'd reach our current level, but then, the rest of the world won't have idled during those decades, and we don't need the old problems solved again.
These days, you get a medium-level description and a Linux driver of questionable quality. Part of this is just laziness, but mostly this is a function of complexity. Modern hardware is just so complicated it would take a long time to completely document, and even longer to write a driver for.
Not sure what the situation is for other hardware like NVMe SSDs.
[0] 2750 page datasheet for the e810 Ethernet controller https://www.intel.com/content/www/us/en/content-details/6138...
Real question: Why do you think Intel does this? Does it guarantee a very strong foothold into data center NICs? I am sure competitors would argue two different angles: (1) this PDF shares too much info; some should be hidden behind an NDA, (2) it's too hard to write (and maintain) this PDF.
The people who develop OSes are cut from a different cloth and are not under the usual economic pressures.
That's what's claimed. That's what people say, yet it's just an excuse. It's the same sort of excuse people make after they write a massive codebase and then say, "Oops, sorry, didn't get around to documenting it."
And no, hardware is not more difficult than software to document.
If the system is complex, there's more need to document it, just as with a huge codebase. On their end, they have new employees to train up and testing to manage. So, the excuse that silicon vendors have to deal with such immense complexity? My violin plays for them.
That's obviously the wrong message. They should say "Go ask the engineering VP to get us off any other projects for another cycle while we're writing 'satisfying' documentation".
Extensive documentation comes at a price few companies are willing to pay (and that's not just a matter of resources. Look at Apple's documentation)
It’s not first party documentation that’s the problem. The problem is that they don’t share that documentation, so in order to get documentation for an “unsupported” OS a 3rd party needs to reverse engineer it.
Of course, your custom kernel will still have to have some of its own code to support core platform/chipset devices, but LKL should pretty much cover just about all I/O devices (and you also get stuff like disk filesystems and a network stack along with the device drivers).
Also, it probably wouldn't work so well for typical monolithic kernels, but it should work decently on something that has user-mode driver support.
thus calling into question why you ever bothered writing a new kernel in the first place if you were just going to piggyback Linux's device drivers onto some userspace wrapper thingy.
I'm not necessarily indoctrinated to the point where I can't conceive of Linux being suboptimal in a way so fundamental that it requires no less than a completely new OS from scratch. But you're never going to get there by recycling Linux's device drivers, because that forces you to design your new OS as a Linux clone, in which case you definitely did not need to write an entire new kernel from scratch.
Realistically for the OEM to debug the issue they're going to need a way to reliably repro which we don't have for them, so we're kind of stuck.
This type of problem generalizes to the development of drivers and firmware for many complex pieces of modern hardware.
(And I feel bad saying this since Meta obviously did waste eleventy billion on their ridiculous Second Life recreation project ...)
Then how do devices end up having drivers for major OSes? Is it all guesswork?
What is an easy gate task to get into “reverse engineering some drivers for some OS”?
Second thought: I don’t even know how to write a driver or a kernel, so I better start from there.
You know, one would think that having complex hardware should make writing a driver easier, because the hardware can take care of itself just fine and provide a reasonable interface, as opposed to the devices of yore, which you had to babysit, wasting your main CPU's time on silly stuff like sending two identical initialization commands with a 30 to 50 microsecond delay between them, or whatever.
I guess one exception maybe is Nvidia who have sort of hidden the complexity by moving most driver functionality onto software on the card. At least that's how I understood it. Don't quote me on that.
Without that, we would have probably just switched hw, because the quite obscure bug was in the ASIC, and debugging that on 2005-6-ish hw is just infeasible.
PS: Half-joking, you can write some big portions with LLMs but the point stands.
I've only seen John Carmack's public interactions, but they've all been professional and kind.
It's depressing to imagine HR getting involved because someone's feelings had been hurt by an objective discussion from a person like John Carmack.
I'm having flashbacks to the times in my career when coworkers tried to weaponize HR to push their agenda. Every effort was eventually dismissed by HR, but there is a chilling effect on everyone when you realize that someone at the company is trying to put your job at stake because they didn't like something you said. The next time around, the people targeted are much more hesitant to speak up.
https://www.youtube.com/watch?v=52hMWMWKAMk&t=1s
This is a guy who figures that what he wants to do most with his 3 free weekends is to port his latest, greatest engine to a Cortex-A8. Leading corporate strategy? Maybe not. But Carmack on efficiency? Just do it.
Dude just seemed frustrated with the lack of attention to things that mattered.
But...that honestly tracks with Meta's past and present.
If he doesn’t believe in something, he can sometimes be over critical and it’s hard to push back in that kind of power imbalance.
This is Meta. Let the kids build their operating system ffs. Is he now more concerned with protecting shareholder value? Who cares.
Interestingly, he was pulling the same bs at Google until reason prevailed and he got pushed out (but allowed to save face and claim he resigned willingly[1]).
[0] https://x.com/yewnyx/status/1793684535307284948 [1] https://x.com/marklucovsky/status/1678465552988381185
I even found myself letting really bad things go by because it was just going to take way too much of my time to spoon-feed people and get them to stop.
(Mind you, Carmack himself was responsible for Oculus' Scheme-based VRScript exploratory-programming environment, another Meta-funded passion project that didn't end up going far. It surely didn't cost remotely as much as XROS though.)
> If the platform really needs to watch every cycle that tightly, you aren't going to be a general purpose platform, and you might as well just make a monolithic C++ embedded application, rather than a whole new platform that is very likely to have a low shelf life as the hardware platform evolves.
Which I think is agreeable, up to a certain point, because I think it's potentially naive. That monolithic C++ embedded application is going to be fundamentally built out of a scheduler, IO and driver interfaces, and a shell. That's the only sane way to do something like this. And that's an operating system.
Imagine being able to build an operating system, basically the end-game of being a programmer, and get PAID for it. Then some nerd tells on you.
Anyway, if anyone reading this gets a chance to build a custom OS for bespoke HW, and get paid FAANG salary to do so, go for it! :-D
"Impact" is Facebook-speak for "how useful is this to the company", and it's an explicit axis of judgement.
Why is complaining to HR even an option on the table?
I got angry emails from people because I wrote "replacing a primary page of UI with this feature I never use doesn't give me a lot of value", because statements like that make "the team feel bad". It was an internal beta test with the purpose of finding issues before they go public.
Not surprisingly, once this culture holds root, the products start going down the drain too.
But who cares about good products in this great age of AI, right?
You don't know someone or how they really behave because they are a public figure.
A little bit of negative feedback at a high level can domino quickly, too: massive pivots, reorgs, the works.
Facebook VR never needed a new OS in the first place. It needed actual VR.
I've had to coach people and help them understand the entitlement involved in demanding that everyone adjust and adhere to their personal preferences and communication style. In my experience, it's about seeking to understand the person and adapt accordingly. Not everyone is willing to do that.
These weird hagiographies need to go. Carmack is absolutely not known to be kind. I have no idea what happened here, but the idea that he's this kindly old grandpa who could never, ever be rude or unprofessional is really out there.
There is a difference between "this project is not going to work" and "these people are incompetent and the project should be cancelled as a result". The former needs to be said; the latter is an HR violation.
This is one of the reasons I’m sick of working pretty much anywhere anymore: I can’t be myself.
Appreciating people for their differences when they are humble and gifted is easy. I side with liberals, but I have a mix of liberal, moderate, and conservative friends.
But there are only so many years of pretending to appreciate all of the self-focused people that could be so much better at contributing to the world if they could quietly and selflessly work hard and respect people with different beliefs and backgrounds.
I’m happy for the opportunity I have to work, and I understand how millennials think and work. But working with boomers and/or gen X-ers would be so much less stressful. I could actually have real conversations with people.
I don’t think the problem is really with HR. I think the problem is a generation that was overly pandered to just doesn’t mix with the other generations, and maybe they shouldn’t.
Supposedly you were meant to have your disagreements in private, and come to support whatever was decided ("hold your opinions lightly"). The latest version of it was something like "disagree and commit".
This meant that you got a shit tonne of groupthink.
This pissed off Carmack no end, because it meant shitty decisions were let out the door. He kept banging on about "time to fun": any feature that got in the way of starting a game up as fast as possible would get a public rebuke (rightly so).
People would reply with "but the metric we are trying to move is x, y & z", which invariably would be some sub-team PSC (read: promotion/bonus/not-getting-fired system) optimisation. Carmack would basically say that the update was bad and they should feel bad. This didn't go down well, because up until 2024 one did not speak negatively about anything on Workplace. (Once Carmack reported a bug to do with head tracking [from what I recall]; there was lots of back and forth, with the conclusion "won't fix, don't have enough resources". Carmack replied with a diff he'd made fixing the issue.)
Basically, Carmack was all about the experience, and Facebook was all about shipping features. This meant that areas of "priority" would scale up staffing. Leaders distrusted games engineers ("oh, they don't pass our technical interviews"), so they pulled in generalists with little to no experience of 3D.
This translated into small teams that had produced passable features growing 10x in six months and then producing shit. And because they'd grown so much, they constantly re-orged and pushed out the only 3D experts they had, so they could never deliver. But as it was a priority, they couldn't back down.
This happened to:
Horizons (the original roblox clone)
video conferencing in oculus
Horizons (the shared experience thing, as in all watching a live broadcast together)
Both those Horizons (I can't remember what the original names were) were merged into Horizon Worlds, along with the video conferencing for Workplace.
Originally each team was around 10 people; by the time I left, it was something like a thousand or more, with the original engineers either having left or moved on to something more productive.
tldr: Facebook didn't take to central direction-setting, i.e. "before we release product X, all its features must work, be integrated with each other, and have an obvious flow/narrative that links them together". Carmack wanted a good product; Facebook just wanted to iterate shit out the door to see what stuck.
This gives you the best of both worlds: a carefully designed system for the hardware with near-optimal performance, and still the ability to take advantage of the full Linux kernel for management, monitoring, debugging, etc.
You can always mmap /dev/mem to get at physical memory.
They had amazing talent. Seriously, some of the most brilliant engineers I've worked with.
They had a huge team. Hundreds of people.
It was so ambitious.
But it seemed like such a terrible idea from the start. Nobody was ever able to articulate who would ever use it.
Technically, it was brilliant. But there was no business plan.
If they wanted to build a new kernel that could replace Linux on Android and/or Chrome OS, that would have been worth exploring - it would have had at least a chance at success.
But no, they wanted to build a new OS from scratch, including not just the kernel but the UI libraries and window manager too, all from scratch.
That's why the only platform they were able to target was Google's Home Hub - one of the few Google products that had a UI but wasn't a complete platform (no third-party apps, for example). And even there, I don't think they had a compelling story for why their OS was worth the added complexity.
It boggles my mind that Fuchsia is still going on. They should have killed it years ago. It's so depressing that they did across-the-board layoffs, including taking away resources from critically underfunded teams, while leaving projects like Fuchsia around wasting time and effort on a worthless endeavor. Instead they just kept reducing Fuchsia while still keeping it going. For what?
100% agree with your points. Watching it, I was like: yeah, hell yeah, working on an OS from scratch sounds awesome, those guys have an awesome job. Too bad they're making everyone else's job suck.
Fwiw inventing a new application ecosystem has never been a goal and is therefore not a limitation for its viability. The hard part is just catching up to all the various technologies everyone takes for granted on typical systems. But it's not insurmountable.
I'm also not sold on the idea that having more options is ever a bad thing. People always talk about web browser monoculture and cheer on new entrants, yet no one seems to mind the OS monoculture. We would all come out ahead if there were more viable OSes out there to use.
3 main OSes vs 2 main browser engines for consumers to choose from?
Anyway, the main issue with browser engine consolidation is that whoever owns the engine can make or break what goes into it. Just think about VS Code's current status, with all the AI companies wanting to use it and make it their own product while MSFT attempts to curtail it. At some point either MSFT decides to commit to FOSS on this one, or the multiple forks will have to reimplement some functionality.
These were my imaginings. I thought maybe an OS that could run on the web. Or an OS that could be virtualized to run across several machines. Or an OS that could run alongside several other instances on the same machine, each catering to a different user.
https://www.zdnet.com/article/whatever-happened-to-microsoft...
That jibes with my sense that Meta is a mediocre company.
"I think that your team shouldn't even exist" doesn't mean "I want your team to no longer exist.".
When I was on nuclear submarines we'd call what you are advocating "keep us in the dark and feed us bullshit."
On well-functioning teams, product feedback shouldn't have to be filtered through layers of management. In fact, it would be dishonest to discuss something like this with managers while hiding it from the rest of the team.
So what are we supposed to do? Just let waste continue? The entire point of engineering is to understand the tradeoffs of each decision and to be able to communicate them to others...
There's nothing wrong with well-founded and thoughtful criticism. On the other hand, it is very easy for this to turn into personal attacks or bullying - even if it wasn't intended to be.
If you're not careful you'll end up with juniors copying the style and phrasing of less-carefully-worded messages of their tech demigod, and you end up with a huge hostile workplace behaviour cesspit.
It's the same reason why Linus Torvalds took a break to reflect on his communication style: no matter how strongly you feel about a topic, you can't let your emotions end up harming the community.
So yes, I can totally see poorly-worded critiques leading to HR complaints. Having to think twice about the impact of the things you write is an essential part of being at a high level in a company, you simply can't afford to be careless anymore.
It's of course impossible to conclude that this is what happened in this specific case without further details, but it definitely wouldn't be the first time something like this happened with a tech legend.
The OS does process scheduling, program management, etc. Ok, you don’t want a VR headset to run certain things slowly or crash. But some Linux distributions are battle-tested and stable, and fast, so can’t you write ordinary programs that are fast and reliable (e.g. the camera movement and passthrough use RTLinux and have a failsafe that has been formally verified or extensively tested) and that’s enough?
The only thing I can imagine that would be more invasive would require a brain implant.
https://en.wikipedia.org/wiki/HarmonyOS_NEXT https://www.usenix.org/conference/osdi24/presentation/chen-h...
So someone at Meta was so sensitive that being told their behemoth of a project was ill advised ended up getting reported to HR?
Many of the scenarios they tried to address could have been handled with Mach, some realtime kernel, or Fuchsia. I recall that later on they did consider using Fuchsia as the user space for the OS for some time.
On another note, there was a similarly "just for fun" language effort in the org as well (e.g. "FNL"). That was also conceived by a bunch of randos who weren't compiler guys, had no product vision, and just did it for fun.
Well when the era of efficiency arrived all of this stuff ended.
Nothing they wrote couldn't have just been written as a kernel module in Linux. It would've also been so much easier, given all the documentation and general knowledge people have about Linux, both within the company and outside it.
But ultimately it just makes sense to take an existing kernel/OS (say, Arch) and adapt it to your needs. It can be hair-wrenchingly frustrating, it requires the company to be willing to upstream changes, and it still takes years, but the alternative is decades, because what sounds good and well designed on paper just melts when it hits the real world, and Linux has already gone through those decades of pain.
The driver ecosystem is the moat. Linux finally overcame it decades later
I think it was insane to start a new OS effort written in C/C++. We have plenty of OSes written in C/C++! We know how that story ends. If you're going to spend the effort, at least try a new language that could enable a better security model.
Still a very interesting project, but that feels like a similar story: for limited use cases (a smart thermostat/speaker with specific hardware) it works, but for wider use cases with heterogeneous hardware and complex interfaces (an actual screen, peripherals) it didn't.
For example any of the systems listed in Carmack’s post. Or perhaps Serenity OS, RedoxOS, etc.
The technical justification for Meta writing their own OS is that they'd get to make design decisions that suited them at a very deep level, not that they could do the work equivalent of contributing a few drivers to an existing choice.
If you mean exotic ones, then the answer is: the parts that get written are the easy parts, and getting support for hardware and software is the hard part.
Carmack being Carmack, I'm sure the HR report came to nothing but it's just another reminder of the annoyances I don't miss about working at a BigCo. In the end, it doesn't matter that it went nowhere, that he was right or that it was an over-reaction and likely a defensive move in inter-group politics Carmack wasn't even playing - it just slowly saps your emotional energy to care about doing the right things in the right ways.
That made me really think about how fragile and toxic people can be.
Another Amazonian almost got fired for reacting with a monkey covering eyes emoji to a post shared by a black person (no malintent, of course, just an innocent “mistake” most normal people wouldn’t even think twice about).
Also, I am not surprised he was reported -- typical underhanded political hustling commonplace at Meta.
Most opinions of this man exist in a vacuum, isolated from the real-world software industry. Building an OS from scratch is one of those examples.
It never seems like there's a significant reason behind them other than………"I made dat :P"
Why is there such a meme among gamers about Unity- and Unreal-based games?
Exactly because so many make so little effort that it's clear where the game is coming from.
Someone said your preferred design won't work, and you go to HR.
I gladly throw my idea under the bus when I hear why it's bad.
Now offering any critique of a thing in order to help the company comes with a career risk.
It's probably not that hard to write bare-metal code for a modern CPU that runs and crashes. It's obviously insurmountably hard to compete with Android on features with scratch-built bare-metal code. An "OS" can be anything between the two. But it's very easy to imagine an "XR OS" project snowballing quickly into the latter, and Carmack's concerns would be spot on (as they always are, and as was proven). Is it then an inherent difficulty in "designing a new operating system", or is it technically something else?
The drivers are the hard part. It takes a lot of inter-industry collaboration to get driver compatibility
I mean, I'd give a fair shake to an OS from the SQLite team [1].
Building a hobby OS taught me how little is just "software". The CPU sets the rules. Page tables exist because the MMU says so. Syscalls are privilege flips. Task switches are register loads and TLB churn. Drivers are interrupt choreography. The OS to me is just policy wrapped around fixed machinery.
The EOs https://en.wikipedia.org/wiki/EO_Personal_Communicator?usesk... used the AT&T Hobbit chipset, which was a descendant from the CRISP architecture. https://dl.acm.org/doi/pdf/10.1145/30350.30385 by Dave Ditzel et al. The architecture was informed by examining millions of lines of unix C code; the arch was an attempt to execute C code well. https://en.wikipedia.org/wiki/AT%26T_Hobbit?useskin=vector It was a beautiful overall design. The design focused on fast instruction decoding, indexed array access, and procedure calls. The 32-bit architecture of Hobbit was well-suited to portable computing, and almost ended up in the Apple Newton. The manual is possibly worth a peruse: http://www.bitsavers.org/components/att/hobbit/ATT92010_Hobb...
Meta has the talent and the balance sheet to pull this off. Worst case scenario we end up with one more open sourced operating system. Who knows what happens 20 years down the line.
Sigh... Usual company politics.
No matter how much money you pour in, or how much top talent, code quality, documentation, etc., developing a custom OS doesn't make sense.
Been there, seen that. I faced a similar situation at one company. They failed on a custom, Not-Invented-Here-syndrome-derived implementation. My technically correct skepticism was criticized for decreasing the morale of the team working on it.
But I have wondered why one of these companies with billions of dollars to burn hasn't tried to create something new as a strategic initiative. Yes, there wouldn't be any ROI for years, and yes, the first several products on the platform would probably be better off on something more traditional.
But the long term value could potentially be astronomical.
Just another case of quarterly-report-driven decision making, I suppose. Sigh.
See Google's Fuchsia: https://en.wikipedia.org/wiki/Fuchsia_(operating_system)
> But the long term value could potentially be astronomical.
Such as what?
If you're competing against nothing, then I see it: it opens up a wide variety of product possibilities. But linux exists. Why not spend 1/1000th the time to adapt linux?
That's not even counting the rather substantial risk that your new OS will never approach the capabilities of linux, and may very well never become generally usable at all.
Option A: spend years and millions on a project that may never be as good as existing solutions, diverting attention and resources from actual products, or...
Option B: work on products now, using an existing, high-quality, extensible, adaptable OS, with very friendly licensing terms, for which numerous experts exist, with a proven track record of maintenance, a working plan for sustainability, a large & healthy developer community exists, etc.
It's hard to imagine how it wouldn't be a complete waste of time.
Google has Fuchsia; it's been in development for about 10 years and was recently a target for layoffs.
They have; Taligent comes to mind. You may not have heard of that -- or more likely, you have but just forgot about it -- but it's a good object lesson (no pun intended) in why a successful new OS is hard to just conjure into existence. There has to be a crying, desperate need for it, not just a vague sense that This Time We'll Get It Right.
You could probably cite OS/2 Warp as a less-obscure example of the same phenomenon.
In my non-expert mind, an OS for "foveated rendering" would be similar to what many cameras prioritize, and would more likely resemble a "realtime OS" of some sort. OTOH, Apple's goggles use the XNU kernel, so maybe a microkernel would be sufficiently realtime, similar to the QNX often used for automotive applications [4].
0. https://web.archive.org/web/20190214134247/http://www.canon....
2. https://chdk.fandom.com/wiki/For_Developers
Realistically there's no reason Linux wouldn't be fine on its own for AR and in fact I'm typing this on Linux on some AR glasses right now.
That's the scale at which creating an OS gives you a clear advantage over licensing or working with an open source OS.
At every scale below that, it's for knowledge, growth, research, or fun.
Roll call!
We also need to be clear what an OS is. Is it "darwin" or "macOS" - they have different scopes.
Things I'd want from an OS for an XR device.
1. Fast boot. I don't want to have to wait 2-3-4-5 minutes to reboot for those times I need to reboot.
I feel like Nintendo figured this out? It updates the OS in the background somehow and reboot is nearly instant.
2. Zero jank. I'm on XR; if the OS janks in any way, people will get sick AND perceive the product as sucking. At least I do. iOS is smooth, Android is jank AF.
Do any of the existing OSes provide this? Sure, maybe take an existing OS and modify it, assuming you can.
Android suffers from being Java at the core, with all the baggage that brings with it.
Another point I would add in support of that meme comment is Google's recent rug-pull on Android: no more sideloading apps from unsigned developers starting this autumn, after over a decade of conquering the market with their "go with us, we're the open alternative to iOS" marketing.
The conclusion: just never EVER trust big-tech/VC/PE companies, even when they do nice things. They're 100% playing the long game, getting buddy-buddy with you until they've smothered the competition with their war chest, and then the inevitable rug-pull comes once you're tied to their ecosystem and have nowhere else to go.
Avoid these scumbags, go FOSS from the start, go TempleOS. /s but not really
I don’t know enough about its history to get the joke.
The point of R&D is the time horizon is long, and the uncertainty is high. Making JS slop that then has to be constantly babysat is opex, not capex.
If you're at all competent, go work somewhere else
Whatever you think of Meta core products, they pay a ton of people to work on various open source projects, do R&D on things which are only tangentially related to social media like VR or data center tech.
There are worse ways to get a paycheck to do what you're interested in.
Problem is systemic.
What next, go work for TV stations and sabotage them?
Go work for McDonald's and make it inefficient?
Sabotage manufacturers of combustion-cars?
dont shit talk my goat like that
I’ve seen this firsthand. These giant tech companies try to just jump into a massive new project thinking that because they have built such an impressive website and have so much experience at scale they should just be able to handle building a new OS.
In my case it wasn’t even a new OS it was just building around an existing platform and even that was massively problematic.
The companies that build them from scratch have had it as one of their core competencies pretty much from the start.
I’m unsurprised meta had issues like this.
Yes.
The company is a black hole of wasted talent.
You can't do it without going through their fucking app, which asks for every permission under the sun, including GPS positioning for some reason. After finally getting the app working and pairing it with my headset, I could finally see that the controller was just dead and there was nothing to do.
If it uses Bluetooth, which it might for the controller, then the permission for Bluetooth scanning on Android is fine location: the same permission as for using GPS. That might be the same permission you need for wifi stuff too, because products and services exist that turn observed Bluetooth and wifi MAC addresses into a fine location.
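For what it's worth, this is visible in the Android manifest permission model. A minimal sketch, assuming the app pairs with the headset over Bluetooth (the package name is made up): on Android 11 and below, any Bluetooth scan required the location permission, which is exactly why pairing apps end up asking for it; Android 12 added dedicated Bluetooth permissions, and an app that declares it never derives location from scans can skip the location prompt entirely.

```xml
<!-- Hypothetical manifest for a headset-pairing app. -->
<manifest xmlns:android="http://schemas.android.com/apk/res/android"
    package="com.example.headsetpairing">

    <!-- Android 11 (API 30) and below: Bluetooth scanning requires
         fine location, since scan results can reveal your position. -->
    <uses-permission android:name="android.permission.ACCESS_FINE_LOCATION"
        android:maxSdkVersion="30" />
    <uses-permission android:name="android.permission.BLUETOOTH"
        android:maxSdkVersion="30" />
    <uses-permission android:name="android.permission.BLUETOOTH_ADMIN"
        android:maxSdkVersion="30" />

    <!-- Android 12 (API 31) and up: dedicated runtime permissions.
         Declaring neverForLocation drops the location requirement. -->
    <uses-permission android:name="android.permission.BLUETOOTH_SCAN"
        android:usesPermissionFlags="neverForLocation" />
    <uses-permission android:name="android.permission.BLUETOOTH_CONNECT" />
</manifest>
```

So an app that still asks for GPS on a modern phone is either supporting old Android versions, or actually wants the location data.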
But who knows what they do with the GPS signal after they ask for it?
This is madness. The safe space culture has really gone too far.
If a professional can't give critical feedback in a professional setting without being rude or belittling others, then they need to improve their communication skills.
if you're apple, it does make sense to do stuff from scratch. i think in a way, software guys wind up building their own prisons. an api is created to solve problem X given world Y, but world Y+1 has a different set of problems - problems that may no longer be adequately addressed given the api invented for X.
people talk about "rewrite everything in rust" - I say, why stop there? let's go down to the metal. make every byte, every instruction, every syscall a commodity. imagine if we could go all the way back to bare metal programming, simply by virtue of the LLM auto-coding the bootloader, scheduler, process manager, all in-situ.
the software world is full of circularities like that. we went from Mainframe -> local -> mainframe, why not baremetal -> hosted -> baremetal?