This time, their "multiple-moonshot effort" is paying off big-time because they're doing it incrementally. Kudos!
[1] https://www.joelonsoftware.com/2000/04/06/things-you-should-...
[2] https://www.joelonsoftware.com/2000/11/20/netscape-goes-bonk...
One is that, from a technical point of view, rewriting from scratch will give you a worse result than incremental change (the "absolutely no reason to believe that you are going to do a better job than you did the first time" bit).
The second is that, independent of the technical merits, rewriting from scratch will be a bad commercial decision (the "throwing away your market leadership" bit).
We now know much more about how this turned out for his chosen example, and I think it's clear he was entirely wrong about the first claim (which he spends most of his time discussing). Gecko was a great technical success, and the cross-platform XUL stuff he complains about turned out to have many advantages (supporting plenty of innovation in addons which I don't think we'd have seen otherwise).
It's less clear whether he's right about the second: certainly Netscape did cede the field to IE for several years, but maybe that would have happened anyway: Netscape 4 wasn't much of a platform to build on. I think mozilla.org considered as a business has done better than most people would have expected in 2000.
One of the big issues with software engineering advice is that it is really hard to find apples-to-apples comparisons for outcomes.
The promise of a rewrite institutionalizes accumulating technical debt. When it comes time to do the rewrite, everyone starts out on the wrong foot. The big rewrite is a lot like New Year's resolutions: they don't stick, they cost money and time, they create negative reinforcement, and sometimes people get hurt.
Refactoring institutionalizes continuous improvement and thinking about code quality, and yet discourages perfectionism, because you can always fix it later when you have more info. My theory is that people good at refactoring can handle a rewrite well. Maybe they are the only people who can handle a rewrite well. But if you can refactor like that, you don't need a rewrite (or rather, you've already done it and nobody noticed).
That is, some of my projects (personal & professional) have benefited enormously from designating the old version as a “prototype”, creating a “new” project with slightly different architectural decisions based on the deficiencies of the old version, copying most of the code from the old version to the new, and filling in the details.
https://github.com/servo/servo/wiki/Roadmap/_compare/47f490b...
As someone who was involved with Servo during that time period, I was disappointed at the time that it was quite obviously not going to happen. However, looking at what has happened since then, the change of focus towards Firefox integration was definitely a good move.
Chrome, on the other hand, is far from stagnant, and Google is not letting it stagnate.
Never mind that Google is using their marketing muscle to push Chrome every chance they get.
Just try downloading some shareware on Windows, and you are bound to get Chrome along for the ride. Hell, even Adobe bundles Chrome with its Flash installer, even though Google is using Chrome to effectively kill Flash...
One of the origins mentioned is that Graydon thought that rusts (a kind of fungus) were pretty cool (they are! they have a super complex life cycle, and it's pretty interesting).
Another is that Rust actually contains no new concepts: everything in Rust is a very well-established concept from earlier PL research; Rust just packages them nicely.
This is the most impressive and useful aspect of all the recent work in Firefox. Rust is an amazing language; it really brings something new to the table with its borrow checker.
The fact that Rust was created as part of a greater effort to build a web browser is amazing.
To put it another way, I find it hard to justify developing Rust just for a web browser. But if you consider it from the perspective of a foundation developing tools for the developer community as a whole, it makes much more sense.
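For anyone unfamiliar with what the borrow checker actually enforces, here's a minimal sketch (toy variable names, not from any real codebase): at compile time, a value may have many shared read-only borrows or one mutable borrow, but never both at once.

```rust
// Illustrative only: the borrow checker's "many readers XOR one writer" rule.
fn main() {
    let mut scores = vec![10, 20, 30];

    // Any number of shared (read-only) borrows may coexist...
    let first = &scores[0];
    let last = &scores[scores.len() - 1];
    assert_eq!(first + last, 40);
    // ...but the compiler rejects mutation while those borrows are live:
    // scores.push(40); // ERROR: cannot borrow `scores` as mutable

    // Once the shared borrows are no longer used, mutation is allowed again.
    scores.push(40);
    assert_eq!(scores.len(), 4);
}
```

The point of the comment stands: region-based and linear-type systems predate Rust by decades; the novelty is making them ergonomic enough for a production browser engine.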
These projects were valuable to Apple and Microsoft for a variety of reasons:
* promoting their IDE: XCode builds faster, and has better error messages. You can use any .NET language with Visual Studio in the same project.
* promoting their platform: Objective-C and Cocoa let you create fast GUI apps in a standard way, and we don't need GCC anymore. .NET provides a useful feature-complete standard library over a variety of languages.
To contrast, Rust was made with the intention of simply making a better systems language. Rust doesn't have a standard library or environment tied to a specific OS or proprietary dependencies. Rust itself doesn't promote Windows, OS X, ASP.NET, Cocoa, iOS, Android, etc. That is what makes it seem much less likely that Rust would be created by a corporation.
"Just for"? Our browsers are already very complex.
The browser is responsible for sandboxing and, in effect, taming the wild web... and the web is not looking to become less wild :)
In the future, browsers will have to prioritize CPU time and battery power between tabs seeking to mine Bitcoin and crazy ad schemes.
In terms of security, browser bugs scare me a lot more than some privilege-escalation bug in the kernel, because browser exploits can quickly be deployed widely.
Obviously, commercial organisations have a long track record of developing and supporting programming languages.
This was a pretty smart move by the Rust team, and it gave them a rock-solid platform to go cross-platform. In the words of Newton, "If I have seen further it is by standing on the shoulders of giants". Kudos, team Rust, and let's hope they eat C++'s lunch soon.
Apple "just uses LLVM" in the same way Apple "just uses Webkit".
Apple hired Chris Lattner back in 2005, just as he completed his PhD. At the time, LLVM could just barely compile Qt4[0] (the result didn't quite actually work yet) and was still hosted on CVS.
Lattner had to make LLVM production-ready (rather than a research project), add a bunch of major features, start a C/C++/ObjC frontend (Clang) and create a team of LLVM developers in the company.
Apple shipped their first bits of LLVM-based tooling in Leopard in 2007.
[0] https://web.archive.org/web/20111004073001/http://lists.trol...
Calling it "Apple's" threw me off too, but it's not entirely misleading because without Apple, it might not have become a production-ready compiler. At the least, I would say Apple did more than "just use it".
It's equally weird to describe Android as "Google's".
For servo we've had long periods of time where no two employees are in the same office (it helps that most folks are remote).
In both teams we found it impossible to pick meeting times that work for everyone so we instead scheduled two sessions for the recurring meetings, and you can show up for one of them (the team lead / etc are usually at both so they're scheduled such that it is possible for them to attend both). Though for Servo we eventually stopped the meeting since we're pretty good at async communication through all the other channels and the meeting wasn't very useful anymore.
Also, to clarify, while that was the initial team, shortly after getting Wikipedia working the team grew with folks from the Servo and Gecko teams. Eventually we had folks living in the US, Paris, Spain, Australia, Japan, Taiwan, and Turkey working on it. As well as volunteer contributors from all over the place.
That said, why does it perform slower than Chrome on most benchmarks? Is it due to the Chrome team doing much more grunt work on parallelism and asynchronous I/O? Or are there features in the current Firefox build that still call the original engine?
Does Rust have a runtime penalty as Golang does?
Which benchmarks are you talking about? It depends on what those benchmarks measure.
For example, a lot of the Quantum work was on user-perceived UI latency; unless the benchmark is measuring that (and I imagine that's a hard thing to measure), it's not going to show up.
> Does Rust have a runtime penalty as Golang does?
Rust has the same amount of runtime as C does: very very little. https://github.com/rust-lang/rust/blob/master/src/libstd/rt....
sys::stack_overflow::init();
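As a rough illustration of what "the same amount of runtime as C" means in practice (a toy sketch, not Firefox or stdlib code): Rust frees memory deterministically when values go out of scope, via `Drop`, rather than via a collector thread the way Go does. The `Buffer` type and drop counter below are invented for the example.

```rust
use std::cell::Cell;

thread_local! {
    // Counts destructor runs so we can observe *when* memory management happens.
    static DROPS: Cell<u32> = Cell::new(0);
}

struct Buffer(Vec<u8>);

impl Drop for Buffer {
    fn drop(&mut self) {
        // Runs deterministically at scope exit; no collector thread is involved.
        DROPS.with(|d| d.set(d.get() + 1));
    }
}

fn run() -> (u32, u32) {
    let before;
    {
        let _a = Buffer(vec![0; 1024]);
        let _b = Buffer(vec![0; 1024]);
        before = DROPS.with(|d| d.get()); // nothing freed yet
    } // _b then _a are dropped right here, at the closing brace
    (before, DROPS.with(|d| d.get()))
}

fn main() {
    assert_eq!(run(), (0, 2));
}
```

Since cleanup is compiled into the code at fixed points, there are no GC pauses to account for in latency-sensitive paths like a browser's UI thread.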
I probably don't know what this function does, because my initial guess is not very comforting. :)

I'm embarrassed to say that I just blindly trusted a couple of websites' claims that they ran benchmarks, without verifying they were even industry-standard. The articles were on Mashable and AppleInsider.
Mashable tested webpage load times, which is only one dimension of many to optimize for. AppleInsider looked at speed, CPU usage, and memory consumption.
And since we are here, the prompt/dialog windows in FF are still not native-looking either. These are my two major complaints :)
This is more generally known as Constant Flawless Vigilance: https://capability.party/memes/2017/09/11/constant-flawless-...
Is parallel layout something which can only be done through a full rewrite, hence with Servo, and bringing Servo up to full web compatibility, or can this be handled through the Project Quantum process, of hiving off components from Servo into Firefox?
Now, once stylo and webrender are in play, ideally layout can just fit in between. All the interfacing already exists from Servo.
However, there are a lot more things in Firefox that talk to layout. This would need work, more work than Stylo.
But this isn't the major issue. The major issue is that Servo's layout has a lot of missing pieces, a lot more than was the case with its style system. It's hard to incrementally replace layout the way webrender did with rendering (fall back when you can't render, render to texture, include it).
So it's doable, but a lot more work.
That said, we are absolutely going to explore opportunities for more Rust/Servo in layout, so we just need to find the right strategy. One incremental step I'm interested in exploring is to rewrite Gecko's frame constructor in Rust using the servo model, but have it continue to generate frames in the Gecko C++ format. This would give us rayon-driven parallelism in frame construction (which is a big performance bottleneck), while being a lot more tractable than swapping out all of layout at once. Another thing to explore would be borrowing Servo's tech for certain subtypes of layout (i.e. block reflow) and shipping it incrementally.
Each of these may or may not share code with Servo proper, depending on the tradeoffs. But Servo has a lot of neat innovation in that part of the pipeline (like bottom-up frame construction and parallel reflow) that I'm very interested in integrating into Firefox somehow.
We're going to meet in Austin in a few weeks and discuss this. Stay tuned!
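To make the "rayon-driven parallelism" idea concrete, here's a toy sketch of bottom-up parallel processing of sibling subtrees. Everything here is invented for illustration: the `Node` type, the sum-of-widths "layout", and the use of std's `thread::scope` standing in for rayon's work-stealing `join`/`par_iter`. Real frame construction and reflow are vastly more involved.

```rust
use std::thread;

// Hypothetical DOM-ish node; the real frame constructor works on
// Gecko's content tree, not this toy type.
struct Node {
    width: u32,
    children: Vec<Node>,
}

// Bottom-up: process each child subtree in parallel, then combine the
// results in the parent (here, "layout" just sums widths).
fn layout(node: &Node) -> u32 {
    let child_total: u32 = thread::scope(|s| {
        let handles: Vec<_> = node
            .children
            .iter()
            .map(|child| s.spawn(move || layout(child)))
            .collect();
        handles.into_iter().map(|h| h.join().unwrap()).sum()
    });
    node.width + child_total
}

fn main() {
    let tree = Node {
        width: 10,
        children: vec![
            Node { width: 1, children: vec![] },
            Node { width: 2, children: vec![Node { width: 4, children: vec![] }] },
        ],
    };
    assert_eq!(layout(&tree), 17);
}
```

The key property is that sibling subtrees have no data dependencies on each other, so they can be farmed out to a thread pool; rayon's scheduler makes this cheap enough to do at fine granularity, which is what the parallel-reflow work exploits.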
I updated my developer version yesterday and it was as if Firefox - already ludicrously fast compared to before - turned on the turbo booster.
Obviously, I'm not complaining ;)
Of course it could be any incremental improvement instead of some big named feature.
Does anyone know if this is being worked on? Should I submit a bug report?
The medium-term effort to revamp the graphics stack is WebRender. Note that, like Stylo, WebRender is not just meant to achieve parity with other browsers. It's a different architecture entirely that is more similar to what games do than what current browsers do.
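As a hedged illustration of that game-engine-style architecture: instead of repainting by re-walking the DOM, the engine builds a retained display list of primitives and replays it each frame on the GPU. Everything below (the `DisplayItem` variants, the `submit` method) is invented for the sketch; WebRender's actual display items and batching are far richer.

```rust
// Illustrative "retained display list" shape, not WebRender's real types.
#[derive(Debug, Clone, Copy)]
enum DisplayItem {
    Rect { x: i32, y: i32, w: i32, h: i32, color: u32 },
    Text { x: i32, y: i32, glyph_count: u32 },
}

struct DisplayList {
    items: Vec<DisplayItem>,
}

impl DisplayList {
    fn new() -> Self {
        DisplayList { items: Vec::new() }
    }

    fn push(&mut self, item: DisplayItem) {
        self.items.push(item);
    }

    // A real backend would translate items into GPU draw calls each frame;
    // here we just count how many primitives a frame would submit.
    fn submit(&self) -> usize {
        self.items.len()
    }
}

fn main() {
    let mut list = DisplayList::new();
    list.push(DisplayItem::Rect { x: 0, y: 0, w: 800, h: 600, color: 0xFFFFFF });
    list.push(DisplayItem::Text { x: 10, y: 20, glyph_count: 12 });
    // The same list can be replayed every frame without touching layout.
    assert_eq!(list.submit(), 2);
    assert_eq!(list.submit(), 2);
}
```

Because the list is data rather than a traversal, scrolling and animation can redraw at 60fps without invoking style or layout at all, which is exactly how game renderers treat their scene descriptions.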
Are those things being measured at all in FF? It may be that the tradeoff is worth it (and I have no doubt it can be done better than the median compositing WM on linux), but it would be good to have that data.
On the other hand, it may be moot if Wayland does end up taking over from X.
https://play.google.com/store/apps/details?id=org.mozilla.fe...
Ideal register allocation is NP-complete, so a compiler can't feasibly get it optimal every single time.
I'm not sure how good in practice modern compilers are at this, but would be curious to know if there's some asm writers who can actually consistently outperform them.
It's also worth pointing out that "optimal" in theory doesn't necessarily correspond to optimal in practice. The hard problem of register allocation isn't coloring the interference graph, it's deciding how best to spill (or split live ranges, or insert copies, or rematerialize, or ...) when there aren't enough registers, which is most of the time. Plus, real-world architectures also have issues like requiring specific physical registers for certain instructions and subregister aliasing, which are hard to model.
In practice, the most important criterion tends to be to avoid spilling inside loops. This means that rather simple heuristics are generally sufficient to optimally achieve that criterion, and in those cases, excessive spilling outside the loops isn't really going to show up in performance numbers. Thus heuristics are close enough to optimal that it's not worth the compiler time or maintenance to achieve optimality.
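A toy version of such a heuristic (not the algorithm any real compiler ships): greedily color each virtual register with the lowest physical register not already used by an interfering neighbor, and mark it spilled (`None`) when none is free. The register names and interference pairs are invented for the example.

```rust
use std::collections::HashMap;

// Greedy coloring over an interference graph: each virtual register gets
// the lowest physical register not used by an already-colored neighbor,
// or spills (None) if every physical register is taken. Real allocators
// (linear scan, coloring with live-range splitting) are far more involved.
fn allocate(
    vregs: &[&'static str],
    interferes: &[(&'static str, &'static str)],
    num_phys: usize,
) -> HashMap<&'static str, Option<usize>> {
    let mut assignment: HashMap<&'static str, Option<usize>> = HashMap::new();
    for &v in vregs {
        // Collect physical registers already claimed by interfering neighbors.
        let taken: Vec<usize> = interferes
            .iter()
            .filter_map(|&(a, b)| {
                let other = if a == v { Some(b) } else if b == v { Some(a) } else { None };
                other.and_then(|o| assignment.get(o).copied().flatten())
            })
            .collect();
        let reg = (0..num_phys).find(|r| !taken.contains(r));
        assignment.insert(v, reg); // None means "spill to the stack"
    }
    assignment
}

fn main() {
    // "a" interferes with "b" and "c"; "b" and "c" don't interfere,
    // so they can share a physical register.
    let colors = allocate(&["a", "b", "c"], &[("a", "b"), ("a", "c")], 2);
    assert_eq!(colors["a"], Some(0));
    assert_eq!(colors["b"], Some(1));
    assert_eq!(colors["c"], Some(1));
}
```

A production allocator would additionally order the virtual registers by loop depth and spill cost, so that values live inside hot loops are the last to be spilled, which is exactly the "avoid spilling inside loops" criterion above.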
No, it's only SSA form that has optimal register allocation in polynomial time; otherwise someone would've proved that P=NP (since the general problem is proven NP-hard). :)
That said, finding minimal SSA from arbitrary SSA is NP-Hard.
> Modern GCC uses SSA form and I think LLVM might too.
LLVM has always used SSA (this is relatively unsurprising given its origins in research and so much research of the period being on SSA).
This needs to be emphasized more
What's also interesting for me to realise, though, is that a lot of this was happening at the same time as Mozilla was largely focused on Firefox OS, and receiving a lot of flak for it.
It's a shame that Firefox OS failed, but it was clear that they had to try many different things to remain relevant, and it's excellent to see that one of those is very likely to pay off. Even though Rust might've been dismissed for the same reasons Firefox OS was.
And then there's the disappearing dev tools - that's fun.
EDIT: I hope that it is something weird with my systems. But I fear that the push to get this out might have been a little hasty.
EDIT 2: Apart from the crashes, the new FF has been nice. I've been able to largely stop using Chromium for dev work, so not all is bad.
It's definitely something weird to do with your systems, meaning it's a real bug that you are experiencing, and I am not.
So please share crash reports, and file bug reports. Different hardware/software quirks may reveal bugs in Firefox/Linux/drivers/window managers/anything. By submitting a bug report for Firefox, you may be able to help find a video driver bug, etc.
Yet more propaganda. I’ve been part of the cracking and demo scene since my early childhood. If you didn’t code in assembler you might as well not have taken part in it at all, because small fast code was everything. None of us ever had an issue with register allocation, nor do we face such issues today. Not 30+ years ago, not now.
It would be great to create the HTML/CSS/JavaScript stack from scratch, or at least make a non-backwards-compatible version that is simpler and can perform better. HTML5 showed us that can work.
The whole Mozilla strategy of oxidizing Firefox piece by piece is actually very professional. Big backwards-incompatible transitions in technology almost always fail.
FWIW this is usually due to folks doing performance work in only one browser or not really testing well and slapping that label on after the fact.
Or stuff like Hangouts and Allo where they use nonstandard features.
The major things Firefox doesn't support that Chrome does are U2F (it's supported now but flipped off; it will be flipped on soon, I think) and web components (support should be available next year, I guess; this kinda stalled because of lots of spec churn and Google implementing an earlier, incompatible spec or something).
Edit: I don't intend for this to sound like I'm complaining, just interested.
https://wiki.mozilla.org/Oxidation#Rust_components_in_Firefo...
Completed:

* MP4 metadata parser (Firefox 48)
* Replace uconv with encoding-rs (Firefox 56)
* U2F HID backend (Firefox 57)

In progress:

* URL parser
* WebM demuxer
* WebRender (from Servo)
* Audio remoting for Linux
* SDP parsing in WebRTC (aiming for Firefox 59)
* Linebreaking with xi-unicode
* Optimizing WebVM compiler backend: cretonne

I would find frequent cases where my system would stall for 10-20s (I couldn't toggle Caps Lock, and the pointer stopped moving). I almost always have just Chrome and gnome-terminal open (Ubuntu 16.04). I had attributed it to either a hardware or BIOS/firmware defect.
Now, after switching to Firefox I have gone a week without seeing those stalls.
YMMV -- I never bothered to investigate. It could be something as simple as slightly lower memory consumption from FF, or still a hardware defect that FF doesn't happen to trigger.
If that ever happens again, you can run "free" on the terminal to see whether this is the case.
That said, even if the poster is correct, the browser isn't necessarily doing anything wrong. AFAICT, nothing stops JS on a page from allocating that much memory and "leaking" it (e.g., holding on to JS objects, maybe accidentally in a giant list, without making use of them). It isn't the browser's fault if the page's JS is actually "using" that much RAM.