In particular, it seems that "site" isn't precisely defined. It seems to be based on domains, but backed by a human-curated list of "sites": <https://github.com/publicsuffix/list>.
So it's different from Chrome's "every webpage gets a separate process".
That list is the reason why same-site checks behave differently across two subdomains like [subdomain].herokuapp.com (considered different sites) compared with two subdomains of the form [subdomain].[myowndomain.ext] (considered the same site)[1]: herokuapp.com is on that list.
[1] unless you added your own domain to the public suffix list.
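For illustration, here is a toy sketch of how a PSL lookup turns a hostname into a "site" (the registrable domain: the longest matching public suffix plus one more label). The two-entry suffix set is a stand-in for the real list, and real parsers also handle wildcard and exception rules, so treat this only as a sketch of the idea.

```python
# Toy Public Suffix List lookup: the "site" for a hostname is the longest
# matching public suffix plus one more label. SUFFIXES is a tiny sample;
# the real list has thousands of entries plus wildcard/exception rules.
SUFFIXES = {"com", "herokuapp.com"}

def registrable_domain(host: str) -> str:
    labels = host.split(".")
    # Scan from the longest candidate suffix down to the shortest.
    for i in range(len(labels)):
        candidate = ".".join(labels[i:])
        if candidate in SUFFIXES:
            # Registrable domain = matched suffix plus one more label.
            return ".".join(labels[max(i - 1, 0):])
    return host

# Two Heroku apps are different sites, because herokuapp.com is on the list:
assert registrable_domain("alice.herokuapp.com") == "alice.herokuapp.com"
assert registrable_domain("bob.herokuapp.com") == "bob.herokuapp.com"

# Two subdomains of an ordinary .com domain are the same site:
assert registrable_domain("a.myowndomain.com") == "myowndomain.com"
assert registrable_domain("b.myowndomain.com") == "myowndomain.com"
```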
It's a nasty hack, the successor to even worse proprietary hacks but still something we ought to strive to get rid of.
I can see exactly why it was the choice here, and I don't blame Mozilla for choosing it, but we're not going to make things better if nobody gets out and pushes.
That said, since we're stuck with the PSL for the foreseeable future, I sure would like it if Mozilla shipped a way for extensions to just consult Firefox's built-in copy of the PSL, rather than needing to either build yet another awful hack or ship the entire PSL again in an extension.
[1] https://hacks.mozilla.org/2021/05/introducing-firefox-new-si...
This is a common misconception. Chrome doesn't technically do process-per-tab.
Chrome's model can most succinctly be described as process-per-domain, although even then, there are rare instances where two tabs opened on different domains will actually share the same process.
They did advertise Chrome as process-per-tab: https://www.google.com/googlebooks/chrome/big_04.html; pages 6 and 7 definitely agree. (I haven't read all the way through it again now, but I should also note that the process in the very centre of https://www.google.com/googlebooks/chrome/big_38.html appears to have two tabs under it, which supports it not necessarily being process-per-tab.)
But either it never actually was, or they abandoned it as impractical even before release (https://stackoverflow.com/q/42804 has answers agreeing it isn’t process-per-tab on the day after the first release). So either Google lied, or they released the comic including a glaring and rather significant factual error (even if it had been true when Scott first drew the pages).
It’s frustrating when parties pull these shenanigans, making big claims around things like security and performance predicated on points that are simply not true, but never retracting those points properly or repudiating them, so that the misconception persists.
It's ridiculous to think that a budget laptop with 4 GB of RAM suddenly isn't enough to browse the Web comfortably. All thanks to Meltdown and Spectre.
I like the idea of containers, and will probably revisit periodically to see if whatever was fubar on my account is resolved (there's little if any logging, so it's hard to really dig in).
I'm using Temporary Containers, but if I visit `somedomain.com`, close it, and come back later, I get a new temporary container.
I do use a lot of tabs, so I fear I'm going to find myself facing the same problem I faced with Chrome: a site misbehaves and locks things up, crap, which process do I kill? A way of tracking which tab maps to which process would be nice, so the next time I trip over a badly-coded page, I don't have to kill everything just to get my browser to respond again. Lazyweb question to y'all: is there a feature in Chrome or Firefox that can do this (mapping tab/page -> process), or have I just stumbled upon a side-project idea?
Scrolling down I found a section starting with
> web (pid 1036080)
> Explicit Allocations
> 108.27 MB (100.0%) -- explicit
> ├───45.04 MB (41.60%) -- window-objects/top(https://www.that-random.site/, id=175)
I try to kill that process now, but I post this message first in case I kill the whole browser.
Result: the tab crashed, the browser survived.
> Gah. Your tab just crashed.
> We can help!
> Choose Restore This Tab to reload the page.
Restore did work.
I also like to occasionally sort by memory usage and kill the biggest Chrome processes. Chrome is nice in that it will show you when a process crashed, so what I do is kill the biggest memory hog, and then see what tab crashed. Then I do it again a few times.
This at least tells me which processes use the most RAM over time and should be recycled (Spoiler alert, it's always GMail and then GCal.)
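The "sort by memory, then kill the biggest" workflow can be approximated outside the browser too. A rough Linux-only sketch with a hypothetical helper that reads /proc directly (a library like psutil would be the portable choice):

```python
# List processes whose name contains a given substring, sorted by resident
# memory (VmRSS), by scanning /proc on Linux. Useful for spotting which
# browser content process is the biggest memory hog before killing it.
import os
import re

def processes_by_rss(name_substr: str):
    rows = []
    for pid in filter(str.isdigit, os.listdir("/proc")):
        try:
            with open(f"/proc/{pid}/comm") as f:
                comm = f.read().strip()
            if name_substr not in comm:
                continue
            with open(f"/proc/{pid}/status") as f:
                m = re.search(r"VmRSS:\s+(\d+) kB", f.read())
            rss_kb = int(m.group(1)) if m else 0  # kernel threads have no VmRSS
            rows.append((rss_kb, int(pid), comm))
        except (FileNotFoundError, PermissionError):
            continue  # process exited mid-scan, or /proc is restricted
    return sorted(rows, reverse=True)

# Show the five biggest Firefox processes (prints nothing if none running).
for rss_kb, pid, comm in processes_by_rss("firefox")[:5]:
    print(f"{pid:>7} {rss_kb / 1024:8.1f} MB  {comm}")
```

From there it's `kill <pid>` on the top entry and seeing which tab crashes, as described above.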
I suspected gmail was the heaviest thing I regularly had open, but it's good to see the stats.
It lists tabs by process, and includes the PID (on Linux; no idea about other platforms). You can also directly kill tabs and processes from there.
https://addons.mozilla.org/en-US/firefox/addon/onetab/
to offload idle tabs. The downside is that it doesn't sync with Firefox, so anything I may need to open in another Firefox instance, I just bookmark.
The grouping is nice because it gives me a reference to a time I was doing some particular research/reading. And I find that there really isn't too much that I need to cross over: work stuff stays on my work computer, personal to personal, etc.
If Firefox is still somehow responsive, open about:performance and identify the CPU-hungry tab(s), then close them.
Why/how can this happen? That sounds like a bad failure of the browser.
They really allow you to scroll through the post quickly and see if it is interesting to read in detail.
[0] https://hacks.mozilla.org/2019/08/webassembly-interface-type...
Thanks for the browser work as well!
I'm not sure what gave me the impression, but in my mind "process-per-tab" and "Electrolysis" were linked; that was a misconception:
>In great detail, (as of April 2021) Firefox’s parent process launches a fixed number of processes: eight web content processes, up to two additional semi-privileged web content processes, and four utility processes for web extensions, GPU operations, networking, and media decoding.
>While separating content into currently eight web content processes already provides a solid foundation, it does not meet the security standards of Mozilla because it allows two completely different sites to end up in the same operating system process and, therefore, share process memory. To counter this, we are targeting a Site Isolation architecture that loads every single site into its own process.
Was this goal reached in the end, or perhaps even surpassed, or missed after all?
I guess this also makes adblockers even more valuable in terms of saving memory, since each blocked third party-iframe that doesn't load is potentially one additional process that doesn't have to be created…
It really depends on what web sites you have open. If you have a single tab with an ad-laden news site, the overhead will be high, but if you have a bunch of Google Docs tabs open, there's no overhead.
On the other hand, I guess you're right that people usually don't have that many unique sites open, so the original design value of 100 separate origins was probably purposely chosen to be on the large side; thinking about it, not having that many unique sites open usually fits my usage patterns, too. The unknown factor I can't really judge is how many iframes with potentially separate origins the pages I normally visit use.
Looking at it positively, one additional potential benefit could be that I have a few long-lived tabs that I always keep open – under the current model, this means that the content processes associated with those tabs never die and possibly slowly accumulate cruft and memory fragmentation from additional tabs that happen to be loaded in them (and later closed again).
Under the new model on the other hand, closing all tabs associated with a domain should be enough to get the associated content process to exit and free up really all memory used by those now closed tabs.
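As a back-of-the-envelope illustration of the tabs-vs-sites distinction: assuming one content process per site, and using a naive "last two labels" site key (ignoring the PSL for brevity), many tabs can still collapse into few processes:

```python
# Rough illustration: under site isolation, tabs on the same site share a
# content process, so process count tracks distinct sites, not tabs.
# naive_site() uses the last two host labels as the site key; a real
# browser consults the Public Suffix List instead.
from urllib.parse import urlsplit

def naive_site(url: str) -> str:
    host = urlsplit(url).hostname or ""
    return ".".join(host.split(".")[-2:])

tabs = [
    "https://docs.google.com/document/d/1",
    "https://docs.google.com/document/d/2",
    "https://mail.google.com/mail/u/0/",
    "https://news.example.org/article",
]
sites = {naive_site(u) for u in tabs}
print(len(tabs), "tabs ->", len(sites), "content processes")
# 4 tabs -> 2 content processes
```

Closing the one news.example.org tab would let its whole process (and any fragmentation it accumulated) go away, which is the cleanup benefit described above.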
Can you explain this in more detail?
Putting sites in their own processes mitigates Spectre-like attacks, but it doesn't do anything for higher-level problems like third-party cookies.
I'd love to drop https://addons.mozilla.org/en-US/firefox/addon/temporary-con..., which automatically assigns a temporary container to any new tab and plays nicely with https://addons.mozilla.org/en-US/firefox/addon/multi-account..., but it uses too much memory (at least in my experience).
Using temporary containers, multi-account containers, and site isolation, along with a number of other privacy/security addons such as uMatrix, LocalCDN, and many others, I have not noticed any slowdown.
This is on an older Broadwell i7 with 32GB of RAM.
So they do different things, and interaction between the features appears to work without issues.
One of the primary reasons I use Firefox is that it uses significantly less memory than Chrome, and the entire OS seems to function better as a result (I've seen the most stark difference on macOS). I had been under the impression that most of the reason Chrome uses so much memory is its multiprocess model.
I understand that maybe we need to give that up for better security, but it would be nice to know if that's indeed the tradeoff being made here.
Do you need more data? If so, what's the best way for me to add to it? Would that be installing Nightly, setting fission.autostart to true, and enabling some telemetry?
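For reference, that pref can also be set persistently from a user.js file in the profile directory; a minimal fragment, assuming the pref name mentioned above:

```javascript
// user.js sketch: opt a Nightly profile into Fission.
// Equivalent to flipping fission.autostart to true in about:config.
user_pref("fission.autostart", true);
```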
Looking at the navbar, the horizontal and vertical alignment is all over the place, the search input has no placeholder or label, background colors are inconsistent, and paddings are just bizarre.
Not to mention the fact the extension ecosystem is still crippled. There's still no user-agent switcher, meaning there are sites I literally cannot access from my phone without installing an old version of Firefox Mobile or going back to using Dolphin Browser. "View Desktop Version" perversely still tells sites you're on a mobile browser.
It would be sad if one day Chromium removed Manifest v2 and there was no alternative.
https://www.chromium.org/Home/chromium-security/site-isolati...
http://www.scottmccloud.com/googlechrome/
In early-mid 2008, I created a comic book for Google explaining the inner workings of their new open source browser Google Chrome.
If I'm mixing this up with https://wiki.mozilla.org/Electrolysis, that's still 10 years.
As engineers, we should not accept this status quo; we should replace it. We need a new web and new software.
Seems pretty cool tbh
https://dustri.org/b/the-web-browser-im-dreaming-of.html
Gemini might be a good replacement for the web: