It's a product of many cooks and their brilliant ideas and KPIs, with "a social network for devs and code" being the most "brilliant" of them all. For day-to-day dev operations it's so mediocre that even GitLab looks like the gold standard compared to GitHub.
And no, the problem is not "Rails" or [ insert any other tech BS to deflect the real problems ].
The problem is that they abandoned Rails for React. The old SSR GitHub experience was very good: you could review massive PRs on any machine before they made the move.
Their "solution" was to enable SSR for us ranters' accounts.
Which, it seems, was a result of the M$ acquisition: https://muan.co/posts/javascript
Meanwhile, I opened a 100K line CSV in Neovim and while it took a couple of seconds to open and render highlighting, after that, it was fine.
There are of course performant react apps out there. What Steve did with tldraw is amazing.
However, the vast majority of the apps out there are garbage since the framework itself is terribly inefficient.
Too bad Phabricator is maintenance-only now https://en.m.wikipedia.org/wiki/Phabricator
My memory is fuzzy, but I think it was on Phab that I discovered and grew to love stacked merges, where you have a merge request into another open merge request, etc. Super useful. I miss that in the git world.
I had to alter basically every aspect of how I interact with it because of how fucking slow it is! I still can't shake the sense that it's about to go down or that I've done something wrong every time I click something and nothing happens for several seconds.
I still love it! Works great, makes sense, is fast...
At the very least, I wish they set it to auto.
GitLab is anything but light and by default tends to be slow, but it's surprisingly fast with a good server (nothing crazy, but big) and caching.
Never had any issues with it.
The page that the person on the issue had loading for 10s takes almost 2s here.
Perhaps it depends on what software one is using.
For example, command-line search and tarball/zipball retrieval from the website, e.g., github.com, raw.githubusercontent.com and codeload.github.com, are not slow for me, certainly not any slower than GitLab.
I do not use a browser, nor do I use the git software.
I use the GitHub website as I would any software mirror/repository.
I'm not interested in images (mascots or other garbage) or executing code (gratuitous JavaScript) when using the GitHub website; I'm interested in reading and downloading source code.
For my sins I occasionally create large PRs (> 1,000 files) in GitHub, and teammates (who mostly all use Chrome) will sometimes say "I'll approve once it loads for me..."
I actually have been trying to figure out how to get my React application (unreleased) to perform less laggy in Safari than it does in Firefox/Chrome, and it seems like it is related to all the damn DOM elements. This sucks. Virtualizing viewports adds loads of complexity and breaks some built-in browser features, so I generally prefer not to do it. But, at least in my case, Safari seems to struggle with doing certain layout operations with a shit load of elements more than Chrome and Firefox do.
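For what it's worth, the windowing idea itself is simple even if the edge cases aren't. A minimal sketch (illustrative, not my actual code; VirtualList and its props are made-up names, and it assumes a fixed row height):

    // Render only the rows that intersect the scroll viewport; a spacer
    // div keeps the scrollbar sized for the full list.
    import React, { useState } from "react";

    type Props = { items: string[]; rowHeight: number; viewportHeight: number };

    export function VirtualList({ items, rowHeight, viewportHeight }: Props) {
      const [scrollTop, setScrollTop] = useState(0);

      const first = Math.floor(scrollTop / rowHeight);
      const count = Math.ceil(viewportHeight / rowHeight) + 1; // +1 for the partially visible row
      const visible = items.slice(first, first + count);

      return (
        <div
          style={{ height: viewportHeight, overflowY: "auto" }}
          onScroll={(e) => setScrollTop(e.currentTarget.scrollTop)}
        >
          <div style={{ height: items.length * rowHeight, position: "relative" }}>
            {visible.map((item, i) => (
              <div
                key={first + i}
                style={{ position: "absolute", top: (first + i) * rowHeight, height: rowHeight }}
              >
                {item}
              </div>
            ))}
          </div>
        </div>
      );
    }

The complexity shows up as soon as rows vary in height or you want find-in-page to keep working, since off-screen rows simply don't exist in the DOM.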
But CSS has bitten me with heavy pages (causing a few seconds of lag that even devtools debugging/logging didn't point toward). We know wildcard selectors can impact performance, but in my case there were many open-ended selectors like `:not(.what) .ever`, where the `:not()` not being attached to anything made it act like a wildcard with conditions. Using `:has()` will do the same with additional overhead. Safari was the worst at handling large pages and these types of selectors, and I noticed more sluggishness 2-3 years ago.
It's just easier to blame the tools (or companies!) you already hate.
"Rename 'CustomerEmailAddress' to 'CustomerEmail'"
"Upgrade 3rd party API from v3 to v4"
I genuinely don't get this notion of a "max # of files in a PR". It all comes off to me as post hoc justification of really shitty technology decisions at GitHub.
A very simple example: migrating from Java EE to Jakarta EE. Every single Java source file has to have its imports changed from "javax." to "jakarta.", which can easily mean thousands of files. It's also easy to review (and any file that missed the change will fail to compile on CI).
But there is also the Safari Technology Preview, which installs as a separate app, but is also a bit more unstable. Similar to Chrome Canary.
It's hard to know which member of the duopoly is more guilty for breaking GitHub for me, but I find that blaming both often guarantees success.
I could like, buy a new computer and stuff. But you know, the whole Turing complete thing feels like a lie in the age of planned obsolescence. So web standards are too.
In case you're one of today's lucky 10,000, OpenCore Legacy Patcher supports Macs going back as far as 2007: https://github.com/dortania/OpenCore-Legacy-Patcher
Planned obsolescence is some of it, some of it is abstractions making it easier for more people to make software (at the cost of using significantly more compute) and Moore’s law being able to support those abstraction layers. Just imagine if every piece of software had to be written in C, the world would look a whole lot different.
I also think we’ve gone a bit too far into abstraction land, but hey, that’s where we are and it’s unlikely we are going back.
Turing completeness is almost an unrelated concept in all of this if you ask me, and if anything it's that very completeness that has driven higher and higher memory and compute requirements.
So GitHub is usable, but there are a number of UI layout issues, and searching within a file is sometimes a mess (e.g., highlighting the wrong text, rendering text incorrectly, etc.). Maybe that's true for all browsers; you're better off viewing a file as text in raw mode.
I know some people feel like Apple is aggressive in this respect, but that's an 8 year old version of a browser. That's like taking off all of the locks on your house, leaving the doors and windows open all while expecting your house to never have uninvited guests.
Depending on where you live (or what websites you visit) it's not unreasonable.
- Project managers putting constant pressure on developers to deliver as fast as possible. It doesn't even matter if velocity will be lost in the future, or if the company might lose customers, or even if it breaks the law.
- Developers pushing back on things that can backfire, burning political capital and suffering constant burnout in the process. And when things DO backfire, the developer is blamed for letting it happen and for not having pushed back harder in the first place.
- Developers who learned that the only way to win is by not giving a single fuck, and just trucking on through the tasks without much thought.
This might sound highly cynical, but unfortunately this is what it has become.
Developers are way too isolated from the end result, and accountability is non-existent for PMs who isolate devs from the result, because "isolating developers" is seen as their only job.
EDIT: This is a cultural problem that can't be solved by individual contributors or by middle management without raising hell and putting a target on their backs. Only cultural change enforced by C-Levels is able to change this, but this is *not* in the interest of most CEOs or CTOs.
Don't listen to the opinions of the developers writing this code. Listen to the opinions of the people making these tech stack decisions.
Everything else is a distant second, which is why you get shitty performance and developers who cannot measure things. It also explains why, when you ask the developers about any of this, you get bizarrely convoluted answers. The developers, in most cases, know what they need to do to stay hired and cannot work outside those lanes, yet they are simultaneously aware of the limitations of what they release. They know the result is slow, likely has accessibility problems, scales poorly, and so on, but their primary concern is retaining employment.
Today's version is: "You will get fired unless you use React".
So every site now uses React, no matter if the end result is a dog-slow GitHub.
Bad developers look at "what is everybody else using?"
Good developers look at "what is the best and simplest (KISS) tool for this?"
Good ol’ SSR - but eventually users and PMs start requesting features that can only be implemented with an SPA system, and I (begrudgingly) accept their arguments.
In my role (of many) as technical architect for my org, and as an act of resistance (and possibly to intentionally sabotage LLMs taking over), I opted for hybrid SSR + Svelte - it’s working well for us.
The short answer is: no, they don't. Google Cloud relied upon some Googlers happening to be Firefox users. We definitely didn't have a "machine farm" of computers running relevant OS and browser versions to test the UI against (that exists in Google for some teams and some projects, but it's not an "every project must have one" kind of resource). When a major performance regression was introduced (in Firefox only) in a UI my team was responsible for once, we had a ticket filed that was about as low-priority as you can file a ticket. The solution? Mozilla patched their rendering engine two minor versions later and the problem went away.
I put more than zero effort into fixing it, but tl;dr I had to chase the problem all the way to debugging the browser rendering engine itself via a build-from-source, and since nobody had set one of those up for the team and it was the first time I was doing it myself, I didn't get very far; Google's own in-house security got in the way of installing the relevant components to make it happen, I had to understand how to build Firefox from source in the first place, my personal machine was slow for the task (most of Google's builds are farm-based; compilation happens on servers and is cached, not on local machines).
I simply ran out of time; Mozilla fixed the issue before I could. And, absolutely, I don't expect it would have been promotion-notable that I'd pursued the issue (especially since the solution of "procrastinating until the other company fixes it" would have cost the company 0 eng-hours).
I can't speak for GitHub / Microsoft, but Google nominally supports the N (I think N=2) most recent browser versions for Safari, Edge, Chrome, Firefox, but "supports" can, indeed, mean "if Firefox pushes a change that breaks our UI... Well, you've got three other browsers you could use instead. At least." And of course, yes, issues with Chrome performance end up high priority because they interfere with the average in-house developer experience.
If you put a lot of momentum behind a product with that mentality, you get features piled on tech debt. No one gets enthusiastic about paying that down, because it was created by some prior team you have no understanding of, and it gets in the way of what management wants, which is more features so they can get bonuses.
Speaking up about it gets you shouted down and thrown on a performance improvement plan because you aren't aligned with your capitalist masters.
If a developer has to put up a fight in order to push back against the irresponsibility of a non-technical person, they by definition don't have ownership.
Unrealistic timelines, implementing what should be backend logic in the frontend: there are a bunch of ways SPAs tend to be a trap. Was React a bad idea? Can anyone point to a single well-made React app?
Back in the day (I was a junior dev) this was easier than grappling with React hooks today:
    BOOL CMainDialog::OnInitDialog()
    {
        CDialogEx::OnInitDialog();

        // Create the property sheet as a child of the main dialog.
        m_pPropertySheet = new CMyPropertySheet(_T("My Tabbed Dialog"), this);
        m_pPropertySheet->Create(this, WS_CHILD | WS_VISIBLE, WS_EX_CONTROLPARENT);

        // Inset the sheet from the dialog's client edges.
        CRect rectMainDialog;
        GetClientRect(&rectMainDialog);
        CRect rectPropertySheet(10, 10, rectMainDialog.Width() - 20, rectMainDialog.Height() - 20);
        m_pPropertySheet->MoveWindow(rectPropertySheet);

        return TRUE;
    }

I don't think the culprit apps would have substantially better UX if they were rendered on the server, because these issues tend to be a consequence of devs being pressured to rapidly release new features without regard to quality.
As an aside, I was an employee around then and I vividly remember that the next half there was a topline goal to improve web speed. Hmmmm, I wonder what could have happened?
React can have all the niceties and optimizations in the world, but that fails when its users insist on using it incorrectly, building huge tangled messy components and then wondering why a click takes 1.3 seconds to deliver feedback.
IMO it's the MAIN thing to understand about React—how it renders.
Regardless, now I'm the one with egg on my face, since the new compiler promises to eventually remove the need for manual memoization almost entirely. The "almost" still fills me with fear.
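For anyone who hasn't done the dance, this is roughly the manual memoization the compiler is supposed to retire; a toy sketch, with ExpensiveList and the prop names invented for illustration:

    import React, { memo, useMemo, useCallback } from "react";

    // Without memo, every parent re-render would re-render this list even
    // when its props haven't changed; memo skips it on shallow-equal props.
    const ExpensiveList = memo(function ExpensiveList(props: {
      items: string[];
      onPick: (s: string) => void;
    }) {
      return (
        <ul>
          {props.items.map((it) => (
            <li key={it} onClick={() => props.onPick(it)}>{it}</li>
          ))}
        </ul>
      );
    });

    export function Parent({ raw }: { raw: string[] }) {
      // useMemo keeps the derived array's identity stable across renders...
      const items = useMemo(() => raw.filter((s) => s.length > 3).sort(), [raw]);
      // ...and useCallback does the same for the handler; without it, memo()
      // sees a new function prop every render and the optimization is defeated.
      const onPick = useCallback((s: string) => console.log(s), []);
      return <ExpensiveList items={items} onPick={onPick} />;
    }

Forget any one of the three (memo, useMemo, useCallback) and the whole optimization quietly stops working, which is exactly why people want a compiler doing it.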
In this very thread there's some asshole using the word "memoization" when "caching" would have been fine.
Svelte is ok. It could have been great but the api for their version of observables is a disaster (which I hope they eventually fix). Sveltekit is half baked and convoluted and I strongly advise not touching it.
I've definitely managed to make a page that uses almost no JavaScript and is dog-slow on Firefox (until Mozilla updated the rendering engine) just by building a table out of flexboxes. There's plenty of places for browsers to chug and die in the increasingly-complicated standard they adhere to.
The problem isn't React. The problems are KPIs and unrealistic timelines. It is the same as it ever was. Not React's fault at all.
On React, it's funny that sites where the frontend is really crucial tend to move away from generic frameworks and do really custom stuff to optimize. I'm thinking of Notion, or Google Sheets, or Figma, where the web interface is everything, and pretty early on they just bypass the frontend stacks generally used by the industry.
The main problem is that it tries to do away with a view model layer so you can get the data and render it directly in the components, but that makes managing multiple components from a high level perspective literally impossible. Instead of one view model, you end up with 50 React-esque utilities to achieve the same result.
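A hypothetical sketch of what I mean by a view model (every name here is invented): one object owns the data and the cross-component coordination, and components just render slices of it.

    type Customer = { id: string; email: string };

    class CustomersViewModel {
      customers: Customer[] = [];
      selectedId: string | null = null;
      loading = false;

      async load(fetchCustomers: () => Promise<Customer[]>): Promise<void> {
        this.loading = true;
        this.customers = await fetchCustomers();
        this.loading = false;
      }

      select(id: string): void {
        // High-level rules live in one place instead of 50 scattered hooks.
        this.selectedId = this.customers.some((c) => c.id === id) ? id : null;
      }

      get selected(): Customer | undefined {
        return this.customers.find((c) => c.id === this.selectedId);
      }
    }

Whether you then bind this to React, Svelte, or anything else becomes a rendering detail rather than the architecture.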
What about Slack, the messenger?
Umm, Discord? SoundCloud? Trello? Bandcamp? Spotify?
If I keep going, there are actually hundreds, if not thousands, of well-made React apps.
My IRC client takes 60MiB of memory and 0.01% CPU. It's responsive and faster, and it has more configurable notification settings. I like the IRC client more.
> Bandcamp
I just went to the bandcamp page and it indeed loaded very quickly. As far as I can tell, there's no react in use anywhere so I guess that's why.
What do you mean by bandcamp using react?
You call it well made? I'm sorry for you; you must live a really harsh life.
Does anyone have concrete information?
[1]: https://yoyo-code.com/why-is-github-ui-getting-so-much-slowe...
https://chromewebstore.google.com/detail/make-github-great-a...
Sourcehut is basically a really barebones web interface for a git server, so I don't think it's really comparable to GitHub.
For hosting your own projects, that's sometimes not a viable solution either. Limiting your open source project to a platform other than GitHub hurts its discoverability, because GitHub is usually what most devs and non-devs associate with open source. I've heard a lot of "It's not open source if it's not on GitHub." You can mirror your project to GH, of course.
"Just migrate to X because it's faster" doesn't work that well in the real world
Pushes and pulls would still kinda work; Actions, not so much (but that's because it needed to transfer more than 100MB).
I see loading spinners everywhere, and even page transitions take ages compared to before.
I am not sure what metric they are using to justify ditching the perfectly working SSR they had before.
Slow as hell and the Safari search function stopped working. I loaded the same url on Firefox and it was insta-fast.
The Cloud: making operations that take single-digit seconds on a local Raspberry Pi 2 and home Internet take a few minutes.
What a time to be alive.
If you actually load up a ~2015 version of Jira on today’s hardware it’s basically instant.
It was being hosted on another continent. It was written in PHP. It was rendering server-side with just some light JS on my end.
That used to be the norm.
A lot of the time we just break the branch permissions on the repo we are using and run release branches without PRs and ignore the entire web interface.
> publicly disseminate information regarding the performance of the Cloud Products
https://web.archive.org/web/20210624221204/https://www.atlas...
GitHub issues was so simple and now they keep shoving features into it.
Why has no one learned to not become Jira? You gotta say no sometimes.
I have an ever-growing directory listing built with SolidJS, and it's up to about 25,000 items. Safari on macOS and iOS actually handled it well two major versions ago. After the last major update, my phone rendered it faster than an M1 MacBook Pro.
The solution is a test that fails when Chrome and Safari have substantially different render times.
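A hedged sketch of what such a test could look like, using Playwright with WebKit standing in for Safari (the URL and the 3x threshold are made up for illustration):

    import { chromium, webkit, BrowserType } from "playwright";

    // Load a page in one engine and report wall-clock time to the load event.
    async function renderTime(engine: BrowserType, url: string): Promise<number> {
      const browser = await engine.launch();
      const page = await browser.newPage();
      const start = Date.now();
      await page.goto(url, { waitUntil: "load" });
      const elapsed = Date.now() - start;
      await browser.close();
      return elapsed;
    }

    async function main(): Promise<void> {
      const url = "https://example.com/big-page"; // hypothetical page under test
      const chrome = await renderTime(chromium, url);
      const safari = await renderTime(webkit, url);
      // Fail when one engine is wildly slower than the other.
      if (Math.max(chrome, safari) > 3 * Math.min(chrome, safari)) {
        throw new Error(`Render-time gap: chromium=${chrome}ms webkit=${safari}ms`);
      }
    }

    main();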
That test will be disabled for being flaky in under a week because the CI runners have contention with other jobs, causing them to randomly be slower and flake, and the frontend team does not want to waste time investigating flakes.
"Just have dedicated runners with guaranteed CPU performance", but that's the CI platform team's issue, the frontend and testing teams can't fix it, and the CI infra team won't prioritize it for a minimum of 5 years.
Navigate from a random site to a GitHub repo, then to a file in the repo; hit back and I'm on the random site; hit forward and I'm on the file.
So annoying.
One of a large handful of issues I've encountered post-React conversion.
Any time I click a GitHub link, if I navigate beyond the readme, then my history is completely borked. Going “back” one page might go to the readme, might go back to HN, or might even go back to the readme and then back to the page I was trying to leave!
It’s infuriating and I always figured it was a bug they’d fix eventually but it’s been at least two years of this crap.
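If I had to guess, it's a client-side router mishandling the History API. A minimal sketch of the contract a router has to respect (navigate/render are invented names; this is speculation about the failure mode, not GitHub's code):

    function render(path: string): void {
      document.title = path; // stand-in for the actual view swap
    }

    function navigate(path: string): void {
      // Exactly one history entry per user-initiated navigation.
      history.pushState({ path }, "", path);
      render(path);
    }

    window.addEventListener("popstate", (e: PopStateEvent) => {
      // Back/forward must only re-render; pushing here corrupts history.
      render((e.state as { path?: string } | null)?.path ?? location.pathname);
    });

Push an entry anywhere else (inside the popstate handler, or once per data fetch) and back/forward start skipping or duplicating pages exactly as described above.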
Good to know others are feeling it too; hopefully it can get resolved soon. In the meantime, I'll try my PR reviews on FF.
Update: Just tested my big PR (+8,661, -1,657) on FF and it worked like a charm!
My CPU goes to 100% and fans roaring every time I load the dashboard and transactions. I can barely click on customers/subscriptions/etc. I can't be the only one...
And this is something browsers don't treat as a bug. You can crash any browser tab just by exhausting its allocated memory.
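To illustrate, a few lines of script are enough (don't run this in a tab you care about):

    // Grows the heap without bound; the tab eventually dies with an
    // out-of-memory crash, and no browser considers that a reportable bug.
    const hog: number[][] = [];
    while (true) {
      hog.push(new Array(1_000_000).fill(0)); // roughly 8MB per iteration
    }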
Then some charlatan thought to embrace the React hype and it became terrible to say the least.
Old GitHub was very light on features, whereas the new UIs are way more curated on the surface.
Unfortunately all of this brings in tons of complexity. It doesn't help that there are a lot of junior developers working on it, clearly.
I haven't been able to load it yet to actually check out these hip new features, it just crashes my browser, but I'm sure they must be great?
You really can't escape the enshittification.
Clean Code argues that instead of total rewrites you should focus on gradual improvements: refactor code so that over time the work pays dividends, without re-living all the bugs you lived through 5 years ago whose resolutions you don't recall. On every rewrite project I've ever worked on, we ran into bugs we had already fixed years prior, or that the team before me had.
There are times when a total rewrite might be the best and only option, such as with deprecated platforms (think Visual Basic 6 apps that will never get threading).
What frustrates me more is that GitHub used to be open to browse, and the search worked. Now, in their effort to force you to make an account (I HAVE LIKE TEN ALREADY) and force you to log in, they include a few "dark patterns" where parts of search don't work at all.
I don’t know if that’s a good or realistic rule for most projects, but I imagine for performant types of applications, that’s exactly what it takes to prevent eventual slowdown.