EDIT: You mention this with archive.org links! Love it! https://mrshu.github.io/github-statuses/#about
That's not at all how you measure uptime. The per area measures are cool but the top bar measuring across all services is silly.
I'm unsure what they're targeting; across the board it seems to be mostly 99.5%+, with the exception of Copilot. Just doing the math, three (independent, which I'm aware they aren't fully) 99.5% services bring you down to an overall "single 9" ~98.5% healthy status, but it's not meaningful to anyone.
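Back-of-the-envelope, naively assuming full independence (which, again, they aren't):

```python
# Composite availability when a workflow depends on all three services,
# treating them as independent.
per_service = 0.995
composite = per_service ** 3
print(f"{composite:.4f}")  # 0.9851 -- the "single 9"

# What that means in expected downtime over a 30-day month:
downtime_min = 30 * 24 * 60 * (1 - composite)
print(f"{downtime_min:.0f} minutes")  # ~645 minutes, about 10.7 hours
```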
Probably would be too obvious how badly M$ fucked this up.
If they don't get their ops house in order, this will go down as an all-time own goal in our industry.
Something this week about "oops we need a quality czar": https://news.ycombinator.com/item?id=46903802
It's extra galling that they advertise all the new buzzword-laden AI pipeline features while the regular website and Actions fail constantly. Academically, I know that it's not the same people building those as fixing bugs and running infra, but leadership is just clearly failing to properly steer the ship here.
It's like the fossil model, but on Git
https://github.com/git-bug/git-bug
If everyone used it, it would eliminate this specific form of lock in
That is what that feature does. It imports issues and code and more (not sure about "projects", don't use that feature on Github).
They literally have the golden goose, the training stream of all software development, dependencies, trending tool usage.
In an age of model providers trying to train their models and keep them current, the value of GitHub should easily be in the high tens of billions or more. The CEO of Microsoft should be directly involved at this point; their franchise is at risk on multiple fronts now. Windows 11 is extremely bad. GitHub is going to lose its foundational role in modern development shortly, and early indications are that they hitched their wagon to the wrong foundational model provider.
(Actually there are 3 I'm currently working on, but 2 are patched already; still closing the feedback loop, though.)
I have a 2-hour window right now that is toddler free. I'm worried that the outage will delay the feedback loop with the reporter(s) into tomorrow and ultimately delay the patches.
I can't complain though -- GitHub sustains most of my livelihood so I can provide for my family through its Sponsors program, and I'm not a paying customer. (And yet, paying would not prevent the outage.) Overall I'm very grateful for GitHub.
Usually an outage is not a big deal, I can still work locally. Today I just happen to be in a very GH-centric workflow with the security reports and such.
I'm curious how other maintainers maintain productivity during GH outages.
Not who you're responding to, but my 2 cents: for a popular open-source project reliant on community contributions there is really no alternative. It's similar to social media - we all know it's trash and noxious, but if you're any kind of public figure you have to be there.
Edit- oh you probably meant an alternative to GitHub perhaps..
[1] https://www.theverge.com/tech/865689/microsoft-claude-code-a...
> During this time, workflows experienced an average delay of 49 seconds, and 4.7% of workflow runs failed to start within 5 minutes.
That's for sure not perfect, but it also means there was a ~95% chance that if you re-ran the job, it would start rather than fail to. Another one is about notifications being late. I'm sure all the others have similar issues that people notice, but nobody writes about them. So a simple "too many incidents" count doesn't make the stats bad; only an unstable service does.
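To be concrete about the re-run point, using the quoted 4.7% figure and assuming the two attempts are independent (a simplification):

```python
p_fail = 0.047  # quoted: fraction of runs that failed to start within 5 min
print(f"single attempt starts: {1 - p_fail:.1%}")  # 95.3%

# With one manual re-run, both attempts would have to fail to start:
p_both_fail = p_fail ** 2
print(f"with one re-run: {1 - p_both_fail:.2%}")   # 99.78%
```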
> Let's make a timeline chart of https://www.githubstatus.com/history for the past 1yr and upload it as a gist
yields [GitHub Status Incident Timeline — Feb 2025 to Feb 2026](https://htmlpreview.github.io/?https://gist.githubuserconten...)
219 total incidents across 12 full months, averaging 18.3/month. January 2026 was the worst month, and August 2025 was the calmest.
This should not be normal for any service, even at GitHub's size. There's a joke that your workday usually stops around 4pm, because that's when GitHub Actions goes down every day.
I wish someone inside the house cared to comment on why the services barely stay up and what actions they're planning to take to fix this issue, which has been going on for years but has definitely accelerated in the past year or so.
That doesn't normally happen to platforms of this size.
There are probably tons of baked in URLs or platform assumptions that are very easy to break during their core migration to Azure.
Edit: Looks like they've got a status page up now for PRs, separate from the earlier notifications one: https://www.githubstatus.com/incidents/smf24rvl67v9
Edit: Now acknowledging issues across GitHub as a whole, not just PRs.
Investigating - We are investigating reports of impacted performance for some GitHub services. Feb 09, 2026 - 15:54 UTC
But I saw it appear just a few minutes ago, it wasn't there at 16:10 UTC.
Investigating - We are investigating reports of degraded performance for Pull Requests Feb 09, 2026 - 16:19 UTC
It has been a pretty smooth process. Although we have done a couple of pieces of custom development:
1) We've created a Firecracker-based runner, which will run CI jobs in Firecracker VMs. This brings the Forgejo Actions running experience much more closely into line with GitHub's environment (VM rather than container). We hope to contribute this back shortly, but also drop me a message if this is of interest.
2) We're working up a proposal[1] to add environments and variable groups to Forgejo Actions. This is something we expect to need for some upcoming compliance requirements.
I really like Forgejo as a project, and I've found the community to be very welcoming. I'm really hoping to see it grow and flourish :D
[0]: https://lithus.eu, adam@
[1]: https://codeberg.org/forgejo/discussions/issues/440
PS. We are also looking at offering this as a managed service to our clients.
One solution I see is (eg) internal forge (Gitlab/gitea/etc) and then mirrored to GH for those secondary features.
Which is funny. If GH was better we'd just buy their better plan. But as it stands we buy from elsewhere and just use GH free plans.
Mirroring is probably the way forward.
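A minimal sketch of that setup, demonstrated with local bare repos standing in for the internal forge and the GitHub mirror (all paths and remote names here are placeholders):

```shell
set -e
rm -rf /tmp/forge.git /tmp/gh-mirror.git /tmp/work  # start clean for the demo

# Stand-ins for the internal forge and the GitHub mirror.
git init -q --bare /tmp/forge.git
git init -q --bare /tmp/gh-mirror.git

# Normal development happens against the internal forge...
git clone -q /tmp/forge.git /tmp/work
cd /tmp/work
git -c user.name=dev -c user.email=dev@example.com \
    commit -q --allow-empty -m "initial commit"
git push -q origin HEAD

# ...and a cron/CI job periodically pushes every ref to the mirror.
# --mirror copies all refs and propagates deletions too.
git remote add gh-mirror /tmp/gh-mirror.git
git push -q --mirror gh-mirror

git ls-remote /tmp/gh-mirror.git   # same refs as the forge
```

Against real hosts the only change is the remote URL, e.g. `git remote add gh-mirror git@github.com:example/project.git`.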
You did back it up, right? Right before you ran me with `--allow-dangerously-skip-permissions` and gave me full access to your databases and S3 buckets?
A few people have replied to you mentioning Codeberg, but that service is intended for open-source projects, not private commercial work.
Also very happy with SourceHut, though it is quite different (Forgejo looks like a clone of GitHub, really). The SourceHut CI is really cool, too.
Distributed source control is distributable.
Dunno about actions[1], but I've been using a $5/m DO droplet for the last 5 years for my private repo. If it ever runs out of disk space, an additional 100GB of mounted storage is an extra $10/m
I've put something on it (Gitea, I think) that has the web interface for submitting PRs, reviewing them, merging them, etc.
I don't think there is any extra value in paying more to a git hosting SaaS for a single user, than I pay for a DO droplet for (at peak) 20 users.
----------------------
[1] Tried using Jenkins, but alas, a $5/m DO droplet is insufficient to run Jenkins. I mashed up shell scripts + Makefiles in a loop, with a `sleep 60` between iterations.
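That loop can be as little as a one-shot check wrapped in `while true; do ...; sleep 60; done`. A sketch, assuming a `main` branch and a Makefile with a `test` target (repo path and target names are placeholders):

```shell
# One pass of the poll: fetch, compare revisions, rebuild only on change.
poll_once() (
    cd "$1" || exit 1
    git fetch -q origin
    local_rev=$(git rev-parse HEAD)
    remote_rev=$(git rev-parse origin/main)
    if [ "$local_rev" = "$remote_rev" ]; then
        echo "up to date"
    else
        # Fast-forward to the new tip, then run the build/tests.
        git merge -q --ff-only "$remote_rev"
        if make test >build.log 2>&1; then
            echo "build ok"
        else
            echo "build failed"
        fi
    fi
)

# The whole "CI server" is then just:
# while true; do poll_once /srv/ci/myproject; sleep 60; done
```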
It's pretty nice if you don't mind it being some of the heaviest software you've ever seen.
I also tried gitea, but uninstalled it when I encountered nonsense restrictions with the rationale "that's how GitHub does it". It was okay, pretty lightweight, but locking out features purely because "that's what GitHub does" was just utterly unacceptable to me.
All the more reason why they should be sliced and diced into oblivion.
The engineers who built the early versions were folks at the top of their field, and compensated accordingly. Those folks have long since moved on, and the whole thing is maintained by a mix of newcomers and whichever old hands didn't manage to promote out, while the PMs shuffle the UX to justify everyone's salary...
Hosting .git is not that complicated of a problem in isolation.
"A better way is to self host". [0]
The new-fangled copilot/agentic stuff I do read about on HN is meaningless to me if the core competency is lost here.
GitHub is down so often now, especially Actions, that I'm not sure how so many companies are still relying on them.
GitHub isn't the only source control software on the market. Unless they're doing something obviously nefarious, it's doubtful the Justice Department will step in when you can simply choose one of many others like Bitbucket, Sourcetree, GitLab, SVN, CVS, Fossil, Darcs, or Bazaar.
There's just too much competition in the market right now for the govt to do anything.
Simple: the US stopped caring about antitrust decades ago.
Incident with Pull Requests https://www.githubstatus.com/incidents/smf24rvl67v9
Copilot Policy Propagation Delays https://www.githubstatus.com/incidents/t5qmhtg29933
Incident with Actions https://www.githubstatus.com/incidents/tkz0ptx49rl0
Degraded performance for Copilot Coding Agent https://www.githubstatus.com/incidents/qrlc0jjgw517
Degraded Performance in Webhooks API and UI, Pull Requests https://www.githubstatus.com/incidents/ffz2k716tlhx
Notifications are delayed https://www.githubstatus.com/incidents/54hndjxft5bx
Incident with Issues, Actions and Git Operations https://www.githubstatus.com/incidents/lcw3tg2f6zsd
Edit: Now acknowledging issues across GitHub as a whole, not just PRs.
Hopefully the hobbyists are willing to shell out for tokens as much as they expect.
Gerrit is the other option I'm aware of but it seems like it might require significant work to administer.
I am able to access api.github.com at 20.205.243.168 no problem
No problem with githubusercontent.com either
I would love to pay Codeberg for managed hosting + support. GitLab is an ugly overcomplicated behemoth... Gitea offers "enterprise" plans but do they have all the needed corporate features? Bitbucket is a joke, never going back to that.
Are the other providers (GitLab, CircleCI, Harness) offering much better uptime? Saying this as someone who's been GH-exclusive since 2010.
Today, when I was trying to see the contribution timeline of one project, it didn't render.
Maybe they need to get more humans involved because GitHub is down at least once a week for a while now.
Codeberg gets hit by a fair few attacks every year, but they're doing pretty well, given their resources.
I am _really_ enjoying Worktree so far.
Radicle is the most exciting out of these, imo!
It's definitely some extra devops time, but claude code makes it easy to get over the config hurdles.
hopefully it's down all day. we need more incidents like this to happen for people to get a glimpse of the future.
Self hosting would be a better alternative, as I said 5 years ago. [0]
coincidence I think not!
Beyond a meme at this point
With the latter no longer a thing, and with so many other people building on Github's innovations, I'm starting to seriously consider alternatives. Not something I would have said in the past, but when Github's outages start to seriously affect my ability to do my own work, I can no longer justify continuing to use them.
Github needs to get its shit together. You can draw a pretty clear line between Microsoft deciding it was all in on AI and the decline in Github's service quality. So I would argue that for Github to gets its shit back together, it needs to ditch the AI and focus on high quality engineering.
Just when open source development has to deal with the biggest shift in years and maintainers need a tool that will help them fight the AI slop and maintain the software quality, GitHub not only can't keep up with the new requirements, they struggle to keep their product running reliably.
Paying customers will start moving off to GitLab and other alternatives, but GitHub is so dominant in open source that maintainers won't move anywhere, they'll just keep burning out more than before.
GitHub stability is worse than ever. Windows 11 and Office stability are worse than ever. Features that were useful for decades on computers with low resources are now "too hard" to implement.
Coincidence?
Any public source code hosting service should be able to subscribe to public repo changes. It belongs to the authors, not to Microsoft.
If we had a government worth anything, they ought to pass a law that other competitors be provided mirror APIs so that the entire world isn't shut off from source code for a day. We're just asking for a world wide disaster.
Just saying.
EDIT: my bad, seems to be their server's name.