Really quick pushes, too :-)
> kops update cluster
error reading channel "https://raw.githubusercontent.com/kubernetes/kops/master/channels/stable": unexpected response code "500 Internal Server Error" for "https://raw.githubusercontent.com/kubernetes/kops/master/channels/stable": 500: Internal Server Error

I've since grown to strongly dislike how much of the entire ecosystem seems to depend on random unversioned GitHub gists or direct links into a raw file in a repo somewhere.
curl -sSL https://raw.githubusercontent.com/python-poetry/poetry/master/get-poetry.py | python
Guess why our Python docker images stopped building today?

Edit: though their historical data doesn't back it up. It's just a 2020 thing, it seems; coincidentally it picked up right near the middle of February, which correlates way too closely with the beginning of COVID.
But, they are moving much faster in recent years. They've added a lot of new features.
To GitHub SRE/oncall people: hang in there, you're awesome.
* https://github.blog/2020-03-26-february-service-disruptions-...
* https://github.blog/2020-07-08-introducing-the-github-availa...
Lots of incidents lately, but it's becoming increasingly hard to get away from Github.
GNOME, Xfce, Redox, Wireguard, KDE and Haiku all have self-hosted on either cgit, Gitlab, Phabricator or Gitea.
I'm not going to say it was hard, but it was work, and it was work that took time (time that was volunteer free time) away from working on Xfce itself. When I did the final git imports of our svn repos in mid-2009, GitHub had been around for about a year, maybe a year and a half, and wasn't that popular yet. And I suppose back then I had the bull-headed "must self-host because that's the only thing a respectable open source developer should do".
That was a foolish attitude that took time away from the actual goal, which was making Xfce better. If it were today, I would have moved us to GitHub in a heartbeat, or perhaps GitLab (their hosted offering; I wouldn't self-host), instead. I haven't been involved with Xfce (still a happy user though!) in nearly a decade, but I suspect they still self-host out of inertia, with probably a little of that bull-headedness mixed in (that I myself can't claim to be fully free of either).
While GitHub's uptime isn't perfect, it's pretty damn good, and better than most volunteers will get running their git server off a single box someone had racked somewhere in Belgium, which I believe was what we were doing at the time. Tools for that sort of thing are better now, and if I had to self-host today, I'd use EC2 or something like that, and automate the hell out of everything, but it's still a lot more work than just using somebody else's infra.
Meanwhile, I expect many forgotten repos to remain online with Github one way or another for a very long time.
If you do go the self-hosting route, add a mainstream host as an additional remote and push your commits to both.
In fact, with all these free services, I'd probably say it's well worth automatically making remotes (at least) for GitLab, GitHub and having a local Gitea for everything you do. This should be resilient against specific outages, or GitHub simply deciding they don't like your project name, or some other disaster.
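As a sketch, that multi-remote setup can be a single fetch remote with several push URLs, so one `git push` updates every mirror (all hostnames and paths below are placeholders):

```shell
# One fetch remote, three push destinations; a plain `git push` then
# updates all of them. URLs here are hypothetical examples.
git remote add origin https://gitlab.com/user/project.git
git remote set-url --push --add origin https://gitlab.com/user/project.git
git remote set-url --push --add origin https://github.com/user/project.git
git remote set-url --push --add origin http://gitea.local:3000/user/project.git
```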
I do appreciate how easy git and other dvcs make it to mirror repositories, though. I find myself using GitHub mirrors of projects that are otherwise self-hosted, just because the code search on GitHub works pretty well.
And should your project go unmaintained it'll probably disappear if you don't have managed hosting.
Might be good to have a self-hosted backup that's automatically synced with GitHub.
Any ideas?
git remote add origin https://user@gitlab.com/myrepo
git remote set-url --push --add origin https://user@github.com/myrepo
git remote set-url --push --add origin https://user@gitlab.com/myrepo

Seriously, though, any repo that I work on regularly will be cloned to my local dev environment, so it's not a hard blocker.
That said, a cron job on a cheap VPS would probably do the trick.
ssh sparkling@git.example.com
mkdir project-1.git
cd project-1.git
git init --bare
exit
git remote add alternate sparkling@git.example.com:project-1.git
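After that, syncing manually is just a normal push to the new remote (the `alternate` name comes from the setup above):

```shell
# Push all branches and all tags to the backup remote.
git push alternate --all
git push alternate --tags
```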
For the syncing part: if you don't want to do it manually, you can add multiple destinations on the same remote. Someone already mentioned it here: https://news.ycombinator.com/item?id=23818609, https://jigarius.com/blog/multiple-git-remote-repositories

If more privacy is required, you can use something like gocryptfs to only send encrypted data to these services.
Syncthing is a similar option that doesn't require an external service, also doesn't require local encryption since all data in transit is always encrypted.
This has a big caveat though: there's no locking mechanism that will work reliably on the bare git repo, so you may have to resolve some conflicts manually if two separate devices push to the same git repo at the same time. This is why you should only use this method if you work alone on these repos.
We are trying to migrate to https://www.sonatype.com/nexus-repository-oss (self-hosted); it caches the git tags, and you just have to replace the git links with Nexus links in the dependency manager.
If you want something simpler, you can try Satis for PHP, or Sinopia or Verdaccio for npm packages. You will find a lot of other tools for the other languages.
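As a sketch of the npm side, assuming a Verdaccio instance is already running on its default port (4873), pointing a project at the local caching registry is just a per-project `.npmrc`:

```shell
# Route this project's installs through the local caching proxy registry.
# The host and port are assumptions; 4873 is Verdaccio's default.
echo "registry=http://localhost:4873/" > .npmrc
```

Subsequent `npm install` runs in that project then go through the proxy, which caches packages locally on first fetch.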
https://git-scm.com/book/en/v2/Git-on-the-Server-Setting-Up-...
GitLab is starting to look good (or even Gitea self-hosted).
Don't forget that change in software is inherently risky and will result in bugs, etc. I'd rather have a platform that is always looking to make things better and risks a bit of downtime than a stale platform that we all know we depend on.
So is there any actual pressure to move fast and break things?
GitHub started doing availability reports. Last month's details are in the blog post below, with a summary of the issue.
Stay tuned till next month for the current outage.
https://github.blog/2020-07-08-introducing-the-github-availa...
I am using https://github.blog/category/engineering/feed/ for the engineering category.
- A different central server
- A shared on-filesystem copy, e.g. local network drive
- HTTP or SSH between developer computers (put your repository somewhere where your nginx or Apache serves it; the other developer can "git remote add chvid http://chvid.example.com/repository").
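A minimal sketch of that "dumb HTTP" variant (the web-root path is hypothetical; any static file server over the repo directory will do):

```shell
# Create a bare repo inside the web root and keep its HTTP metadata fresh.
cd /var/www/html/repository       # hypothetical web root
git init --bare
git update-server-info            # writes info/refs, needed by dumb-HTTP clients
# The stock sample hook already runs update-server-info after every push:
mv hooks/post-update.sample hooks/post-update
chmod +x hooks/post-update
```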
Make GitHub a mirror (at least source-wise) and you can benefit from its reach without being held hostage. Am happy with that, e.g. https://notabug.org/mro/ShaarliOS/src/master/doap.rdf Inspired by https://indieweb.org/POSSE
There have been some other annoyances/changes in behaviour that have bugged me too, but I've mostly stopped remembering them because I'm resigned to it now.
What's the best way to keep my own copy of the packages my software needs (and their dependencies), so that my build process is less fragile? Ideally, I'd only have to rely on those 3rd party platforms to download new versions or have them as a backup.
When relying on my own copy of required packages - can I expect much faster builds?
A day wrecker!
Or, are you guys all devops geniuses better than those who work at GitHub?
When my own side-project has more uptime than Github, there's something wrong somewhere.
Though really, if it is that important for it to be up, you should mirror it to at least one other provider (ex. self-hosted and github).
One day, for whatever reason, he couldn't bring the VM back up. Self-hosted GitLab was out for the rest of the day. I found this pretty funny.
Really, git is designed to be serverless and decentralised; this centralised GitHub malarkey is probably wrong.
Isn't the only reason it will go down be because of network issues or power failure? What other possible cause could be for system failure?
I have been running HA, Pi-hole, MariaDB and my own API instances on my Raspberry Pi, and in the 22 days of uptime so far there have been 0 failures.
At that point you'd have to create and manage a cluster.
You have to update servers, etc., and if you count the hours, it's many times more work than just working locally and waiting a couple of hours until technicians at GitHub fix the problem for you.
For larger projects where you have the resources to have dedicated infra people, I guess it depends.
Do you call IBM to maintain your pi-hole?