I recognize your username because I met you at Kubernetes Community Day AMS, after Helm Summit, when you told everyone your username from up on the stage...
By the way, I loved your talk "Convergence of Communities" and everything about the Jellyfish modeling. For the benefit of anyone who may not know who you are, or why you posted this one-liner: this is Diane, Director of Community Development at Red Hat, and you can find that talk on YouTube.
So, thanks for doing this!
https://docs.projectquay.io/deploy_quay.html
"For a Project Quay Registry installation (appropriate for non-production purposes), you need one system (physical or virtual machine) that has the following attributes:
Red Hat Enterprise Linux (RHEL): Obtain the latest Red Hat Enterprise Linux server media from the Downloads page and follow instructions from the Red Hat Enterprise Linux 7 Installation Guide.
Valid Red Hat Subscription: Obtain a valid Red Hat Enterprise Linux server subscription."
So, is it truly tied to RHEL and a subscription, or is that page just making me FEEL that way?
Annoying either way. :/
So I'm guessing the docs will slowly shift to RHEL 8 and podman.
Why does a container registry need so many resources?
Second, I think it would help to know why you think a container registry wouldn't need a moderate amount of resources.
I don't necessarily disagree that the resources could be lower at the minimum (and in fact, I recall they are quite a bit lower than this when running it on your laptop without any load), but is this really anything unexpected?
It's written in Python so it's not going to be as efficient as Go or C++ but it certainly isn't Java levels of resources being requested here.
High resource requirements mean that I need to spend more on compute, whether that means paying a cloud provider for a beefier instance, or spending more money on hardware and electricity.
Spinning up a cloud VM with 4GB for a year is a bit more than beer money though.
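To put a rough number on it (assuming a small general-purpose instance with 2 vCPUs and 4 GB of RAM at something like $0.04/hour on-demand; actual pricing varies a lot by provider and region):

    # Back-of-the-envelope cost of keeping a 4 GB VM running year-round.
    # The hourly rate here is an assumption, not a quote from any provider.
    hourly_rate = 0.04            # USD/hour, assumed
    hours_per_year = 24 * 365     # 8760
    print(f"~${hourly_rate * hours_per_year:.0f}/year")   # ~$350, before storage and egress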
Reminds me of installing GitLab. The requirement is 8 GB... for a basic install. After a bunch of crashes (on an 8 GB VM) I went with vanilla git.
https://www.jfrog.com/confluence/plugins/servlet/mobile?cont...
For those that use LDAP authentication for this, it makes for a much smaller attack surface.
The per-team "organizations" feature is very nice and allows you to give teams their own flexibility while still running things within your own firewalls (on-prem or in a VPC). It is an alternative to Docker Hub with a lot of really nice features.
The ability to do scheduled mirroring of images from other registries (such as Docker Hub) and replication between different instances of Quay is also really beneficial.
Disclaimer: I've been a commercial Quay Enterprise user for some time.
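For anyone curious what scheduled mirroring boils down to under the hood, a mirror job is mostly walking the standard Docker Registry HTTP API v2 on the source registry and copying over whatever is missing. A minimal sketch of the first step, listing tags (the registry URL and repository name below are placeholders, anonymous reads are assumed, and a real mirror also handles auth tokens, pagination, and manifest/blob copying):

    # List the tags for one repository via the Docker Registry HTTP API v2.
    # GET /v2/<name>/tags/list returns {"name": "...", "tags": [...]}.
    import requests

    registry = "https://registry.example.com"   # placeholder source registry
    repo = "myteam/myimage"                     # hypothetical repository

    resp = requests.get(f"{registry}/v2/{repo}/tags/list", timeout=10)
    resp.raise_for_status()
    print(resp.json()["tags"])                  # e.g. ["latest", "v1.2.3"]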
Disclaimer: I work for Red Hat but have been a fan decades longer than I've worked there.
The issue is that OSS is the only way to compete with the cloud. Quay would die without third-party contributions when going up against Amazon, Google, and Microsoft.
(Disclaimer: Red Hatter for almost 15 years. I never doubted for a second that we would open source what we buy. Ansible, 3Scale, CoreOS: we have a long history of sticking to our principles.)
Regarding Red Hat (or IBM) truly committing to open source, I'll believe it when OpenShift 4.x is open-sourced.
But everything it's being built with is entirely FOSS. Making OKD happen is a high priority and is being worked on.
From my understanding, most of it has been blocked on Fedora CoreOS reaching a state where it can be used for OKD, and on putting resources into setting up the automation for building everything for OKD.
Remember that OpenShift 4.x fundamentally changed how OpenShift does updates, and that affects OKD a lot. Clayton's email touches on this quite a bit.
Disclosure: I work at Red Hat, on projects related to OpenShift.
Red Hat is very committed to putting the necessary infrastructure and organization in place around projects before open sourcing them, to make sure that the code isn't just available but can actually take community contributions. I don't have any inside info on this, but I wouldn't be surprised if OpenShift 4 is just waiting for that, or possibly to be in a stable enough state that the community can contribute.
Of course, it's also possible that RH is keeping it closed for other reasons, such as avoiding tipping their hand to competitors until their end goal is realized. I guess the point is I don't know, but given Red Hat's history of open sourcing even valuable acquisitions, I have faith that they will with OpenShift 4 as well.
And now users are in a much better place, because they have multiple choices of container registry, which will hopefully drive innovation.
So, what is Quay? And why do the information pages assume everyone knows what it is?
Edit: I called it Kway myself and googled it after getting puzzled looks from my UK peers. The referenced article says "key" is the older pronunciation but either is acceptable.
Disclaimer: I named and co-founded Quay, and I'm now its engineering lead.
(I assume this has carried through to Red Hat.)
Still, it's great to see Quay make it out into the open.
It was in a mostly stagnant state, with a release once in a while, and now it's going strong with regular releases.
The thing is, after things get the "CNCF" stamp they kinda go viral and become the "de facto" standard.
This means that Harbor would become the most usual way to run a private registry and thus Quay would lose ground (=> harder to sell).
Source: just implemented Harbor at work. Quay would probably have been better (more "production ready"), but Harbor was free and open source.
Not to take anything away from the accomplishments of the Quay team or their contribution, but there is definitely value in having more than one kid on the block when it comes to open-source solutions for problems like this. I think the tendency is to push for "one solution to rule them all," and that kind of approach can stifle innovation pretty hard.
I'm not sure there's any connection between this announcement and that one, as Harbor has been in the incubator for about 12 months from what I can tell, and was in the sandbox before that. But it appears to be another mature solution in the same space with many of the same features; if that isn't something worth noting, I don't know what is!
To your point, it would be great if the comment was a bit more substantive.
We've been discussing and working on this since the CoreOS acquisition. My good friend and Consulting colleague wrote a lot of the Operator code now used for installation.
They're not using Slack!
It's my premise that the author of that press release doesn't know the difference between "premise" and "premises." I know we have a habit in American English of evolving the meaning of words faster than any other language, but surely IBM's press team could offer to proofread things like this before posting.
https://www.merriam-webster.com/dictionary/premise
(And yes, I die inside a bit when someone on NPR ends a sentence with a preposition...)
1.) There are arguments but no definitive justification for choosing between on-premise and on-premises.
2.) A lot of what gets lumped under on-prem is not actually on your premises anyway. It's in a colo or a managed hosting provider, etc.