> Note: This is not an official Google product.
However, the copyright is Google's, and you must sign their CLA. That seems pretty official to me. Or is there some other implication of "official" beyond ownership?
If it's easier, just drop the word "official". It's not a Google product, i.e., a saleable or free service that Google provides to customers, but rather just a thing Google put out in the world, subject to the rules that Google applies.
Think about it this way: someone working for Google wrote some software, and Google was kind enough to open source it.
Disclaimer: I work for Google but my interpretation could be wrong.
Here's how it works (in California, anyway):
Google reserves the right to review open source projects to determine whether there is overlap with a current or potential Google business. This is done by submitting a form that describes one's project and then waiting a while for a decision. If the decision is "yes", then you are free to work on your project with your own time/resources, and publish it under your own copyright.
Google being Google, almost anything could be construed as overlapping with Google business interests. However, my anecdata suggests that Google is quite liberal in this regard: I had no problems myself, and know a few others who also had no problems.
In this case (i.e. this Hacker News story), I'm guessing that we're talking about something different. I suspect that the project in question was written using Google time and resources, and it is IMO appropriate for Google to own the copyright.
Rate limiting is an inefficient way to distribute a service; it makes sense only if you preallocate your resources and your queries have predictable cost. Let's use this technique only as a bug-prevention tool, not for resource economy, since organised scarcity is likely as inefficient for the data center as it is for the distribution of goods :)
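To make the "preallocated resources, predictable query cost" point concrete: classic rate limiting is usually a token bucket, where a fixed refill rate is the preallocation and each query's cost is assumed known up front. A minimal sketch (class and parameter names are illustrative, not from any particular library):

```python
import time

class TokenBucket:
    """Fixed-quota rate limiter: holds up to `capacity` tokens,
    refilled continuously at `rate` tokens per second."""

    def __init__(self, rate: float, capacity: float):
        self.rate = rate            # preallocated throughput
        self.capacity = capacity   # maximum burst size
        self.tokens = capacity
        self.last = time.monotonic()

    def allow(self, cost: float = 1.0) -> bool:
        """Admit a request of known `cost`, or reject it if the bucket is empty."""
        now = time.monotonic()
        # Refill proportionally to elapsed time, capped at capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= cost:
            self.tokens -= cost
            return True
        return False
```

Note that the quota is static: if a client's real cost per query varies, or demand shifts between clients, the bucket over- or under-provisions, which is exactly the inefficiency the comment is pointing at.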
I would like to see an adaptive system that, when resources are scarce, pushes towards a more equitable distribution.
As in, is this protocol inherently cooperative, or could an implementation have checks/controls added?
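The "more equitable distribution under scarcity" idea the parent asks for is usually formalised as max-min fairness: small demands are satisfied in full, and the leftover capacity is split evenly among the rest. A sketch of that allocation rule (a generic textbook algorithm, not anything from the project under discussion):

```python
def max_min_fair_share(capacity, demands):
    """Max-min fair allocation: repeatedly offer every unsatisfied client
    an equal share of the remaining capacity; clients whose demand is at
    or below that share receive exactly their demand, and the process
    repeats with the capacity they left unused."""
    alloc = [0.0] * len(demands)
    active = set(range(len(demands)))
    remaining = float(capacity)
    while active:
        share = remaining / len(active)
        satisfied = {i for i in active if demands[i] <= share}
        if not satisfied:
            # Scarcity: nobody's demand fits, so split the rest equally.
            for i in active:
                alloc[i] = share
            break
        for i in satisfied:
            alloc[i] = demands[i]
            remaining -= demands[i]
        active -= satisfied
    return alloc

# With capacity 10 and demands [2, 2, 4, 6], the two small clients get
# their full demand and the remaining 6 units are split evenly.
print(max_min_fair_share(10, [2, 2, 4, 6]))  # → [2.0, 2.0, 3.0, 3.0]
```

Whether this can be enforced, rather than merely computed, depends on the question above: in a cooperative protocol the server can only recommend these allocations, while an enforcing implementation would have to reject traffic that exceeds them.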
https://aws.amazon.com/message/5467D2/
https://aws.amazon.com/message/2329B7/
http://aws.amazon.com/message/65648/
An example quote:
"When this network connectivity issue occurred, a large number of EBS nodes in a single EBS cluster lost connection to their replicas. When the incorrect traffic shift was rolled back and network connectivity was restored, these nodes rapidly began searching the EBS cluster for available server space where they could re-mirror data. Once again, in a normally functioning cluster, this occurs in milliseconds. In this case, because the issue affected such a large number of volumes concurrently, the free capacity of the EBS cluster was quickly exhausted, leaving many of the nodes “stuck” in a loop, continuously searching the cluster for free space. This quickly led to a “re-mirroring storm,” where a large number of volumes were effectively “stuck” while the nodes searched the cluster for the storage space it needed for its new replica. At this point, about 13% of the volumes in the affected Availability Zone were in this “stuck” state."
So these things are very hard, can occur in totally unexpected situations, and I'm not at all surprised that a company like Google comes out with something like this.