> Hi-
We’re improving the Terms of Service that apply to your Colab Pro or Colab Pro+ subscription, making them easier for you to understand and improving the ways you can use Colab. The changes will take effect on September 29.
The [updated Terms of Service](https://research.google.com/colaboratory/tos_v3.html) give you more control over how and when you use Colab, and let us offer new services and features that will enhance your experience using Colab.
We will increase transparency by granting paid subscribers compute quota via compute units, which will be visible in your Colab notebooks so you can see how much compute quota you have left. These compute units are granted monthly and will expire after 3 months. You will be entitled to a certain number of compute units based on your subscription level and will be able to purchase more compute units as needed.
Additionally, we will allow paid subscribers to exhaust their compute quota at a much higher rate. This will result in paid subscribers having more flexibility in accessing resources. Read more about these changes at our [FAQ](https://research.google.com/colaboratory/faq.html#compute-units).
If you would like to cancel your Colab Pro or Pro+ subscription, you can do that by going to pay.google.com and clicking Subscriptions and services. If you have any trouble canceling, you can email colab-billing@google.com for assistance. Please include an order number from one of your receipt emails if you email us for assistance.
-The Colab team
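The compute-unit scheme the email describes (monthly grants that expire after three months, consumed as you use paid compute) can be modeled as a small ledger. A minimal sketch, assuming oldest-first spending and made-up grant sizes; Colab's actual accounting is not public:

```python
from collections import deque

class ComputeUnitLedger:
    """Toy model of a compute-unit balance: grants expire 3 months
    after issue and are spent oldest-first (an assumption for
    illustration, not Colab's real bookkeeping)."""

    EXPIRY_MONTHS = 3

    def __init__(self):
        self.grants = deque()  # each entry: [month_granted, units_remaining]

    def grant(self, month, units):
        self.grants.append([month, units])

    def _expire(self, current_month):
        # Drop grants that are EXPIRY_MONTHS old or older.
        while self.grants and current_month - self.grants[0][0] >= self.EXPIRY_MONTHS:
            self.grants.popleft()

    def spend(self, current_month, units):
        self._expire(current_month)
        for g in self.grants:
            take = min(g[1], units)
            g[1] -= take
            units -= take
            if units == 0:
                return
        raise RuntimeError("out of compute units; buy more")

    def balance(self, current_month):
        self._expire(current_month)
        return sum(g[1] for g in self.grants)
```

With a 100-unit grant in month 0 and another in month 1, spending 50 in month 1 leaves a balance of 150; by month 3 the remainder of the month-0 grant has expired and only the month-1 grant's 100 units are left.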
Individuals tend to be upset, while professionals are happy that individual free-riders will no longer be sucking up undue amounts of compute power, and so QoS on the system will improve for them.
What I understood from my interactions is that they complain but will not use a paid product, because even though they're paying anywhere from nothing to $49, the actual resources used are in the $800/month ballpark (notebooks running 23 hours per day, seven days a week, using a GPU).
These are clearly hobbyists. The pros had different problems, such as not being able to pay for it from certain countries.
In other words, there are people who need a notebook to run without crashing and are willing to pay for that, and there are others working on toy projects, individual pet projects, or projects with no real stakes who'll complain about it but won't switch, because another company will not really subsidize usage.
Yes, there are other companies that offer notebooks, but our product was for professionals in the ML field, and there's much more to an ML project than running a notebook (real-time collaborative notebooks, automatic experiment tracking, plugging in compute from any cloud provider, one-click model deployment, object storage like a filesystem, a live monitoring dashboard for deployed models, and more).
Any other suggestions for where to rent cheap GPUs? I've heard about Hetzner (https://www.hetzner.com/sb?search=gpu), but those are 1080s.
I got an A100 after I subscribed, so it worked out for me, but it's still annoying that you don't know what you'll get.
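While assignment stays opaque, one common workaround is to check which card you got right after connecting, and reconnect if it's too slow. A generic sketch using `nvidia-smi` (nothing Colab-specific; the function names are my own):

```python
import re
import subprocess

def parse_gpu_names(nvidia_smi_list_output):
    """Extract GPU model names from `nvidia-smi -L` output, e.g.
    'GPU 0: Tesla T4 (UUID: GPU-...)' -> ['Tesla T4']."""
    return re.findall(r"GPU \d+: (.+?) \(UUID", nvidia_smi_list_output)

def assigned_gpus():
    """Run at the top of a notebook to see which card the runtime
    was given. Returns [] if nvidia-smi is missing or fails."""
    try:
        out = subprocess.run(["nvidia-smi", "-L"],
                             capture_output=True, text=True,
                             check=True).stdout
    except (FileNotFoundError, subprocess.CalledProcessError):
        return []
    return parse_gpu_names(out)
```

In a notebook you'd call `assigned_gpus()` in the first cell and factory-reset the runtime if the list doesn't contain the card you need.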
I am working on an AI book in Python. (I usually write about Lisp languages.) About half the examples will be Colab notebooks and half will be Python examples to be run on laptops.
In any case, I like the soon-to-be-implemented changes; it sounds like a good idea to get credits and see a readout of usage and what you have left.
I read every feedback submission in Colab so if you ever have feedback you'd like addressed, send away.
No intention to lock it down, whatever that would mean. We ensure notebooks are totally portable to any other Jupyter install you want to move to.
This change is about laying the groundwork for increased transparency for your paid compute consumption, vs. the current model of kind of hiding that away.
Flat compute units seem simple, but result in a lot of waste.
And if it does go idle, it saves energy, which costs money. At scale, compute isn't free.
I guess nothing stops you from buying infra and offering it for "free (or nearly free)".
HN used to be a place for interesting discussions. Now it's a grievance forum for entitled freeloaders.
Compute demand is dynamic. You might be above capacity during Christmas shopping, and below it at 4am in the middle of the weekend.
By varying pricing, you can be more efficient. People who can will smooth out that load. If I don't need to run something during peak hours, I might wait until off-peak. Google needs less capacity. Everyone comes out ahead.
For a profit-making project, dynamic pricing makes sense. I suggested free since the primary goal of Colab isn't to make money (but they also don't want to subsidize it too much, so they do need to charge).
Free notebooks can be run for 6 hours at a time.
More info available in docs: https://docs.paperspace.com/gradient/machines/#free-machines...
I even tried and failed to get it up and running with a Google Cloud GPU recently, before just switching to Lambda, which worked first time (but has since hit availability issues).
The restrictions listed at https://research.google.com/colaboratory/tos_v3.html differ slightly from the limits listed at https://research.google.com/colaboratory/faq.html; specifically, tos_v3.html does not mention these items from the FAQ:
* using a remote desktop or SSH
* connecting to remote proxies
I can appreciate why those were added - I've read posts and notebooks explaining how you can use ngrok or cloudflare to do those things in violation of the restrictions in the FAQ, and clearly many people aren't using Colab as intended.

Speaking as someone who has been playing around with the Colab free tier with the expectation of moving to a paid service once I know what I'm really doing, I'd like to know if it's likely these restrictions will be eased a bit with the move to a compute credit system.
I'm still learning and haven't had a need to do those things yet but I believe remote ssh access would greatly simplify managing things. The Jupyter interface and integrated Colab debugger are good for experimenting but I'm worried that as I get closer to production I'll need a way to observe and change the state of long-running Colab processes the way I could with ssh, ansible or other existing tooling.
Clearly I can build that myself or use something like Anvil Works (https://anvil.works), but that's time and effort I'd rather avoid if possible. So I'm hoping that the Colab team will ease the SSH restriction for people like me who want to use it for more traditional ops/monitoring of long-running tasks.
Do you anticipate any change or easing of the SSH restriction?
Both of those address angles of abuse that I don't want to discuss in big forums, and go counter to interactive notebook compute, our top priority.
All is not lost though, I've got a few irons in the fire that should help resolve those points of feedback over the coming year.
In the meantime, you can always just buy a GCP VM and you have all the certainty you want: https://research.google.com/colaboratory/marketplace.html I find most people don't want that because it's a pain that Colab Pro/Pro+ largely abstracts.
> This has been planned for months, it's laying the groundwork to give you more transparency in your compute consumption, which is hidden from users today.
https://twitter.com/thechrisperry/status/1564806305893584896
I suppose my concern is this: was Colab already using "compute units" internally for limits, or was it "as available"? And if the former, will my compute-unit allocation stay what it has been, or will it decrease? To be honest, when I read the email today, I assumed the limits would be tightening. It feels like the sort of thing that happens when a company tightens up on a product. Not really a reflection on you or your product, just past experiences.
I fine-tuned a model on Colab Pro earlier this year and having to launch and quit 6 or 7 times to get a faster graphics card to ensure it completed within the time limit sucked.
Hope this will give more transparency into whether you are assigned a whole card or a virtual slice of one. Something I could never work out before!
And yes, hoping to give you more control over chip type too, stay tuned.
I would have upgraded to Pro+ if I had confidence it would speed up the process, but the promises of what you get (beyond "runs in background") were/are so vague I couldn't tell if it was worth it. I was only using 50-80 hours a month and it sounded like a plan aimed at more usage.
Google was/is leaving money on the table with the old scheme.
Like, I don't know, $15 per month. Still cheaper than buying a GPU VPS.