This is a monthly reservation for a single RTX 6000 Ada at $940. You can get the same GPU on RunPod for $670.
And to actually train stuff you'd likely want nodes with more of them (say 8), or different GPUs altogether (A100/H100/etc.).
It is actually paid by the hour.
The price per hour for this server is € 1.5980
more info: https://docs.hetzner.com/general/others/new-billing-model/
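With hourly billing the math for a short batch job is simple. A minimal sketch, using the €1.5980/hour rate quoted above (the hour counts are purely illustrative; see the linked billing page for how hourly charges relate to the monthly price):

```python
HOURLY_RATE_EUR = 1.5980  # rate quoted above for this server

def job_cost(hours, rate=HOURLY_RATE_EUR):
    """Cost in EUR of keeping the server up for the given number of hours."""
    return round(hours * rate, 2)

print(job_cost(3))    # a three-hour batch run
print(job_cost(730))  # roughly a full month, if you never delete the server
```

The point being: for a job measured in hours, you pay a few euros, not the monthly price.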
I've been pleasantly surprised by what such a mediocre GPU and Llama 3 8B can do for certain (simple) use cases. Ollama makes it all pretty easy.
https://cocalc.com/features/compute-server
In case you are not familiar, CoCalc is a real-time collaborative environment for education and research that you can access via your web browser at https://cocalc.com/
Consider this use case: I want to upload 50 GB of audio somewhere and run Whisper (the largest model) on it. I imagine the processing should take minutes on a powerful GPU and be very cheap, and the script will be maybe 20 LOC, but I'll spend some time setting things up, uploading the data, and so on (which, for example, makes Colab a no-go for this). Any recommendations?
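For what it's worth, the script really can stay around 20 lines. A minimal sketch using the open-source openai-whisper package (assumptions: the package and ffmpeg are installed, the audio has already been uploaded to a local directory, and the directory name is made up here):

```python
from pathlib import Path

AUDIO_EXTS = {".mp3", ".wav", ".m4a", ".flac", ".ogg"}

def find_audio_files(root):
    """Return all audio files under root, sorted for a reproducible run order."""
    return sorted(p for p in Path(root).rglob("*")
                  if p.suffix.lower() in AUDIO_EXTS)

def transcribe_all(root, model_name="large-v3"):
    # whisper is imported lazily so the file-listing helper works without it
    import whisper
    model = whisper.load_model(model_name)  # downloads weights on first use
    for audio in find_audio_files(root):
        result = model.transcribe(str(audio))
        # write the transcript next to the source file
        audio.with_suffix(".txt").write_text(result["text"])

if __name__ == "__main__":
    transcribe_all("audio/")  # hypothetical path to the uploaded 50 GB
```

The setup and upload time is the real cost here, as you say; the code itself is the easy part.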
Also, when they say it's "per hour", do they mean an hour of GPU time, or an hour of me renting the equipment, so to speak?
Could you please clarify?
An RTX 6000 Ada is roughly comparable to an A100.
Hetzner is a great, reliable company with fantastic offerings and excellent support.