Our first use case is GitHub Actions. We support x64 and arm64 Linux runners, and we can reduce your GitHub Actions bill by 10x. We can give you hardware comparable to default GitHub runners because of the very high margins on the cloud. One difference with our hardware is that we use local NVMe drives for better disk performance.
Ubicloud and our GitHub Actions integration are also open source. You can check out our integration here: https://github.com/ubicloud/ubicloud/blob/main/routes/web/we...
For security and isolation, we give you a clean, ephemeral VM for each job. When the job completes, we deprovision the VM and wipe the block storage device attached to it. We set up firewall rules to lock down access to the VM, and we encrypt your data at rest and in transit.
(We use Linux KVM and the Cloud Hypervisor as our underlying VM tech. For our block device, we use and extend SPDK: https://www.ubicloud.com/blog/building-block-storage-for-clo...)
Ubicloud runners are also fully compatible with GitHub runners. To get started, all you need to do is change one line in your workflow file. Each account gets 1,250 free build minutes per month with standard-2 runners.
We’d love to hear your feedback!
https://www.ubicloud.com/docs/github-actions-integration/qui...
- Pricing looks awesome.
- I'm not currently the target audience because everything I'm doing right now is open source with free GitHub actions.
- I'm left wondering what the catch is / why it's cheaper and faster.
- Visual nit: lacking horizontal padding from 990px to ~1200px, a common window size on my 14" MBP.
> Ubicloud is an open source cloud. Think of it as an open alternative to cloud providers, like what Linux is to proprietary operating systems. You can self-host Ubicloud or use our managed service.
I find this hard to parse and the first few times I thought you were saying it's a linux alternative. I just clicked to the docs and the "What is Ubicloud?" section is clearer because you say concretely and directly what it is rather than how I should think of it metaphorically: "infrastructure-as-a-service (IaaS) features on providers that lease bare metal instances, such as Hetzner, OVH, and AWS Bare Metal. It’s also available as a managed service."
There's some old "counterintuitive" adage I'm too lazy to look up about how the best marketing to engineers is just saying in concrete language what it is rather than the benefit it provides. In this case, I'd do both: tell me what it actually is and why that makes it cheaper/better. Also a minor note, there's a typo in that paragraph (missing space "systems.Ubicloud").
> Imagine to do more
I'd get rid of this. I don't understand the phrase, and it sounds like fluff.
> Fast runs even at this price point
I'd get rid of the "point". "Price point" isn't a synonym for "price", which I think is what's being attempted here. I'd be tempted to just have no tagline, and retitle this section "Faster than GitHub Actions". You've already said it's cheaper.
> Ubicloud is an open, free, and portable cloud. Think of it as an open alternative to cloud providers, like what Linux is to proprietary operating systems. You can check out our source code in GitHub or see Ubicloud runners in action for our GitHub Actions. An open and portable cloud gives you the option to manage your own VMs and runners, should you choose to.
This is woolly. How about: Ubicloud is an open and free cloud. You can run it on the hosting provider of your choice, or bring your own hardware. Check out our source code on GitHub!
Their pricing page looks amazing. Not because of any funky UI stuff but because of the slight displacement of the decimal point.
So, while this is the first time I've heard of Ubicloud, I use GHA extensively.
And frankly, I think it's just because GitHub has a crazy markup on their Actions compute over the raw compute. Taking a quick look, it appears that their base rate is about $0.008 per minute, which works out to $0.48 per hour. That per-minute figure wouldn't look crazy out of line as EC2's hourly rate.
I've worked on projects where we saved significant money, and improved build times just by launching a single EC2 instance and connecting it to Actions.
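The setup the commenter describes has two halves: registering GitHub's `actions-runner` agent on the instance (via its `config.sh` and `run.sh` scripts), and pointing workflows at it. The workflow half looks like this; `self-hosted`, `linux`, and `x64` are GitHub's default labels for self-hosted runners, and the job body is illustrative:

```yaml
jobs:
  build:
    # Route this job to your own EC2/Hetzner box instead of GitHub's fleet.
    runs-on: [self-hosted, linux, x64]
    steps:
      - uses: actions/checkout@v4
      - run: make build
```

The tradeoff, as other commenters note, is that you now own the uptime and security patching of that instance.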
We did the same, and set up GitHub actions runners on hetzner
Halved the integration test time and made the tests more reliable.
I don't think it's very expensive to run a build server.
One counter-intuitive thing we found is that it's slow to save and restore caches, but the machines have good CPU, so for us it's been faster to disable cache entirely and just redo everything on each build.
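That observation reduces to a simple break-even: caching only pays off when the restore-plus-save overhead is smaller than the rebuild time it avoids. A toy model (the numbers are made up for illustration):

```python
def caching_helps(restore_s: float, save_s: float, rebuild_s: float) -> bool:
    """Caching pays off only if the cache I/O overhead is less than
    the rebuild time it avoids."""
    return restore_s + save_s < rebuild_s

# On runners with slow cache I/O but fast CPUs, overhead can dominate:
print(caching_helps(restore_s=90, save_s=60, rebuild_s=120))  # False: skip the cache
print(caching_helps(restore_s=5, save_s=5, rebuild_s=120))    # True: cache wins
```

Timing both paths on your actual workload is the only way to know which side of the inequality you're on.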
In your case your workflow can run in less than 5 minutes on AWS ephemeral machines, for the same price as ubicloud: https://github.com/runs-on/arroyo/actions/runs/7723361513/jo...
The link to SPDK was very interesting: https://www.ubicloud.com/blog/building-block-storage-for-clo.... I use filesystems for very high performance applications, and I've found ZFS to often be the limiting factor when compared to simpler solutions of XFS +- mdadm +- encryption.
It's a controversial point, but others have made similar findings: https://klarasystems.com/articles/virtualization-showdown-fr... : "Although I suspect this will surprise many readers, it didn’t surprise me personally—I’ve been testing guest storage performance for OpenZFS and Linux KVM for more than a decade, and zvols have performed poorly by comparison each time I’ve tested them"
OpenZFS seems to be starting to consider optimizations to perform better on modern drives (SSD, NVMe), which have very different performance profiles from what ZFS was built for (spinning rust).
In the SPDK summary they say "To make VM provisioning times go faster, we changed our host OS from ext4 to btrfs" (...) "Also, when we switched the host filesystem to btrfs, our disk performance degraded notably. Our disk throughput dropped to about one-third of what it was with ext4."
Ubicloud: the problem seems to be generic to CoW filesystems, and it's interesting that you came up with a slight variation (CoA), but have you considered the even simpler alternative of a journaling filesystem (XFS, ext4...) with overlays?
Or just UFS2 + snapshots to restore from a given state (initialized, ready for each test) then restore to this state between tests?
I think customers finding that disabling cache works better means the CoA has similar issues to CoW.
Personally, I'd have just tried using SR-IOV with a namespace per customer and called it a day, instead of bringing in extra complexity; but there must be good reasons for it. I'd love to know what those reasons are.
I wonder what the consequential "carbon footprint" of this is, at scale, across all companies and all jobs of a similar nature.
It's really kind of impossible to judge the carbon footprint for these kinds of things, and whether it's justified. Consider the carbon footprint of all the dumb AI features rolled out at Facebook, Google, etc. Consider the carbon footprint of everyone trying to use ChatGPT now. Do you know how much power GPUs are guzzling to give each little answer? It's huge compared to a traditional Google search! Is it worth it? ...who can judge, eh.
[1]: https://runs-on.com
I should say we also use BuildJet for Docker builds because of their arm64 support. Now that Ubicloud has arm64, we may switch those over as well.
We have dozens of customers using Ubicloud runners in production today. We're now designing our caching layer (Docker image registry, Docker layer cache, or package cache). We wanted to put this out there for any comments.
Also, if you have any points related to the broader topic of an open and portable cloud, please pass them along!
This would be quite useful for users of other GitHub Actions clones like act [0].
Saved over $25k in CI costs compared to GH Actions. They're also using super powerful bare-metal servers from Hetzner; we got about a 94% reduction in build time!
Absolutely chuffed, good to see more companies on the market though.
Ubicloud runs on bare metal providers, and those providers don't lease Mac hardware. Technically, we could run macOS VMs on arm64. However, our interpretation of Apple's End User License Agreement (EULA) tells us that we can't do this.
This repo has some good references on the topic: https://github.com/kholia/OSX-KVM?tab=readme-ov-file#is-this...
Edit: the TOS for OS X says this:
3. Leasing for Permitted Developer Services. A. Leasing. You may lease or sublease a validly licensed version of the Apple Software in its entirety to an individual or organization (each, a “Lessee”) provided that all of the following conditions are met: (i) the leased Apple Software must be used for the sole purpose of providing Permitted Developer Services and each Lessee must review and agree to be bound by the terms of this License; (ii) each lease period must be for a minimum period of twenty-four (24) consecutive hours;
Of course, hosting myself means I also gotta own the uptime of it..
[1] https://docs.warpbuild.com/runners#macos-m2-pro-on-arm64
The main reason is their platform is hosted on Hetzner dedicated instances.
I assume this is done to cover the time that the VM reboots between jobs?
The VM reboot times add up quickly, and that's likely the rationale. However, it gets hard when users run ~2s linting jobs on 16-vCPU instances, so we kept the 1-minute floor.
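Under a 1-minute floor like the one described above, a ~2-second lint job bills the same as a full minute. A sketch of the assumed billing model (duration rounded up to whole minutes, subject to a per-job minimum):

```python
import math

def billed_minutes(duration_s: float, floor_min: int = 1) -> int:
    """Round a job's duration up to whole minutes, subject to a
    per-job minimum (the 1-minute floor discussed in the thread)."""
    return max(math.ceil(duration_s / 60), floor_min)

print(billed_minutes(2))    # 1: a ~2s lint job still bills a full minute
print(billed_minutes(610))  # 11: 10m10s rounds up to 11 minutes
```

On a 16-vCPU instance, that floor is where the cost of very short jobs comes from.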
EDIT: per responses, it looks like this is outdated information and the project now uses AGPL!
Been using this setup for a year, very happy with it. I don't see the point of GitHub's hosted runners, to be honest.
Edit: Oh, Hetzner. They’re really ubiquitous in cheap computing these days.
I put a calculator for RunsOn at https://runs-on.com/calculator/
"Additionally, regardless of whether an Action is using self-hosted runners, Actions should not be used for: the provision of a stand-alone or integrated application or service offering the Actions product or service, or any elements of the Actions product or service, for commercial purposes"
They absolutely support custom runners; it's how all of these work. They don't need to stop them via ToS, they could just not allow it as an option. `runs-on: ubicloud` only works because GitHub implements it right.
E.g., don't use GitHub as Infrastructure as a Service.
I was reading https://www.ubicloud.com/blog/ubicloud-hosted-arm-runners-10... but it seems a bit misleading to compare an ARM workload emulated under QEMU against native ARM.
What about native x86 vs native x86, or at least native x86 vs native ARM?
But in the meantime I’ve recently released RunsOn [1] with the same promise (10x cheaper, faster) but the whole thing runs in your AWS account.
Does this provide a guarantee that a subsequent job won't be able to recover the data?
Although we have KEK and DEK code for regular VMs, it's not yet operative on GHA runners. The reason is a technical conflict with copy-on-write that we aim to close, not least because Ubicloud needs to grow its own copy-on-write features for block device snapshots, which we lack today.
I expect that within a few months, all expired GHA VMs will be crypto-shredded upon deletion. This is already true for regular virtual machines and managed Postgres machines.
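Crypto-shredding works because destroying the data-encryption key (DEK) makes the ciphertext permanently unreadable, even if the blocks themselves are never overwritten. A toy illustration of the idea; the XOR keystream below is NOT a real cipher, and this is not Ubicloud's actual KEK/DEK scheme:

```python
import hashlib
import os

def xor_stream(key: bytes, data: bytes) -> bytes:
    """Toy keystream cipher, for illustration only (not secure)."""
    stream = b""
    counter = 0
    while len(stream) < len(data):
        # Derive keystream blocks from the key plus a counter.
        stream += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(a ^ b for a, b in zip(data, stream))

dek = os.urandom(32)  # data-encryption key (in practice, wrapped by a KEK)
ciphertext = xor_stream(dek, b"customer build artifacts")

# Decryption works only while some copy of the DEK exists:
assert xor_stream(dek, ciphertext) == b"customer build artifacts"

# "Shredding" = deleting every copy of the DEK. The ciphertext may stay
# on disk, but it is now indistinguishable from random noise.
del dek
```

This is why crypto-shredding gives a stronger deletion guarantee than relying on the filesystem to overwrite freed blocks.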
We completely shut down the VM and remove the block device (all files associated with the block device).
Several builds connect to internal resources, so running them on external nodes is suboptimal and expensive when it comes to network egress.
Sure would be good to see competition for the macOS and Windows runners too; those are the ones that tend to cost us the most.
Does it have on-demand workers like those Kubernetes providers, where you use shared master nodes (free or for a very small fee) and scalable node pools on demand (charged only when used)?