* Unmodified macOS guest VMs can run under ESXi (if you're reading this on macOS, you have an Apple-made VMXNet3 network adapter driver on your system--see /System/Library/Extensions/IONetworkingFamily.kext/Contents/PlugIns/AppleVmxnet3Ethernet.kext )
* Accelerated 3D has reasonable guest support, even as pure software. You wouldn't want to work in one of those guest VMs, but for any sort of build agent it should be fine, including opening e.g. the Unity editor itself in-VM to diagnose something
Does anyone know where either of these things stand with Proxmox today?
I imagine a macOS VM under Proxmox is basically a hackintosh with e.g. OpenCore as the bootloader?
It's the fastest mac I've ever owned and it's virtual, executed on a machine running a chip that apple never supported, and you'd never be able to tell it was a vm unless you were told so. kvm and vfio are amazing pieces of software.
A good place to start: https://github.com/luchina-gabriel/OSX-PROXMOX
Why? IIRC running ZFS on NVMe SSDs seriously limits their performance. With sufficient queue depth modern SSDs can easily reach 1M+ IOPS, while on ZFS I can barely get 100k :(
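For reference, a sketch of how that queue-depth-dependent IOPS figure is typically measured with fio (the device name, job counts, and depths here are examples, not recommendations):

```shell
# Measure random-read IOPS at high queue depth with fio.
# WARNING: point this at a scratch device/file, not one holding data you care about.
# /dev/nvme0n1 and all numbers below are examples.
fio --name=randread \
    --filename=/dev/nvme0n1 \
    --direct=1 --rw=randread --bs=4k \
    --ioengine=io_uring --iodepth=128 --numjobs=4 \
    --runtime=30 --time_based --group_reporting
```

Running the same job against a file on a ZFS dataset (or a zvol) versus the raw device makes the overhead directly comparable.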
Running macOS is only supported/licensing-compliant on Apple-branded hardware anyway, and with the supported Intel Macs getting pretty old this was inevitable.
6 cores 12 threads
64GB DDR
nvme
4 thunderbolt3 ports for pci expansion
10GbE onboard nic
boots ESXi
boots Proxmox
boots or virtualizes windows
boots or virtualizes linux
boots or virtualizes macos
iGPU passthrough
Supports nested virt

MacOS VMs do work in Proxmox with a Hackintosh setup, but you pretty much have to pass through a GPU to the VM if you're using the GUI. Otherwise you're stuck with the Apple VNC remote desktop, and that's unbearably slow.
You can also accelerate video (though it's unclear what "pure software" means here?)
this guy: https://www.nicksherlock.com
has been writing up proxmox vms running macos for a while, the latest being:
https://www.nicksherlock.com/2022/10/installing-macos-13-ven...
yes, it uses opencore.
I haven't really bothered with macos GPU acceleration. It is possible, but a little fiddly with an nvidia card. I mostly rely on screen sharing to display the remote vm, and that's really good (rich text copy/paste, drag and drop files, etc)
As to general guest 3d, you can do this with GPU passthrough, and although it's technical the first time, each vm is then easy.
basically I add this line to each VM for my graphics card:
hostpci0: 01:00,pcie=1,x-vga=1
and this passes through a USB controller for my USB dac: hostpci1: 03:00,pcie=1
(this is specific to my system device tree, it will usually be different for each specific machine)

The passthrough has worked wonderfully for Windows, Ubuntu and Arch Linux guests, giving a fully accelerated desktop and smooth gaming with non-choppy sound.
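For anyone trying this from scratch, here is a rough sketch of the one-time host prep those hostpci lines depend on (the vendor:device IDs and an Intel CPU are assumptions; AMD uses amd_iommu=on):

```shell
# One-time Proxmox host prep for PCI passthrough (sketch; IDs are examples).
# 1. Enable the IOMMU on the kernel command line (Intel shown):
#    in /etc/default/grub set GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on"
#    then: update-grub && reboot

# 2. Load the vfio modules at boot:
cat >> /etc/modules <<'EOF'
vfio
vfio_iommu_type1
vfio_pci
EOF

# 3. Bind the GPU to vfio-pci by vendor:device ID (find yours with `lspci -nn`;
#    the 10de:... IDs below are placeholders for an Nvidia card):
echo "options vfio-pci ids=10de:1b80,10de:10f0" > /etc/modprobe.d/vfio.conf
update-initramfs -u
```

After a reboot the device shows up bound to vfio-pci and the `hostpci0:` line in the VM config takes effect.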
two things I wish proxmox did better:
- passthrough USB into containers
usb is pretty fiddly to propagate into a container. It is marginally better with VMs but sometimes devices still don't show up.
- docker/podman containers
proxmox has lxc containers, but using them is a little like being a sysadmin. Docker/podman containers that you can build from a Dockerfile would be a much better experience.
For graphics, another cool thing is intel iGPU pci passthrough - I have had success with this when running esxi on my 2018 mac mini https://williamlam.com/2020/06/passthrough-of-integrated-gpu...
* If you don't mind dedicating the video card to a VM, you can do PCI-passthrough. https://pve.proxmox.com/wiki/PCI_Passthrough
On Proxmox you can do the same. You're going to need OpenCore if you're not on a Mac indeed. But if you're not on a Mac you're breaking the EULA anyway.
https://github.com/notAperson535/OneClick-macOS-Simple-KVM/
Mostly used it when trying to track down reported macOS bugs for an OSS project, so maybe once every few months. But it's worked quite well at those times. :)
Sure, your organization is spending another million dollars on VMware this year, but what are the options?
* Your outsourced VMware-certified experts don't actually know that much about virtualization (somehow)
* Your backup software provider is just now researching adding Proxmox support (https://www.theregister.com/2024/01/22/veeam_proxmox_oracle_...)
* A few years ago you 'reduced storage cost and complexity' by moving to VMware vSAN, now you have a SAN purchase and data migration on your task list
* The hybrid cloud solution that was implemented isn't compatible with Proxmox
* The ServiceNow integration for VMware works great and is saving you tons of time and money. You want to give that up?
* Can you live without logging, reporting, and dashboards until your team gets some free time?
* I did the VCP4 and 5 courses. It's entirely a sales certification. I mean it's a technical certification, but I've never run into anyone who certified for the purpose of running an organisation's tech. Rather, you certify for the purpose of your company being able to sell the product. Note also much of VMware's training focus lately has been on things outside their main virtualisation, like Horizon or their MDM product.

* Accurate. But I don't think it'll be far off.

* Proxmox does Ceph out of the box. I'll also add that it's very easy to manage, unlike vSAN. I'll further add that none of the VMware training and certifications I've ever done covered vSAN; all the courses assume someone bought a SAN.

* All the "hybrid cloud" pushed at least by Microsoft completely assumes you're on Hyper-V and is irrelevant to VMware.

* I've consulted to an awful lot of VMware organisations and I've never seen ServiceNow integration in place. I'm sure it's relevant for some people.
Now AirWatch is surpassed even by Intune.
* You are big enough to need that and actually implement it
* You have the budget to do so
* You actually have the need to do that in-house
If you are at that scale but you don't have the internal knowledge, you were going to get bitten anyway. If you are not at that scale, you were already bitten and you shouldn't have been doing it anyway.
Definitely, and situations like the Broadcom one IMO just underline that as a company you should never ever get your core infra locked into a proprietary vendor's ecosystem, as that is just waiting to get squeezed, which they can do for the reasons you laid out.
> Your outsourced VMware-certified experts don't actually know that much about virtualization (somehow).
That should be a wake-up call to have some in-house expertise for any core infra you run, at least as a mid-sized, or bigger, company. Most projects targeting the enterprise, like Proxmox VE, provide official trainings for exactly that reason.
https://proxmox.com/en/services/training
> * Your backup software provider is just now researching adding Proxmox support (https://www.theregister.com/2024/01/22/veeam_proxmox_oracle_...)
Yeah, that's understandable: one wants to avoid switching both the hypervisor that hosts the core infrastructure and the backup solution that holds all the data, often even from the whole period a company is legally required to retain it.
But as you saw, even the biggest backup player sees enough reason to hedge their offerings and takes Proxmox VE very seriously as an alternative; the rest is a matter of time.
> A few years ago you 'reduced storage cost and complexity' by moving to VMware vSAN, now you have a SAN purchase and data migration on your task list
No, you should rather evaluate Proxmox's Ceph integration instead of getting yet another overly expensive SAN box. Ceph lets you run a powerful and near-indestructible HCI storage while avoiding any lock-in, as Ceph is FLOSS, many companies provide support for it, and other hypervisors can use it too.
> * The hybrid cloud solution that was implemented isn't compatible with Proxmox.

> * The ServiceNow integration for VMware works great and is saving you tons of time and money. You want to give that up?
That certainly needs more work and is part of the chicken and egg problem that backup support is (or well, was) facing, but also somewhat underlines how lock-in works.
> * Can you live without logging, reporting, and dashboards until your team gets some free time?
Proxmox VE has some integrated logging and metrics, and provides native support for sending to external metrics servers; we use that for all of our infra (which runs on a dozen PVE servers in various datacenters around the world) with great success and not much initial implementation effort.
So yeah, it's the ecosystem, but there are alternatives for most things and just throwing up your hands only signals to those companies that they can squeeze you much tighter.
Proxmox on ZFS means ZFS snapshot send/receive, simple. I made my own immutable ZFS backup system for £5
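A minimal sketch of that kind of snapshot-based backup (the pool/dataset names and `backup-host` are placeholders, not the commenter's actual setup):

```shell
# Snapshot the dataset that backs the VM disks and replicate it (names are examples).
SNAP="rpool/data@backup-$(date +%Y%m%d)"
zfs snapshot -r "$SNAP"

# First run: full replicated send to the backup box.
zfs send -R "$SNAP" | ssh backup-host zfs receive -u tank/proxmox-backup

# Later runs: incremental send from the previous snapshot:
# zfs send -R -i rpool/data@backup-20240101 "$SNAP" | ssh backup-host zfs receive -u tank/proxmox-backup

# "Immutable" can be approximated by placing a hold on the received snapshot,
# which blocks destruction until the hold is released:
ssh backup-host zfs hold keep "tank/proxmox-backup@backup-$(date +%Y%m%d)"
```

Driving this from cron (or a systemd timer) gets you a hands-off replica for essentially nothing.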
> newuidmap: write to uid_map failed: Operation not permitted
I tried googling it, tried some of the solutions, but reached the conclusion that it's happening because the LXC is not privileged.
Proxmox doco for unprivileged LxC is here: https://pve.proxmox.com/wiki/Unprivileged_LXC_containers
In the LXC, I created a new non-root user and then set its uid and gid to a LOWER number than what every tutorial about rootless Podman recommends (devel being my non-root user here):
# cat /etc/subuid /etc/subgid
devel:10000:5000
devel:10000:5000
Then I also had to edit the configuration for the LXC itself on the Proxmox host to allow tun and have it created on container boot:

# cat /etc/pve/lxc/{lxc_vmid}.conf
...truncated...
lxc.cgroup2.devices.allow: c 10:200 rwm
lxc.mount.entry: /dev/net dev/net none bind,create=dir
Note: I have no idea why a lower number of ids works...

The advantage of using LXC for me is resource consumption and separation of concerns. I have about 35 Docker containers spread over 10 LXCs. The average CPU use is 1-3% and I only need about 10GB of memory (even with running bigger containers like Nextcloud, Gitlab, mailcow-dockerized etc.). With docker-compose.yml's, automatic updates are easy and robust.
[1]: https://du.nkel.dev/blog/2021-03-25_proxmox_docker/
[2]: https://du.nkel.dev/blog/2023-12-12_mastodon-docker-rootless...
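Putting the pieces from that comment together, the in-container steps look roughly like this (the `devel` user and subid ranges come from the config above; the test container is an example):

```shell
# Inside the unprivileged LXC, as root: create the user and its subid ranges.
useradd -m devel
echo "devel:10000:5000" >> /etc/subuid
echo "devel:10000:5000" >> /etc/subgid

# As devel: make podman pick up the new ranges, then test a rootless container.
su - devel -c 'podman system migrate'
su - devel -c 'podman run --rm docker.io/library/alpine echo rootless OK'
```

If the uid_map error persists, checking that /usr/bin/newuidmap has the setuid bit (or the right file capabilities) inside the container is a common next step.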
ServeTheHome talked about this a while ago. https://youtu.be/peH4ic7g5yc
2024/02/26 Can confirm a current Broadcom VMware customer went from $8M renewal to $100M https://news.ycombinator.com/item?id=39509506
2024/02/13 VMware vSphere ESXi free edition is dead https://news.ycombinator.com/item?id=39359534
2024/01/18 VMware End of Availability of perpetual licensing and associated products https://news.ycombinator.com/item?id=39048120
2024/01/15 Order/license chaos for VMware products after Broadcom takeover https://news.ycombinator.com/item?id=38998615
2023/12/12 VMware transition to subscription, end of sale of perpetual license https://news.ycombinator.com/item?id=38615315
Being pretty bad doesn't mean they don't work of course, but when the best a product has to offer is clickops, they have missed the boat about 15 years ago.
Not sure where ESXi is at lately on that level, but the latest Proxmox is really, really simple to start with if you've never used a hypervisor. You boot from the USB drive, press yes a few times, open the ip:port they give you, and then you can click "create VM", next next next, here is the ISO to boot from, and that's it.

Any tech user who has some vague knowledge of virtual machines, or has even just run VirtualBox on their computer, could do it, and the more advanced functions (from proper backups and snapshots to multi-node replication and load balancing) are absurdly simple to figure out in the UI.

I can't speak to the performance or quality of one against the other, but in pure approachability Proxmox is doing very, very well.
https://i0.wp.com/williamlam.com/wp-content/uploads/2023/04/...
https://www.reddit.com/media?url=https%3A%2F%2Fpreview.redd....
I think it comes pretty close - close enough for probably most but the very largest of users, who, I think, should probably have tried to become hyperscalers themselves, instead of betting the farm and all the land around it on VMware (by Broadcom).
Some things (that were created before NSX) may have come from internet exchanges and hyperscalers, like OpenFlow, P4, and FRR, but they were not really missing pieces required to do software-defined networking. If anything, the only thing you really needed for SDN was Linux, and the only real distinction between SDN and non-SDN was hardwired ASICs in the network fabric (well, not hard-hardwired, but with limited programmability or 'secret' APIs).
There won’t be another year.
For most vanilla hosting, you could get away with Proxmox and be just fine. I've been running it for at least 5 years in my basement and haven't had a single hiccup. I bet a lot of VMware customers will be jumping ship when their licenses expire.
At the end of the day an OVA is just machine metadata packaged as XML with all required VM artifacts; there are also some cool things like support for launch variables. Leveraging the format would bring a bunch of momentum, considering all the existing OVAs in the wild.
Seems a bit barebones, as in no support for a nice OVF properties launch UI, but one should be able to extract an OVA to an OVF and VMDK and manually edit the OVF with appropriate properties.
I actually had plans this week to try exactly that...
Another handy feature is the ContentLibrary for organizing and launching OVA/OVF, as well as launching OVA directly from a URL without needing to download it next to the cli.
This makes me think there could be an opportunity in "PhotoPea (kvm gui) for vCenter" - in the same manner photopea is a clean room implementation of the photoshop UI/UX
I learned of it via:
https://news.ycombinator.com/item?id=39300317
Also interesting: vGPU support (Intel SR-IOV)
Q: Will other import sources be supported in the future?
A: We plan to integrate our OVF/OVA import tools into this new stack in the future. Currently, integrating additional import sources is not on our roadmap, but will be re-evaluated periodically.
Broadcom knows this very well and likely turned the price screw exactly right - just before the breaking point for the critical mass of their customers.
What I think will lead to the eventual implosion of VMware's market share, on a longer timescale, is the removal of free ESXi. Many people acquire familiarity with small-scale/home/demo labs or PoC prototypes, then they recommend going with what they're familiar with. This is how Microsoft got where they are now: by always giving big discounts to students and never going too hard on those running cracked copies. They saw it as an investment and they were bloody right. If the product had been better it would completely dominate now, but even as shoddy as it is, it's a huge cash cow.
Of course, destroying the trust they had with their customers means the long-term prospects of VMware are not so good.
SDN is one thing but the amount of effort put in vROPS / vRA / vRO etc is not easily replaced. Workflows integrating with backups, CMDB, IAM, Security and what not have no catch all migration path with some import wizards.
Meanwhile, Broadcom will happily litigate where necessary and invoice their way to a higher stock price.
$0.02
It’s about answering the question: Why is the current price of puts wrong?
If it doesn't, any idea if it's something they could automate easily?
virt-customize -a disk.img --inject-virtio-win <METHOD>
https://libguestfs.org/virt-customize.1.html

However they'll also be missing out on all the other stuff that virt-v2v does.
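For comparison, a full conversion with virt-v2v looks roughly like this (the vCenter URL, datacenter path, password file, and VM name are all placeholders):

```shell
# Convert a VMware guest with virt-v2v: it injects virtio drivers, fixes the
# bootloader, and rewrites the disk image in one pass.
# Everything identifying the source below is a placeholder.
virt-v2v \
  -ic 'vpx://administrator@vcenter.example.com/Datacenter/esxi-host?no_verify=1' \
  -ip /root/vcenter-password.txt \
  my-windows-vm \
  -o local -os /var/tmp -of qcow2
```

The resulting qcow2 can then be imported into Proxmox, whereas virt-customize alone only patches drivers into an already-extracted disk image.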
I understand that LVM holds data, but when I make a Windows VM in Proxmox it stores the data in an LVM volume, as opposed to ESXi or Hyper-V making a VHD or VMDK.
Kinda confusing.
So a direct-attached LVM volume is the best solution performance-wise. In the VMware world this would be a direct-attached raw device, either from a local disk or a SAN.
For a fresh install on Proxmox it's better to choose qcow2 as the disk image format with the virtio-scsi bus (comparable to VHDX/VMDK; it's QEMU's disk format) and add the virtio drivers during Windows setup.
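A sketch of creating such a Windows VM from the CLI (the VMID, storage name, and ISO paths are examples; qcow2 needs a file-based storage like a directory or NFS, not LVM):

```shell
# Create a Windows VM with a virtio-scsi controller and a 64G qcow2 disk
# on the "local" directory storage (IDs, names, and paths are examples).
qm create 200 --name win10 --memory 8192 --cores 4 \
  --ostype win10 \
  --scsihw virtio-scsi-single \
  --scsi0 local:64,format=qcow2 \
  --net0 virtio,bridge=vmbr0 \
  --cdrom local:iso/Win10.iso

# Attach the virtio driver ISO so Windows setup can load the storage driver:
qm set 200 --ide3 local:iso/virtio-win.iso,media=cdrom
qm start 200
```

During setup, Windows won't see the disk until you point "Load driver" at the virtio ISO.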
off the top of my head:
- keep in mind there is LVM and LVM2, and proxmox now uses lvm2
- I don't understand the thinpool allocation. You don't have to use lvm-thin if you don't want to deal with oversubscribed volumes, or don't care about snapshots or cloning storage.
- get to know "pvesm"; a lot of the things you can do in the GUI can also be done with it
- when making linux VMs, I found it easier to use separate devices for the efi partition and the linux partition, such as:
efidisk0: local-lvm:vm-205-disk-0,size=4M
virtio0: local-lvm:vm-205-disk-1,iothread=1,size=1G
virtio1: local-lvm:vm-205-disk-2,cache=writeback,discard=on,iothread=1,size=32G
(virtio0 = efi, virtio1 = /)

and I can mount/expand/resize /dev/mapper/big2-vm--205--disk--2 without having to deal with disk partitions
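A sketch of what growing such a whole-device disk looks like in practice (the VMID, disk name, size, and an ext4 filesystem are assumptions following the config above):

```shell
# On the Proxmox host: grow the VM's second virtio disk by 16G (IDs are examples).
qm resize 205 virtio1 +16G

# Inside the guest: because the filesystem sits directly on the device
# (no partition table), a plain online resize is enough, e.g. for ext4:
resize2fs /dev/vdb
```

This is exactly the benefit the comment describes: no gdisk/parted step in between, just grow and resize.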
It also has some decent clustering capabilities enabling online VM migration between hosts (equivalent to VMotion), which can go a long way towards solving related use cases. :)
https://shop.proxmox.com/index.php?rp=/store/proxmox-ve-comm...
You don't actually need a subscription to run Proxmox, it's FOSS software after all.
It has been, to say the least, an adventure. And I have nothing but good things to say about Proxmox at this point. It's running not only my home-related items (MQTT, Home Assistant), it also plays host to some of the projects I'm working on (Postgres, Go apps, etc...) rather than running some sort of local dev.
If you need to exit vmware, proxmox seems like a good way to go.
Once I found the surface, I have really grown to like it, expanding my footprint to use their backup server, too. Proxmox makes you work for it, but is worth it.
I believe the rationale for this is to prevent issues when migrating to different hosts that may not have the same CPU or CPU features. Definitely a more "conservative" choice - maybe it should be a node-wide option or only default to a generic CPU type when there is more than 1 node.
Case in point: just this weekend a drive started to die on one of my hosts (I still use HDDs on older machines). I backed up the VMs on it to my NAS (you can do that by just having a Samba storage entry defined across the entire cluster), replaced the disk, restored, done.
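That backup/restore cycle has a straightforward CLI equivalent as well (the VMID, the "nas" storage name, and the archive path are examples; "nas" is assumed to be a CIFS/Samba storage entry defined on the cluster):

```shell
# Back up the VM to the NAS-backed storage entry while it keeps running.
vzdump 101 --storage nas --mode snapshot --compress zstd

# After replacing the failed disk: restore the archive onto local storage.
qmrestore /mnt/pve/nas/dump/vzdump-qemu-101-*.vma.zst 101 --storage local-lvm
```

The same flow is available in the GUI under Datacenter -> Backup, which is all the comment's "backed up, replaced, restored, done" amounts to.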
For the GPU box I completely abandoned the install after attempting to do the gymnastics around GPU passthrough. I like Proxmox but I'm not a masochist. Looking forward to the day when that just works.
Because yes, in terms of core functionality it should be in the same ballpark. And in terms of UI, Virtual Machine Manager [0] was not that bad.
And Proxmox is just a skin on LXC and QEMU/KVM.
I will say that as I have just started playing with the lxc api, having the Proxmox UI as a quick and easy visual cross check has been lovely.
Podman is an amazing alternative to Docker; can't say enough good things about it.