If your needs are simple or you're less experienced with VMs, GNOME Boxes uses the same backend and has a beautiful, streamlined GUI. With that simplicity comes less flexibility, of course, but the cool thing is that you can actually open GNOME Boxes VMs with virt-manager should you later need to tweak a setting that isn't exposed through Boxes.
Run it on a remote system via ssh, and it will "X-forward" the QEMU console to my local Wayland session in Fedora.
The first time I ran it, I thought I was running in headless mode, so when it popped up a window I was quite surprised. :)
Imagine what life would be like if configuration was separated from the software it configures. You could choose your favorite configuration manager, and use that, rather than learn how each and every program with a UI reinvented the wheel.
The closest thing we have are text configuration files. Every program that uses them has to choose a specific language, and a specific place to save its configs.
An idea I've been playing with a lot lately is a configuration intermediary. Use whatever language/format you want for the user-facing config UI, and use that data as a single source of truth to generate the software-facing config files.
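A minimal sketch of that idea, with entirely made-up file names and keys: one user-edited source of truth is rendered into two different program-facing config formats.

```shell
#!/bin/sh
# Sketch of a configuration intermediary. settings.env, editor.ini and
# editor.conf are hypothetical; the point is one source, many outputs.
set -eu

# The single user-facing source of truth:
cat > settings.env <<'EOF'
EDITOR_FONT=monospace
EDITOR_FONT_SIZE=12
EOF

. ./settings.env

# Rendered into an INI-style file for one program...
cat > editor.ini <<EOF
[appearance]
font = $EDITOR_FONT
size = $EDITOR_FONT_SIZE
EOF

# ...and into a completely different format for another.
cat > editor.conf <<EOF
set-font $EDITOR_FONT $EDITOR_FONT_SIZE
EOF
```

Changing one value in settings.env and re-running the generator keeps every program-facing file in sync, which is the whole appeal.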
You would do well to learn from past and current attempts. This book should be enlightening (and yes, Elektra is very much alive): https://www.libelektra.org/ftp/elektra/publications/raab2017...
It would also be a useful exercise to write a new configuration UI for existing configuration backend(s) (preferably something already in use by software you wish had better configuration). Even if you do end up aiming at your own standard (xkcd.com/927), it should give you some clarity on ways to approach the problem.
Yes, they have some additional useful administration features, like start/stop based on a config file and serial console access, but these are really simple to implement in your own shell scripts. Storage handling in libvirt is horrible: verbose and complex, yet it can't even work with thin LVs or ZFS properly.
Unless you just want to run stuff the standard corporate way and don't care about learning fundamental software like qemu and the shell, or you require some obscure feature of libvirt, I recommend using qemu on KVM directly, with your own scripts. You'll learn more about qemu and less about underwhelming Python wrappers, and you'll have more control over your systems.
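A hedged sketch of what such a hand-rolled launcher might look like (the VM name, disk, memory size, and flag selection are illustrative assumptions, not a recommendation):

```shell
#!/bin/sh
# Tiny "roll your own" qemu launcher sketch. It prints the assembled
# command line; swap the final 'echo'/'tee' for 'exec $CMD' to launch.
set -eu

VM=testvm
DISK="$VM.qcow2"

# qemu-img ships with qemu; create a 20G sparse disk once, e.g.:
#   qemu-img create -f qcow2 "$DISK" 20G

CMD="qemu-system-x86_64 -enable-kvm -m 2048 -cpu host \
-drive file=$DISK,if=virtio -display none -serial mon:stdio"

echo "$CMD" | tee launch.cmd
```

A handful of scripts like this, one per VM, replaces a lot of what libvirt's XML machinery does, and every flag stays visible and greppable.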
Also, IBM/Red Hat seems to have deprecated virt-manager in favour (of course) of a new web interface (Cockpit).
Quickemu seems more interesting, as it allows launching a new VM right after a quick look at the examples, without wasting time learning a big, complicated UI.
Why would anyone want a qt frontend when you can call a cli wrapper, or better yet the core binary directly?
I thought about running it over the network using XQuartz, but I'm not sure how maintained / well supported that is anymore.
Select Hypervisor "Custom URL", and enter: qemu+ssh://root@<host>/system
And Bob's your uncle.
It works great for me! This means it likely won't work for you until you've paid the proper penance to the computer god.
ssh -L 5901:localhost:5901 username@hypervisor
On the hypervisor, start QEMU with -vnc :1
Then open a local VNC client like RealVNC and connect to localhost:1
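The ":1" in both commands is a VNC display number, not a TCP port; servers and clients map display N to port 5900 + N, which is why the ssh tunnel forwards 5901.

```shell
# VNC display-to-port mapping: display N listens on TCP port 5900 + N.
# So "-vnc :1" on the hypervisor means port 5901, the one tunneled above.
display=1
port=$((5900 + display))
echo "$port"   # prints 5901
```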
Being able to connect to my TrueNAS Scale server and run VMs across the network is the icing on the cake.
Do people normally move from virt-manager to proxmox or the opposite?
This tool downloads random files from the internet and checks their checksums against other random files from the internet. [2]
This is not the best security practice. (The right practice would be to commit the GPG keys of the distro developers into the repository and check all files against those keys.)
This is not to downplay the effort that went into this project to find the correct flags to pass to QEMU to boot all of these.
[1] https://news.ycombinator.com/item?id=28797129
[2] https://github.com/quickemu-project/quickemu/blob/0c8e1a5205...
I just looked at the shell script and it's not "random" at all, it's getting both the checksum and the ISO from the official source over TLS.
The only way this technique is going to fail is if the distro site is compromised, their DNS lapses, or if there's a MITM attack combined with an incorrectly issued certificate. GPG would be more robust but it's hardly like what this tool is doing is some unforgivable failure either.
It's not that the OP is wrong but I think they give a really dire view of what's happening here.
[0] https://www.zdnet.com/article/hacker-hundreds-were-tricked-i...
It's a subtle difference, but the trust-chain could indeed be (mildly) improved by re-distributing the upstream gpg keys.
It might go above and beyond what most people are doing, but not what most tools are doing. Old-school package managers are still a step ahead in this area, because they use GPG to check the authenticity of the data files independently of the transport channel. A website serving files and checksums is one such channel. Back then the channel was a network of FTP mirrors; today it might be BitTorrent, IPFS, or a CDN. And GPG supports revoking trust, whereas checksums hardcoded into a tool cannot be revoked.
As soon as we start to codify practices into tools, they become easier to game and attack. Therefore tools should be held to higher security standards than humans.
People who are responding to you with "you are absolutely right" might not represent the silent majority (within our field, not talking about normal users).
In the days of FTP, checksums and GPG were both vital. With HTTP over TCP, you need GPG more than checksums, since TCP handles retries and data integrity, but you still need both because of MitM attacks.
But with https, how does it still matter? It's doing both verifications and signature checks for you.
GPG signing covers this threat model and much more; the threats include:
* The server runs vulnerable software and is compromised by script kiddies, who then upload arbitrary packages to the server
* The cloud provider is compromised and attackers take over the server from the cloud provider's admin account
* Attackers use a vulnerability (in SSH, the HTTP daemon, ...) to upload arbitrary software packages to the server
GPG doesn't protect against the developer machine getting compromised, but it guarantees that what you're downloading has been issued from the developer's machine.
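A sketch of that guarantee, with a throwaway key standing in for the developer's (assumes GnuPG 2.2+; the key name and file names are made up):

```shell
#!/bin/sh
# Ephemeral keyring so nothing touches the user's real GPG setup.
set -eu
export GNUPGHOME="$(mktemp -d)"
chmod 700 "$GNUPGHOME"

# Generate a passphrase-less demo key (stands in for the developer's key).
gpg --batch --pinentry-mode loopback --passphrase '' \
    --quick-generate-key 'Demo Developer <demo@example.invalid>' \
    default default never

# "Release" a payload and sign it on the developer's machine.
echo 'release payload' > pkg.tar
gpg --batch --pinentry-mode loopback --passphrase '' \
    --detach-sign --output pkg.tar.sig pkg.tar

# Verification succeeds only if pkg.tar is byte-identical to what was
# signed, no matter which mirror or channel delivered it.
gpg --verify pkg.tar.sig pkg.tar && echo VERIFIED
```

The verification step works offline against a previously imported public key, which is exactly the "independent of the transport channel" property package managers rely on.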
- Signatures are checked for macOS now
- No signatures are available for Windows
Maybe this year attention from Hacker News will encourage someone to step up and implement signature checking for Linux!
Here's a recent example with Alma Linux:
$ virt-install --name alma9 --memory 1536 --vcpus 1 --disk path=$PWD/alma9.img,size=20 --cdrom alma9.iso --unattended
Then you go for a coffee, come back and have a fully installed and working Alma Linux VM. To get the list of supported operating systems (which varies with your version of libvirt), use:

$ osinfo-query os

$ virt-builder fedora-39

if you wanted a Fedora 39 disk image. (It can later be imported into libvirt using virt-install --import.)

$ virt-install --name alma9 --memory 1536 --vcpus 1 --disk path=$PWD/alma9.img,size=20 --cdrom alma9.iso --unattended
ERROR Validating install media 'alma9.iso' failed: Must specify storage creation parameters for non-existent path '/home/foo/alma9.iso'.

I'd want to vet such a thing before I run it, but I also really don't want to read 5000 lines of bash.
Not sure if this will sway you, but for what it's worth, I did read the bash script before running it, and it's actually very well-structured. Functionality is nicely broken into functions, variables are sensibly named, there are some helpful comments, there is no crazy control flow or indirection, and there is minimal use of esoteric commands. Overall this repo contains some of the most readable shell scripts I've seen.
Reflecting on what these scripts actually do, it makes sense that the code is fairly straightforward. At its core it really just wants to run one command: the one to start QEMU. All of the other code is checking out the local system for whether to set certain arguments to that one command, and maybe downloading some files if necessary.
For example, `--delete-vm` is effectively `rm -rf $(dirname ${disk_img})`, but the function takes no arguments. It's getting the folder name from the global variable `$VMDIR`, which is set by the handling of the `--vm` option (another global variable named $VM) to `$(dirname ${disk_img})`, which in turn relies on sourcing a script named `$VM`.
First, when it works, it will `rm -rf` the parent path of whatever the VM's disk_img variable is set to, irrespective of whether that path exists or is valid, since `dirname` doesn't check: it just snips the end of the string. Pass an arbitrary string, and you'll `rm -rf` your current working directory, as `dirname` just returns ".".
Second, it does not handle relative paths. If you pass `--vm somedir/name` with `disk_img` set to just the relative file name, it will not resolve `$VMDIR` relative to "somedir": `dirname` will return ".", resulting in your current working directory being wiped rather than the VM directory.
Third, you're relying on the flow of global variables across several code paths in a huge bash script, not to mention global variables from a sourced bash script that could accidentally mess up quickemu's state, to protect you against even more broken rm -rf behavior. This is fragile and easily messed up by future changes.
The core functionality of just piecing together a qemu instantiation is an entirely fine and safe use of bash, and the script is well-organized for a bash script... But all the extra functionality makes this convoluted, fragile, and one bug away from rm -rf'ing your home folder.
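The `dirname` behavior behind the first two failure modes is easy to confirm: it does pure string surgery and never consults the filesystem.

```shell
# dirname only manipulates the string; it never checks whether
# the path exists or makes sense.
dirname somedir/disk.qcow2   # -> somedir
dirname disk.qcow2           # -> .   (the current directory!)
dirname /nonexistent/path    # -> /nonexistent, even though it doesn't exist
```

So any code doing `rm -rf "$(dirname "$disk_img")"` is one bare filename away from deleting whatever directory the user happens to be standing in.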
There's a big difference between "large, structured projects developed by thousands of companies with a clear goal" vs. "humongous shell script by small group that downloads and runs random things from the internet without proper validation".
And my own personal opinion: The venn diagram of "Projects that have trustworthy design and security practices", and "projects that are based on multi-thousand line bash scripts" is two circles, each on their own distinct piece of paper.
(Not trying to be mean to the developers - we all had to build our toolkits from somewhere.)
After skimming through the source code though, I'd say the concerns are probably overstated.
Over on Windows, there's been a constant presence of tweak utilities for decades, attracting people trying to get everything out of their system on the assumption that 'big corp' developers lack the motivation to do so and leave easy, universally useful wins on the table behind quick config or registry tweaks. One that comes to mind, which I see occasionally, is TronScript. If I had to bet, I'd say it passes the 'sniff test' given its history and community participation, but it presents itself as automation, abstracting away the details and hoping it makes good decisions on your behalf. While you could dig into it and research what is happening and why, for many users it might as well be a binary.
I think the only saving grace here is that most of these tools have a limited audience, so they're not worth compromising. When one brand does become popular enough, you may get situations like Piriform's CCleaner, which was backdoored in 2017.
> DO NOT DOWNLOAD TRON FROM GITHUB, IT WILL NOT WORK!! YOU NEED THE ENTIRE PACKAGE FROM r/TronScript
I see later that it mentions you can check some signed checksums, but that doesn't inspire confidence. It very much epitomises the state of Windows tweak utilities versus what you see on other platforms.
Also, give it 128 MB of RAM as a minimum.
```
incus launch images:ubuntu/22.04 --vm my-ubuntu-vm
```
After launching, access a shell with:
```
incus exec my-ubuntu-vm /bin/bash
```
Incus/LXD also works with system containers.
systemd-nspawn is very simple and lightweight and emulates a real Linux machine very well. You can take an arbitrary root partition based on systemd and boot it using systemd-nspawn and it will just work.
Incus/LXD runs containers as normal users (by default) and also confines the whole namespace in apparmor to further isolate containerized processes from the host. Apparmor confinement is also used for VMs (the qemu process cannot access anything that is not defined in the whitelist)
This might be less useful for those who are quite familiar with QEMU, but it’s great for someone like me who isn’t. So this saves me a whole lot more than 2 minutes. And that’s generally what I want from a wrapper: improved UX.
I think that is useful practically, as a learning tool, and as a repository of recommended settings.
As others have said, it's to get past the awful QEMU configuration step. It makes spinning up a VM as easy as VirtualBox (and friends).
But something like El Capitan will be somewhat acceptable, and Lion will be actually usable.
I would really love to have someone prove me wrong on this thread, but I've never found a solution other than building on actual Mac hardware, which is such a pain to maintain.
I have multiple old macOS machines that I keep in a stable state just so I can be sure I'll be able to build our app. I'm terrified of a failure or just clicking the wrong update button.
``` Only the latest major release of macOS (currently macOS Sonoma) receives patches for all known security vulnerabilities.
The previous two releases receive some security updates, but not for all vulnerabilities known to Apple.
In 2021, Apple fixed a critical privilege escalation vulnerability in macOS Big Sur, but a fix remained unavailable for the previous release, macOS Catalina, for 234 days, until Apple was informed that the vulnerability was being used to infect the computers of people who visited Hong Kong pro-democracy websites. ```
[0] https://www.codeweavers.com/crossover
[1] https://www.applegamingwiki.com/wiki/Game_Porting_Toolkit