* My bluetooth headset can now use the HFP profile with the mSBC codec (16 kHz sample rate) instead of the terrible CVSD codec (8 kHz sample rate) with the basic HSP profile.
* Higher quality A2DP stereo codecs, like LDAC, also work.
* AVDTP 1.3 delay reporting works (!!) to delay the video for perfect A/V sync.
* DMA-BUF based screen recording works with OBS + obs-xdg-portal + pipewire (for 60fps game capture).
For my use cases, the only things still missing are automatic switching between A2DP and HFP bluetooth profiles and support for AVRCP absolute volume (so that the OS changes the headset's hardware volume instead of having a separate software volume).
On the video front I use obs-xdg-portal for Wayland screen capture as well - finally there's a good story for doing this! You even get a nifty permission dialogue in GNOME. You have to launch OBS in forced Wayland mode with 'QT_QPA_PLATFORM=wayland obs'.
As for xdg-desktop-portal screensharing: while I'm glad to see at least some standard way of capturing the screen on Wayland, and permissions are cool in theory, it's still a bad situation. Because each window capture needs explicit permission, dynamically capturing windows is basically impossible, and proper movable region capture is tedious and confusing at best. (Also, D-Bus just feels... gross... but that's obviously very subjective.)
I can't tell, but it sounds like the two things it might do for me are:
1. Allow my software stack to remember and restore levels on a hardware device, so maybe my big on-ear headset is super loud compared to the cheap earbuds I use, lets have the software notice they're different and put the hardware output levels back where they were on each device when it sees them. This avoids the device (which presumably is battery powered) needing to correctly remember how it was set up. I would like this.
2. Try to avoid noise from analogue amplifier stages in the headset by only using as much amplification as is strictly needed for current volume settings, which in turn involves guessing how linear (or not) that amplifier's performance is, or makes my level controls needlessly coarse. I don't want this, I'll put up with it but mostly by finding settings where it's not too annoying and swearing every time it trips up.
There's some work on getting AVRCP absolute volume implemented here: https://gitlab.freedesktop.org/pipewire/pipewire/-/issues/46...
I believe your first point should automatically work once this is implemented. Pipewire seems to already support remembering the volume per bluetooth device (though it's just the software volume that's being remembered right now).
What configuration did you have to do to get this to work? I'm also using pipewire built from the latest master (PipeWire 0.3.22).
bluez5.msbc-support = true
bluez5.sbc-xq-support = true
You'll also need to make sure you're on a kernel supporting the WBS fallback patch[1], which should be the case if you have the latest 5.10 kernel or the 5.12 development kernel. You can check if it's working by running `info all` in pw-cli; it'll mention whether the bluez codec is "mSBC".
[1] https://patchwork.kernel.org/project/bluetooth/patch/2020121...
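For reference, with the media-session manager those options go in the bluez-monitor config. The path below is an assumption based on the stock PipeWire 0.3.x layout on my system (copy the file from /etc/pipewire/media-session.d/ if it doesn't exist yet):

```
# ~/.config/pipewire/media-session.d/bluez-monitor.conf
properties = {
    bluez5.msbc-support = true
    bluez5.sbc-xq-support = true
}
```

Restart pipewire and pipewire-media-session afterwards so the monitor picks up the new properties.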
For example, this is the PR that enabled mSBC support for the HFP profile: https://gitlab.freedesktop.org/pipewire/pipewire/-/merge_req...
If the "raw" in that sentence is not meant as a special qualifier, and this is meant as a statement about ALSA in general, it is wrong. I recently read up on this to confirm my memory when I saw a similar statement. In fact, ALSA had only a very short period during which typically just one application could access the sound card; after that, dmix was enabled by default. Supporting multiple applications was actually the big advantage of ALSA over OSS, which at the time really did support only one application per sound card (unless the card had hardware mixing, which was disappearing around then). I'm not sure why this is so often misremembered.
> Speaking of transitions, Fedora 8's own switch to PulseAudio in late 2007 was not a smooth one. Longtime Linux users still remember having the daemon branded as the software that will break your audio.
This wasn't just a Fedora problem. Ubuntu also broke audio on countless systems when it made the switch. I was active as a supporter in an Ubuntu support forum at that time, and we got flooded with help requests. My own system did not work with PulseAudio when I tried to switch, and that was years later; I still use only ALSA because of that experience. At that time PulseAudio was garbage and should never have been shipped. It only became acceptable later - and it still has bugs and issues.
That said, PipeWire has a better vibe than Pulseaudio did. It intends to replace a system that never worked flawlessly, seems to focus on compatibility, and the apparent endorsement from the JACK-developers also does not hurt. User reports I have seen so far have been positive, though I'm not deep into support forums anymore. Maybe this can at least replace Pulseaudio, that would be a win. I'm cautiously optimistic about this one.
1. The ALSA kernel interface
2. The interface provided by libasound2
The former is basically the device files living in /dev/snd. This interface is hardware dependent, and whether you can send multiple streams to the sound card depends entirely on the underlying hardware and driver support.
The latter is a shared library that, when linked into your application, exposes "virtual" devices (such as `default` or `plughw:0`), which are defined through plugins. The configuration of these virtual devices lives in `/etc/asound.conf` and `~/.asoundrc`. This is typically where dmix is defined and used, which means that if you have any application that does not use libasound2, or uses a different libasound2 version, you are in trouble.
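As a concrete illustration, a minimal `~/.asoundrc` that routes the `default` device through dmix looks roughly like this (a textbook dmix setup, not taken from any comment here; the card number, ipc_key and rate are assumptions):

```
pcm.!default {
    type plug
    slave.pcm "dmixed"
}

pcm.dmixed {
    type dmix
    ipc_key 1024        # any unique integer, shared by all clients
    slave {
        pcm "hw:0,0"    # first card, first device
        rate 48000
    }
}
```

Any program linked against the same libasound2 that opens `default` then gets mixed in software; a program bypassing libasound2 and opening /dev/snd directly does not.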
p.s. Pulseaudio implements alsa API compatibility by exporting an alsa device plugin to reroute audio from all applications making use of libasound2 (except itself).
For some definition of 'short period'. Software mixing via dmix worked for me, but for years at the time I heard that dmix was broken for many other people. Not sure whether things are better nowadays.
The breakage seems to be caused by hardware bugs. Various authors had the stance that they refuse, on principle, to work around hardware bugs. I guess I understand the technical purism, but as a user that attitude was unhelpful: there was no way to get sound working other than to ditch your laptop, and hope that the next one doesn't have hardware bugs. In practice, it seems a large number of hardware had bugs. Are things better nowadays?
According to https://alsa.opensrc.org/Dmix, it has been enabled by default since 1.0.9rc2. https://www.alsa-project.org/wiki/Main_Page_News shows that was 2005; the ALSA 1.0.1 release was 2004. So it's only a short period when counting from then on - the project started in 1998. And https://www.linuxjournal.com/article/6735, for example, still called dmix new in 2004, so I don't think it was much of a default choice before then.
In the interest of balance, PulseAudio was a huge improvement for me.
I remember playing SuperTux on my laptop. After the switch to PulseAudio, the sound was flawless. Before that, on ALSA, the audio was dominated by continuous 'popping' noises—as if buffers were underrunning.
> the apparent endorsement from the JACK-developers also does not hurt.
Indeed, it seems a better UX to only require one sound daemon, instead of having to switch for pro work.
I guess I could be remembering that wrong, but I know I was listening to multiple audio streams long before PulseAudio came onto the scene.
Only using alsa fixed this, mumble I think then became a good alternative for a short while.
It didn't work reliably on all chipsets/soundcards.
I kinda assumed people mix up Alsa and OSS or don't remember anymore what actually did and what did not work before Pulseaudio was introduced.
I remember the transition to PulseAudio. Initially, most things were broken, and we still had some applications that worked only with OSS, so the whole audio stack in Linux was a mess. I remember that I had already switched from Fedora (very broken) and Ubuntu (slightly less broken) to Arch Linux, and for some time I kept using ALSA too.
Eventually, somewhere between switching desktop environments (I think GNOME already used PulseAudio by default then, while in KDE it was optional but, if I recall correctly, recommended), I decided to try PulseAudio and was surprised how much better the situation was afterwards. (Also, OSS eventually died completely on Linux systems, so I stopped using OSS emulation.)
Over time it got better and better until PulseAudio just worked. And getting audio output nowadays is much more complex (Bluetooth, HDMI audio, network streaming, etc.). So yeah, while I understand why PipeWire exists (and I am planning a migration after the next NixOS release, which will bring multiple PipeWire changes), I am still glad that PulseAudio was created.
Jokes aside, my first reaction upon hearing about PipeWire was "oh no, not yet another Linux audio API", but maybe a miracle will happen and it'll be the Chosen One.
I know that audio is hard but man the situation on Linux is such a terrible mess, not in small part because everybody reinvents the wheel instead of fixing the existing solutions. Jack is definitely the sanest of them all in my experience (haven't played with pipewire) but it's also not the most widely supported so I often run into frustrating issues with the compatibility layers.
I'm using all of these reinventions. Wayland, systemd, flatpak, btrfs and soon pipewire. I'm absolutely loving linux right now. Everything works so nice in a way it will never on a distro with legacy tools. Some of these projects like flatpak have a few rough edges but the future is very bright for them and most problems seem very short term rather than architectural.
Pipewire lets me use all the JACK tooling, but without needing a special compat layer to manage it. so for now, I'm pretty excited
Still havent figured out how to get anything working for video.
Did you just set services.pipewire.pulse.enable=true?
https://search.nixos.org/options?channel=unstable&show=servi...
My major concern is that I use PulseEffects as a key component of my setup so I'll need to check if that works well with PipeWire. But the only way to be sure is to try it!
services.pipewire = {
  enable = true;
  alsa.enable = true;
  alsa.support32Bit = true;
  jack.enable = true;
  pulse.enable = true;
  socketActivation = true;
};
That allows me to run pretty much any application that uses ALSA, JACK, or PulseAudio.
How low is "extremely low", especially compared to JACK, which I'm currently using when doing music production?
Yeah making PulseAudio play nice with JACK seems to be tricky. Over time I configured it in four different environments (different Linux Distributions and/or Versions) and for each of them I had to do things (at least slightly) differently to get them to work.
I have come to describe PW as like a superset of JACK and PulseAudio.
Also to note, #pipewire is very active on freenode, and wtay regularly drops into #lad.
for me https://github.com/brummer10/pajackconnect has worked flawlessly... but I've switched to pipewire and I'm not looking back !
Definitely for day-to-day use the Ubuntu Studio app has actually been the most helpful (direct control / visibility into the Jack <-> PA bridging is great), or a combo of qjackctl and Carla for more Audio-focused stuff.
https://gitlab.freedesktop.org/pulseaudio/pulseaudio/-/merge...
https://gitlab.freedesktop.org/pulseaudio/pulseaudio/-/merge...
I'm surprised to read this; I was under the impression that D-Bus was the de jure path forward for interprocess communication like this. That's not to say I'm disappointed - the simpler, Unix-y style of domain sockets sounds much more in the style of what I hope for in a Linux service. I've written a little bit of D-Bus code and it always felt very ceremonial as opposed to "send bytes to this path".
Are there any discussions somewhere about this s/D-Bus/domain socket/ trend, which the article implies is a broader movement given Wayland's similar decision as well?
Dbus is about the simplest approach that solves the issues that need addressing.
Now, the more general model of moving towards "domain sockets" and doing things like giving handles to file descriptors by transporting them over sockets, etc can all be traced back to the ideas of "capability-oriented security". The idea behind capability oriented security is very simple: if you want to perform an operation on some object, you need a handle to that object. Easy!
For example, consider rmdir(2). It just takes a filepath. This isn't capability-secure, because it requires ambient authority: you simply refer to a thing by name and the kernel figures out if you have access, based on the filesystem permissions of the object. But this can lead to all kinds of huge ramifications; filesystem race conditions, for instance, almost always come down to exploiting ambient authority.
In contrast, in a capability oriented design, rmdir would take a file descriptor that pointed to a directory. And you can only produce or create this file descriptor either A) from a more general, permissive file descriptor or B) on behalf of someone else (e.g. a privileged program passes a file descriptor it created to you over a socket.... sound familiar, all of a sudden?) And this file descriptor is permanent, immutable, and cannot be turned into "another" descriptor of any kind that is more permissive. A file descriptor can only become "more restrictive" and never "more permissive" — a property called "capability monotonicity." You can extend this idea basically as much as you want. Capabilities (glorified file descriptors) can be extremely granular.
As an example, you might obtain a capability to your homedir (let's say every process, on startup, has such a capability.) Then you could turn that into a capability for access to `$HOME/tmp`. And from that, you could turn it into a read-only capability. And from that, you could turn it into a read-only capability for exactly one file. Now, you can hand that capability to, say, gzip as its input file. Gzip can now never read from any other file on the whole system, no matter if it was exploited or ran malicious code.
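The narrowing described above can be sketched with Linux's openat-style `dir_fd` handles. This is an illustrative demo, not the capability API of any particular system: a throwaway directory stands in for $HOME, and the file names are made up. Everything after the first `os.open` is derived from an earlier handle, never from an absolute path.

```python
import os, tempfile

# Set up a stand-in "home directory" containing tmp/input.txt.
home = tempfile.mkdtemp()
os.mkdir(os.path.join(home, "tmp"))
with open(os.path.join(home, "tmp", "input.txt"), "w") as f:
    f.write("hello")

# Broad capability: a handle to the whole (fake) home directory.
home_fd = os.open(home, os.O_DIRECTORY | os.O_RDONLY)
# Narrower: a handle to the tmp subdirectory, derived from home_fd.
tmp_fd = os.open("tmp", os.O_DIRECTORY | os.O_RDONLY, dir_fd=home_fd)
# Narrowest: one file, read-only. This fd could be handed to a sandboxed
# child over a socket; the child never learns the home path at all.
file_fd = os.open("input.txt", os.O_RDONLY, dir_fd=tmp_fd)

data = os.read(file_fd, 16)
print(data)  # b'hello'
```

Each step only ever produces a more restrictive handle than the one it started from, which is the capability-monotonicity property described above.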
For the record, this kind of model is what Google Chrome used from the beginning. As an example, rendering processes in Chrome (the processes that determine how to render a "thing" on the screen) don't actually talk to OpenGL contexts or your GPU at all; they write command buffers over sockets to a separate process that manages the context. Rendering logic in a browser is extremely security sensitive, since it operates on potentially untrusted input. (This might have changed over time, but I believe it was true at one point.)
There's one problem with capability oriented design: once you learn about it, everything else is obviously, painfully broken and inadequate. Because then you start realizing things like "Oh, my password manager could actually rm -rf my entire homedir or read my ssh key, and it shouldn't be able to do that, honestly" or "Why the hell can an exploit for zlib result in my whole system being compromised" and it's because our entire permission model for modern Unix is built on a 1970s model that had vastly different assumptions about how programs are composed to create a usable computing system.
In any case, Linux is moving more and more towards adopting a capability-based models for userspace. Such a design is absolutely necessary for a future where sandboxing is a key feature (Flatpak, AppImage, etc.) I think the kernel actually has enough features now to where you could reasonably write a userspace library, similar to libcapsicum for FreeBSD, which would allow you to program with this model quite easily.
> Well, D-Bus was originally designed to solve the problem of... a message bus.
I guess no one has ever told me what the "message bus problem" actually is. I get sending messages -- a very useful way of structuring computation, but why do I want a message _bus_?
I get wanting service discovery, but don't see why that means a bus. I get wanting RPC, but don't see why that means a bus.
Heck, I don't even know what "have it appear atomically for N listeners" means if you have less than N CPUs or the listeners are scheduled separately. Or why that's a good thing. Did you just mean globally consistent ordering of all messages?
I once wrote a proof of concept that uses the file system to do this. Basically, writers write their message as a file to a directory that readers watch via inotify. When done in a RAM based file system like tmpfs, you need not even touch the disk. There are security and permission snags that I hadn't thought of and it may be difficult if not totally infeasible to work in production, but yeah... the file system is pretty much the traditional one-to-many communication channel.
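Something like that proof of concept can be sketched in a few lines. This uses `os.listdir` polling as a stdlib-only stand-in for the inotify watch the comment describes; the topic name and payload are invented for the demo.

```python
import os, tempfile, json

# The "bus" is just a directory; in production it would live on tmpfs.
bus_dir = tempfile.mkdtemp()

def publish(topic, payload):
    # Write to a hidden temp name, then rename. rename(2) is atomic on
    # the same filesystem, so readers never see a half-written message.
    tmp = os.path.join(bus_dir, "." + topic + ".tmp")
    with open(tmp, "w") as f:
        json.dump(payload, f)
    os.rename(tmp, os.path.join(bus_dir, topic))

def poll():
    # Stand-in for an inotify wakeup: list completed messages only.
    return [n for n in os.listdir(bus_dir) if not n.startswith(".")]

publish("volume-changed", {"level": 0.8})
print(poll())  # ['volume-changed']
```

A real reader would register an inotify watch on `bus_dir` instead of polling, and the security snags mentioned above (directory permissions, who may publish which topic) are exactly where this sketch stops.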
oh yes!
I wonder if localhost-only UDP multicast can be a usable substitute for the missing AF_UNIX multicast.
This sort of thing is better handled by the kernel, with filesystem device file permissions. As a bonus, you save context switching into the bus userspace process on the fast path. So, “the unix way” is simpler, faster and more secure.
Yes, you can configure PulseAudio as a Jack client, but the session handling is also a bit messy. (I used to have a PA -> Jack setup on my work computer just so I could use the Calf equalizer / compressor plugins for listening to music. I dropped it again after a while, because session handling and restoring wasn't always working properly. But that was around 6-7 years ago, maybe it would work better nowadays.)
>unlike JACK, PipeWire uses timer-based audio scheduling. A dynamically reconfigurable timer is used for scheduling wake-ups to fill the audio buffer instead of depending on a constant rate of sound card interrupts. Beside the power-saving benefits, this allows the audio daemon to provide dynamic latency: higher for power-saving and consumer-grade audio like music playback; low for latency-sensitive workloads like professional audio.
That's pretty interesting. It sounds like it's backwards compatible with jack programs but uses timer based scheduling similar to pulseaudio. Can you actually get the same low levels of latency needed for audio production without realtime scheduling?
JACK's used over pulse for professional audio typically because of its realtime scheduling. How does pipewire provide low enough latency for recording or other audio production using timer based scheduling?
Does anyone have any experience using pipewire for music recording or production?
It would be nice to have one sound server, instead of three layered on top of eachother precariously, if it works well for music production.
Saying rewindable audio is a non-feature might simplify the codebase, but if it makes it work badly for most use cases, it ought to be rethought.
My main concern is: without rewinding, how can you handle pressing play or pause? Sure, music isn't realtime and can use large buffers, but if I start playing something else or stop the music, I still want it to be responsive, which may require remixing.
Edit: does this help? https://gitlab.freedesktop.org/pipewire/pipewire/-/wikis/FAQ...
- You want a "ding" sound within Xms of the time a user performs some action.
- You want a volume change to happen within Yms of the user pressing the volume up/down keys
Without buffer rewinding, your buffer size when playing music cannot be longer than the smallest of such requirements.
With buffer rewinding, your buffer size can be very long when playing media, and if a latency-sensitive event happens, you throw away the buffer. This reduces wakeups and increases the batch size for mixing, which is good for battery life.
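A back-of-the-envelope calculation makes the tradeoff concrete (the numbers are illustrative, not PipeWire or PulseAudio measurements):

```python
RATE = 48_000  # samples per second

def stats(frames):
    # Buffer latency in ms, and how often the mixer must wake up to refill.
    latency_ms = 1000 * frames / RATE
    wakeups_per_s = RATE / frames
    return latency_ms, wakeups_per_s

# Small buffer: meets a ~5 ms "ding" requirement, but wakes the CPU often.
print(stats(256))   # (~5.3 ms, 187.5 wakeups/s)
# Big buffer: fine for music playback if you can rewind when a ding arrives.
print(stats(8192))  # (~170.7 ms, ~5.9 wakeups/s)
```

Without rewinding, the small buffer has to be used all the time to keep dings responsive; with rewinding, the big buffer can be the default and the small one is only needed in the moment an event fires.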
The PipeWire people seem fairly smart, so they are probably aware of this, but I'd like to see power numbers on say a big-little ARM system of PW compared to PA.
Because then software volume changes would take too long to apply, and a new stream would have to wait (again, too long) until everything currently in the queue has been played. PulseAudio tried to solve this with rewinds, but failed to do it correctly. ALSA plugins other than hw and the simplest ones (those that just shuffle data around without any processing) also contain a lot of code that attempts to process rewinds, but the end result is wrong.
Rewindable audio is a non-feature because nobody except two or three people in the world (specifically, David Henningsson, Christopher Snowhill and maybe Georg Chini) can write non-trivial DSP code that correctly supports rewinds.
I got suggestions that I could go tweak buffer sizes stuff in a config file somewhere, but for my simple desktop use case I'd rather my audio just sounds right out of the box.
Hopefully this sort of thing gets straightened out, because having to muck with config files to make my sound server actually work is like going back to working directly with ALSA or OSS.
Sounds like a lot of lessons have been learned from JACK, PulseAudio etc that have been factored in to the architecture of PipeWire. Maybe it really is the true coming of reliable Linux audio :)
I know that there are arguments against having high quality audio rate resampling inside the kernel that are routinely brought up to block any kind of useful sound mixing and routing inside the kernel. But I think that all necessary resampling can easily be provided as part of the user space API wrapper that hands buffers off to the kernel. And the mixing can be handled in integer maths, including some postprocessing. Device specific corrections (e.g. output volume dependent equalization) can also fit into the kernel audio subsystem if so desired.
AFAIK, Windows runs part of the audio subsystem outside the kernel, but those processes get special treatment from the scheduler to meet deadlines. And the system is built in a way that applications have no way to touch these implementation details. On Linux, the first thing audio daemons do is break the kernel-provided interface and force applications to become aware of yet another audio API that may or may not be present.
This is just my general opinion on how the design of the Linux audio system is lacking. I am aware that it's probably not a terribly popular opinion. No need to hate me for it.
[End of rambling.]
Another problem with ALSA, as well as PA, is that you can't change the device settings (sampling rate, bitrate, buffer size and shape) without basically restarting all audio. (Note: you can't really do it anyway, as multiple programs could want different rates, buffers, and so on.)
In my opinion, the proper way to do audio would be to do it in the kernel and have one (just one) daemon that controls the state of the system. That would require resampling in the kernel for almost all audio hardware. Resampling is not really a problem. Yes, it should be fixed-point, and not just because the kernel doesn't want floating-point math in it. Controlling volume is a cheap multiply (or divide), and mixing streams is just an addition (but with saturation, of course).
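The integer-only mixing being proposed is simple enough to sketch. This is illustrative Python standing in for what would be kernel C, not actual kernel code: saturating addition for mixing, and a Q15 fixed-point multiply for volume.

```python
INT16_MAX, INT16_MIN = 32767, -32768

def mix_sat(a, b):
    # Mix two 16-bit PCM samples: widen, add, clip instead of wrapping.
    s = a + b
    return max(INT16_MIN, min(INT16_MAX, s))

def apply_gain_q15(sample, gain):
    # gain is Q15 fixed point: 32768 would be unity (1.0), 16384 is 0.5.
    return (sample * gain) >> 15

print(mix_sat(30000, 10000))         # 32767 (clipped, not wrapped)
print(apply_gain_q15(16384, 16384))  # 8192 (half gain on a half-scale sample)
```

No floating point anywhere, which is the constraint kernel-side DSP code would have to live with.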
Special cases are one program streaming to another (a la JACK), and things like Bluetooth or audio over the network; those should be in userspace, for the most part. Oh, and studio hardware, as it often has special hardware switches, DSPs, and so on.
Sincerely: I doubt I could do it (and even if I could, nobody would care, and the Fedoras would say "no, we are doing what ~we~ want"). So I gave up a long while ago. And I doubt anybody else would fight up that hill to do it properly. Half-assed solutions usually prevail, especially if presented as full-assed (as most don't know better).
PS Video is a series of bitmaps, just as audio is a series of samples. They are already in memory (system or gpu). Treating either of them as a networking problem is the wrong way of thinking, IMO. Only thing that matters is timing.
PPS And transparency. A user should always easily be able to see when a stream is being resampled, where it is going, etc, etc. And should be able to change anything relating to that stream, and to the hardware, in flight via a GUI.
Also keep in mind that these audio daemons work as an IPC to route sound between applications and over the network, not just to audio hardware. Even if you put a new API in the kernel that did the graph processing and routing there, you would still likely need a daemon for all the other things.
The other main goal is to simultaneously support both pro-audio flows like JACK, and consumer flows like PulseAudio without all the headaches caused by trying to run both of those together.
Lastly PipeWire is specifically designed to support the protocols of basically all existing audio daemons. So if the new APIs provide no benefit to your program, then you might as well just ignore it, and continue to use PulseAudio APIs or JACK APIs or the ESD APIs or the ALSA APIs or ... (you get the idea).
Now you are not wrong that audio is a real time task, and that there are advantages to running part of it kernel side (especially if low latency is desired, since the main way to mitigate issues from scheduling uncertainties is to use large buffers, which is the opposite of low latency).
On the other hand, I'm not sure an API like you propose would work as needed. For example, there really are cases where sources A, B, C and D need to be output to devices W, X, Y, and Z, but with different mixes for each, some of which might need delays added or effects applied (like reverb, compression, or frequency equalization curves); and I have not even mentioned yet that device W is not a physical device, but actually the audio feed for a video stream to be encoded and transmitted live.
Try designing something that can handle all of that kernel side. Some of it you will have no chance of running in kernel mode obviously. That typically implies that everything before it in the audio pipeline ought to get done in user mode. Otherwise the kernel mode to user mode transition has most of the scheduling concerns that a full user-space audio pipeline implementation has. For things like per output device effects that would imply basically the whole pipeline be in user mode.
The whole thing is a very thorny issue with no perfect solutions, just a whole load of different potential tradeoffs. Moving more into kernel mode may be a sensible tradeoff for some scenarios, yet for others that kernel-side implementation may be unusable, just contributing more complexity to the endless array of possible audio APIs.
Assigning deadline-based scheduling priorities to the pipewire daemon wouldn't do the same job?
So every VST/Virtual instrument in a DAW or for live performance should be running in the kernel? Because that's definitely a fresh take.
I'm currently looking for a last resort before reinstalling everything: probably after an apt upgrade, all WINE applications simply stopped detecting my MIDI devices, no matter the software or WINE version used, while all native Linux software kept working perfectly with them. No error messages; the devices suddenly just disappeared from every WINE application but kept working under native Linux. Audio still works fine in WINE software; it just can't be used with MIDI devices, because according to WINE I have none. Reinstalling WINE and the applications didn't help.
(After realizing it was initially broken because I didn't have the pipewire-alsa package installed, so no audio devices showed up.) The PulseAudio drop-in worked flawlessly out of the box. I had some issues with the JACK drop-in libraries though (metallic voice, basically not usable). To fix this, I had to change the sample rate in /etc/pipewire/pipewire.conf from the default 48000 to 44100.
But then I think about pro-audio:
* gotta go fast
* devices don't suddenly appear and disappear after boot
* hey Paul Davis-- isn't the current consensus that people just wanna run a single pro-audio software environment and run anything else they need as plugins within that environment? (As opposed to running a bazillion different applications and gluing them together with Jack?)
So for pro-audio, rather than dev'ing more generic solutions to rule all the generic solutions (and hoping pro-audio still fits one of the generic-inside-generic nestings), wouldn't time be better spent creating a dead simple round-trip audio latency test GUI (and/or API), picking a reference distro, testing various alsa configurations to measure which one is most reliable at the lowest latency, and publishing the results?
Perhaps start with most popular high-end devices, then work your way down from there...
Or has someone done this already?
Only problems I had so far are:
* Sometimes Bluetooth devices connect but don't output audio, and I have to restart pipewire.
* Sometimes pipewire gets confused and doesn't assign audio outputs properly (shows up in pavucontrol as "Unknown output").
Is it comparable to jackd when used with something like Ardour?
https://gitlab.freedesktop.org/pipewire/pipewire/-/wikis/FAQ...
https://gitlab.freedesktop.org/pipewire/pipewire/-/wikis/FAQ...
https://gitlab.freedesktop.org/pipewire/pipewire/-/wikis/FAQ...
Oh well. Maybe next version?
I got pulseaudio wrapped by pipewire-pulse, and applications launched and acted as if they produced audio (as opposed to being blocked), but I still couldn't get any sound.
Granted I have a complicated setup with a laptop with 2 HDMI outputs, 2 USB-soundcards, built-in headphone adapter and a built-in speaker.
Obviously that setup is going to take some configuration to get right, but with PulseAudio I could set it up pretty quickly with pavucontrol.
No such luck with PipeWire ... yet. I guess it will get there sooner or later :)
I've seen lots of folks talking about pipewire, but I'm a simple audio user - I want software mixing and audio out via a headphone jack and that's all.
I'm pretty sure for most folks we'll just wait until our distro decides to move over, it'll happen in the background, and we'll not notice or care.
LP wasn't joking about breaking sound: things did break, many, many times for many, many people, for years. And, almost always the only information readily available about what went wrong was just sound no longer coming out, or going in. And, almost always the reliable fix was to delete PA.
But it really was often a consequence of something broken outside of PA. That doesn't mean there was always nothing the PA developers could do, and often they did. The only way it all ended up working as well as it does today--pretty well--is that those things finally got done, and bulldozed through the distro release pipelines. The result was that we gradually stopped needing to delete PA.
Gstreamer crashed all the damn time, for a very long time, too. I never saw PA crash much.
The thing is, all that most of us wanted, almost all the time, was for exactly one program to operate on sound at any time, with exactly one input device and one output device. UI warbling and meeping was never a high-value process. Mixing was most of the time an unnecessary complication and source of latency. The only complicated thing most of us ever wanted was to change routing to and from a headset when it was plugged or unplugged. ALSA was often wholly good enough at that.
To this day, I have UI warbling and meeping turned off, not because it is still broken or might crash gstreamer, but because it is a net-negative feature. I am happiest that it is mostly easy to turn off. (I wish I could make my phone not scritch every damn time it sees a new wifi hub.)
Pipewire benefits from things fixed to make PA work, so I have expectations that the transition will be quicker. But Pipewire is (like PA and Systemd) coded in a language that makes correct code much harder to write than buggy, insecure code; and Pipewire relies on not always necessarily especially mature kernel facilities. Those are both risk factors. I would be happier if Pipewire were coded in modern C++ (Rust is--let's be honest, at least with ourselves!--not portable enough yet), for reliability and security. I would be happier if it used only mature kernel features in its core operations, and dodgy new stuff only where needed for correspondingly dodgy Bluetooth configurations that nobody, seriously, expects ever to work anyway.
What would go a long way to smoothing the transition would be a way to see, graphically, where it has stopped working. The graph in the article, annotated in real time with flow rates, sample rates, bit depths, buffer depths, and attenuation figures, would give us a hint about what is failing, with a finer resolution than "damn Pipewire". If we had such a thing for PA, it might have generated less animosity.
A system was needed for video, turns out it was a good fit for audio.
Audio and video aren't that different, TBH (audio just has more alpha/blending rules, and lower tolerance on missed frames; video has higher bandwidth requirements). Wouldn't surprise me if both pipelines eventually completely converge. Both "need" compositors anyways.
The main benefit imo is to pro audio so you don't need to configure separate tools and manually swap between pulse and jack every time you want pro audio.
It also manages permissions to record audio and the screen for wayland users.
I hope I'm wrong. There is a lot of potential to do better in that realm.
Is this audio/video bus a result of the space program?