In that thread, the topic of macOS performance came up. Basically Anukari works great for most people on Apple silicon, including base-model M1 hardware. I've done all my testing on a base M1 and it works wonderfully. The hardware is incredible.
But to make it work, I had to implement an unholy abomination of a workaround to get macOS to increase the GPU clock rate for the audio processing to be fast enough. The normal heuristics that macOS uses for the GPU performance state don't understand the weird Anukari workload.
Anyway, I finally had time to write down the full situation, in terrible detail, so that I could ask for help getting in touch with the right person at Apple, probably someone who works on the Metal API.
Help! :)
Well, I read it all and found it not too long, extremely clear and well-written, and informative! Congrats on the writing.
I've never owned a Mac and my PC is old and without a serious GPU, so it's unlikely that I'll get to use Anukari soon, which I regret very much, as it looks sooo incredibly cool.
Hope this gets resolved fast!
wonder if com.apple.developer.sustained-execution also goes the other way around...
> It would be great if someone can connect me with the right person inside Apple, or direct them to my feedback request FB17475838 as well as this devlog entry.
https://anukari.com/blog/devlog/productive-conversation-appl...
Great that you have a workaround now, but the fact that you can't even share what the workaround is ironically speaks to the last line in https://news.ycombinator.com/item?id=43904921 about how Apple communicates
>there’s this trick of setting it to this but then change to that and it’ll work. Undocumented but now you know
When you do implement the workaround, maybe you could do it in an overtly-named function spottable via disassembly so that others facing similar constraints of latency-sensitive GPU have some lead as to the magic incantation to use?
Congratulations and good luck with your project!
The team we talked to at Apple never ever cared about our problems, but very often invited us to their office to discuss the latest feature they were going to announce at WWDC to strong arm us into supporting it. That was always the start and stop of their engagement with us. We had to burn technical support tickets to ever get any insight into why their buggy software wasn’t working.
Apple's dev relations are not serious people.
Seems like there might be a private API for this. Maybe it's easier to go the reverse engineering route? Unless it'll end up requiring some special entitlement that you can't bypass without disabling SIP.
> The Metal profiler has an incredibly useful feature: it allows you to choose the Metal “Performance State” while profiling the application. This is not configurable outside of the profiler.
How would the Metal profiler be able to do that if not for a private API? (Could some debugging tool find out what's going on by watching the profiler?)
Sorry about that!
GPU audio is extremely niche these days, but with the company mentioned in TFA releasing their SDK recently it may become more popular. Although I don't buy it, because if you're doing things on the GPU you're saying you don't care about latency, so just bump your I/O buffer sizes.
This does not follow. Evidently it is possible to have low-latency audio processing on the GPU today (per the SDK).
And default deny at the OS level for Zoom, Teams and web browsers :)
It's better to trust, the amount of people that won't abuse it far outweigh the ones that do.
1. Go through WWDC videos and find the engineer who seems the most knowledgable about the issue you're facing.
2. Email them directly with this format: mthomson@apple.com for Michael Thomson.
> Lol on the second day it's out, you have already absolutely demolished all of the demos I've made with it and I've used it every day for two years
2. Users can arbitrarily connect objects to one another, so each object has to read connections and do processing for N other entities
3. Using the full CPU requires synchronization across cores at each physics step, which is slow
4. Processing per object is relatively large, lots of transcendentals (approx OK) but also just a lot of features, every parameter can be modulated, needs to be NaN-proof, so on
5. Users want to run multiple copies of Anukari in parallel for multiple tracks, effects, etc
Another way to look at it is: 4 GHz / (16 voices * 1024 objects * 4 connections * 48,000 samples/sec) ≈ 1.3 cycles per thing
The GPU eats this workload alive; it's absolutely perfect for it. All 16 voices * 1024 objects can be done fully in parallel, with trivial synchronization at each step and user-managed L1 cache.
Is that a limitation of the audio plug-in APIs?
Possibly what you describe is a bit more like double-buffering, which I also explored. The problem here is latency: any form of N-buffering introduces additional latency. This is one reason why some gamers don't like triple-buffering for graphics, because it introduces further latency between their mouse inputs and the visual change.
But furthermore, when the GPU clock rate is too low, double-buffering or pipelining don't help anyway, because fundamentally Anukari has to keep up with real time, and every block it processes is dependent on the previous one. With a fully-lowered GPU clock, the issue does actually become one of throughput and not just latency.
That’s why I asked about the plug-in APIs. They may have to be async, with functions not returning when they’re fully done processing a ‘packet’ but as soon as they can accept more data, which may be earlier.
Perhaps there's something in this video that might help you? They made a lot of changes to scheduling and resource allocation in the M3 generation:
Have you tried buffering for 5 ms? Was the result bad? What about 1 ms?
>The Metal API could simply provide an option on MTLCommandQueue to indicate that it is real-time sensitive, and the clock for the GPU chiplet handling that queue could be adjusted accordingly.
Realtime scheduling on a GPU and what the GPU is clocked to are separate concepts. From the article it sounds like the issue is with the clock speeds and not how the work is being scheduled. It sounds like you need something else for providing a hint for requesting a higher GPU clock.
That's quite the hack and I feel for the developers. As they state in the post, audio on the GPU is really new and I sadly wouldn't be holding my breath for Apple to cater to it.
That looks to be a smoother chalkboard than I’ve ever encountered. If I had been using such chalkboards, I suspect I’d agree, but based purely on my experiences to this point, my opinion has been that chalkboards are significantly better for most art due to finer control and easier and more flexible editing, but whiteboards are better for most teaching purposes (in small or large groups), mostly due to higher contrast. But there’s a lot of variance within both, and placement angles and reflection characteristics matter a lot, as do the specific chalk, markers and ink you use.
Ableton engineers already evaluated this in the past: https://github.com/Ableton/AudioPerfLab
While I feel for the complaints about Apple's lack of "feedback assisting", the core issue itself is very tricky. Many years ago, before becoming an audio developer, I worked in a Pro Audio PC shop...
And guess what... interrupts, abusive drivers (GPUs included), Intel's SpeedStep, sleep states, core parking... all were tricky.
Fast forward: we got asymmetric CPUs and arm64 CPUs, and still Intel or AMD machines (especially laptops) might need BIOS tweaks to avoid dropouts/stutters.
But if there's a broken driver by CPU or GPU... good luck reporting that one :)
Proprietary technologies, poor or no documentation, silent deprecations and removals of APIs, slow trickle feed of yearly WWDC releases that enable just a bit more functionality, introducing newer more entrenched ways to do stuff but still never allowing the basics that every other developer platform has made possible on day 1.
A broken UI system that is confusing and quickly becomes undebuggable once you do anything complex. It replaces Auto Layout, but over a decade of apps have to transition over. The Combine framework? Is it dead? Is it alive? Networking APIs that require the use of a 3rd-party library because the native APIs don't even handle the basics easily. Core Data, a complete mess of a local storage system, still not thread-safe. Xcode: the only IDE forced on you by Apple, while possibly being the worst-rated app on the store. Every update is a nearly 1-hour process of unxipping (yes, .xip) that needs verification, and if you skip it, bad actors could potentially inject code into your application from within a bad copy of Xcode unbeknownst to you. And it crashes all the time. Swift? Ha. Unused everywhere but Apple platforms. Swift on the server is dead: IBM pulled out over 5 years ago, and no one wants to use Swift anywhere but Apple platforms, where it's required.
The list goes on. Yet, Apple developers love to be abused by corporate. Ever talk to DTS or their 1-1 WWDC sessions? It’s some of the most condescending, out of touch experience. “You have to use our API this way, and there’s this trick of setting it to this but then change to that and it’ll work. Undocumented but now you know!”
Just leave the platform and make it work cross platform. That’s the only way Apple will ever learn that people don’t want to put up with their nonsense.
Now a lot of people may reply to this that Windows isn't that bad with ASIO (a third-party driver framework) or modern APIs like WASAPI (which is still lacking), or that PipeWire is changing things on Linux so you don't need JACK anymore (but god forbid you want to write PipeWire-native software in a language besides C, since the only documented API is macros). Despite these changes, you have to go where the revenue is, which is on macOS.
People used to say this about video pros too, until Apple royally screwed the pooch by failing to refresh its stale Mac Pro hardware lineup for many years, followed by a lackluster Final Cut release. An entire industry suddenly realized Windows was viable after all, they just hadn't bothered to look.
One of the worst things about Apple is how much time and effort they spend trying to lock you into their platform if you want to support it. There's no excuse for it. Even once they have you on their system, they're doing everything in their power to lock you in to their workflows and development environments. It's actually insane how shamelessly hostile OSX is.
That said, Reaper and many others have done great things with DAWs and other audio processing in C++. Maybe getting a "native" look is too difficult, but I figured I'd throw it out there.
I've read that Zig can wrap C macros. So maybe there is some hope.
You go to a different market.
> there simply isn't an alternative for pro audio developers.
Tell me you don't work on live audio without telling me you don't work on live audio. Windows has always been usable if you have a suitable ASIO (same as you used to use on Mac). Most shows will use some permutation of Windows boxen to handle lighting, visuals rendering, compositing and audio processing. The ratio of Macs to Windows machines is at least 1:10 in my experience.
Heck, nowadays even Linux is viable if you're brave enough. PipeWire has all the same features CoreAudio was lauded for back in the day; in theory you can use it to replace just about anything that isn't conjoined at the waist with AU plugins. Things are very different from how they were in 2012.
There is no revenue in macOS; there is only revenue in machines that run a free OS, which they consistently lock their loyal customers out of.
In fact, I'm now working on a USB hardware replacement for what used to be a macOS app, simply because Apple isn't allowing enough control anymore. Their DX has degraded to the point where delivering the features as an app has become impossible.
Also, USB gadgets are exempt from the 30% app store tax. You can even sell them with recurring subscriptions through your own payment methods. Both for the business owner and for the developer, sidestepping Apple is better than jumping through their ridiculous hoops.
And yea, over the years you could tell Apple stopped giving a shit except to turn everything into an app store where they can earn 30% and it's lessened the experience.
Core Data threading? Well, it has got its pitfalls, but those are known, and anyway, nothing is forcing you to use it.
Xcode is so slim these days, it's a ~3 GB download, it doesn't take an hour to unxip, and it can be downloaded from the developer website.
Swift? It might be needed for a bunch of new frameworks, but Objective-C isn't going anywhere anytime soon either.
Core Data threading? Does Linux even attempt something like Core Data? How well is that going?
Swift? I remember when Linux diehards invented Vala. The Swift of Linux, but with none of the adoption.
As for UI code, Linux is finally starting to get a little more stable there. GTK 2 to 3 was a disaster; Qt wasn't fun between major upgrades; if you weren't using a framework, you needed to have fun learning the quirks of Xorg; nobody who builds for Linux gets to lecture Mac about UI stability.
Or, for that matter, app stability in general. Will a specific build of Blender outside of a Flatpak still work on the Linux desktop after 2 release cycles? No? Then don't lecture me about good practices. Don't lecture me about how my website or app was sloppily engineered because it has dependencies.
Swift on the server is for Apple ecosystem developers, to share code, just like all those reasons to apparently use JavaScript on the server instead of something saner.
JS on the server is actually really fast and well supported. Not really sure what you're driving at here.
I don't think that's apt. What you find to be "abuse" others might find to be the kind of obstacles/issues that every platform/ecosystem has.
It probably helps if you never put Apple on a pedestal in the first place, so there's no special disappointment when they inevitably turn out to be imperfect. E.g., just because Apple publishes a new API/framework, that doesn't mean you need to jump on board and use it.
Anyway, developers are adults who can make their own judgements about whether it's worth it to work in Apple's ecosystem or not. It sounds like you've made your decision. Now let everyone else make theirs.
Units sold in the smartphone world follow the same function as the video game console market: you win by offering a bigger and better software catalog, not just hardware.
If you, as a developer, have a worse time contributing to that ecosystem, then it is just a matter of time before the users themselves have a worse time with their device.
I take the comment above as a signal that something is clearly not working toward Apple's goals. Of course, you make your own judgements about whether to support a platform or not, but this indicates that decision is a lot easier than it should be, to the detriment of Apple's ecosystem.
All in all I wouldn't discount it.
Right, that's why judges are making criminal recommendations to the US prosecutors. No abuse at all.....
Oh poor Apple. If only they had the resources and engineers to fix that. /s
Apple's also been deleting more and more of its old documentation. Much of it can only be found on aging DVDs now, or web/FTP archives if you're lucky. Even more annoying is how some of the deleted docs are _still_ referenced by modern docs and code samples.
Apple has done nothing and continues to do nothing to engender any confidence in their platform as a development target.
You're missing the forest for the trees. Apple is very difficult to work with indeed, but they have a shit-ton of paying users. Still to this day, iOS is a better revenue maker than Android. Same for macOS compared to Windows. You want to make a living? Release on macOS. People there pay for software.
Once upon a time I thought either GNOME or KDE would win, and we could all enjoy the one Linux distribution. I was proven wrong.
Then again, I have been back on Windows as main OS since Windows 7.
For example, there are over a dozen ways to define a string, and you're constantly having to convert between them depending on the API you're using.
https://www.reddit.com/r/cpp_questions/comments/10pvfia/look...
It’s honestly nuts that so many developers continue to try to make software using a bloated JavaScript framework and thousands of Node dependencies.
That might also be true, but it misses the point: programming is not engineering; nothing is done to an engineer's preferred standard, and probably never will be.
It’s like being a CNC Technician and complaining about how 90% of stuff on store shelves is plastic. A metal gallon of milk would be so much more durable! Less milk would be spilled from puncturing! Production costs, and how they go downstream, are being ignored.
(Edit for the downvotes: dispute me if you care enough, but literally nobody other than computer programmers ogles your clean code. Just like how nobody other than CNC mechanics is going to ogle the milk carton made on a lathe.)
This is nonsense. I've been a professional Mac and iOS developer for well over a decade, and even in the days of NSURLConnection, I've never needed a 3rd party networking library. Uploading, downloading, streaming, proxying, caching, cookies, auth challenges, certificate validation, mTLS, HTTP/3, etc. – it's all available out of the box.