This is just Apple's overreach extended to the desktop: excessive control that makes developers' lives hell while adding barely any security on top.
Apple do this too, it's called XProtect: https://support.apple.com/guide/security/protecting-against-...
They also have a built-in malware remediation tool, which is presumably what was used when they killed the vulnerable Zoom web server on everyone's Mac: https://www.zdnet.com/article/apple-update-kills-off-zoom-we...
Notarization is clearly part of a defense in depth strategy for macOS.
Defense in depth means layering security. It's, for example, when you use password hashing but also full disk encryption. That way if someone gets your hard drive, even if they break the disk encryption, they don't get your password in plaintext. Even if they know how to crack the password hash, they first have to get past the disk encryption.
Notarization and signatures aren't two separate measures. They're the same measure implemented two different ways. That's basically useless. If some piece of code is identified as malware then it both gets revoked and added to the malware list, and then they both catch it. If it hasn't been identified then it's neither revoked nor on the malware list.
The things that make it past one also make it past the other. There is no defense in depth because there is no depth. The two measures would have to operate based on a different principle in order to achieve that.
The issue being called out here is that it comes at too high a cost to both developers and users compared to the benefits it provides.
Is it really that hard to get your code signed as a malware developer? No, not at all. Is that worth bothering developers so much? Maybe not. Is it a power grab? Probably. Does that together make notarization useless for security? No, not really.
Notarization is just a step in the chain. It disincentivizes malware, especially trivial malware (which is the largest quantity and the most relevant for the bulk of users), by tipping the economics slightly less in the malware developer's favor. It does this at the cost of also tipping the economics less in regular developers' favor. You may disagree whether or not that's worth it (and I might be inclined to share that opinion), but that doesn't make notarization useless from a security perspective.
I think the issue with pushing malware signatures to the client is that it is reactive rather than proactive - i.e. by the time you have identified a malware signature, it is already too late (which leads to an inevitable cat-and-mouse / whack-a-mole game).
The big issue I’ve always had with capability security (as implemented here and in Fuchsia) is that, while it is a better security model in many ways, it’s also a lot easier to use against developers and power users, especially when you depend on PKI to implement your unforgeable tokens.
But notarization is the same. Apple isn't vetting notarized apps before they're distributed. All it does is impose a cost on the developer, who could still for all you know be a member of the Russian mafia. Or any random developer who has had their machine compromised and then used to sign the compromising party's malware.
It doesn't get revoked until somebody identifies the code as malware. It's the same reactive process as malware signatures.
A nightmare (and not cheap) to deal with as a developer.
And that’s just non-Xcode. If you use Xcode it’s often automatic.
Makes sense to me...
The plugin issue described in the article is probably related to the hardened runtime, so it's unrelated to the actual notarisation process.
./notary.sh notarize app.dmg
./notary.sh staple app.dmg
I store the password as a keychain entry named AC_PASSWORD. If you're running this in headless CI, you should run a notarize command once interactively so you can tell the keychain to always allow altool to access the password.

And for what? There is no way Apple can make it impossible to get malware notarized.
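For reference, a minimal sketch of that setup (the Apple ID, bundle ID, and file name here are placeholders; `altool` was the notarization CLI at the time, since superseded by `notarytool`):

```shell
# One-time, interactive: store the app-specific password in the login
# keychain under the name AC_PASSWORD (prompts for the password).
security add-generic-password -a "dev@example.com" -s AC_PASSWORD -w

# Submit the DMG for notarization, pulling the password from the keychain.
xcrun altool --notarize-app \
  --primary-bundle-id "com.example.myapp" \
  --username "dev@example.com" \
  --password "@keychain:AC_PASSWORD" \
  --file app.dmg

# Once Apple reports success, attach the ticket so offline launches work.
xcrun stapler staple app.dmg
```

These commands only run on macOS with Xcode's command-line tools installed, so treat them as a template rather than a drop-in script.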
The trouble I ran into was this: Apple wants the process to be a little mystifying as a barrier to people trying to find exploits within it, I think. I disagree: for instance, using a shell script and the Apple terminal tools is very much the Apple-intended approach, but integrating the third party tool got me sending code to Apple's servers quicker, and it's those servers that matter. I wasn't considered a significant developer to Apple, nor will I ever be, but I code open source software that's being adopted by other projects and used as an on-ramp for would-be DSP coders: my choices and attitudes matter.
Apple in the form of a key Gatekeeper dev had very specific intentions for me: I was strongly advised to use automatic code signing in Xcode, then use Terminal and a separate workflow to remove the incorrect cert that Xcode assigns and put in the correct one (Developer ID Application, in this case).
Apple's defaults for an Xcode application (written in Swift, because of course it is) and automatic signing work perfectly the first time for something like their example 'hello world' app. For any Audio Unit, this fails every time. You can set it correctly inside Xcode from the start, using manual settings. I chose to do this.
Near as I can tell, I am expected to have a manager to whom I will turn over code, and who is the only one with a code signing ID, because as a lowly DSP coder I'm not expected to be allowed to put code out into the wild without supervision. There's a clear expectation that if I mattered, I'd either be the boss of (not necessarily trustworthy) coders, or I'd have a boss whose job it is to be more trustworthy and oversee my code in case I have a wild hair some day and code a bomb into things. I feel there's a resistance at Apple to putting the keys to distribution into the hands of untrustworthy people. Perhaps a belt and suspenders approach? I'm not convinced this is in any way a good thing, though.
Doing the code signing correctly, which is not the same thing as having an Apple-specified process, means every bit of code I generate gets checked for malware by an Apple process. This could save my butt if I got owned by some extremely clever second-level malware that tried to commandeer my Xcode and build malware into everything I make. I get that there are also possible risks with having every executable sent to the mothership to be studied: if I competed with them, that's wildly anticompetitive and lets them decompile and pirate anything I do (given sufficient effort). I'm literally sending them all my work before anybody else ever runs it.
I feel that this type of risk (which I'm not convinced is a currently active threat) is better handled by government, regulation, and the law, than by forcing the software ecosystem to normalize running any old code from anywhere.
Doing Apple notarization the way Apple wants it done should be a nontrivial factor in keeping Apple products from being a giant pile of malware, spyware, and user-manipulation in future. If you're able to do it properly (as in, get the code AUDITED, not 'do it only the way Apple says'), the result is distributed plugins and applications that 'just work' the way they used to, but without the same level of risk to the end user.
I think it's worth the trouble to do this. The benefit is clear, and possible dangers of the approach belong to the legal sphere rather than being a technical reason to avoid code signing, or normalize having everybody avoid code signing on your behalf.
Minor tip: stapling, while optional, is recommended (and might as well be mandatory) for everyone who notarizes: you staple a ticket issued by Apple to your artifact, which avoids the call home when the user first opens your software.
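Stapling and checking the result are each a single command; `stapler validate` and `spctl` can confirm locally that Gatekeeper will accept the bundle (the app name here is a placeholder):

```shell
# Attach the notarization ticket after Apple reports success.
xcrun stapler staple MyApp.app
# Confirm the ticket is attached and valid, without a network round trip.
xcrun stapler validate MyApp.app
# Ask Gatekeeper directly whether it would accept the app.
spctl --assess -vv MyApp.app
```

As with any of the signing tooling, these only run on macOS with the developer tools installed.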
The only thing that slightly irks me is the contract situation, if you have a "paid" developer account, you absolutely need to sign any update to the "paid app" contract from the App Store even when you want to notarize an "out of store", open source app.
Plus it breaks my script every time...
That is the part I find most offensive, if it was just difficult and buggy I would suck it up and work around it. But having to pay for the privilege is too painful, particularly if you're offering free software.
For my case (non GUI app) I can at least distribute via Homebrew and have the user build from source in a more or less automated way.
Another notarization helper tool is here https://github.com/mitchellh/gon
Does code-signing with an ad hoc certificate and no notarization provide any better experience than just unsigned code?
Do you get a friendlier message (cf. "malicious software: Move to Trash") when Gatekeeper blocks it?
https://lapcatsoftware.com/articles/unsigned.html
This simply is not a viable distribution method for the mass market. Apple has positioned apps from devs that pay Apple so far above apps from devs that don't that you cannot compete outside of their subscription revenue model.
Imagine what sort of system-level settings you'd have to change on macOS today if you wanted to ship a competing macOS App Store on Apple devices with UX similar to Apple's own, but without Apple signing keys.
You'd basically have to write some malware-style code to get inside of Gatekeeper and privilege your downloaded/purchased apps the same way Apple does for apps from their own App Store.
How long do you think they would let this stand before trying to whack your installer daemon with XProtect for posing "danger to profit integrity"? (They'd spell "profit" as "system", though.)
The blog post talks about waiting to upgrade to macOS 10.15, but the current macOS is version 11, so I'm thinking this is fairly old. Because at first I thought this might have been related to a recent info.plist vulnerability. [0]
[0] https://www.wired.com/story/macos-malware-shlayer-gatekeeper...
Now I'm battling with Notarization which is exactly this hell that either pretends to work and doesn't, or spits inscrutable errors and sends me in circles between multiple tools and services.
And these days all the documentation that Apple produces is in form of brief mentions in WWDC videos. Aaaarrggh!
I'm seriously considering switching to WASM or just abandoning my apps.
There's actually a very detailed guide that explains both how to do it from the Xcode UI, and from the command line: https://developer.apple.com/documentation/security/notarizin...
On the other hand, code signing is perennially confusing, and I wish the documentation was better.
https://nixpulvis.com/ramblings/2021-02-02-signing-and-notar...
There are issues with the way we develop and distribute applications and software in general, but none of the major platforms are doing anything but extracting $$$ for themselves and tricking users into a false sense of security.
This really captured the constant "wtf" of building against the sloppy moving target.
But... it's still better than a lot of toolchains/ecosystems, and when it all does work, for 1 month a year lol, it's great!
(bad joke I know, it's Friday anyway)
The process has improved since then so not sure how much of this still applies.
Unix and VMS/NT, the two most popular kernel lineages, were both designed when computers were either isolated or connected to an Internet that was effectively an academic/government walled garden. They absolutely were not designed to deal with the present information war zone where everything is trying to spy/hack/ransomware you and every piece of code is guilty until proven innocent.
Since the Internet went mainstream we've been constantly stuffing wads of chewing gum into their many cracks, adding hack after hack to try to secure that which is not secure. Address layout randomization, pointer authentication hacks, stack canaries, clunky system call whitelisting solutions, trying to shoehorn BPF into a system call filtering role, leaky containers and sandboxes, and so on.
Code signing is an admission that none of those measures have worked.
A secure OS would be built from the ground up with security as a primary concern. It would be written in a safe language like Rust or perhaps even in a system that permits strong proofs of correctness. Every process would run with minimal required permissions. Everything everywhere would be authenticated. The user would have visibility into all this and would be able (if they desired) to control it, or they could rely on sets of sane defaults.
There'd be no need for code signing on such an OS. You could safely run anything and know it would not be able to access things you didn't grant it permission to access. The web JavaScript sandbox is the closest thing we have to that but it's extremely limited. By providing a Turing-complete sandbox that can be generally trusted to run code from anywhere, it does show that such a thing is possible.
(Mobile OSes look like they've kind of done this, but they haven't. They've just stuffed more chewing gum into the cracks in Unix and put a UI on top that hides it. They also "solve" the problem by nerfing everything.)
As you point out, security engineers have been working for decades on a vast array of techniques to mitigate classes of vulnerabilities. There's no reason to believe this is something that can ever be finished. There will always be bugs, always. Code signing embraces that reality by making it much easier to contain bad programs after they get out into the wild. It is just another tool in the toolbox, as with all security mitigations.
It's silly to suggest that you can solve security by simply rewriting the entire OS in Rust; and in a modern OS, every process already does run with minimal required permissions, and authentication is generally enforced, and users do have visibility and control, at least by design. Sometimes things slip through, of course. That will still happen even in the shiny new world you're proposing.
The existence of JavaScript does not imply that a completely secure OS is possible. There's a rich history of JS bugs that have led to total compromise of the OS -- in fact, earlier in your comment, you listed several vulnerability classes that have disproportionately affected JavaScript VMs.
Apps absolutely do not run with least privilege on any current popular OS. If I install an app on Windows, Linux, or Mac it can see tons of my data out of the box. In some cases it can see the whole system except for specifically locked directories and files. Then there’s the huge pile of local exploits afforded by unsafe languages and cruft.
Perfection may not be possible but if OS app isolation were as good as popular browser JS environments that would go a long, long way toward making it safer to run stuff locally.
It's not like the big development shops that do take the time to get the notarization process working get a special green checkmark by their app. After the app has been launched the first time, it's back to a level playing field with the apps that didn't notarize.
For example, showing a screenshot that doesn't contain the word "malware" and then saying:
> Using my application name and the word "malware" in one sentence is suggestive and extremely offensive by Apple.
Does not fill me with much hope that the author is detail-oriented. I'll keep reading, and I know already that the notarizing process isn't smooth, but my "snowflake meter" is already in the yellow zone, and I've yet to reach the part of the essay labeled "Part 1"
Which means the same thing, and if there weren't quotes around the words, a paraphrase is fine. But if you put quotes around a word or phrase, then it should be accurate, or I will start to wonder if your attention to detail is adequate to things like, um, notarizing software.
Try packaging a python interpreter with a ton of .so's and .dylibs with your .app and see how much hair you have left!
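For what it's worth, the usual workaround for bundles full of embedded libraries is to sign every Mach-O file individually, innermost first, before signing the outer bundle; a rough sketch (the identity string and app name are placeholders):

```shell
# Placeholder signing identity -- substitute your own Developer ID certificate.
IDENTITY="Developer ID Application: Example Corp (TEAMID1234)"

# Sign every embedded .so/.dylib with the hardened runtime and a secure
# timestamp; notarization rejects bundles containing unsigned Mach-O files.
find MyApp.app \( -name '*.so' -o -name '*.dylib' \) | while read -r lib; do
  codesign --force --options runtime --timestamp --sign "$IDENTITY" "$lib"
done

# Sign the outer bundle last, after all nested code is signed.
codesign --force --options runtime --timestamp --sign "$IDENTITY" MyApp.app
```

Signing inside-out this way avoids relying on `codesign --deep`, which tends to miss files in non-standard locations like a bundled Python runtime.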
(I also ship the same app to Windows with EV signing and I think that is more of a pain, due to the physical HSM requirement)
As others pointed out, https://github.com/mitchellh/gon is a great tool for doing this on your local machine (e.g., with a cron job). In addition, if you are building your app using a GitHub action (which I highly recommend if it is open-source), you can use my https://github.com/hubomatic/hubomat action to package, notarize, and staple a release build in one shot. The sample/template app does this automatically on every commit as well as once per day: https://github.com/hubomatic/MicroVector/actions.
So when this fails from a scheduled job, you at least know that something has changed on the Apple side and can investigate that right away. And if it fails as a result of a commit, then at least you can start looking at what changes you may have made to your entitlements or code signing settings or embedded frameworks or any of the other million things that can cause it to fail.
The main annoying thing so far for me using notarization long term is the terms and conditions signing step, which is silly because they're only updating the paid apps contract and we're notarizing explicitly so we can distribute outside the app store.
I think simply spreading signatures of known malware for a local check would be a much better option.
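As a sketch of the idea (illustrative only -- XProtect's actual rules are YARA-based, not a flat hash list): hash the file locally and look the digest up in a list of known-bad digests.

```shell
# check_blocklist FILE BLOCKLIST
# Prints BLOCKED if FILE's SHA-256 digest appears (as a full line) in
# BLOCKLIST, a newline-separated list of known-bad digests; OK otherwise.
check_blocklist() {
  hash=$(shasum -a 256 "$1" | cut -d' ' -f1)
  if grep -qx "$hash" "$2"; then
    echo "BLOCKED"
  else
    echo "OK"
  fi
}
```

`grep -qx` matches the entire line, so a truncated digest can never produce a false hit.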
However, as a Mac enterprise admin I don't think the process is particularly difficult. When it came in I scripted it all once and that worked fine. The only issue is that it sometimes doesn't like it if I make a PKG with a package from another supplier embedded in it. The problem is that I have to do that because some solutions have several packages that need to be installed in a particular order, and my MDM (MS Intune) does not provide a means by which to specify installation order. It just blasts all packages in a random order at the machines. So I re-package those. But anyway, even that is not all that tough to get around.
There isn't; the OCSP checks happen on launch automatically.
I got Apple to encrypt it next year and delete their logs, though, thanks in part to the publicity afforded by HN to my yelling about it. They also committed to adding an off switch.
Hopefully they'll do it in a clever, privacy-preserving way using a bloom filter or something, instead of just sending the developer cert hash up to Apple as soon as you double-click an app.
By the way, another issue I have with the developer cert thing is that they will block all your apps if they have an issue with just one thing you've uploaded. And we all know Apple tends to blur the line between plain old malware and "against our T&C/commercial interests". They already have a say in what apps I can use on my iPad, like the ban on emulators, etc. It's my device; it should be a recommendation at most. This is why I fear they are moving the Mac in this direction as well.
PS: I didn't realise you were the one who raised this issue a couple months ago. Thanks for your work!!
Does anyone know if "stapling" the distributed bundle files (.app, .pkg, executable files, etc.) is useful in any way?
My CI pipeline is build -> test -> deploy. The Mac "build" job uploads the app for notarization as a side effect. There is an additional "mac archive" job during the test stage. This job runs general tests on the DMG (checks code signing is valid, makes sure I'm not depending on system libraries), then waits for notarization to finish and staples the DMG. By the time I'm done mounting and checking the DMG, notarization is almost done anyway.
My typical release time right now (from git push to having a fresh app available to install for windows/linux/mac) is 7 minutes. I think I could get it down to around 3 minutes with optimizations.
My primary bottleneck right now is building / xz-compressing a windows installer (which means my windows tests finish last).
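The wait-then-staple step described above can be collapsed into two commands with the newer `notarytool` (Xcode 13+); the profile name is a placeholder saved earlier with `notarytool store-credentials`:

```shell
# Submit and block until Apple returns a verdict (usually a few minutes).
xcrun notarytool submit MyApp.dmg --keychain-profile "NOTARY_PROFILE" --wait
# Attach the ticket so Gatekeeper can verify the app offline.
xcrun stapler staple MyApp.dmg
```

In a pipelined setup like the one described, you'd drop `--wait` at submit time and poll later, so the test stage can run in parallel with Apple's scan.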
Remember the days of Windows 95 when you could make an application, sell it to a person in your own town and nobody in the world knew?! Not anymore!
Now Apple has to know that you made an app and get an exact copy of it, just for safe-keeping.
Fantastic opportunity for Linux apps to gain more dev resources, as anyone with a bit of foresight sees little future in macOS, iOS, Windows, or Android as development platforms.
Developers do not dictate the success of a platform. Users do. And they don’t want Linux on the desktop.
Nothing laughable about that. It's absolutely 100% true. That's why only 25% of developers are on a Mac. Outside of the Silicon Valley bubble that number goes way down.
> Fantastic opportunity for Linux apps to gain more dev resources, as anyone with a bit of foresight sees little future in macOS, iOS, Windows, or Android as development platforms.
Linux apps have steadily been gaining more dev resources. That's why we have big companies like Microsoft bending over backwards to make things like VS Code run on Linux or in Linux container tech.
> Developers do not dictate the success of a platform. Users do. And they don’t want Linux on the desktop.
Linux is on more systems than any other OS. Was that because of Users? Nope.
If Macs and iPhones disappeared tomorrow, the world would largely continue on without much hassle. If Linux or Windows disappeared, we'd have a worldwide catastrophe on our hands. Users never chose Windows either. Developers and the businesses that they worked for did.
The number of normies using Linux on the desktop isn't a good metric. There are as many devs on Linux as there are on a Mac. Of those, I'd say more than half are not even targeting iOS but rather the web. So, Apple is always just a few bad moves away from losing those web devs to a Linux desktop.
Developers embraced Windows over Mac, users followed. (Until iOS development made Mac's the default dev machine.) Developers embraced iOS and Android over Symbian and webOS.
Windows and Mac were great development platforms ten years ago, and iOS and Android were way better than the now-dead competition.
I wish we could drop it, but our Analytics disagree, sadly.
The problem is that they’d rather complain about systemd or reinvent the wheel in 10 different ways which in the end means their resources are spread thin and nothing gets done.
The most complete frameworks it has now are the Qt-based KDE frameworks, but those are natively C++, and many if not most developers are not going to want to deal with C++ no matter how many quality-of-life affordances Qt offers. Bindings exist, but are limited to a handful of languages and come with their own quirks.
GTK is much better from a bindings standpoint, but isn't as complete as the KDE frameworks meaning devs have to bring a lot of their own stuff, plus GTK devs are subjected to new releases pulling the rug out from under them with new versions.
What I'd really like to see is an equivalent to AppKit, which includes practically everything needed for most apps (reducing dependencies to a minimum) and is C-accessible, making it reasonable to write bindings for other languages.
Probably not, but it sometimes feels like it.
This is weird.
I'd say they're probably one of the most developer-hostile companies. The only way they look friendly is in comparison with Nintendo.
Currently I'm annoyed by a long-known bug that requires me to manually confirm 50+ popups with username and password whenever I sign an iOS app...
How much time should I expect to budget for the initial signing/notarizing/submission process?