> It’s important to note that this technique requires access to the machine, which could either be a shell or physical access to it
I mean... what? I can literally do code injection on (almost) any application I'm running given that I have shell or physical access to the machine. It's like the author never heard of Detours[1] or VTable injection[2]. This is a low-effort clickbaity post that brings nothing to the table to serious security researchers or even hobbyist hackers.
It's a shame, too, because there are a lot of very interesting techniques out there for injection and remote execution, but they are OS-dependent and require a lot of research. Clearly, a more interesting post would have been too much effort for OP and instead we're going to pile on Electron.
PS: ASAR code-signing is not fool-proof, as we can still do in-memory patching, etc. Game hackers have been patching (signed) OpenGL and DirectX drivers for decades. It's a very common technique.
[1] https://www.microsoft.com/en-us/research/project/detours/
Notably, according to that Ars Technica coverage:
> attackers could backdoor applications and then redistribute them, and the modified applications would be unlikely to trigger warnings—since their digital signature is not modified
That claim isn't in the original post, and doesn't seem to be true AFAICT: every distribution mechanism I can think of signs the entire distributable, so you really can't just modify the ASAR without breaking the signature. Windows and macOS both require you to install only from signed application bundles/installers (or at least they make it very difficult to use unsigned software). On Linux you could get caught out, but only if you download and install software with no signing/verification whatsoever, and that's a whole other can of worms.
If that claim were true this would be a bigger concern, but given that it's not I'm inclined to agree this is basically nonsense.
Only drivers have to be signed on Windows, and even then not all kinds until Windows 8. Also many apps, including Visual Studio Code, are available in 'run from USB' form, so there's no installer, just an archive you unpack and run. Those archives can be modified and redistributed without invalidating any of the PE signatures within, but since nobody pays attention to these signatures anyway and Windows doesn't enforce them, yeah, this is typical Black Hat-week PR nonsense.
Windows and macOS both make it difficult to install self-signed (or unsigned) software. For example, I made http://www.lofi.rocks (an open source Electron-based music player) and I'm not going to spend like a few hundred bucks a year to have a non-self-signed cert. This makes both macOS and Windows complain when users install the app. More draconian practices (that "protect users from themselves") will make it even harder for independent open source devs like me to share cool projects with a wide audience.
The author presents the info clearly and even includes videos demonstrating the “technique,” so it doesn’t seem “low effort” and click-baity to me.
I'm not sure I can support your view that this is unworthy of attention or a fix because of in-memory patching, etc. If I told my customers not to worry about my product because there are much scarier ways they can get hacked elsewhere, they would still ask why I didn't put my best effort into closing a known loophole.
Compare that with an actual Chromium RCE vulnerability (a very clever PDF heap corruption exploit): https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2018-1748...
1) We should absolutely work towards allowing developers to sign their JavaScript.
2) Re-packaging apps and including some menacing component as a threat vector isn't really all that unique. We should ensure that you can sign "the whole" app, but once we've done that, an attacker could still take the whole thing, modify or add code, and repackage. We sadly know that getting Windows SmartScreen and macOS to accept a code signature doesn't necessarily require exposing your identity, and I'd _suggest_ that most people don't _actually_ check who's signed their code.
3) If you ship your app as a setup bundle (say, an AppSetup.exe, an App.dmg, or rpm/deb files), you should code-sign the whole thing, which completely sidesteps this issue. The same is true if you use the Mac App Store, Windows Store, or Snapcraft Store.
I've already been working on this for my own projects. It might be something that can be generalized for all Electron projects.
https://github.com/soatok/libvalence
https://github.com/soatok/valence-updateserver
https://github.com/soatok/valence-devtools
This uses Ed25519 signatures and an append-only cryptographic ledger to provide secure code delivery. The only piece it's currently missing is reproducible builds.
For greater context: https://defuse.ca/triangle-of-secure-code-delivery.htm
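The append-only ledger idea can be sketched as a simple hash chain, where each release record commits to the previous entry. This is a minimal illustration in Python of the concept only; the actual libvalence design layers Ed25519 signatures on top of something like this, and the field names here are made up:

```python
import hashlib
import json

def chain_append(ledger, release):
    """Append a release record that commits to the previous entry's hash."""
    prev = ledger[-1]["entry_hash"] if ledger else "0" * 64
    body = {"prev": prev, "release": release}
    entry_hash = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    ledger.append({**body, "entry_hash": entry_hash})
    return ledger

def chain_verify(ledger):
    """Recompute every hash; any tampering with history breaks the chain."""
    prev = "0" * 64
    for entry in ledger:
        body = {"prev": entry["prev"], "release": entry["release"]}
        expected = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        if entry["prev"] != prev or entry["entry_hash"] != expected:
            return False
        prev = entry["entry_hash"]
    return True

ledger = []
chain_append(ledger, {"version": "1.0.0", "asar_sha256": "abc123"})
chain_append(ledger, {"version": "1.0.1", "asar_sha256": "def456"})
assert chain_verify(ledger)

ledger[0]["release"]["version"] = "9.9.9"  # try to rewrite history
assert not chain_verify(ledger)
```

The point of the chain is that a malicious update server can't silently swap out an old release: changing any historical entry invalidates every later hash, so clients that gossip or pin the chain head will notice.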
macOS code signing does not extend to Contents/Resources/ which, unfortunately, is where — without exception — every application on my system stores 'electron.asar'.
/Applications/VMware Fusion.app/Contents/Library/VMware Fusion Applications Menu.app/Contents/Resources/electron.asar
/Applications/balenaEtcher.app/Contents/Resources/electron.asar
/Applications/itch.app/Contents/Resources/electron.asar
/Applications/lghub.app/Contents/Resources/electron.asar
/Applications/Boxy SVG.app/Contents/Resources/electron.asar
/Applications/Slack.app/Contents/Resources/electron.asar
/Applications/Discord.app/Contents/Resources/electron.asar
> Here's the thing with how gatekeeper works: that application had already passed gatekeeper and will never be _fully_ validated ever again.
> If you zipped your modified Slack.app up, uploaded it to google drive, and downloaded it again. Gatekeeper would 100% reject that application, the ASAR file is included as part of the application signature. You can prove this by checking the "CodeResources" file in the apps signature.
> You can't re-distribute the app without gatekeeper completely shutting you down.
We ship binaries for macOS, Linux, and Windows. ALL our binaries are signed. You're INSANE if you don't do it. It's still a MAJOR pain though, and I wish it were a lot easier.
If ANYTHING, what we need to do is make it easier for macOS and Windows developers to ship code-signed binaries.
It took me about 2-3 weeks of time to actually get them shipped. Code signing is very difficult to set up, and while Electron tries to make it easy, it's still rather frustrating.
The biggest threat to Electron is the configuration of the app and permissions like disabling web security. If you make silly decisions there, an attacker might be able to get Electron to do privilege escalation.
The diligence applied for both platforms at least exceeded pure security theater. They actually did a modicum of effort to ensure I was who I said I was, but it wasn't much. It just took a lot of wall time.
> Tsakalidis said that in order to make modifications to Electron apps, local access is needed, so remote attacks to modify Electron apps aren't (currently) a threat. But attackers could backdoor applications and then redistribute them, and the modified applications would be unlikely to trigger warnings—since their digital signature is not modified.
So the issue is that Electron app distributions don't include a signed integrity check, so there's no way for end users to detect if they got a modified version. I thought that the macOS builds did do this, but maybe the ASAR bundles aren't included in the hash, or maybe I'm wrong entirely.
I assume a solution would store the signing pubkey on initial install and then check updates against that. The only way the signing key could be checked, other than trust-on-first-install, would be through some kind of registry, which is what I assume the Windows and Mac stores are geared toward. Am I correct on all this?
EDIT: Either way, it seems like the solution is to only use the projects' official distribution channels. Signed integrity checks would be useful but probably not change the situation that dramatically. Is that accurate?
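The trust-on-first-install idea above can be sketched roughly like this. Note this is a toy in Python with a hypothetical `pin_path`, and it pins a bare hash of the publisher key; a real updater would use the pinned key to verify an actual signature over the update payload:

```python
import hashlib
import os
import tempfile

def key_fingerprint(pubkey_bytes):
    return hashlib.sha256(pubkey_bytes).hexdigest()

def check_update_key(pin_path, pubkey_bytes):
    """Trust-on-first-use: pin the publisher key on first install,
    then reject any update presented with a different key."""
    fp = key_fingerprint(pubkey_bytes)
    if not os.path.exists(pin_path):
        with open(pin_path, "w") as f:   # first install: record the key
            f.write(fp)
        return True
    with open(pin_path) as f:
        return f.read() == fp            # later updates must match

pin = os.path.join(tempfile.mkdtemp(), "publisher.pin")
assert check_update_key(pin, b"publisher-key-1")      # first install: pinned
assert check_update_key(pin, b"publisher-key-1")      # same key: accepted
assert not check_update_key(pin, b"attacker-key")     # key swap: rejected
```

The obvious gap is the one the thread keeps circling: the first install is trusted blindly, which is exactly what an app-store registry model is meant to close.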
> I thought that the MacOS builds did do this, but maybe the ASAR bundles aren't included in the hash?
Yeah, I think that's the problem they're describing. It sounds like the Mac setup will require binaries -- like the Electron runtime itself -- to be codesigned, but if the first thing your codesigned binary does is to read an unprotected JS file off disk and execute it, there's no codesigning benefit.
> Either way, I assume a solution would store the signing pubkey on initial install and then check updates against that
Not just updates that you initiate yourself, though -- I think the idea is that any other app on the system could backdoor the JS in the ASAR at any time. That's pretty hard to defend against.
> but if the first thing your codesigned binary does is to read an unprotected JS file off disk and execute it, there's no codesigning benefit.
The ASAR files described in this post are part of the signature of the application though. You can't modify that file and then redistribute the app to another machine without gatekeeper getting incredibly angry at you.
E.g. try modifying the ASAR file, zip the app up, upload it to Google Drive, download it again, and try to run the app. Gatekeeper will boot it into the shadow realm :)
Good point, but if the attacker has filesystem access you're already hosed. I suppose there could be some other risk where the ASAR could be modified without full FS access? But I'd want to know what that attack is, if that's the case.
[1] https://docs.microsoft.com/en-us/windows-hardware/drivers/in...
But this will all be in vain if the attacker scenario includes unfettered file-system access. (They can modify the app to not perform these checks, for example.)
Resources on macOS get signed as part of the application bundle. I wonder why this isn't possible for Electron apps as well.
ASAR files are signed as part of the application bundle. The issue is that folks don't understand how gatekeeper works so let me try explain it here.
When you download an application from the internet, macOS initially considers it "quarantined". When a quarantined application is first opened, gatekeeper scans it _completely_ and, if it's happy, removes the quarantine tag and lets it launch.
Once that quarantine tag is removed, gatekeeper will never run a complete check of that application again. Meaning the ASAR files are validated once, when the application is first launched.
What people are seeing here is they're taking an application that gatekeeper has already signed off on, modifying it, and then asking why gatekeeper didn't stop them.
If you took that modified application, zipped it up, uploaded it somewhere, downloaded it again and tried to run it, it would NOT work. Gatekeeper would boot that invalid application to the shadow realm.
https://github.com/electron/electron-packager/issues/656#iss...
I feel like this will get a ton of discussion here anyway due to the Electron hate train.
It's also true to say something like "Rails can be back-doored by modifying the code and redistributing it to unsuspecting developers!"
Actually, you could say the same or similar things about many applications, including binary distributions. With some analysis you can figure out what condition a jump instruction is checking, and modify it to always jump where you want. Cheat Engine lets you analyze game memory at runtime and substantially modify behavior.
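The jump-patching trick is easy to illustrate: on x86, a two-byte conditional jump like JE (opcode 0x74, 8-bit displacement) becomes an unconditional JMP (0xEB, same displacement) by flipping a single byte. A toy sketch over a byte buffer, with made-up offsets, purely for illustration:

```python
# x86 opcodes: 0x74 = JE rel8 (jump if equal), 0xEB = JMP rel8 (always jump)
JE, JMP = 0x74, 0xEB

def force_jump(code: bytearray, offset: int) -> bytearray:
    """Patch a conditional JE at `offset` into an unconditional JMP,
    keeping the same 8-bit displacement byte."""
    if code[offset] != JE:
        raise ValueError("no JE at that offset")
    code[offset] = JMP
    return code

# toy "binary": cmp eax, ecx; nop; je +0x10; nop
code = bytearray([0x39, 0xC8, 0x90, 0x74, 0x10, 0x90])
force_jump(code, 3)
assert code[3] == JMP and code[4] == 0x10  # displacement untouched
```

This is exactly the class of edit that invalidates an Authenticode/codesign hash over the binary, which is why the interesting part of the Electron story is that the JS payload sits outside what gets re-checked at launch.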
As you can see from the issue, this exploit has been known for 2 years and probably longer than that. As I said (November 2018) in the linked issue, I believe it's only a matter of time before Skype/Slack/VSCode gets packaged up with malicious code and flies under the radar of SmartScreen and Gatekeeper. It probably won't be downloaded from the official websites but there are plenty of other ways of distributing the software. I get the feeling that the Electron team aren't taking it too seriously. I think this has the potential for a really dangerous exploit.
My startup (ToDesktop[1]) uses Electron and I've put a huge effort into securing certificates on HSMs (Hardware Security Modules). But it's mostly a pointless exercise when a hacker can simply edit the javascript source.
I was doing a security assessment for a client, and after gaining a foothold on the host we needed to establish persistence. As the endpoint protection was blocking anything non-signed, I used Slack to inject a PowerShell payload that's executed on startup and gains us access back to the internal network.
So the risk is there, but not to the individual user; it's to the organisations using it. I didn't expect this to become a big deal over "redistribution", but I had hoped the focus would be on command execution without modifying the binary.
Having said that, this can be solved with a simple integrity check of the asar files. Sure, the attacker can modify the binary file too, but then it's not signed anymore.
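A minimal version of such an integrity check: hash the asar at build time, embed the digest inside the signed binary, and compare on startup. A sketch in Python under those assumptions (a real Electron app would do this in the main process, in native code, before loading the bundle):

```python
import hashlib
import os
import tempfile

def sha256_file(path):
    """Stream the file through SHA-256 in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_asar(path, expected_digest):
    """Refuse to start if app.asar was modified after build. The expected
    digest must live inside the signed binary itself; otherwise an attacker
    with filesystem access simply updates the digest along with the asar."""
    return sha256_file(path) == expected_digest

# demo with a stand-in "asar" file
path = os.path.join(tempfile.mkdtemp(), "app.asar")
with open(path, "wb") as f:
    f.write(b"original bundle")
good = sha256_file(path)
assert verify_asar(path, good)

with open(path, "ab") as f:
    f.write(b"backdoor")            # tamper with the bundle
assert not verify_asar(path, good)
```

As the parent notes, this only moves the goalposts: an attacker with full filesystem access can patch out the check itself, so it defends against lazy tampering, not a determined local attacker.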
> attackers could backdoor applications and then redistribute them
Most distribution mechanisms however ship a single signed bundle, containing & thereby signing the entire application, including resources like ASARs. Any that don't sign the application are of course vulnerable to all sorts of trivial attacks (replace the whole binary with anything you like).
To make this a danger from a distribution POV, it seems you would need the application to be partly signed; i.e. the executable but not the included resources. Where does that happen?
For macOS for example, all resources (including ASAR files) are signed, and macOS makes it intentionally difficult to install anything that isn't signed.
Similarly for Windows you'll see large warnings if you open an unsigned application; Electron apps are almost always distributed as a single signed installer exe file, including the ASAR file.
On Linux it depends wildly, but most of the time either the entire package (e.g. a deb from the official repos) is signed, or nothing is signed and you're vulnerable regardless.
What am I missing?
(I'm not addressing the risk of altering an already-installed application - that's a separate attack also mentioned, but requires local access to rewrite files on the target machine, at which stage there's many other options)
EDIT: URL has now been updated, here I'm discussing points from https://arstechnica.com/information-technology/2019/08/skype.... The post now referenced doesn't mention redistribution, and I suspect that in fact Ars is wrong, and allowing signed redistribution of subverted versions isn't a real vulnerability here. I'd love to hear if I'm wrong though!
I just tried this with Slack on macOS, and it launched without a single complaint about code signing. It would appear that either the ASAR files are not included in the signature, or the OS doesn't check the entire application bundle on every launch.
(Edit: That said, I needed sudo to do the mod in the first place, so I'm not about to start panicking about this as an attack vector.)
(Edit 2: As 'marshallofsound pointed out below and elsewhere, it is the latter case; the OS doesn't check the entire bundle on every launch. Which makes sense, and also means TFA is not really about Electron at all.)
>Tsakalidis said that in order to make modifications to Electron apps, local access is needed, so remote attacks to modify Electron apps aren't (currently) a threat. But attackers could backdoor applications and then redistribute them, and the modified applications would be unlikely to trigger warnings—since their digital signature is not modified.
But that's achievable only with Sciter :)
Here is the Google Cache link https://webcache.googleusercontent.com/search?q=cache:xeIOGz...
VSCode, Discord, and (the new) Slack are written in Electron, and they absolutely excel, even at the cost of a bit of extra memory usage.
There's a circlejerk of Electron hate, but there's a reason it's so popular: for many companies and individuals, the ease of development outweighs its memory drawbacks.
Edit: Not to mention that it uses Node.js, HTML, and CSS, so moving from web apps => desktop becomes a much simpler endeavor.
As the sysadmin at my company I ban all electron apps unless it's clear they are exceptionally well written and/or there are absolutely no alternatives. VSCode is really good and one of my few exceptions. I strongly suspect it would have been even better if they had developed with a more performant platform, but who knows.
Edit: To expand on my reasoning:
Say an operation on a well-built, performant application is 5 seconds faster than the Electron (or otherwise bloated) version. Say employees do that operation on average 20 times a day. Say I have 2500 employees who work 246 days a year and get paid $25/h on average. The slow version will cost the company $427,083 every year. That's the amount of money I'd be willing to spend per year for the fast version of this hypothetical application.
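For what it's worth, the back-of-the-envelope arithmetic checks out:

```python
seconds_saved = 5      # per operation
ops_per_day = 20
employees = 2500
work_days = 246        # per year
hourly_wage = 25       # dollars

hours_lost = seconds_saved * ops_per_day * employees * work_days / 3600
annual_cost = hours_lost * hourly_wage
print(round(annual_cost))  # 427083
```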
A company like Slack has hundreds of thousands of users, and the poor performance must be costing someone millions. It boggles my mind that, with all that money, they still can't find the resources to make a performant application.
And that's the naive calculation; there's also the administrative aspect of installing, upgrading, and supporting the application. (The worse the application's quality, the more time I, who am paid a lot more than $25/h, spend supporting it.) While there are multiple variables here, a development team that prioritizes "easy and fast" development doesn't inspire me with confidence that they have also prioritized building a quality product.
That is literally how Adobe Air was billed
For the developer, while the user has to deal with horrid battery and memory consumption.
Devs are paid well enough that "easy" shouldn't be a top priority.
edit: Just to be clear, I don't mean to suggest Notepad++ is best at large files. EditPad is clearly the winner in that category. But I do mean to say VSCode and N++ are on par with each other.
Besides, VSCode provides an incredible amount of functionality. So what if it uses as much RAM as my browser. I routinely run multiple browsers, each with multiple windows.
Aside from that, I don't know what the situation is in the year 2019 but does C still allow you to easily mess up memory management?
Only other considerations are to have a more basic hash for certain financial websites/insurance companies (cough, Allstate) that for some reason think an 11-character max password is still okay in this millennium, and to have a method of "incrementing" the password in case you have a service that forces rotations. The only reason to write the hash down is for financial service access in the case of estate planning; store it securely/safely, of course.
Ever since switching to this, I've found it's even more convenient than a password manager. You get used to running your hash in a very short time, and don't need to have access to an electronic device to recall a password.
The fact that billion-dollar companies insist on continuing to take the shortcut approach when they have the resources available to "be better" is not the fault of the framework developers who originally innovated to fill the demand.
Then you don't understand why people use them.