Does the name "Ubuntu Snap Store" carry a connotation that code is reviewed for malware by Ubuntu, the way that the Apple, Google, Amazon, etc. mobile app stores are? Or does its presence in the software center app imply a connotation that it's endorsed by the OS vendor?
I was at a PyCon BoF earlier today about security where I learned that many developers - including experienced developers - believe that the presence of a package on the PyPI or npm package registries is some sort of indicator of quality/review, and they're surprised to learn that anyone can upload code to PyPI/npm. One reason they believe this is that they're hosted by the same organizations that provide the installer tools, so it feels like it's from an official source. (And on the flip side, I was surprised to learn that Conda does do security review of things they include in their official repositories; I assumed Conda would work like pip in this regard.)
Whether or not people should believe this, it's clear that they do. Is there something that the development communities can do to make it clearer that software in a certain repository is untrusted and unreviewed and we regard this as a feature? The developers above generally don't believe that the presence of a package on GitHub, for instance, is an indicator of anything, largely because they know that they themselves can get code on GitHub. But we don't really want people publishing hello-worlds to PyPI, npm, and so forth the way they would to GitHub as part of a tutorial, and the Ubuntu Snap Store is targeted at people who aren't app developers at all.
The processes for installing from the two are also different enough that the user can't mistake one for the other: official packages are a pacman -S away, but installing from the AUR either requires a git clone and a makepkg -sri, or an AUR helper that bugs you to review the PKGBUILD.
> Safe to run - Not only are snaps kept separate, their data is kept separate too. Snaps communicate with each other only in ways that you approve.
Versus the AUR:
> DISCLAIMER: AUR packages are user produced content. Any use of the provided files is at your own risk.
Uh, not everyone. I ran Manjaro for a bit and found that many of the things I ran were available via AUR. The usual reply I'd find in a search was something like:

    sudo pacman -Sy
    sudo pacman -S yaourt base-devel
    yaourt -Sy
    yaourt -S gpodder

(That's the entire reply, BTW.) At some point I started to wonder what the provenance of these packages was and what the security implications were. I might have looked for information on the security risks of these packages, but this is the first concrete claim I recall seeing on the subject. Probably a good thing I'm not running Manjaro any more.
I do run Ubuntu and have some snaps installed (Golang, VS code among others) and I'm now wondering if it would be possible for a malicious developer to substitute compromised snaps for the official ones. My understanding is that they update silently and automatically so I wouldn't even know about updates if I didn't check logs.
If someone were to compromise an upstream Arch server I suspect it wouldn't be especially difficult to inject malware or trojans somewhere even those building from source would receive.
Since doing otherwise is a few clicks away, and sufficiently subtle attempts are unlikely to be noticed by even observant parties, this is about as bad as the Windows "hunt down an exe" model, which has been proven for decades not to work.
The AUR isn't filled with malware because Arch is a very small target compared to Windows, and its user base is full of observant people.
It cannot possibly scale even to the levels ubuntu aspires to achieve.
As far as I know, Apple is the only company that manually reviews the code of apps, and even they let some (in my opinion) malware through [1]. Everybody else just does some heuristic anti-malware checking and then publishes the app.
1: Uber was permanently fingerprinting devices, even though Apple was disallowing this kind of tracking in their ToS.
Is that perception correct?
There's also a limited set of people who can upload new packages and a separate team that reviews those, so duplicated functionality / low-quality apps are unlikely to make it into the archive in the first place. Yet Another 2048 Clone would probably not be allowed in unless it was part of e.g. an official GNOME game set.
It also helps that Debian insists on recompiling everything from source and does not redistribute binaries from an upstream source, even if freely-licensed source code is provided.
Instead, the reason you don't see malware pushed to those repositories is because the incentives in the free software world don't align to make them happen in the first place. The moment some project would embed phone-home advertising it would be forked and replaced by all the major distros, so it doesn't happen.
There's also an alignment of incentives between upstream and packagers. If e.g. Xorg tried to embed something evil the volunteer contributors to Xorg would pro-actively sound the alarm and tell distros before they shipped that code.
None of this is true in the iOS and Android stores, where you have proprietary paid-for apps whose incentive is to extract as much value from the user as the app store in question will allow, and where the upstream maintainers aren't free software advocates but corporate employees who will do what they're told at the cost of the wider software community.
It's an adversarial relationship, not a cooperative relationship.
But the brew cask package “table-tool”? That sure sounds official!
(since an app can always download and execute extra code after it's installed.)
iOS enforces this in several ways. Any executable page of code must be signed by Apple (unless your phone is jailbroken), so you simply can't ship native code outside of the App Store delivery path. Apple looks at what functions you link against and bans "private API", and functions like dlsym() that let you open arbitrary symbols from a runtime string are forbidden. Apple usually disallows things that look like they're downloading and interpreting some language at runtime (though I'm not clear on the current rules for this, and I think things like e.g. Python shells are fine as long as it's user-supplied code). The only exception is JavaScript inside a webview, and that doesn't give you any access to the system without having native code to expose things to JavaScript, and Apple can review that native code.
Debian will enforce this too, for computing-freedom reasons as opposed to platform-control reasons: it's impossible for Debian to say "yes, this is free software" if the code isn't available for Debian to audit. And it's obviously impossible for Debian to check it for malware / unwanted functionality. Applications like Firefox or pip can download and install code at the user's request, but applications that automatically download part of their core functionality cannot go into Debian without being patched to allow Debian to compile and ship those parts as part of the package.
>"X has no real concept of different levels of application trust. Any application can register to receive keystrokes from any other application. Any application can inject fake key events into the input stream. An application that is otherwise confined by strong security policies can simply type into another window," he wrote.
They might have wrapped the X protocol to provide more security and control. Instead, they decided not to.
They might have created a system as bulletproof as iOS, where you can install any apps and be 99.9999% sure that they won't steal your data unless you allow them to. But they created this instead.
Wayland will also solve a few of these problems.
Personally, I'm of the opinion that the Linux security model is a horribly outdated ticking time-bomb and people really aren't taking it seriously enough. It drives me kind of crazy; a lot of people act like X security is no big deal, like it's fine that our primary security model for Linux is just based on file permissions. I think that once we have a better permissions system in place people are going to look back with 20/20 hindsight and say "Well duh, of course apps should be isolated from each other and the system in general. Everyone knows that."
There are two permissions that my desktop/web/mobile environment doesn't ask me for that would prevent most attacks like this: network access and cpu access.
Network access is obvious. It kind of boggles my mind that apps can by default just access the network and make a request to any server that they want. Blocking that alone would take care of a huge number of crypto miners (and spyware), because they all need network access to operate. There are almost no good reasons I can think of for a desktop app to have network access by default.
The less obvious permission that I think is probably worth exploring is CPU access. I don't necessarily know what a control for that would look like in a standard permission system, but if an app wants to start going crazy with my CPU, whether they're being malicious or just innocently inefficient, my OS/browser/phone should probably bring it to my attention and give me the opportunity to either permanently throttle them or set some kind of ground rules.
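As a thought experiment, that "CPU permission" could start as a simple policy check: sample a process's CPU time against wall-clock time and flag it when it exceeds a user-set budget. This is a minimal sketch in Python; the 0.8-cores threshold and the self-sampling approach are assumptions for illustration, not any existing permission API:

```python
import time

def should_throttle(cpu_seconds: float, wall_seconds: float,
                    limit: float = 0.8) -> bool:
    """Flag a process whose CPU share exceeds `limit` cores over the window."""
    if wall_seconds <= 0:
        return False
    return cpu_seconds / wall_seconds > limit

# Sample this process's own CPU share over a short window.
wall0, cpu0 = time.monotonic(), time.process_time()
while time.monotonic() - wall0 < 0.1:
    pass  # busy-spin, like a rogue miner would
exceeded = should_throttle(time.process_time() - cpu0,
                           time.monotonic() - wall0)
# A tight spin loop will usually trip the limit on an idle machine.
```

A real implementation would sit in the OS or browser and prompt the user, as the comment suggests, rather than sampling from inside the offending process.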
Vista and subsequent versions reduced the problem by introducing integrity levels, so that lower-privileged applications can't interact with higher-privileged ones, but as far as I know they can still interact with applications at the same level.
And Microsoft is still quite confident that eventually Win32 will join Win16, even if it takes considerably longer than they were initially willing to wait.
>Snaps are containerised software packages that are simple to create and install. They auto-update and are safe to run. And because they bundle their dependencies, they work on all major Linux systems without modification.
Of course you should also prevent the program from reading the original privileged Xauthority data. Running it as a different user does the trick.
By the way Android, unlike Linux, runs every app under a different user account.
Regardless of whether you love or hate Electron, its rise in popularity has clearly shown that a number of HN users feel like they don't have complete control over their computer's resources - that their only choice is to either avoid an app entirely or slow down their computers.
A user should be able to pick up an application and easily say something like "you can have up to 2 CPUs and 250mb of RAM. If you want more, come back and ask me." And honestly, if Google couldn't trust that most users would give it unfettered access to 4 gigs of RAM, I bet their engine would magically get a lot more efficient really quickly.
Does the license actually mention that it mines? I am reminded of a lot of "freemium"/"ad-supported"/etc. software that makes its author money via ads or whatever else --- and you agree to that if you read the license. It is a bit shady to name the miner 'systemd', but it seems rather overboard to call this "malware"... when I see that term I think of software that self-propagates, exfiltrates personal data, deletes/encrypts files for ransom, etc.
Also from the page:
> Size 138.8 MB
I'm not really familiar with the latest trends in (bloatware?) development, but a simple game like that taking >100MB would make me suspicious --- even 10MB is in the "questionable" range, and ~1MB would be closer to what I consider "typical". 138MB is bigger than the installed size of Firefox, and that's a far more complex application...
Nah. Games often feature a bunch of textures and video and sound files. Bad compression or too high resolution on those is quite common, which is why games _are_ often that large.
Also proprietary software usually ships a bunch of libraries - games often ship with a premade engine, which are also often quite large.
As a datapoint, I have a copy of "Strata", which is a simple /minimalistic puzzle game, and it _is_ 78MB.
You'd be hard pressed to find an engine or runtime (except Electron, which some people are saying it actually uses) that could get a game like that (literally moving boxes and text) up to that size.
Even if he used static images and ttf fonts the size is way off. PNGs are a couple to a couple dozen kilobytes apiece. Fonts are a few megs at most each. The single biggest font 'file' I know of/have used for real (outside of experiments people might do with the file formats) is the Noto Sans CJK ttc file, and it's not a single font but a collection (and it covers all of CJK[0], which is an insane range).
The entirety of Minecraft is under 300 megs, and that includes the launcher, the language packs, and the entire JRE, which is 140 megs by itself (!).
On gamejolt there is a (very nice) small low poly game called The Very Organized Thief, it was made in Unity3D and is just 13 megs in a zip (EDIT: and 35 unpacked).
I couldn't find a low-poly game made in Unreal Engine 4, nor any UE4 game under 100-200 megs (EDIT: when packed), so maybe Unreal Engine 4 has a high static cost, but I'm not sure right now.
In any case: 2048 taking over 100 megs is actually crazy, especially since it's a game so simple you can rewrite it in almost any engine overnight. He/she could have done at least that much.
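To put numbers on the argument above: even a generous asset budget for a 2048-style game lands well under 138 MB. The counts here are illustrative assumptions, not measurements from the actual snap:

```python
# Hypothetical asset budget for a simple tile/puzzle game.
png_count, png_kb_each = 100, 25      # sprites, "a couple dozen kilos apiece"
font_count, font_mb_each = 10, 3      # TTF fonts, "a few megs at most each"
total_mb = png_count * png_kb_each / 1024 + font_count * font_mb_each
print(f"{total_mb:.1f} MB")           # ~32 MB, under a quarter of 138.8 MB
```

So the bulk of the snap's size has to come from somewhere other than game assets, which is consistent with a bundled runtime.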
https://github.com/xmrig/xmrig/releases
edit: apparently the actual game is based on https://github.com/gabrielecirulli/2048 which is HTML+JS, so probably the Snap was bundling in Chromium/Electron, which explains the size.
This is very much the idea of these awful (IMHO) ways of distributing software. Bundle all of your dependencies, share nothing, expose users to the risks of exploits in the libraries you've bundled (and maybe statically so no one can even figure out you have done that).
Please stop this madness.
Nothing to see here.
edit: https://forum.snapcraft.io/t/snap-license-metadata/856/53 It's still unresolved; they probably use the deprecated licence field in VLC.
I use one donationware app and it's clearly marked that it displays an ad; it's explained that they fetch it from their own site via a dumb static-image request on startup (it sits under the main GUI; it's not a splash screen or wait dialog or anything), and for a donation you can get rid of it. That's leagues above the experience an average website gives you without adblocking software these days.
As for the size - I've no idea what happened, the original page[0] tested by [1] reports as 153 KB. Maybe he just wrapped it in electron?
[0] - http://gabrielecirulli.github.io/2048/ [1] - https://tools.pingdom.com/
Like any other Electron based app...
Note that at 12c per kilowatt-hour, 200W of extra draw on a machine that's always running is 2.4c per hour.
Over the expected 5-year lifespan of a machine that works out to roughly $1,050; in the EU, where electricity is on average more expensive, it would be closer to $2,000.
Stealing a couple of thousand dollars from users isn't much more friendly than cryptolockers.
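The arithmetic behind those figures, assuming the machine draws the extra 200W around the clock (the EU price here is an assumed rough average, not a quoted figure):

```python
extra_kw = 0.2                      # 200 W of extra draw
hours = 24 * 365 * 5                # five always-on years
us_price, eu_price = 0.12, 0.22     # $/kWh; the EU average is an assumption
cost_us = extra_kw * hours * us_price
cost_eu = extra_kw * hours * eu_price
print(f"US: ${cost_us:,.0f}, EU: ${cost_eu:,.0f}")  # US: $1,051, EU: $1,927
```

Any duty cycle below 100% scales the cost down proportionally, but even a machine that's on a third of the time loses hundreds of dollars.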
Tell that to your electricity provider
Canonical's Snapcraft literally says "Get published in minutes"
Any random guy can publish his malware with next to no review.
https://dashboard.snapcraft.io/snaps/
Yes, maybe they win the counter for published apps compared to Flathub. Congratulations!
According to the devs involved, on mailing lists and bug reports, the point of snap over apt/etc is the auto updates can't be disabled, so end-users can't put off or forget about security updates. Even adding a way to delay or configure when an update happens seemed to take a lot of convincing before it was added.
(In the end I just disabled the snap service entirely to stop auto-updates. Only downside seems to be that I can't query or install new things through snap without it.)
Snaps are not Ubuntu-only. You can find install instructions for many distros here: https://docs.snapcraft.io/core/install
Is this actual policy? How do they determine who is the original developer?
Yes, it's actual policy. Quote:
> If there’s an app that you'd like to be distributed on Flathub, the best first course of action is to approach the app’s developers and ask them to submit it.
> The licenses for most software and other practical works are designed to take away your freedom to share and change the works. By contrast, the GNU General Public License is intended to guarantee your freedom to share and change all versions of a program--to make sure it remains free software for all its users. We, the Free Software Foundation, use the GNU General Public License for most of our software; it applies also to any other work released this way by its authors. You can apply it to your programs, too.
> When we speak of free software, we are referring to freedom, not price. Our General Public Licenses are designed to make sure that you have the freedom to distribute copies of free software (and charge for them if you wish), that you receive source code or can get it if you want it, that you can change the software or use pieces of it in new free programs, and that you know you can do these things.
Part of the problem with letting people have freedom is that they have the freedom to make decisions that impact communities in a negative way. But it's usually worth the tradeoff.
This is probably the most relevant line though:
> For the developers' and authors' protection, the GPL clearly explains that there is no warranty for this free software.
i.e. you have nothing to worry about, but you also probably can't do anything to punish the misuse. After all, misuse is subjective.
They feel it is a convenient binary distribution format for software.
I ran into this realization the other day. I wanted to give Mint a try. I run over to Vagrant's site, which prominently displays a "Discover Boxes" link. But it gives zero indication from the main site that these Boxes are not provided by any kind of official maintainer or by HashiCorp itself, but are community uploads, I suppose... at least I can't find any vetted information about who the uploaders are and why I should trust them.
This should be a big red flag in the quick-start guide that screams: don't just download any old box from our site, load it up with all your customer data, and put it into production. Instead it's buried deep in the documentation: https://www.vagrantup.com/docs/vagrant-cloud/boxes/catalog.h...
Beyond that, numerous escape exploits in Linux containerization (and Docker specifically) have popped up over the years, and many more are going to pop up over the coming years. This is not a mature space.
Running random binary code distributed from a non-curated source, even in a "container", is going to end in heartache.
Distributions and their package maintainers serve an important role. In the interests of consuming more & faster people seem to be ignoring that.
I wish we had enough resources in the free software community for all software to be packaged and maintained in the distributions by independent parties unaffiliated with the creators, as a rule.
A bunch of the guidelines governing that process are simply unenforceable in a model where the developer builds and publishes the release.
I am not of the opinion that those rules are irrelevant to the stability and security of our systems.
There's a significant push to establish a more app-store model for linux distributions, taking the distributor largely out of the loop for software that isn't part of the base system.
This has both positive and negative consequences.
Today the negative consequences are largely hand-waved away with something along the lines of "containers will protect you".
I guess Main is safe since it's handled by Canonical, but the rest?
Moreover, a lot of installers simply add a custom repository to sources.list.
What are some good practices for a novice user, regarding apt?
I would say if you are at all concerned about safety: don't install apps through .deb file that developers sometimes push. They are generally safe, but there is always a potential that these files are malware.
For instance, lots of people use Atom as their text editor, but Atom does not make it possible/easy for packagers to build Atom from source[1]! Everything used to come with a configure, build, & install script, but I guess it's not hip enough anymore.
[1]: https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=747824
Curated repositories are why Linux distros are largely free of malware. With more snap-based packages being made available, we are going to see a lot more of this sort of thing.
FreeBSD ports has a ton of packages, though. Maybe they have incredible quality control, but I would bet a few of those have some malware in them. That goes for all Linux/BSD package collections, obviously; the huge ones just make it more likely.
Ubuntu snaps are very easily installable, just a quick command away, which can give users a false sense of security.
Discussion of this issue with snap developers here: https://forum.snapcraft.io/t/disabling-automatic-refresh-for...
However, this changes everything. Preventing owners from controlling their own software and hardware is not something I want to encourage. We have enough of that from Microsoft (and to some extent the GNOME Shell developers).
Does Flatpak also force developers' intentions upon their users? Or is AppImage my only recourse?
I want a solution that doesn't mistreat customers.
Want to read this article? Please click here to mine a cryptocoin for 30 seconds. Great, thanks! Here's a cookie so we won't ask you again to mine for a whole month.
I would much rather have this than being shamed into looking at ads. It always struck me as utterly bizarre to be told that not wanting to see ads is somehow immoral.
Excuse my ignorance but I'm intensely suspicious of "stores" on open source operating systems.
Say you want the newest version of LibreOffice for whatever reason. This is a typical use-case where Snaps will come in handy. They have most dependencies bundled into the application, so you don't have to worry about your whole system getting wonky by installing newer versions of those dependencies to go with the newer version of the application.
This is also meant to serve as a way for devs to release software without much hassle, so they don't have to open-source their code, hope that someone finds it and packages it for Ubuntu, and then wait like five years before it's available to end-users through the repositories.
They also don't have to worry about building a .deb, .rpm, Arch's format and whatever else there is, including accounting for the differences between distros. So, Snaps are supposed to work on all distros the same.
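For context, that "build once, run on all distros" promise comes down to one declarative file, snapcraft.yaml. A minimal sketch might look roughly like this; the snap name, part, and command are hypothetical, while the field names follow the snapcraft schema:

```yaml
name: hello-sketch          # hypothetical snap name
version: '0.1'
summary: Minimal illustrative snap
description: |
  Illustrative only; the snap bundles its own dependencies so the
  same artifact runs on any distro with snapd installed.
grade: stable               # 'devel' would restrict it to edge/beta channels
confinement: strict         # sandboxed; 'classic' opts out of confinement

parts:
  hello:
    plugin: dump            # copies the source tree into the snap as-is
    source: .

apps:
  hello-sketch:
    command: bin/hello      # hypothetical bundled binary
```

Building and uploading happen on the packager's side, not the user's machine, which is part of why the store's review surface is so thin.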
Ultimately, this will bring in more proprietary applications.
Well, and Snaps are sandboxed, so there's some protection, which makes those proprietary applications somewhat more acceptable, but as this piece of news shows, it's not complete protection.
Is it another attempt of Canonical to jump on the app store bandwagon? Most definitely yes. There's a competing format, Flatpak, which does pretty much the same, also AppImage which is somewhat older and without sandboxing, and Canonical is mainly just pushing their own format, because they'll have control of the store behind it.
Like, it's not impossible to hook up other Snap stores, but Canonical has established their infrastructure as the primary source and then how many users are going to look elsewhere?
I think I'll wait for this most recent example of the trend to make everything into an app to blow over...
I don't even mind creating a VM for every single distro a user requests, and doing a huge automated binary compilation fest for every release. The only thing I care about is that the software is distributed through channels which make it explicit that the current stable version is the only version I support.
But if it does not ask for special permissions, then it goes in automatically, because the app is quite confined.
Compared to other package repos, here it's somewhat better.
I understand that with Snap devs have to bundle their own dependencies and take care of upgrading, which is bad if I understood correctly.
In my case, a few programs I had installed needed to be connected to other snaps, and they would suddenly stop working for no apparent reason. Only by launching the misbehaving program from the command line would I find out that I had to update the connected program(s).
That has never happened to me with Apt, so my opinion so far is that installing .deb files is vastly superior, at the moment.
Where does snapcraft.yaml get executed? On my computer? On Canonical's infra? On the packager's computer?
For example, do apps need to request permissions for accomplishing specific tasks, or is there any kind of sandboxing involved?
I do not know if there are any automated checks when a package is added this way.
I'm sorry, but Canonical and Ubuntu are the point where Open Source Software apparently breaks with its traditions. No review on binary-blob uploads, most certainly made with OSS, when marked "proprietary"? They are kidding, right?
The default GUI package manager, "Ubuntu Software", shows snap packages just like ordinary packages. The one in question was uploaded by somebody who is not well-versed in the domain, and it is badly configured for locales: it can only handle ASCII characters. Probably reviewed by nobody.
Unless the Snap Store uses some kind of DRM, I don't see how that can be the case. Just install it and see the contents in your filesystem?
I have no alternative suggestion, because Ubuntu is still what I would recommend to new users even if it's not what I personally use anymore, because they have the track record for getting new Linux users on board and helping them out (via the prominent search results for common problems people come across, etc). On the whole, I also think their design has been consistently decent (including Unity).
No! MIT is not a proprietary license!