I'd liken it more to a hardware product being shipped with faulty materials when multiple manufacturers are involved and it's extremely difficult to identify the responsible party. Sure, they're responsible for the end product, but it's not so black and white.
So, don't do that.
> It's not that simple.
It is absolutely that simple.
Your argument is basically, "Their process is so complicated they can't avoid serving up malware." But that's not a justification for serving up malware. If your process is too complicated to avoid serving malware, then you need to simplify your process until you can avoid serving up malware.
You don't get a free pass on ethics just because ethics are inconvenient for your business model.
Yes, I think publishers have a responsibility to monitor their ads and keep their users safe. But I still don't think it's a black-and-white ethics situation. The exchanges and other ad providers need to fix their business model and not depend on publishers to filter the malware ads they're sneaking onto publishers' pages.
1. Publishers can't compete if they behave ethically. If this is true, then the solution is simple: if you can't run your business ethically then close your business. However, I believe that it's entirely possible for publishers to compete ethically; there are, for example, plenty of business models besides selling ad space.
2. Publishers are only accomplices in serving malware. We can't ignore publishers' role in serving malware and blame only the ad networks. Both the ad networks AND the publishers are to blame.
I think there's an even simpler and more realistic solution: someone needs to make a Chrome extension that is like uBlock Origin, but unblockable.
It should be:
* an adblocker that also blocks tracking scripts
* ...with no "acceptable ads" whitelist like Adblock Plus
* ...open source
* ...with workarounds for ad-blocker-blockers
Here's how I think we can do that: we maintain a list of domains that are using ad-blocker-blockers, and for those, rather than blocking ads at the HTTP request level, the user agent loads them and simply doesn't display them.
In principle, you can do this in a way that is completely undetectable by the site owner. There doesn't need to be an "arms race" between ad-blocker-blockers and the workaround developers--you simply win outright by having the DOM look exactly the same as if it had ads, but rendering, say, a box with a tasteful light grey smiley face instead of the ad. (You can even make it so that the canvas context's getImageData returns the ad that the page thinks it drew, while the actual screen output doesn't show it.)
At that point, sites are actually incentivized to stop using ad-blocker-blockers, because the only difference they'll make is that the site will load slower, since the user agent has to load and pretend to display all those extra resources. The user never actually sees the ads either way.
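As a minimal sketch of how such an extension might pick a strategy per site (the domain names and the block/hide split below are entirely hypothetical, not a real list): keep a table of domains known to run ad-blocker-blockers, and fall back from request-level blocking to load-but-don't-display on those.

```javascript
// Hypothetical sketch: per-domain strategy for an "unblockable" ad blocker.
// Domains known to run ad-blocker-blockers get "hide" (load the ad but
// render a placeholder instead); everything else gets "block" (drop the
// request entirely). The domain below is made up for illustration.
const HIDE_DOMAINS = new Set(["example-adblock-detector.com"]);

function adStrategy(hostname) {
  // Walk up the domain hierarchy so subdomains inherit the rule.
  const parts = hostname.split(".");
  for (let i = 0; i < parts.length - 1; i++) {
    if (HIDE_DOMAINS.has(parts.slice(i).join("."))) return "hide";
  }
  return "block";
}

console.log(adStrategy("news.example-adblock-detector.com")); // "hide"
console.log(adStrategy("example.org"));                       // "block"
```

In a real extension, "block" would map to request-blocking rules and "hide" would map to a content script that swaps the ad element's pixels for a placeholder after the resource loads.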
--
For extra credit, this hypothetical browser extension can also simulate a click on YouTube's "Skip This Ad" button as soon as it appears, etc.
You could even keep per-domain blacklists of bloat resources and simply not load those. That would make the internet feel a lot faster. A user with this extension would visit The Verge and instead of getting a janky 5 megabyte page load, they'd get a near-instant load with just the text and images.
Finally, this extension could keep a mapping of desktop sites to auto-redirect to the lightweight mobile equivalent, with a body{max-width:800px} thrown in to keep things readable.
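That last idea could be sketched as a simple host-to-host table plus an injected stylesheet (the single mapping entry and the function name here are illustrative assumptions, not real extension data):

```javascript
// Hypothetical sketch: map heavyweight desktop hosts to lightweight
// mobile equivalents. The single entry below is illustrative only.
const MOBILE_MAP = new Map([
  ["en.wikipedia.org", "en.m.wikipedia.org"],
]);

// CSS the extension would inject on the mobile page to keep lines readable.
const READABILITY_CSS = "body{max-width:800px}";

function mobileUrl(urlString) {
  const url = new URL(urlString);
  const mobileHost = MOBILE_MAP.get(url.hostname);
  if (!mobileHost) return null; // no known lightweight equivalent
  url.hostname = mobileHost;
  return url.toString();
}

console.log(mobileUrl("https://en.wikipedia.org/wiki/Ad_blocking"));
// → "https://en.m.wikipedia.org/wiki/Ad_blocking"
console.log(mobileUrl("https://example.org/post")); // → null
```

The extension would rewrite matching navigations before the request goes out, then add READABILITY_CSS to the mobile page so the wider desktop viewport stays readable.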
I know I'd install this hypothetical extension immediately and never go back.
That's an assumption you're making, and not a correct one.
This solution also doesn't in any way address anything I said: it is still not ethical to serve malware to your users.
I think calling this an ethical issue is quite a stretch. In many cases, we're talking about visitors who are not only enjoying the content from someone else's site completely for free, but also employing tools that actively modify the intended presentation of that content to the detriment of the host site's operators. And now you're saying that not only should the site operators make their content freely available and accept that some visitors will circumvent possibly the only way they have of generating revenue, those operators should also be actively responsible for vetting any third party content they incorporate within their site in case the third party is hostile and those visitors know enough to run ad-blockers but not enough to run anti-virus software? That seems a very short-sighted and one-sided position, entirely in favour of the party who isn't actually contributing anything in this scenario, and I see no ethical basis for that.
Edit: For those who are downvoting, please consider that I did not disagree with the original premise I quoted. Obviously unethical business models are still unethical even if the ethical ones are inconvenient.
What I'm asking is why we should consider it an ethical requirement for someone who is already generously offering their content for free and accepting that a significant fraction of visitors will circumvent their intended ad-funded model to also go to unrealistic lengths to vet any third party content they include for safety against arbitrary unknown threats that could change at any time without notice, all for the benefit of a visitor who is offering them nothing. I'm not sure whether someone operating a web site really owes their visitors anything in this scenario, other than perhaps a basic "good citizen" principle of not negligently serving up malicious content, and I don't see how operating within the same infrastructure as a huge number of other web sites could reasonably be considered negligent in this respect.
Unless you think we should also close down all third party CDNs, image hosting services, caching services, web font services, and so on, the web is fundamentally a linked medium where sites can usefully be built by combining resources from other services, and naturally those other services will retain control of what they are hosting themselves. Making Joe Blogger responsible if some massive service's CDN version of jQuery got hacked doesn't seem like a good way to encourage Joe Blogger to spend their time sharing their writing with the rest of the world.
No, I'm not saying that. I'm saying it's unethical to have ads with malware on your site. I didn't propose a way not to have malware on your site.
The way you propose, by vetting ads, has been used successfully, but it's not a particularly imaginative solution. What about donations, freemium, PWYW, subscriptions, grants? Or what about giving your work away for free and using that reputation to get jobs?
> Making Joe Blogger responsible if some massive service's CDN version of jQuery got hacked doesn't seem like a good way to encourage Joe Blogger to spend their time sharing their writing with the rest of the world.
It's utterly ridiculous to claim this is about Joe Blogger. Joe Blogger is quite often happy to do his blogging as a labor of love and let blogspot/livejournal/whatever reap all the ad revenue. And small-time bloggers who do make money are frequently more sensitive to their readers' complaints and more willing to explore alternatives to the big ad networks. The problem is big content providers who are under shareholder pressure to produce growth each quarter, so they try to squeeze out every bit of ad revenue with no concern for users. They're also too risk-averse to try monetization strategies other than ads. It is well within the capability of those players to serve ads without malware, but they don't, because it doesn't hurt their bottom line enough.
Serving up malware to your users and readers is unethical. I'm all for supporting content providers; I donate to NPR and to artists on Patreon frequently. But if you can't run your business ethically, then you should shut down your business.
If you really think serving up malware to finance content is okay, then why don't you propose that content creators just hack some small percentage of their users and sell the data online? The effect on users is the same, but it cuts out the middlemen so it's more efficient.
If they're attempting to make money from ads, they're not offering it for free.
The reason that the bad advertisement issue is such a big problem is that very often the anti-virus programs simply don't work on the malware being served. The exploits used either aren't in the definitions database or the AV has a blindspot.
It's also very difficult to be running without an anti-virus on a modern computer. Windows Defender doesn't always rank the strongest, but it's certainly competitive with other AV solutions, and Windows will nag-nag-nag if you don't have what it considers to be an active AV installed. It's not the early 2000s anymore, when you had to go find a good AV - for the most part, if you buy a modern computer, there are AV protections in place already.
As such, these aren't users thumbing their nose at safety and running around unprotected; these are people who have a reasonable expectation not to be served malware by reading an article at Forbes.
Simply put, regardless of how you're doing it, you should not be serving malware to people. If your site is the vector, you have a responsibility to deal with it, and ignoring this, as many sites have done, is an ethical breach. Malware can and does do harm, sometimes in the form of lost data and lost money. Ensuring you're not serving up malware isn't just a matter of being a "good citizen"; it's a duty to do no harm, because the people affected by the malware have no recourse in virtually every situation. If it's ransomware, they can only hope it's poorly made and gets broken; otherwise, if their machine is unrecoverable, that data is lost.
Forbes and the other sites that are proposed to be blocked may be getting singled out right now, but the complaint is a larger one about advertising; since these sites are participants who are not working to clean it up, I think users have every right to be upset and to call it unethical. The response users are receiving is, well, no response: the websites don't care.
All that being said, I'm actually fine with them putting up an ad-wall, as it kind of forces them to put their money where their mouth is. Part of the change that will need to happen is to show the sites that consumers don't want to put up with dangerous ads and to prompt action, and ad-walls pretty much force a boycott if users want to continue using adblockers. This will give them the metrics to see the effect that bad advertising has, and hopefully prompt change.
But, I still think that you have an obligation to ensure your website is not a hazard, regardless of how it became one. "Everyone else is doing it" isn't a defense, especially when it causes real and immediate damage to potentially thousands of people.
An innocent gets malware from an advertisement. The publisher blames the ad-network, who in turn blames the criminal. The ad-network claims that it can't curate the advertisements or the margins become too small. In the end, the innocent has to take the full fallout of the crime.
An innocent gets mistreated by a fake doctor. The hospital blames the hiring agency, who in turn blames the criminal. The hiring agency claims that it can't do background checks and verify the CVs of applicants or the margins become too small. In the end, the innocent has to take the full fallout of the crime?
Why is there such discrepancy between the two cases? Why can one agency do curation and still function, while the other can't? Why can the publisher get away with using a bad ad-network while a hospital is fully legally responsible for using a bad hiring agency? Those seem to be very simple questions, ones that should have very simple answers.
The word "legally" gives it away: because there are laws.
But we don't like dem laws here on teh intertubes, so Wild West it is.
But if it were to happen, the website is the one who gets sued, because it is within their responsibility to ensure that the ads and/or content do not cause harm to a visitor.
I don't think that's been established. With the current state of advertising on the internet, it's not even possible to do this.
In general, websites use advertising networks which do not allow them to proactively vet the content. Even if they did, no amount of vetting can guarantee the content is benign (active content can do naughty things only some of the time or on some platforms, or things not yet recognized as naughty - this is also why antivirus isn't reliable). So, clearly the solution is to not allow Javascript or flash, right? Nope - exploits in image parsers, font parsers, video parsers, audio parsers, etc. come out fairly often.
This could maybe be dealt with by contracts between websites and advertising networks specifying that the advertising network will be liable for malicious content, but I don't see that happening.
We don't accept paying for content to avoid ads that may be dangerous.
The countries that pay less than us often are at least as highly regulated when it comes to medicine, medical treatments, and their delivery, so while we certainly are paying a lot more for those things, I don't think that the argument that we do so "in exchange for regulations" is particularly defensible.
And just as a computer reseller/free-computers-for-everyone-distributor has an obligation not to send bombs to people who asked for computers, a website owner has an obligation not to serve malware to people who asked for content.
It is not black and white but it is pretty close.
Edit: I have to give Firefox credit here. It actually just prompted me whether I'd like to save the file. I did and uploaded it to virus total. Probably not a good idea if you do anything sensitive on your computer.